# Self-supervised Learning

Momentum Contrast for Unsupervised Visual Representation Learning (MoCo) Review
1. Abstract & Introduction: This paper performs unsupervised learning on images using a dynamic dictionary (queue) and a contrastive loss. Here, unsupervised learning is dyna…
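Since the excerpt names MoCo's two ingredients, a dictionary kept as a queue and a contrastive loss, a rough PyTorch sketch of one MoCo-style training step may help. All names here (`f_q`, `f_k`, `queue`, `m`, `tau`) are illustrative assumptions, not the paper's released code:

```python
import torch
import torch.nn.functional as F

def moco_step(f_q, f_k, x_q, x_k, queue, m=0.999, tau=0.07):
    # query features from the trainable encoder
    q = F.normalize(f_q(x_q), dim=1)                       # (N, dim)
    with torch.no_grad():
        # momentum update: theta_k <- m * theta_k + (1 - m) * theta_q
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)
        k = F.normalize(f_k(x_k), dim=1)                   # keys (N, dim), no gradient

    l_pos = (q * k).sum(dim=1, keepdim=True)               # (N, 1) positive logits
    l_neg = q @ queue.t()                                  # (N, K) negatives from the queue
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive is index 0
    loss = F.cross_entropy(logits, labels)                 # InfoNCE

    # dictionary as a queue: enqueue the newest keys, drop the oldest
    queue = torch.cat([k, queue], dim=0)[: queue.size(0)]
    return loss, queue
```

The point of this design is to decouple the dictionary size K from the batch size: the queue supplies many negatives while the slowly moving key encoder keeps them consistent with each other.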
Unsupervised Feature Learning via Non-Parametric Instance Discrimination (NPID) Review
The topic of this paper reportedly grew out of object recognition on ImageNet. As Figure 1 shows, looking at the top-5 classification error for a leopard, the softmax values of similar-looking animals such as the jaguar or the cheetah…
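For context on the title: NPID treats every image as its own class and scores a query against a memory bank of per-instance embeddings with a non-parametric softmax. A minimal dense sketch follows; the paper actually approximates this with noise-contrastive estimation, and `memory`, `indices`, and `tau` are assumed names:

```python
import torch
import torch.nn.functional as F

def npid_loss(features, indices, memory, tau=0.07):
    v = F.normalize(features, dim=1)         # batch embeddings (B, dim)
    logits = v @ memory.t() / tau            # similarity to all N instances (B, N)
    loss = F.cross_entropy(logits, indices)  # each image is its own class
    with torch.no_grad():
        # refresh the memory bank; the paper uses a proximal/momentum update,
        # a direct overwrite is shown here for brevity
        memory[indices] = v.detach()
    return loss
```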
[Paper Review] DeepCluster: Deep Clustering for Unsupervised Learning of Visual Features
Understanding SSL (Self-Supervised Learning), part three!
[Paper Review] DAC: Deep Adaptive Image Clustering
Understanding SSL (Self-Supervised Learning), part two!
[Paper Review] DEC: Unsupervised Deep Embedding for Clustering Analysis
Understanding SSL (Self-Supervised Learning), part one!
[Quick Summary] Adversarial Self-Supervised Contrastive Learning (NeurIPS 2020)
Some notes on Adversarial, Self-Supervised, and Contrastive Learning

[Paper Review] BEiT: BERT Pre-Training of Image Transformers
Title: BEiT: BERT Pre-Training of Image Transformers. Date: 15 Jun 2021. Keywords: Autoencoder, Self-Supervised, Vision Transformer, BERT, Tokenize
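The keyword list sketches the recipe: a visual tokenizer turns image patches into discrete tokens, and a ViT is trained BERT-style to predict the tokens of masked patches. A minimal sketch of that masked image modeling loss, with `tokenizer`, `vit`, and all tensor shapes assumed for illustration:

```python
import torch
import torch.nn.functional as F

def beit_mim_loss(vit, tokenizer, patches, mask):
    # mask: boolean (B, P), True where a patch is masked out
    with torch.no_grad():
        targets = tokenizer(patches)     # discrete visual tokens (B, P), long
    logits = vit(patches, mask)          # per-patch logits over the vocabulary (B, P, V)
    # BERT-style objective: cross-entropy only at the masked positions
    return F.cross_entropy(logits[mask], targets[mask])
```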

When Does Contrastive Visual Representation Learning Work? (2021)
As self-supervised learning has advanced recently, the performance gap between supervised and unsupervised learning has been narrowing. This work examines contrastive self-supervised learning on several large-scale datasets…

Emerging Properties in Self-Supervised Vision Transformers (2021)
Summary: self-attention from a ViT with 8 $\times$ 8 patches trained with no supervision. This work analyzes the features of the Vision Transformer (ViT) under self-supervised le…
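The method behind those attention maps is DINO's self-distillation: a student ViT is trained to match a teacher whose weights are an EMA of the student's, with teacher outputs centered and sharpened to avoid collapse. A minimal sketch of the loss, where `s_out`, `t_out`, `center` (a running mean of teacher outputs), and the temperatures are assumed names:

```python
import torch
import torch.nn.functional as F

def dino_loss(s_out, t_out, center, tau_s=0.1, tau_t=0.04):
    # teacher: centered (to avoid collapse) and sharpened with a low temperature
    t = F.softmax((t_out - center) / tau_t, dim=1).detach()
    log_s = F.log_softmax(s_out / tau_s, dim=1)
    return -(t * log_s).sum(dim=1).mean()   # cross-entropy between teacher and student
```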
[Lecture Slides] Zero-Shot Text-to-Image Generation: DALL-E
Lecture slides on DALL-E.

Self-supervised Learning in Generative Models
On self-supervised learning based on generative models.

SuperPoint: Self-Supervised Interest Point Detection and Description (CVPRW 2018)
Proposes Homographic Adaptation to improve interest point detection across multiple scales, surpassing existing homography estimation models. Interest point: in MVG, image mat…
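Homographic Adaptation itself is simple to sketch: run the detector on many random homographic warps of an image, warp the resulting heatmaps back, and average them into pseudo-labels. A rough sketch assuming the kornia library for warping; `detector`, `sample_homography`, and the corner-jitter scheme are all illustrative:

```python
import torch
import kornia.geometry as KG  # assumed dependency for differentiable warping

def sample_homography(b, h, w, jitter=0.05):
    # hypothetical helper: perturb the four image corners slightly
    corners = torch.tensor([[0.0, 0.0], [w - 1, 0.0], [w - 1, h - 1], [0.0, h - 1]])
    src = corners.expand(b, 4, 2)
    dst = src + jitter * min(h, w) * (torch.rand(b, 4, 2) - 0.5)
    return KG.get_perspective_transform(src, dst)       # (b, 3, 3)

def homographic_adaptation(detector, image, n_warps=32):
    b, _, h, w = image.shape
    acc = detector(image)                               # heatmap (b, 1, h, w), unwarped
    for _ in range(n_warps):
        H = sample_homography(b, h, w)
        heat = detector(KG.warp_perspective(image, H, dsize=(h, w)))
        # warp detections back into the original frame and accumulate
        acc = acc + KG.warp_perspective(heat, torch.inverse(H), dsize=(h, w))
    return acc / (n_warps + 1)                          # aggregated point pseudo-labels
```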

[Paper Review] AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE: ViT (Vision Transformer)
A paper review of the Vision Transformer.

Self-Supervised Representation Learning in NLP
Language models have existed since the '90s, even before the phrase “self-supervised learning” was coined. At the core of these self-supervised methods…

Paper Reviews on Self-Supervised Learning (OPGAN, MatchGAN)
Paper reviews related to self-supervised learning [Author: 김민경]