2012 AlexNet
2013 DQN
2014 Encoder/Decoder, Adam
2015 GAN, ResNet
2016 AlphaGo
2017 Transformer
2018 BERT
2019 Big Language Models (GPT-X)
2020 Self-Supervised Learning

AlexNet: won the ImageNet competition
DQN: deep reinforcement learning from DeepMind, the company behind AlphaGo
Encoder/Decoder: Neural Machine Translation
ResNet: "why deep learning?" because we can now stack networks deep
ResNet fixed the degradation problem that previously arose from stacking layers deep, by adding identity skip connections
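A minimal sketch of the skip-connection idea, assuming a PyTorch setup; the class name and layer choices are illustrative, not the paper's exact block:

```python
import torch
import torch.nn as nn

class SimpleResidualBlock(nn.Module):
    """Illustrative residual block: output = F(x) + x."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                      # identity path carries gradients directly
        out = self.relu(self.conv1(x))    # F(x): the learned residual mapping
        out = self.conv2(out)
        return self.relu(out + residual)  # skip connection: add x back, then activate
```

Because the identity path is untouched, each block only has to learn a small correction F(x), which is what made very deep networks trainable.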
Transformer: "Attention Is All You Need"
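The paper's core operation is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A self-contained sketch (tensor shapes are illustrative):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # how well each query matches each key
    weights = torch.softmax(scores, dim=-1)            # attention distribution over positions
    return weights @ v                                 # weighted sum of values

q = k = v = torch.randn(2, 5, 8)             # batch 2, sequence length 5, dim 8
out = scaled_dot_product_attention(q, k, v)  # -> shape (2, 5, 8)
```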
BERT: Bidirectional Encoder Representations from Transformers; pretrain on large unlabeled text, then fine-tune NLP models per downstream task
GPT-3: an autoregressive language model with 175 billion parameters
SimCLR: "A Simple Framework for Contrastive Learning of Visual Representations"
Uses unlabeled (unsupervised) data together with the given labeled training data
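A rough sketch of the contrastive (NT-Xent) loss at the heart of SimCLR, assuming PyTorch; the function name and temperature value are illustrative, and pieces such as the projection head and heavy augmentation are omitted:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """Simplified NT-Xent loss. z1, z2 are embeddings of two augmented
    views of the same N images, shape (N, d). Matching rows are positive
    pairs; every other row in the batch serves as a negative. No labels
    are required, which is what lets unlabeled data contribute."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # (2N, d), unit norm
    sim = (z @ z.t()) / temperature                          # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # exclude self-similarity
    # for row i, the positive is the other view of the same image
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(4, 16), torch.randn(4, 16)  # stand-ins for two views of 4 images
loss = nt_xent_loss(z1, z2)
```

Pulling matching views together and pushing all other batch entries apart is the entire supervision signal; the labels of the original dataset are never used.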