Must-Read AI Papers

신정안 · January 3, 2024

CV

  • Classification
    • CNN
      • AlexNet
        • ImageNet Classification with Deep Convolutional Neural Networks [Krizhevsky et al., NeurIPS 2012]
      • VGG
        • Very Deep Convolutional Networks for Large-Scale Image Recognition [Simonyan et al., ICLR 2015]
      • GoogLeNet
        • Going Deeper with Convolutions [Szegedy et al., CVPR 2015]
      • ResNet
        • Deep Residual Learning for Image Recognition [He et al., CVPR 2016] (see the residual-block sketch after this list)
    • ViT
      • Transformer
        • Attention is All You Need [Vaswani et al., NeurIPS 2017]
      • Vision Transformer (ViT)
        • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [Dosovitskiy et al., ICLR 2021]
  • Generation
    • GAN
      • GAN
        • Generative Adversarial Nets [Goodfellow et al., NeurIPS 2014]
      • DCGAN
        • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [Radford et al., ICLR 2016]
      • PGGAN
        • Progressive Growing of GANs for Improved Quality, Stability, and Variation [Karras et al., ICLR 2018]
      • StyleGAN
        • A Style-Based Generator Architecture for Generative Adversarial Networks [Karras et al., CVPR 2019]
      • StyleGAN2
        • Analyzing and Improving the Image Quality of StyleGAN [Karras et al., CVPR 2020]
    • Diffusion Models
      • Denoising Diffusion Probabilistic Models (DDPM)
        • Denoising Diffusion Probabilistic Models [Ho et al., NeurIPS 2020]
      • Latent Diffusion (Stable Diffusion)
        • High-Resolution Image Synthesis with Latent Diffusion Models [Rombach et al., CVPR 2022]
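
To ground the classification half of this list, here is a minimal PyTorch sketch of the residual (skip) connection that ResNet [He et al., CVPR 2016] introduced. The channel count, kernel size, and layer arrangement are illustrative assumptions, not a faithful reproduction of any block from the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative residual block: output = F(x) + x (sizes are assumed)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The layers learn only a residual F(x); adding x back gives F(x) + x,
        # which is what lets very deep networks train without degradation.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

x = torch.randn(1, 64, 32, 32)      # dummy 64-channel feature map
print(ResidualBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```

The whole idea is the `out + x` line: the stacked convolutions only need to learn a correction to the identity mapping, which is why depth stops hurting optimization.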

NLP

  • Base
    • Attention is All You Need [Vaswani et al., NeurIPS 2017]
  • RNN-based Models
    • RNN
      • Recurrent neural network based language model [Mikolov et al., Interspeech 2010]
    • LSTM
      • Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling [Sak et al., Interspeech 2014]
    • GRU
      • Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation [Cho et al., EMNLP 2014]
    • Seq2Seq
      • Sequence to Sequence Learning with Neural Networks [Sutskever et al., NeurIPS 2014]
  • Attention Mechanism
    • Attention
      • Neural Machine Translation by Jointly Learning to Align and Translate [Bahdanau et al., ICLR 2015]
      • Transformer -> Attention is All You Need [Vaswani et al., NeurIPS 2017] (see the attention sketch after this list)
  • Word Embedding
    • Word2Vec
      • Efficient Estimation of Word Representations in Vector Space [Mikolov et al., 2013]
    • GloVe
      • GloVe: Global Vectors for Word Representation [Pennington et al., EMNLP 2014]
    • FastText
      • Enriching Word Vectors with Subword Information [Bojanowski et al., TACL 2017]
    • ELMo
      • Deep contextualized word representations [Peters et al., NAACL 2018]
  • Pretrained Language Models Based on the Transformer Architecture
    • BART
      • BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension [Lewis et al., ACL 2020]
    • GPT
      • GPT-1
        • Improving Language Understanding by Generative Pre-Training [Radford et al., 2018]
      • GPT-2
        • Language Models are Unsupervised Multitask Learners [Radford et al., 2019]
    • BERT
      • BERT
        • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding [Devlin et al., NAACL 2019]
      • RoBERTa
        • RoBERTa: A Robustly Optimized BERT Pretraining Approach [Liu et al., 2019]
      • ALBERT
        • ALBERT: A Lite BERT for Self-supervised Learning of Language Representations [Lan et al., ICLR 2020]
    • ELECTRA
      • ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators [Clark et al., ICLR 2020]
  • LLM
    • GPT-3
      • Language Models are Few-Shot Learners [Brown et al., NeurIPS 2020]
    • InstructGPT
      • Training language models to follow instructions with human feedback [Ouyang et al., NeurIPS 2022]
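
Since attention recurs through most of the NLP list, here is a minimal sketch of the scaled dot-product attention at the core of Attention is All You Need [Vaswani et al., NeurIPS 2017]: softmax(QK^T / sqrt(d_k)) V. The tensor shapes are illustrative assumptions, and multi-head projections and positional encodings are omitted.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled by sqrt(d_k)
    # so the softmax does not saturate as the dimension grows.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)   # one distribution per query
    return weights @ v, weights

# Toy self-attention: batch 2, sequence length 5, model dim 8 (assumed).
q = k = v = torch.randn(2, 5, 8)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # torch.Size([2, 5, 8]) torch.Size([2, 5, 5])
```

Each output row is a weighted average of the value vectors, with weights given by how strongly that query matches each key; everything from Seq2Seq attention to BERT and GPT builds on this operation.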

Top AI Conferences

  • Computer vision
    - IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
    - International Conference on Computer Vision (ICCV)
    - European Conference on Computer Vision (ECCV)

  • Artificial intelligence / Machine learning
    - Neural Information Processing Systems (NeurIPS)
    - International Conference on Learning Representations (ICLR)
    - International Conference on Machine Learning (ICML)

  • Natural language processing
    - Annual Meeting of the Association for Computational Linguistics (ACL)
    - Conference on Empirical Methods in Natural Language Processing (EMNLP)
