Papers? Reviews?

1. Attention Is All You Need [Transformer]

3. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

5. Neural Machine Translation by Jointly Learning to Align and Translate

6. RoBERTa: A Robustly Optimized BERT Pretraining Approach

7. Efficient Estimation of Word Representations in Vector Space

8. Sequence to Sequence Learning with Neural Networks

9. LLaMA: Open and Efficient Foundation Language Models

10. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks

11. GloVe: Global Vectors for Word Representation
