Paper Reading

1. Attention Is All You Need

2. Improving Language Understanding by Generative Pre-Training

3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

4. Deep contextualized word representations

5. Language Models are Unsupervised Multitask Learners

6. Language Models are Few-Shot Learners
