Papers

1. Distributed Representations of Words and Phrases and their Compositionality

2. Attention Is All You Need

3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

4. RoBERTa: A Robustly Optimized BERT Pretraining Approach

5. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

6. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

7. Language Models are Unsupervised Multitask Learners

8. Language Models are Few-Shot Learners

9. ERNIE: Enhanced Language Representation with Informative Entities

10. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing

11. Video-guided Machine Translation via Dual-level Back-translation

12. Vision Talks: Visual Relationship-enhanced Transformer for Video-guided Machine Translation

13. Incorporating Global Visual Features into Attention-Based Neural Machine Translation