NLP Paper Reviews and Implementations

DYN.kim · December 28, 2023


I plan to read and review NLP papers and then implement a few of them in PyTorch; a rough sketch of what such an implementation might look like is included after the list below.

The list of papers is as follows.

Neural Architectures for Named Entity Recognition (2016), G. Lample et al. 
Exploring the limits of language modeling (2016), R. Jozefowicz et al. 
Teaching machines to read and comprehend (2015), K. Hermann et al. 
Effective approaches to attention-based neural machine translation (2015), M. Luong et al. 
Conditional random fields as recurrent neural networks (2015), S. Zheng et al. 
Memory networks (2014), J. Weston et al. 
Neural turing machines (2014), A. Graves et al. 
Sequence to sequence learning with neural networks (2014), I. Sutskever et al. 
Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. 
A convolutional neural network for modeling sentences (2014), N. Kalchbrenner et al. 
Convolutional neural networks for sentence classification (2014), Y. Kim 
Glove: Global vectors for word representation (2014), J. Pennington et al. 
Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov 
Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. 
Efficient estimation of word representations in vector space (2013), T. Mikolov et al. 
Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. 
Generating sequences with recurrent neural networks (2013), A. Graves. 
Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. 
Attention Is All You Need  
KLUE: Korean Language Understanding Evaluation 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding  
RoBERTa: A Robustly Optimized BERT Pretraining Approach 
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators 
Longformer: The Long-Document Transformer 
An Improved Baseline for Sentence-level Relation Extraction 
Improving Language Understanding by Generative Pre-Training 
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations 
XLNet: Generalized Autoregressive Pretraining for Language Understanding 
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension  
Don't Stop Pretraining: Adapt Language Models to Domains and Tasks 
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks 
Few-Shot Learning with Graph Neural Networks 
Active Learning: Problem Settings and Recent Developments 
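To make the plan above a bit more concrete, here is a minimal sketch of what one such PyTorch implementation could look like, using the TextCNN architecture from "Convolutional neural networks for sentence classification (2014), Y. Kim" as the example. The hyperparameters below (vocabulary size, embedding dimension, filter sizes, number of classes) are illustrative placeholders of my own choosing, not values taken from the paper.

```python
# A minimal TextCNN sketch (Kim, 2014) in PyTorch.
# All hyperparameters here are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128,
                 num_filters=100, filter_sizes=(3, 4, 5),
                 num_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per filter size, sliding over the token dimension.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in filter_sizes]
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(filter_sizes), num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) tensor of integer token indices
        x = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)              # (batch, embed_dim, seq_len) for Conv1d
        # Max-over-time pooling for each filter size, then concatenate.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        out = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(out)                # (batch, num_classes) logits


if __name__ == "__main__":
    model = TextCNN()
    dummy_batch = torch.randint(0, 10000, (4, 20))  # 4 sentences of 20 token ids
    print(model(dummy_batch).shape)                 # torch.Size([4, 2])
```

The actual reviews will go further (pretrained word vectors, real datasets, training loops), but the sketch shows the level of detail I am aiming for in each implementation post.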