Organize a list of papers to read.
Add papers to or remove them from the list as you go.
Boldly skip the equations.
Skip the parts you don't understand and read the paper as a whole.
What did the authors want to accomplish?
What are the key elements of this work's approach?
Can you, the reader, make use of this paper on your own?
What other references would you want to follow up on?
Generating sequences with recurrent neural networks (2013), A. Graves. pdf
Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. pdf
Efficient estimation of word representations in vector space (2013), T. Mikolov et al. pdf
Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. pdf
Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov pdf
GloVe: Global vectors for word representation (2014), J. Pennington et al. pdf
Convolutional neural networks for sentence classification (2014), Y. Kim pdf
A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al. pdf
Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. pdf
Sequence to sequence learning with neural networks (2014), I. Sutskever et al. pdf
Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. pdf
Neural Turing machines (2014), A. Graves et al. pdf
Memory networks (2014), J. Weston et al. pdf
Conditional random fields as recurrent neural networks (2015), S. Zheng et al. pdf
Effective approaches to attention-based neural machine translation (2015), M. Luong et al. pdf
Teaching machines to read and comprehend (2015), K. Hermann et al. pdf
Exploring the limits of language modeling (2016), R. Jozefowicz et al. pdf
Neural architectures for named entity recognition (2016), G. Lample et al. pdf