NLP Papers

1. COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining

2. HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization

3. NormFormer

4. MPNet: Masked and Permuted Pre-training for Language Understanding

5. TABLEFORMER: Robust Transformer Modeling for Table-Text Encoding

6. Contextual Representation Learning beyond Masked Language Modeling

7. ClusterFormer: Neural Clustering Attention

8. XLM-E: Cross-lingual Language Model Pre-training via ELECTRA

10. Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing

11. ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding

12. PCL: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings

13. RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

14. Query2doc: Query Expansion with Large Language Models

15. Precise Zero-Shot Dense Retrieval without Relevance Labels

16. Large Dual Encoders Are Generalizable Retrievers

17. Self-attention Does Not Need O(n^2) Memory

18. Zero-Shot & Few-Shot Open-Domain QA

19. Visual Instruction Tuning

20. DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

21. FunSearch: Making new discoveries in mathematical sciences using Large Language Models

22.Corrective Retrieval Augmented Generation
