Paper

1. [Paper] Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data


2. [Paper] What Does BERT Look At? An Analysis of BERT's Attention


3. [Paper] Platypus: Quick, Cheap, and Powerful Refinement of LLMs


4. [Paper] MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning
