Language Model: ELMo, GPT

Ko Hyejung · December 3, 2021

2021 SKT AI


ELMo

LM pre-training using a deep bidirectional LSTM (2 layers)
Contextualized word embeddings from a learned linear combination of the hidden states (see the sketch after this list)
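A minimal PyTorch sketch of the ELMo-style combination: softmax-normalized scalar weights and a global scale γ mix the per-layer hidden states into one contextualized embedding. The module name, tensor shapes, and dimensions here are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """ELMo-style mix: softmax-normalized scalars s_j and a scale gamma
    combine per-layer hidden states into one contextual embedding."""
    def __init__(self, num_layers):
        super().__init__()
        self.scalars = nn.Parameter(torch.zeros(num_layers))  # s_j (learned)
        self.gamma = nn.Parameter(torch.ones(1))              # gamma (learned)

    def forward(self, layer_states):
        # layer_states: (num_layers, batch, seq, dim) stacked layer outputs
        w = torch.softmax(self.scalars, dim=0)                # weights sum to 1
        mixed = (w.view(-1, 1, 1, 1) * layer_states).sum(dim=0)
        return self.gamma * mixed                             # (batch, seq, dim)

# 2 biLSTM layers + the token embedding layer -> 3 states to mix.
states = torch.randn(3, 8, 20, 512)     # (layers, batch, seq, dim)
emb = ScalarMix(num_layers=3)(states)   # contextualized word embeddings
```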

GPT: Transformer Decoder LM

Pretrain a large 12-layer left-to-right Transformer decoder
Unidirectional (forward) LM (see the sketch below)
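A minimal sketch of a left-to-right decoder LM, assuming PyTorch: a causal mask keeps every position from attending to tokens on its right, so training is pure forward next-token prediction. The class name `TinyGPT` and all hyperparameters are illustrative, not GPT's actual configuration; self-attention-only encoder layers plus a causal mask stand in for a decoder without cross-attention.

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    """Illustrative left-to-right Transformer LM (not GPT's real config)."""
    def __init__(self, vocab_size=10000, dim=256, n_heads=4,
                 n_layers=12, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.pos = nn.Embedding(max_len, dim)
        block = nn.TransformerEncoderLayer(dim, n_heads, 4 * dim,
                                           batch_first=True)
        # Self-attention-only layers + a causal mask act as a decoder
        # without cross-attention.
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.lm_head = nn.Linear(dim, vocab_size)

    def hidden(self, ids):  # ids: (batch, seq) token indices
        seq = ids.size(1)
        x = self.tok(ids) + self.pos(torch.arange(seq, device=ids.device))
        # Upper-triangular -inf mask: position i attends only to <= i.
        mask = nn.Transformer.generate_square_subsequent_mask(seq).to(ids.device)
        return self.blocks(x, mask=mask)          # (batch, seq, dim)

    def forward(self, ids):
        return self.lm_head(self.hidden(ids))     # next-token logits

logits = TinyGPT()(torch.randint(0, 10000, (2, 16)))  # (2, 16, 10000)
```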

GPT: Supervised Fine-Tuning

Supervised fine-tuning for each NLP task (classification, similarity, multiple choice)
Sentence representation taken from the last token's output of the final Transformer layer (sketch below)
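Reusing the hypothetical `TinyGPT` above, a sketch of the fine-tuning setup: the final layer's hidden state at the last token serves as the sentence representation, and a small task-specific linear head maps it to class logits. `GPTClassifier` and its dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GPTClassifier(nn.Module):
    """Fine-tuning head: last-token output of the final layer -> task logits."""
    def __init__(self, backbone, dim=256, num_classes=2):
        super().__init__()
        self.backbone = backbone                 # pretrained LM (TinyGPT above)
        self.head = nn.Linear(dim, num_classes)  # task-specific linear layer

    def forward(self, ids):                      # ids: (batch, seq)
        h = self.backbone.hidden(ids)            # (batch, seq, dim)
        sent = h[:, -1, :]                       # last token's final-layer state
        return self.head(sent)                   # (batch, num_classes)

clf = GPTClassifier(TinyGPT())                   # reuses the sketch above
logits = clf(torch.randint(0, 10000, (4, 16)))   # (4, 2) task logits
```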
