TIL

1. [NLP | TIL] Negative Sampling, Hierarchical Softmax, Distributed Representation, and n-grams

2. [NLP | TIL] Word2Vec (CBOW, Skip-gram)

3. [ML | TIL] A Look at Label Smoothing (feat. the paper "When Does Label Smoothing Help?")

4. GloVe Concepts Explained (feat. the paper "GloVe: Global Vectors for Word Representation")

5. [TIL] What Is Data Manifold Learning?

6. [TIL] Language Models (feat. the Markov Assumption)

7. [TIL] Auto-regressive Models (feat. NADE, PixelRNN)

8. Transfer Learning and Fine-Tuning

9. Automatic Mixed Precision

10. What Are Self-Training Methods?

11. Transfer Learning and Fine-Tuning

12. Cross-Encoders and Bi-Encoders (feat. SentenceBERT)

13. How to Implement Dynamic Masking (feat. RoBERTa)

14. [TIL] What Is Inductive Bias?

15. [TIL] Focal Loss and Class-Balanced Loss

16. [TIL] Contrastive Learning
