Natural Language Processing (NLP): fundamentals, hands-on practice, and paper summaries

1. [NLP] Word Embedding, Word2Vec

2. [NLP] RNN (Recurrent Neural Network): gradient vanishing/exploding and the long-term dependency problem

3. [NLP] LSTM and GRU: overcoming RNN's shortcomings with gates

4. [NLP] The Encoder∙Decoder architecture, Seq2Seq, and Seq2Seq with Attention

5. [NLP] Transformer and Self-Attention

6. [NLP] English tokenization with NLTK, spaCy, and torchtext

7. [NLP] Building a Vocab with torchtext and spaCy

8. [NLP] Building Datasets and DataLoaders for NLP

9. Summary of Formal Algorithms for Transformers

10. Huggingface 🤗 Transformers: introduction and installation

11. [Huggingface 🤗 Transformers Tutorial] 1. Pipelines for inference

12. [Huggingface 🤗 Transformers Tutorial] 2. Load pretrained instances with an AutoClass

13. [Huggingface 🤗 Transformers Tutorial] 3. Preprocess

14. [Huggingface 🤗 Transformers Tutorial] 4. Fine-tune a pretrained model

15. Notes on Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
