๐ŸŒ’ Don't be a knew-it-all, Be a Learn-it-all

AI์™€ ์‚ด์•„๊ฐ€๊ธฐ

๐Ÿ”— AI๊ฐ€ ์ธ๊ฐ„์„ ๋Œ€์ฒดํ•œ๋‹ค๊ณ ? ๐Ÿค– ๋‡Œ๊ณผํ•™์ž๊ฐ€ ๋ถ„์„ํ•œ ChatGPT์˜ ๋ชจ๋“  ๊ฒƒ'์กฐ์Šน์—ฐ์˜ ํƒ๊ตฌ์ƒํ™œ' ์ฑ„๋„์—์„œ ChatGPT์— ๋Œ€ํ•ด '์–ด๋–ป๊ฒŒ ์ž˜ ์‚ฌ์šฉํ•  ๊ฒƒ์ธ๊ฐ€?', '์•ž์œผ๋กœ ์–ด๋–ป๊ฒŒ ๋Œ€์ฒ˜ํ•ด์•ผํ•˜๋Š”๊ฐ€?'๋ผ๋Š” ์ฒ ํ•™์ ์ธ ์ ‘๊ทผ์œผ๋กœ ์ด์•ผ๊ธฐ๋ฅผ ๋‚˜๋ˆด๋Š”๋ฐ, ํ•œ๋ฒˆ์ฏค ์ƒ๊ฐํ•ด๋ด์•ผํ•  ์ฃผ์ œ๊ฐ€ ์•„๋‹

2023๋…„ 3์›” 15์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

List of Open Sourced LLM like ChatGPT

ChatGPT์™€ ์œ ์‚ฌํ•œ chat๊ธฐ๋ฐ˜์˜ ์˜คํ”ˆ์†Œ์Šค ๋ชจ๋ธ๋“ค์ด ์ตœ๊ทผ ์†์† ๊ณต๊ฐœ๋˜๊ณ  ์žˆ๋‹ค. (Dall-E ์œ ๋ฃŒ ๊ณต๊ฐœ ํ›„, Stable Diffusion์ด๋ผ๋Š” ์˜คํ”ˆ์†Œ์Šค๊ฐ€ ๋“ฑ์žฅํ•œ ๊ฒƒ๊ณผ ์œ ์‚ฌํ•œ ๋งฅ๋ฝ์ธ ๋“ฏ)์ตœ๊ทผ LLaMA 65B๋ฅผ 4๋น„ํŠธ ์–‘์žํ™”ํ•ด์„œ ๊ฐœ์ธ ๋žฉํƒ‘์—์„œ๋„ ๋Œ๋ฆฌ๋˜๋ฐ(https&

2023๋…„ 3์›” 15์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Paper Review] SetFit : Efficient Few-Shot Learning Without Prompts

LLMs such as ChatGPT, BARD, and LLaMA are versatile. With the advent of GPT models, which proved the power of in-context learning (ICL), fine-tuning methodologies that require data labeling have become inefficient in terms of time and cost…

2023๋…„ 3์›” 12์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

๐ŸคLangCon 2023๐Ÿฅ ํ›„๊ธฐ

KoELECTRA๋กœ ์œ ๋ช…ํ•˜์‹  ๋ฐ•์žฅ์›๋‹˜, EleutherAI์—์„œ ํ™œ๋™ํ•˜์‹œ๋ฉด์„œ Polyglot, olso๋“ฑ ๊ฐœ๋ฐœํ•˜์‹  ํŠœ๋‹™์˜ย  ๊ณ ํ˜„์›…๋‹˜, ํ† ์Šค์˜ ๊ณ ์„ํ˜„๋‹˜, ์ด๋ฃจ๋‹ค๋ฅผ ๊ฐœ๋ฐœ์ค‘์ด์‹  ์Šค์บํ„ฐ๋žฉ ์ด์ฃผํ™๋‹˜ ๋“ฑ NLP์—…๊ณ„์—์„œ ์˜คํ”ˆ ์†Œ์Šค ๋ฐ ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งŽ์€ ๊ธฐ์—ฌ๋ฅผ ํ•ด์ฃผ๊ณ  ๊ณ„์‹œ๋Š” ๋ถ„๋“ค์ด ์Šคํ”ผ

2023๋…„ 2์›” 19์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[MLOps] Building a RedisAI Cluster for Multi-Model Serving, Part 2 - How to Build a RedisAI Cluster?

์ง€๋‚œ ๊ธ€์—์„œ RedisAI๊ฐ€ ๋ฌด์—‡์ธ์ง€ ๊ทธ๋ฆฌ๊ณ  RedisAI์™€ FastAPI๋ฅผ ํ™œ์šฉํ•œ ๊ฐ„๋‹จํ•œ ์ถ”๋ก  ์„œ๋ฒ„๋ฅผ ๊ตฌ์„ฑํ•ด๋ณด์•˜์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์šด์˜ํ™˜๊ฒฝ์—์„œ ์–ธ์ œ ๋Š˜์–ด๋‚ ์ง€ ๋ชจ๋ฅผ(์ •๋ง ์–ธ์ œ ๋Š˜์–ด๋‚ ์ง€ ๋ชจ๋ฅธ๋‹ค๊ณ  ํ•œ๋‹ค..๐Ÿฅน) ํŠธ๋ž˜ํ”ฝ์„ ๊ฐ๋‹นํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ™•์žฅ์„ฑ์„ ๊ณ ๋ คํ•œ ์Šค์ผ€์ผ ์ธ/์•„์›ƒ์ด ๊ฐ€๋Šฅ

2022๋…„ 7์›” 1์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[MLOps] Building a RedisAI Cluster for Multi-Model Serving, Part 1 - What is RedisAI?

์ตœ๊ทผ ํŒ€์—์„œ ์ž์ฒด NLU ๋ชจ๋ธ์„ ๊ฐœ๋ฐœํ•˜๋ฉฐ Multi-model Serving์— ๋Œ€ํ•œ ์ˆ˜์š”๊ฐ€ ์ƒ๊ฒจ๋‚ฌ์Šต๋‹ˆ๋‹ค. ๊ฐ ๊ณ ๊ฐ(์—์ด์ „ํŠธ)๋งˆ๋‹ค

2022๋…„ 6์›” 28์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Basic NLP] KorQuAD 2.0 Fine-tuning with Google Cloud TPU and the KoBigBird Model

NLP ์—…๊ณ„๋ฅผ ๋ณด๊ณ  ์žˆ์ž๋ฉด ์šฐ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์•Œ๋งŒํ•œ ๋‚ด๋†“๋ผ ํ•˜๋Š” ๊ธฐ์—…๋“ค์€ ์„œ๋กœ ์•ž๋‹คํˆฌ์–ด ๊ฑฐ๋Œ€์–ธ์–ด๋ชจ๋ธ(LLM)์„ ๋ฐœํ‘œํ•˜๊ธฐ ๋ฐ”์œ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์–ผ๋งˆ ์ „ ๊ตฌ๊ธ€์—์„œ ๊ณต๊ฐœ๋œ PaLM(Pathways Language Model)์€ GPT-3(1,750์–ต๊ฐœ)๋ณด๋‹ค ์•ฝ 3๋ฐฐ๋‚˜ ํฐ ํŒŒ๋ผ๋ฏธํ„ฐ(

2022๋…„ 4์›” 19์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Basic NLP] Training SBERT with the sentence-transformers Library

Intro ์ด์ „ ํฌ์ŠคํŠธ์—์„œ ์†Œ๊ฐœํ•œ SentenceBERT๋ฅผ ์–ด๋–ป๊ฒŒ ํ•™์Šตํ•˜๋Š”์ง€ ๋…ผ๋ฌธ ๋ฐ sentence-transformers ๊ณต์‹ ๊นƒํ—™์„ ๊ธฐ์ค€์œผ๋กœ ๋ช‡ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณด๊ณ  ์–ด๋–ค ๋ฐฉ๋ฒ•์ด ๊ฐ€์žฅ ์ข‹์€ ์„ฑ๋Šฅ์„ ๋‚ด์—ˆ๋А์ง€ ์†Œ๊ฐœํ•˜๊ณ ์ž ํ•œ๋‹ค. 1. SBERT ํ•™์Šต ๋ฐ์ดํ„ฐ SBERT

2022๋…„ 2์›” 28์ผ
ยท
6๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

WSL2 ์ดˆ๊ฐ„๋‹จ ์„ค์น˜ ๋ฐ CUDA(GPU) ์„ค์ • ๋ฐฉ๋ฒ•

๊ทธ ๋™์•ˆ ์œˆ๋„์šฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ ๋งฅ์œผ๋กœ ๊ฐˆ์•„ํƒˆ๊นŒ ๊ณ ๋ฏผํ•˜๋‹ค๊ฐ€ ๋‚จ์ด์žˆ๋˜ ์ด์œ ๊ฐ€ ๋ฐ”๋กœ WSL(Windows Subsystem for Linux) ๋•Œ๋ฌธ์ด์—ˆ๋‹ค.์‚ฌ์‹ค ์œˆ๋„์šฐ์—์„œ ๊ฐœ๋ฐœ์„ ํ•œ๋‹ค๋Š” ๊ฒƒ์€ ์‹œ๊ฐ„์ , ์ •์‹ ์  ์—๋„ˆ์ง€ ์†Œ๋ชจ๊ฐ€ ํฌ๋‹ค๊ณ  ์ƒ๊ฐํ–ˆ๋‹ค. ์œˆ๋„์šฐ์—์„œ ๊ฐœ๋ฐœ ํ›„ ๊ฐœ๋ฐœ์„œ๋ฒ„์— ํ…Œ์ŠคํŠธ ๋ฐ

2021๋…„ 12์›” 5์ผ
ยท
2๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Paper Review] Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

Intro ๋ฌธ์žฅ ๊ฐ„(ํ˜น์€ ๋ฌธ์„œ ๊ฐ„) ์œ ์‚ฌ๋„ ๋ถ„์„์—์„œ ์ข‹์€ ์„ฑ๋Šฅ์„ ๋‚ด๊ณ  ์žˆ๋Š” Sentence-BERT์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๋ ค๊ณ  ํ•œ๋‹ค. ๋…ผ๋ฌธ ์›์ œ๋Š” Sentence-BERT: Sentence Embedding using Siamese BERT-Networks์ด๋ฉฐ, ์ตœ๊ทผ ์„ฑ๋Šฅ์ด

2021๋…„ 10์›” 10์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Basic NLP] Porting My Model to HuggingFace

์ง€๋‚œ ํฌ์ŠคํŠธ(Transformers์™€ Tensorflow๋ฅผ ํ™œ์šฉํ•œ BERT Fine-tuning)์— ์ด์–ด, ์ด๋ฒˆ์—๋Š” HuggingFace Model Hub์— ํ•™์Šต๋œ ๋ชจ๋ธ์„ ํฌํŒ…ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์†Œ๊ฐœํ•˜๊ณ ์ž ํ•œ๋‹ค.HuggingFace Model Hub๋Š” ์ฝ”๋“œ ๊ณต์œ  ์ €์žฅ์†Œ์ธ gi

2021๋…„ 8์›” 7์ผ
ยท
3๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Basic NLP] BERT Fine-tuning with Transformers and TensorFlow

์ด๋ฒˆ ํฌ์ŠคํŠธ์—์„œ๋Š” ๐Ÿค—HuggingFace์˜ Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ Tensorflow๋ฅผ ํ†ตํ•ด ์‚ฌ์ „ ํ•™์Šต๋œ BERT๋ชจ๋ธ์„ Fine-tuningํ•˜์—ฌ Multi-Class Text Classification์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ณ ์ž ํ•œ๋‹ค. ํŠนํžˆ ์ด๋ฒˆ

2021๋…„ 8์›” 6์ผ
ยท
1๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

Docker ์„ค์น˜ ๋ฐ ๊ธฐ๋ณธ ๋ช…๋ น์–ด(commands)

Start the Docker service: sudo service docker start. List running containers: docker ps. List all containers, including stopped ones: docker ps -a. Remove a container: docker rm [container id]. Remove multiple containers: docker rm [container…

2021๋…„ 7์›” 18์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

What Is Apache Kafka?

๊ธฐ์กด ๋งํฌ๋“œ์ธ์˜ ๋ฐ์ดํ„ฐ ์ฒ˜๋ฆฌ ์‹œ์Šคํ…œ์€ ๊ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ํŒŒํŽธํ™”๋˜๊ณ  ์‹œ์Šคํ…œ ๋ณต์žก๋„๊ฐ€ ๋†’์•„ ์ƒˆ๋กœ์šด ์‹œ์Šคํ…œ์„ ํ™•์žฅํ•˜๊ธฐ ์–ด๋ ค์šด ์ƒํ™ฉ์ด์˜€์Œ๊ธฐ์กด ๋ฉ”์‹œ์ง• ํ ์‹œ์Šคํ…œ์ธ ActiveMQ๋ฅผ ์‚ฌ์šฉํ–ˆ์ง€๋งŒ, ๋งํฌ๋“œ์ธ์˜ ์ˆ˜๋งŽ์€ ํŠธ๋ž˜ํ”ฝ๊ณผ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ์—๋Š” ํ•œ๊ณ„๊ฐ€ ์žˆ์—ˆ์Œ์ด๋กœ ์ธํ•ด ์ƒˆ๋กœ์šด ์‹œ์Šคํ…œ์˜

2021๋…„ 7์›” 18์ผ
ยท
3๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Basic NLP] Transformer (Attention Is All You Need)

Intro์ง€๋‚œ ํฌ์ŠคํŠธ์ธ Sequence-to-Sequence with Attention์—์„œ sequence-to-sequence ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ RNN ๊ณ„์—ด์˜ ์ˆœํ™˜ ์‹ ๊ฒฝ๋ง์„ ์‚ฌ์šฉํ•จ์œผ๋กœ ์ธํ•ด ์ž…๋ ฅ ์‹œํ€€์Šค๊ฐ€ ๊ธธ์–ด์งˆ ์ˆ˜ ๋ก ํ•˜๋‚˜์˜ Context Vector์— ๋ชจ๋“  ์ •๋ณด๋ฅผ ๋‹ด๊ธฐ

2021๋…„ 7์›” 18์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Basic NLP] Sequence-to-Sequence with Attention

Intro์ตœ๊ทผ ๋ช‡ ๋…„๊ฐ„ Transformer ๋ชจ๋ธ์˜ ๋“ฑ์žฅ ์ดํ›„ BERT, GPT, RoBERTa, XLNet, ELECTRA, BART ๋“ฑ๊ณผ ๊ฐ™์€ ์–ธ์–ด ๋ชจ๋ธ(Language Model)์ด ๋งคํ•ด ์ƒˆ๋กœ์šด SOTA๋ฅผ ๋‹ฌ์„ฑํ•˜๋ฉฐ ๋“ฑ์žฅํ•˜๊ณ  ์žˆ๋‹ค. ํŠนํžˆ ์–ธ์–ด๋ชจ๋ธ์˜ ๊ฒฝ์šฐ self-s

2021๋…„ 7์›” 18์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

[Paper Review] PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

Intro์ตœ๊ทผ NLP์˜ downstream tasks ์ค‘ ํ•˜๋‚˜์ธ Summarization๋ถ„์•ผ์— "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization"์ด๋ผ๋Š” ์ƒˆ๋กœ์šด ๋…ผ๋ฌธ(๋ฉ‹์ง„ ์ด

2021๋…„ 7์›” 18์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

Basic Object-Detection

Intro: Notes taken for study purposes while following Inflearn's Deep Learning Computer Vision Complete Guide course. Classification: determine only what the object in the image is, without considering location. Localization: identify the object and a single obj…

2021๋…„ 7์›” 18์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

LSTM Autoencoder for Anomaly Detection

Intro์ง€๋‚œ ํฌ์ŠคํŒ…(Autoencoder์™€ LSTM Autoencoder)์— ์ด์–ด LSTM Autoencoder๋ฅผ ํ†ตํ•ด Anomaly Detectionํ•˜๋Š” ๋ฐฉ์•ˆ์— ๋Œ€ํ•ด ์†Œ๊ฐœํ•˜๊ณ ์ž ํ•œ๋‹ค. Autoencoder์˜ ๊ฒฝ์šฐ ๋ณดํ†ต ์ด๋ฏธ์ง€์˜ ์ƒ์„ฑ์ด๋‚˜ ๋ณต์›์— ๋งŽ์ด ์‚ฌ์šฉ๋˜๋ฉฐ ์ด๋Ÿฌํ•œ

2021๋…„ 7์›” 18์ผ
ยท
4๊ฐœ์˜ ๋Œ“๊ธ€
ยท
post-thumbnail

Autoencoder์™€ LSTM Autoencoder

Intro๋Œ€ํ‘œ์ ์ธ ์ž๊ธฐ ์ง€๋„ ํ•™์Šต์ธ Autoencoder์™€ Autoencoder์— LSTM cell์„ ์ ์šฉํ•ด ์‹œํ€€์Šค ํ•™์Šต์ด ๊ฐ€๋Šฅํ•œ LSTM Autoencoder์— ๋Œ€ํ•ด ์†Œ๊ฐœํ•œ๋‹ค. ์ดํ›„ ๋‹ค์Œ ํฌ์ŠคํŒ…์—๋Š” LSTM Autoencoder๋ฅผ ํ†ตํ•ด ๋ฏธ๋ž˜์— ๋ฐœ์ƒ ํ•  ๊ณ ์žฅ์ด๋‚˜ ์ด์ƒ์‹ 

2021๋…„ 7์›” 18์ผ
ยท
0๊ฐœ์˜ ๋Œ“๊ธ€
ยท