
[Transformer] Attention Is All You Need

Attention Is All You Need, NIPS 2017

About 19 hours ago

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Sparks of Artificial General Intelligence: Early experiments with GPT-4, arXiv 2023

2 days ago

HuggingFace Deep RL Course - 5. Unity ML-Agent

Deep Reinforcement Learning - Lecture 5: Unity ML-Agent

4 days ago

HuggingFace Deep RL Course - 4. Policy Gradient

Deep Reinforcement Learning - Lecture 4: Policy Gradient

6 days ago

TempLM: Distilling Language Models into Template-Based Generators

TempLM: Distilling Language Models into Template-Based Generators, arXiv 2022

May 23, 2023

Guiding Generative Language Models for Data Augmentation in Few-Shot Text Classification

Guiding Generative Language Models for Data Augmentation in Few-Shot Text Classification, EMNLP 2022

May 23, 2023

PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks

PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks, ACL 2022

May 23, 2023

Prompt Engineering Course by DLAI - 3. Iterative Prompt Improvement

DeepLearning.AI: Prompt Engineering Course - 3. Iterative Prompt Improvement

May 22, 2023

HuggingFace Deep RL Course - 3. Deep Q-Learning

Deep Reinforcement Learning - Lecture 3: Deep Q-Learning

May 22, 2023

Towards Continual Knowledge Learning of Language Models

Towards Continual Knowledge Learning of Language Models, ICLR 2022

May 22, 2023

Editing Factual Knowledge in Language Models

Editing Factual Knowledge in Language Models, EMNLP 2021

May 22, 2023

TEMPERA: Test-Time Prompt Editing via Reinforcement Learning

TEMPERA: Test-Time Prompt Editing via Reinforcement Learning, ICLR 2023

May 22, 2023

RLPROMPT: Optimizing Discrete Text Prompts with Reinforcement Learning

RLPROMPT: Optimizing Discrete Text Prompts with Reinforcement Learning, EMNLP 2022

May 18, 2023

MetaPrompting: Learning to Learn Better Prompts

MetaPrompting: Learning to Learn Better Prompts, COLING 2022

May 16, 2023

Knowledge Neurons in Pretrained Transformers

Knowledge Neurons in Pretrained Transformers, ACL 2022

May 15, 2023

Mind the Gap: Assessing Temporal Generalization in Neural Language Models

Mind the Gap: Assessing Temporal Generalization in Neural Language Models, NeurIPS 2021 Spotlight

May 15, 2023

PromptGen: Automatically Generate Prompts using Generative Models

PromptGen: Automatically Generate Prompts using Generative Models, NAACL 2023

May 12, 2023

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning, NeurIPS 2022

May 12, 2023

Generative Prompt Tuning for Relation Classification

Generative Prompt Tuning for Relation Classification, EMNLP 2022

May 12, 2023

Large Language Models are Human-Level Prompt Engineers

Large Language Models are Human-Level Prompt Engineers, ICLR 2023

May 10, 2023