Exciting areas in NLP (2022)
To be updated regularly
- Evaluating and improving models for something other than accuracy
- Doing empirical work looking at what large pre-trained models have learned
- Working out how to get knowledge and good performance out of large models for particular tasks without much task-specific data (e.g., transfer learning)
- Looking at the bias, trustworthiness, and explainability of large models
- Working on how to augment the data for models to improve performance
- Looking at low resource languages or problems
- Improving performance on the long tail of rare phenomena, which also helps address bias
- Scaling models up and down
  - Building really BIG models (e.g., GPT-3)
  - Building small, performant models, which is still a big area
- Looking to achieve more advanced functionalities
  - compositionality, systematic generalization, and fast learning (e.g., meta-learning) from smaller problems and smaller amounts of data
  - Measuring sample efficiency (e.g., BabyAI)
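The first bullet (evaluating for something other than accuracy) can be made concrete with a toy sketch: on class-skewed data, a majority-class predictor scores high accuracy but poor macro-F1, which weights rare classes equally. Everything here (function name, toy labels) is illustrative, not from any particular benchmark:

```python
def macro_f1(gold, pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores.

    Unlike plain accuracy, this treats rare classes as equally
    important, so it is a better lens on long-tail performance.
    """
    classes = set(gold) | set(pred)
    f1s = []
    for c in classes:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# A classifier that always predicts the majority class looks good on
# accuracy for skewed data, but its macro-F1 exposes the failure.
gold = ["a", "a", "a", "a", "b"]
pred = ["a", "a", "a", "a", "a"]
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
```

Here `accuracy` is 0.8 even though class "b" is never predicted; macro-F1 drops to about 0.44.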
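The data-augmentation bullet can be sketched with one of the simplest text augmentations: random word deletion plus adjacent-word swaps, producing noisy paraphrase-like variants for training. The function name and parameters are hypothetical, a minimal sketch rather than any library's API:

```python
import random

def augment(sentence, p_delete=0.1, n_swaps=1, seed=None):
    """Produce a perturbed variant of a sentence by randomly deleting
    words and swapping adjacent pairs (simple text data augmentation)."""
    rng = random.Random(seed)
    words = sentence.split()
    # Drop each word with probability p_delete, keeping at least one word.
    kept = [w for w in words if rng.random() > p_delete] or [rng.choice(words)]
    # Swap a few random adjacent pairs to perturb word order.
    for _ in range(n_swaps):
        if len(kept) > 1:
            i = rng.randrange(len(kept) - 1)
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
    return " ".join(kept)

original = "data augmentation can improve low resource performance"
variants = [augment(original, seed=s) for s in range(3)]
```

In practice each variant would be added to the training set with the original's label; stronger variants of this idea include synonym replacement and back-translation.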