UROP

2.[UROP #2] SCOTT: Self-Consistent Chain-of-Thought Distillation

3.[UROP #3] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models (1)

4.[UROP #4] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models (2)

7.[UROP #7] Specializing Smaller Language Models towards Multi-Step Reasoning

8.[UROP #8] Deep Mutual Learning (1)

9.[UROP #9] Deep Mutual Learning (2)

10.[UROP #10] Dynamic Model Pruning with Feedback

11.[UROP #11] Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation (2)

12.[UROP #12] Movement Pruning: Adaptive Sparsity by Fine-Tuning

13.[UROP #13] Block Pruning For Faster Transformers
