UROP

2. [UROP #2] SCOTT: Self-Consistent Chain-of-Thought Distillation

3. [UROP #3] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models (1)

4. [UROP #4] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models (2)

7. [UROP #7] Specializing Smaller Language Models towards Multi-Step Reasoning

8. [UROP #8] Deep Mutual Learning (1)

9. [UROP #9] Deep Mutual Learning (2)

10. [UROP #10] Dynamic Model Pruning with Feedback

11. [UROP #11] Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation (2)

12. [UROP #12] Movement Pruning: Adaptive Sparsity by Fine-Tuning

13. [UROP #13] Block Pruning For Faster Transformers

15. FedBABU: Toward Enhanced Representation for Federated Image Classification

16. Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

17. Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging

18. Complement Sparsification: Low-Overhead Model Pruning for Federated Learning
