[Paper Review] Model Compression - Knowledge Distillation

1. Learning Efficient Object Detection Models with Knowledge Distillation

2. [Simple Review] MSSD: multi-scale self-distillation for object detection

3. [Discontinued] LGD: Label-guided Self-distillation for Object Detection

4. Smooth and Stepwise Self-Distillation for Object Detection

5. [Discontinued] PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient

6. CrossKD: Cross-Head Knowledge Distillation for Object Detection

7. $D^3$ETR: Decoder Distillation for Detection Transformer

8. DETRDistill: A Universal Knowledge Distillation Framework for DETR-families

9. KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling

10. [Simple Review] Knowledge distillation: A good teacher is patient and consistent

11. [Discontinued] Residual error based knowledge distillation
