[Paper Review] Efficient and Scalable

1.[2016 ICLR] Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding

2.[2017 ICLR] Pruning Filters for Efficient Convnets

3.[2017 CVPR][ResNeXt] Aggregated Residual Transformations for Deep Neural Networks

4.[2017 ICLR] Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

5.[2017 CVPR] Hard Mixtures of Experts for Large Scale Weakly Supervised Vision

6.[2018 CVPR] BlockDrop: Dynamic Inference Paths in Residual Networks

7.[2018 CVPR] ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

8.[2019 ICLR] Slimmable Neural Networks

9.[2019 ICCV] Universally Slimmable Networks and Improved Training Techniques

10.[2019 NeurIPS] CondConv: Conditionally Parameterized Convolutions for Efficient Inference

11.[2020 CVPR] Resolution Adaptive Networks for Efficient Inference

12.[2020 ICLR] AutoSlim: Towards One-Shot Architecture Search for Channel Numbers

13.[2020 CVPR] EfficientDet: Scalable and Efficient Object Detection

14.[2020 PMLR] Deep Mixture of Experts via Shallow Embedding

15.[2021 NeurIPS] Scaling Vision with Sparse Mixture of Experts

16.[2021 NeurIPS] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification

17.[2022 CVPR][Simple Review] Restormer: Efficient Transformer for High-Resolution Image Restoration

18.[2022 ICLR] cosFormer: Rethinking Softmax in Attention

19.[2022 NeurIPS] EfficientFormer: Vision Transformers at MobileNet Speed

20.[2022 CVPR][Discontinued] A-ViT: Adaptive Tokens for Efficient Vision Transformer

21.[2022 ICLR][EViT] Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations

22.[2022 JMLR][Simple Review] Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity

23.[2023 CVPR] DynamicDet: A Unified Dynamic Architecture for Object Detection

24.[2023 CVPR] Three Guidelines You Should Know for Universally Slimmable Self-Supervised Learning

25.[2023 ICCV] EfficientViT: Lightweight Multi-Scale Attention for High-Resolution Dense Prediction

26.Linear Attention, as Understood from Reading EfficientViT (2023 ICCV) and cosFormer (2022 ICLR)

27.[2023 ICCV][Discontinued] Robust Mixture-of-Expert Training for Convolutional Neural Networks

28.[2024 NeurIPS] Adaptive Depth Networks with Skippable Sub-Paths

29.[2024 AICAS] An Efficient and Fast Filter Pruning Method for Object Detection in Embedded Systems

30.[2024 WACV][Simple Review][SViT] Revisiting Token Pruning for Object Detection and Instance Segmentation
