The paper proposes DeepArc, a framework for modularizing neural networks that extracts the semantic architecture of a DNN and facilitates tasks such as model restructuring and enhancement.
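To make the idea concrete, here is a minimal sketch of how module boundaries might be recovered: consecutive layers whose output representations stay highly similar are grouped into one semantic module. The choice of linear CKA as the similarity measure, the 0.9 threshold, and the greedy grouping below are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA similarity between two activation matrices (samples x features)."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(x.T @ y, "fro") ** 2
    return hsic / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))

def group_layers(activations: list[np.ndarray], threshold: float = 0.9) -> list[list[int]]:
    """Greedily merge consecutive layers whose representations remain similar."""
    modules = [[0]]
    for i in range(1, len(activations)):
        if linear_cka(activations[i - 1], activations[i]) >= threshold:
            modules[-1].append(i)   # still the same semantic module
        else:
            modules.append([i])     # similarity drop: start a new module
    return modules

# Demo on synthetic "activations" for a 6-layer network (hypothetical shapes).
rng = np.random.default_rng(0)
base = rng.normal(size=(128, 32))
acts = [base + rng.normal(scale=s, size=base.shape) for s in (0.0, 0.1, 0.1, 2.0, 2.0, 5.0)]
print(group_layers(acts))  # layer indices grouped into modules
```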
The modularization of network layers in DeepArc has several benefits, including preserving the model's fitting ability, robustness, and linear separability.
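One way to verify that a property such as linear separability is preserved after restructuring is to fit a linear probe on a module's output features before and after the change and compare the accuracies. The sketch below uses synthetic features and scikit-learn's LogisticRegression purely for illustration; it is not the paper's evaluation setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_separability(features: np.ndarray, labels: np.ndarray) -> float:
    """Held-out accuracy of a linear probe on a module's output features."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return probe.score(x_te, y_te)

# Hypothetical features before and after restructuring a module.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=2000)
feats_before = rng.normal(size=(2000, 64)) + 0.5 * labels[:, None]
feats_after = feats_before + rng.normal(scale=0.05, size=feats_before.shape)

# Similar probe accuracies suggest linear separability was preserved.
print(linear_separability(feats_before, labels))
print(linear_separability(feats_after, labels))
```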
The paper demonstrates that model compression, enhancement, and repair all benefit substantially from the layer-level modularity that DeepArc provides.
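As one concrete example, module-level repair or enhancement can be carried out by freezing every module except the faulty one and retraining only that module, leaving most of the network's parameters untouched. The PyTorch sketch below assumes the network has already been split into a sequence of modules; the layer sizes and the choice of module 1 as the repair target are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical network already partitioned into semantic modules.
model = nn.Sequential(
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),  # module 0
    nn.Sequential(nn.Linear(256, 256), nn.ReLU()),  # module 1: repair target
    nn.Sequential(nn.Linear(256, 10)),              # module 2
)

target = 1
for i, module in enumerate(model):
    for p in module.parameters():
        p.requires_grad = (i == target)  # only the target module stays trainable

# The optimizer sees only the target module's parameters, so a standard
# training loop over the repair data updates module 1 and nothing else.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```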
The experiments in the paper show that DeepArc significantly improves the runtime efficiency of state-of-the-art model compression techniques and enables faster retraining while maintaining comparable prediction accuracy.
The authors suggest that future work will explore whether the DeepArc framework can facilitate modular reuse.