(https://arxiv.org/pdf/1409.1556v6)
Paper code implementation on GitHub
### 1. Title
VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION

Table 1: ConvNet configurations (shown in columns). The depth of the configurations increases from the left (A) to the right (E), as more layers are added (the added layers are shown in bold). The convolutional layer parameters are denoted as “conv⟨receptive field size⟩-⟨number of channels⟩”. The ReLU activation function is not shown for brevity.
Table 1. Each column of the table is one ConvNet configuration.
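The depth counts in Table 1 can be reproduced with a small sketch. This is not the paper's code; the list-of-channels encoding (with `'M'` marking a max-pool) is a common implementation convention I am assuming here, while the channel sequences themselves follow configurations A and E from the table:

```python
# A minimal sketch (not the paper's code): counting weight layers for
# configurations A and E from Table 1. Integers are conv output channels;
# 'M' marks a 2x2 max-pool, which has no learnable weights.
CFG_A = [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M']
CFG_E = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M',
         512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M']

def weight_layers(cfg, num_fc=3):
    # Conv layers in the config plus the three fully connected layers
    # that close every VGG configuration.
    convs = sum(1 for v in cfg if v != 'M')
    return convs + num_fc

print(weight_layers(CFG_A))  # 11 -> configuration A is VGG-11
print(weight_layers(CFG_E))  # 19 -> configuration E is VGG-19
```

Pooling layers carry no weights, which is why only conv and FC layers count toward the "11 to 19 weight layers" the paper reports.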

Table 7: Comparison with the state of the art in ILSVRC classification. Our method is denoted as “VGG”. Only the results obtained without outside training data are reported.
Table 7. Comparison with state-of-the-art models on ILSVRC classification (the paper's model is denoted "VGG").
In this paper, we address another important aspect of ConvNet architecture design – its depth. To this end, we fix other parameters of the architecture, and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convolution filters in all layers.
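The reason very small 3 × 3 filters make extra depth feasible can be checked with a quick parameter count: two stacked 3 × 3 conv layers cover the same 5 × 5 receptive field as a single 5 × 5 layer, but with fewer weights (and an extra non-linearity in between). A minimal sketch, with `C = 64` as an arbitrary illustrative channel count and biases ignored:

```python
# Weight count of a conv layer: kernel * kernel * C_in * C_out (no bias).
def conv_weights(kernel, c_in, c_out):
    return kernel * kernel * c_in * c_out

C = 64  # arbitrary channel count for illustration
two_3x3 = 2 * conv_weights(3, C, C)  # 2 * 9 * C^2 = 18 * C^2
one_5x5 = conv_weights(5, C, C)      # 25 * C^2

print(two_3x3, one_5x5)  # 73728 102400 -> stacking 3x3 layers is cheaper
```

The same arithmetic gives 27C² for three stacked 3 × 3 layers versus 49C² for one 7 × 7 layer, so depth comes with a parameter saving rather than a cost.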
In this work we evaluated very deep convolutional networks (up to 19 weight layers) for large- scale image classification. It was demonstrated that the representation depth is beneficial for the classification accuracy, and that state-of-the-art performance on the ImageNet challenge dataset can be achieved using a conventional ConvNet architecture (LeCun et al., 1989; Krizhevsky et al., 2012) with substantially increased depth. In the appendix, we also show that our models generalise well to a wide range of tasks and datasets, matching or outperforming more complex recognition pipelines built around less deep image representations. Our results yet again confirm the importance of depth in visual representations.
### 2. Understanding
1. What were the authors trying to achieve?
Show that a network built by stacking small convolutional filters deeply can still achieve strong performance.
2. What is the key element of this work's approach?
Stacking small (3 × 3) convolutional filters as deep as possible, layer by layer.