1) Deep CNNs are limited by minimal training data; the paper addresses this issue.
New 3-phase model for decision support: CNN transfer learning (learning mechanism), data augmentation (data pre-processing), hyperparameter optimization (learning mechanism).
1) Data augmentation
AlexNet, VGG16, CNN, and MLP models (check which models were actually used)
2) Optimal combination of parameters (HPO)
A random-based search strategy
Introduction
1) On large-scale datasets such as ImageNet: dropout, deep CNNs
Drawbacks of CNNs
(1) Learning the weight parameters requires labeled training samples
1) Optimal classification results of hyperparameters for different models: AlexNet, VGG16, CNN, and multilayer perceptron (MLP).
2) Data augmentation technique to enrich the training datasets
3) MPII human pose dataset
MPII human pose dataset
25,000 images from online videos
Dataset split (75%, 15%, 10%)
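The split above can be sketched as a simple shuffle-and-partition. This is a minimal sketch: the 75/15/10 ratio is from the notes, but the function name, the seed, and the assumption that the order is train/validation/test are mine, not from the paper.

```python
import random

def split_dataset(items, fractions=(0.75, 0.15, 0.10), seed=0):
    """Shuffle items and split them into three partitions.

    Assumption: the 75/15/10 split is train/validation/test in that
    order; the paper's exact split procedure is not given in the notes.
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * fractions[0])
    n_val = int(len(items) * fractions[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# 25,000 samples, as in the MPII human pose dataset
train, val, test = split_dataset(range(25_000))
```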
1) Data cycles through mini-batches in each optimization iteration (the number of iterations per epoch is determined by the batch size)
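The mini-batch cycling above can be sketched as follows; the function name is an illustrative choice, not from the paper.

```python
def minibatches(data, batch_size):
    """Yield successive mini-batches of a dataset.

    One pass over all batches is one epoch; as the note above says,
    the number of optimization iterations per epoch is determined by
    the batch size.
    """
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# 100 samples with batch size 32 -> 4 iterations per epoch
batches = list(minibatches(list(range(100)), batch_size=32))
```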
Model
AlexNet

VGG16

System Architecture
1) RandomSearch hyper-parameter optimization
2) Data augmentation techniques
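A minimal sketch of the data-augmentation idea: enlarging the training set with transformed copies of existing images. Horizontal flipping is one common transform; the notes do not list which transforms the paper actually uses, so this is an assumed example.

```python
def horizontal_flip(image):
    """Flip an image (a nested list of pixel rows) left-to-right."""
    return [row[::-1] for row in image]

def augment(images):
    """Enlarge a training set by adding flipped copies.

    Assumption: horizontal flipping stands in for whatever transforms
    the paper applies; the notes do not specify them.
    """
    return images + [horizontal_flip(img) for img in images]

# A 2x2 toy "image": augmentation doubles the training set
doubled = augment([[[1, 2], [3, 4]]])
```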
Explains that hyper-parameter tuning itself has a large impact on training and performance.
HPO (hyperparameter optimization)
Critical distinction
The key impact features in network
initial learning rate, learning rate decay factor, number of hidden neurons, regularization strength.
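The random-based search strategy over these four hyper-parameters can be sketched as below. The search ranges and the toy objective are illustrative assumptions, not values from the paper.

```python
import random

def sample_config(rng):
    """Draw one configuration over the four key hyper-parameters.

    Assumption: the ranges here are illustrative guesses, not the
    search space used in the paper.
    """
    return {
        "initial_learning_rate": 10 ** rng.uniform(-5, -2),
        "lr_decay_factor": rng.uniform(0.8, 1.0),
        "hidden_neurons": rng.choice([64, 128, 256, 512]),
        "regularization_strength": 10 ** rng.uniform(-6, -2),
    }

def random_search(evaluate, n_trials=20, seed=0):
    """Random search: sample configurations independently, keep the best."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective standing in for validation accuracy: it prefers a
# learning rate near 1e-3 (purely for demonstration).
best_cfg, best_score = random_search(
    lambda c: -abs(c["initial_learning_rate"] - 1e-3), n_trials=50
)
```

In practice `evaluate` would train the MLP (or AlexNet/VGG16) with the sampled configuration and return validation accuracy.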
Model (MLP + HPO)
For the AlexNet and VGG16 models, HPO increases training accuracy and testing accuracy much more effectively than IDA.