[Deep Learning] VGG16 Transfer Learning Practice

syEON · November 2, 2023

VGG16 Transfer Learning

Data
A pre-trained VGG16 is used here to perform binary classification on images.
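
The data loading code is not shown in the post; below is a minimal sketch of how the image arrays used later (X_train_arr, X_valid_arr, X_test_arr) could be built. The names train_paths, valid_paths, and test_paths are hypothetical lists of image file paths, and applying preprocess_input here is an assumption based on the import in the next section.

import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input

def load_images(paths, target_size=(280, 280)):
    # Load each image, resize it to the model's input size, and stack into one array
    arrays = [image.img_to_array(image.load_img(p, target_size=target_size)) for p in paths]
    # Apply the VGG16-specific preprocessing (channel-wise mean subtraction, BGR ordering)
    return preprocess_input(np.array(arrays))

# train_paths / valid_paths / test_paths are hypothetical lists of image file paths
X_train_arr = load_images(train_paths)
X_valid_arr = load_images(valid_paths)
X_test_arr = load_images(test_paths)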

Loading the pre-trained VGG16 model
Set trainable=False so that the pre-trained weights are frozen and cannot be updated during training.

from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# Load VGG16 pre-trained on ImageNet, without its fully connected classifier head
base_model = VGG16(weights='imagenet', input_shape=(280, 280, 3), include_top=False)
# Freeze the convolutional base so its weights are not updated during training
base_model.trainable = False
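
This check is not in the original post, but it is a quick way to confirm that the freeze took effect:

# After base_model.trainable = False, no weight tensor should remain trainable
print(len(base_model.trainable_weights))      # 0
print(len(base_model.non_trainable_weights))  # 26 (13 conv layers x kernel + bias)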

Inspecting the loaded model

for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

Output:
0 input_13
1 block1_conv1
2 block1_conv2
3 block1_pool
4 block2_conv1
5 block2_conv2
6 block2_pool
7 block3_conv1
8 block3_conv2
9 block3_conv3
10 block3_pool
11 block4_conv1
12 block4_conv2
13 block4_conv3
14 block4_pool
15 block5_conv1
16 block5_conv2
17 block5_conv3
18 block5_pool

Modeling

Method 1

from keras.layers import Input, Flatten, Dense, Dropout, BatchNormalization
from keras.models import Model

# Attach a new classification head on top of the frozen VGG16 base
inputs = Input(shape=(280, 280, 3))
x = base_model(inputs, training=False)  # keep the frozen base in inference mode
h = Flatten()(x)
h = Dense(1024, activation='relu')(h)
h = Dropout(0.2)(h)
h = BatchNormalization()(h)
outputs = Dense(1, activation='sigmoid')(h)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X_train_arr, y_train, epochs=10, validation_data=(X_valid_arr, y_valid), callbacks=[es])
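
The es callback is used above but never defined in the post; a minimal sketch, assuming it is a standard Keras EarlyStopping callback (the monitor and patience values are assumptions):

from keras.callbacks import EarlyStopping

# Stop training when validation loss stops improving and keep the best weights
es = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)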

Method 2

# Train with data augmentation (batches come from ImageDataGenerator flows)
inputs = Input(shape=(280, 280, 3))
x = base_model(inputs, training=False)
h = Flatten()(x)
h = Dense(1024, activation='relu')(h)
h = Dropout(0.2)(h)
h = BatchNormalization()(h)
outputs = Dense(1, activation='sigmoid')(h)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(flow_dir_trainIDG, epochs=5, validation_data=flow_dir_valIDG, callbacks=[es])
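
flow_dir_trainIDG and flow_dir_valIDG are not defined in the post; a minimal sketch, assuming they are built with ImageDataGenerator.flow_from_directory. The directory paths, batch size, and augmentation settings below are all assumptions.

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input

# Augment only the training images; the validation set gets just the VGG16 preprocessing
train_idg = ImageDataGenerator(preprocessing_function=preprocess_input,
                               rotation_range=20,
                               horizontal_flip=True,
                               zoom_range=0.1)
val_idg = ImageDataGenerator(preprocessing_function=preprocess_input)

flow_dir_trainIDG = train_idg.flow_from_directory('data/train',   # hypothetical directory
                                                  target_size=(280, 280),
                                                  class_mode='binary',
                                                  batch_size=32)
flow_dir_valIDG = val_idg.flow_from_directory('data/valid',       # hypothetical directory
                                              target_size=(280, 280),
                                              class_mode='binary',
                                              batch_size=32)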

Evaluating Method 1

import numpy as np
from sklearn.metrics import f1_score, classification_report

# Convert sigmoid probabilities into class labels with a 0.5 threshold
y_pred = model.predict(X_test_arr)
y_test_pred = np.where(y_pred > 0.5, 1, 0)
print(f1_score(y_test, y_test_pred))
print(classification_report(y_test, y_test_pred))

model.evaluate(X_test_arr, y_test)

Evaluating Method 2

model.evaluate(modeling2_X_test_arr, modeling2_y_test)
