😄 [Day 6] : EXPLORATION 01. Handwritten Digit Dataset

백건 · January 21, 2022

The MNIST Handwritten Digit Dataset

MNIST Dataset

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import os


print(tf.__version__)   # print the TensorFlow version

mnist = keras.datasets.mnist  # load the MNIST dataset; it is downloaded automatically if not already present
(x_train, y_train), (x_test, y_test) = mnist.load_data()   

print(len(x_train))  # print the number of samples in x_train
2.6.0
60000
  • x contains the image data
  • y contains the answer (label) for each image -> y_train[0]

When you want to check the problem (the input)

  • If the problem is an image

plt.imshow(x_train[0],cmap=plt.cm.binary)
plt.show()

plt.imshow(x_train[1],cmap=plt.cm.binary)
plt.show()

When you want to check the answer (the label)

print(y_train[0])
5

When you want to see the problem and the answer together.

index=10050     
plt.imshow(x_train[index],cmap=plt.cm.binary)
plt.show()
print('Problem: image', (index+1), '  Answer: digit', y_train[index])

Problem: image 10051   Answer: digit 7

When you want to enter a number yourself and check it

x = int(input('Enter a number between 1 and 60000 -> '))
index = x - 1    # convert to a 0-based index
plt.imshow(x_train[index],cmap=plt.cm.binary)
plt.show()
print('Problem: image', (index+1), '  Answer: digit', y_train[index])
Enter a number between 1 and 60000 -> 2

Problem: image 2   Answer: digit 0

Let's check the size and shape of the training data (number of images, pixels)

print(x_train.shape)
(60000, 28, 28)
# 60,000 images, each 28x28 pixels

Let's check the test data too

print(x_test.shape)
(10000, 28, 28)
# 10,000 images, each 28x28 pixels

Let's check the raw pixel values of the stored images.

print('min value:',np.min(x_train), ' max value:',np.max(x_train))
min value: 0  max value: 255

Normalization: for training, it is best to normalize the input values to the 0~1 range. Why? Keeping the inputs on a small, consistent scale makes gradient-based training more stable and helps it converge faster.

x_train_norm, x_test_norm = x_train / 255.0, x_test / 255.0
print('min value:',np.min(x_train_norm), ' max value:',np.max(x_train_norm))
min value: 0.0  max value: 1.0

Now that the data is ready, let's design a deep learning network.

Let's use the Sequential API from TensorFlow Keras (tf.keras)

  • The Sequential API offers much less flexibility, but it is a very simple way to build a deep learning model (a comparison sketch with the Functional API follows below)
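
For comparison, here is a minimal sketch (not from the original lesson; purely illustrative) of the same kind of small CNN written with the Keras Functional API, which gives up some of that simplicity in exchange for flexibility such as branching or multiple inputs and outputs:

# Hypothetical Functional API version of a small CNN (illustrative only).
inputs = keras.Input(shape=(28, 28, 1))
x = keras.layers.Conv2D(16, (3,3), activation='relu')(inputs)
x = keras.layers.MaxPooling2D((2,2))(x)
x = keras.layers.Conv2D(32, (3,3), activation='relu')(x)
x = keras.layers.MaxPooling2D((2,2))(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(32, activation='relu')(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs)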

An example of using tf.keras' Sequential API to design a LeNet-style deep learning network

model=keras.models.Sequential()
model.add(keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(28,28,1)))  # 16 convolution filters
model.add(keras.layers.MaxPooling2D((2,2)))                                          # downsample by a factor of 2
model.add(keras.layers.Conv2D(32, (3,3), activation='relu'))                         # 32 convolution filters
model.add(keras.layers.MaxPooling2D((2,2)))
model.add(keras.layers.Flatten())                                                    # flatten to a 1-D vector
model.add(keras.layers.Dense(32, activation='relu'))                                 # fully connected classifier with 32 neurons
model.add(keras.layers.Dense(10, activation='softmax'))                              # 10 output classes (digits 0-9)

print('Number of layers added to the model: ', len(model.layers))
Number of layers added to the model:  7

  • The first argument of a Conv2D layer is the number of image features (filters) it uses. Here we used 16 and 32, meaning the first layer considers 16 image features and the next one 32. Our digit images are actually very simple; if the input were photos of puppy faces, the images would be far more detailed and complex, and in that case you could consider increasing these feature counts.
  • The first argument of a Dense layer is the number of neurons used in the classifier. The larger this value, the more complex a classifier you can build. If you wanted to recognize alphabet letters instead of 10 digits, you would need to classify 52 classes in total (26 uppercase and 26 lowercase), so values larger than 32, such as 64 or 128, could be considered.
  • The number of neurons in the last Dense layer should simply be the number of classes the model has to distinguish: 10 for the digit recognizer, 52 for an alphabet recognizer (see the sketch after this list).
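
As a purely hypothetical sketch (not part of the original exercise), an alphabet recognizer following that reasoning might look like this, with more filters, a larger Dense layer, and 52 output classes; the values 32/64/128 are only illustrative choices:

alphabet_model = keras.models.Sequential()
alphabet_model.add(keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)))  # more filters for more detailed images
alphabet_model.add(keras.layers.MaxPooling2D((2,2)))
alphabet_model.add(keras.layers.Conv2D(64, (3,3), activation='relu'))
alphabet_model.add(keras.layers.MaxPooling2D((2,2)))
alphabet_model.add(keras.layers.Flatten())
alphabet_model.add(keras.layers.Dense(128, activation='relu'))    # larger classifier
alphabet_model.add(keras.layers.Dense(52, activation='softmax'))  # 26 uppercase + 26 lowercase letters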

Now that the deep learning network model is built, let's inspect it

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 26, 26, 16)        160       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 11, 11, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 32)          0         
_________________________________________________________________
flatten (Flatten)            (None, 800)               0         
_________________________________________________________________
dense (Dense)                (None, 32)                25632     
_________________________________________________________________
dense_1 (Dense)              (None, 10)                330       
=================================================================
Total params: 30,762
Trainable params: 30,762
Non-trainable params: 0
_________________________________________________________________

This is our deep learning network model.
The input has the shape (number of samples, image height, image width, number of channels).
That is what input_shape=(28,28,1) specifies.
However, print(x_train.shape) returns (60000, 28, 28), with no channel dimension.

So we need to add the channel dimension -> the channel refers to the color composition: grayscale = 1, color (RGB) = 3

print("Before Reshape - x_train_norm shape: {}".format(x_train_norm.shape))
print("Before Reshape - x_test_norm shape: {}".format(x_test_norm.shape))

x_train_reshaped=x_train_norm.reshape( -1, 28, 28, 1)  # using -1 for the number of samples lets reshape infer it automatically
x_test_reshaped=x_test_norm.reshape( -1, 28, 28, 1)

print("After Reshape - x_train_reshaped shape: {}".format(x_train_reshaped.shape))
print("After Reshape - x_test_reshaped shape: {}".format(x_test_reshaped.shape))
Before Reshape - x_train_norm shape: (60000, 28, 28)
Before Reshape - x_test_norm shape: (10000, 28, 28)
After Reshape - x_train_reshaped shape: (60000, 28, 28, 1)
After Reshape - x_test_reshaped shape: (10000, 28, 28, 1)
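
The same channel axis can also be appended with np.expand_dims; a tiny equivalent sketch reusing x_train_norm from above:

x_train_expanded = np.expand_dims(x_train_norm, axis=-1)  # append a channel axis at the end
print(x_train_expanded.shape)  # (60000, 28, 28, 1)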

Let's train the deep learning network with the model we built

We will train the network on the x_train training data.

  • Here epochs=10 means the entire set of 60,000 samples is passed through the network 10 times.
  • The input has to match the shape the model expects, so we train on x_train_reshaped, which includes the channel dimension.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # labels are plain integers (0-9), so use the sparse variant
              metrics=['accuracy'])

model.fit(x_train_reshaped, y_train, epochs=10)
Epoch 1/10
1875/1875 [==============================] - 10s 3ms/step - loss: 0.2158 - accuracy: 0.9334
Epoch 2/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0682 - accuracy: 0.9793
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0496 - accuracy: 0.9851
Epoch 4/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0388 - accuracy: 0.9878
Epoch 5/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0326 - accuracy: 0.9899
Epoch 6/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0270 - accuracy: 0.9913
Epoch 7/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0223 - accuracy: 0.9930
Epoch 8/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0201 - accuracy: 0.9934
Epoch 9/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0159 - accuracy: 0.9950
Epoch 10/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0138 - accuracy: 0.9958





<keras.callbacks.History at 0x7fa094fbfb50>
  • As training proceeds, you can see how much the recognition accuracy rises with each epoch (the 1875 steps per epoch come from 60,000 samples divided by the default batch size of 32).
  • Keep training until the accuracy gains become negligible; see the sketch below for one way to stop automatically.
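
One common way to automate that is an EarlyStopping callback; a minimal sketch with illustrative patience and min_delta values, monitoring the training loss since this notebook does not use a validation split:

# Illustrative only: stop training once the loss barely improves for 2 epochs in a row.
early_stop = keras.callbacks.EarlyStopping(monitor='loss', patience=2, min_delta=1e-3)
model.fit(x_train_reshaped, y_train, epochs=50, callbacks=[early_stop])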

Now that the model is trained, let's check its performance

That is what the test data is for

  • Let's check with x_test
test_loss, test_accuracy = model.evaluate(x_test_reshaped,y_test, verbose=2)
print("test_loss: {} ".format(test_loss))
print("test_accuracy: {}".format(test_accuracy))
313/313 - 1s - loss: 0.0394 - accuracy: 0.9906
test_loss: 0.03935988247394562 
test_accuracy: 0.9905999898910522
  • It does not reach 100%.
  • Looking at the data, some samples were written by different people.
  • There are handwriting styles the model has never seen before.

So which samples did it infer incorrectly? Let's check

First, let's look at the ones it inferred correctly

We can use model.predict()

  • It returns a probability distribution.
  • The closer a value is to 1, the more confident the model is.
predicted_result = model.predict(x_test_reshaped)  # probabilities predicted by the model
predicted_labels = np.argmax(predicted_result, axis=1)

idx=0  # look at the 1st test sample
print('model.predict() result: ', predicted_result[idx])
print('most likely prediction by the model: ', predicted_labels[idx])
print('actual label: ', y_test[idx])
model.predict() result:  [5.0936549e-10 4.2054354e-10 1.4162788e-07 3.0713025e-07 1.0975039e-09
 4.7302101e-10 4.8337666e-17 9.9998915e-01 1.5172457e-09 1.0410251e-05]
most likely prediction by the model:  7
actual label:  7

[probability of predicting 0, probability of predicting 1, ..., probability of predicting 7, of 8, of 9]
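
To read that vector more easily, each class can be printed next to its probability; a small sketch reusing predicted_result and idx from the cell above:

# Print the probability the model assigns to each digit class.
for digit, p in enumerate(predicted_result[idx]):
    print('P(digit {}) = {:.8f}'.format(digit, p))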

predicted_result = model.predict(x_test_reshaped)  # probabilities predicted by the model
predicted_labels = np.argmax(predicted_result, axis=1)

idx=1  # look at the 2nd test sample
print('model.predict() result: ', predicted_result[idx])
print('most likely prediction by the model: ', predicted_labels[idx])
print('actual label: ', y_test[idx])
model.predict() result:  [3.8869772e-11 5.8056837e-10 1.0000000e+00 2.7700270e-13 6.0729700e-16
 9.5813839e-24 2.9645359e-13 1.0713646e-12 1.4265510e-10 7.0767866e-18]
most likely prediction by the model:  2
actual label:  2

Let's look at the actual image behind that prediction

plt.imshow(x_test[idx],cmap=plt.cm.binary)
plt.show()

Let's pick out just 5 of the wrongly predicted samples

import random
wrong_predict_list=[]
for i, _ in enumerate(predicted_labels):
    # collect only the cases where the i-th predicted label differs from y_test
    if predicted_labels[i] != y_test[i]:
        wrong_predict_list.append(i)

# randomly pick just 5 samples from wrong_predict_list
samples = random.choices(population=wrong_predict_list, k=5)

for n in samples:
    print("์˜ˆ์ธกํ™•๋ฅ ๋ถ„ํฌ: " + str(predicted_result[n]))
    print("๋ผ๋ฒจ: " + str(y_test[n]) + ", ์˜ˆ์ธก๊ฒฐ๊ณผ: " + str(predicted_labels[n]))
    plt.imshow(x_test[n], cmap=plt.cm.binary)
    plt.show()
predicted probability distribution: [9.9993050e-01 2.2502122e-12 4.7690723e-06 3.0059752e-08 1.7519780e-13
 8.0025169e-09 5.0630263e-08 6.8369168e-09 5.4815446e-05 9.7632701e-06]
label: 8, prediction: 0

predicted probability distribution: [4.3070769e-11 9.9670094e-01 2.3406226e-04 6.9325289e-08 9.2083937e-06
 7.6245671e-10 3.0557083e-03 1.7472924e-13 8.3793372e-09 3.5816929e-12]
label: 6, prediction: 1

predicted probability distribution: [2.3253349e-11 1.5969941e-05 2.2154758e-03 1.5413132e-13 9.9764663e-01
 7.1346483e-12 3.1916945e-12 1.1906381e-04 3.9855234e-08 2.7896983e-06]
label: 2, prediction: 4

predicted probability distribution: [5.2042344e-07 8.2894566e-08 4.1278052e-09 1.4097756e-06 5.7676568e-04
 5.9966849e-05 7.5762877e-08 5.5446890e-07 5.0400358e-01 4.9535707e-01]
label: 9, prediction: 8

predicted probability distribution: [6.0286757e-14 7.1217093e-10 2.9044655e-14 5.7257526e-02 3.6976814e-11
 9.4270760e-01 1.5280622e-12 1.6837074e-09 3.4841847e-05 1.6904140e-08]
label: 3, prediction: 5

Let's try to raise the recognition accuracy, keeping the deep learning network as it is

Changing the hyperparameters

  • Things we can change
    • the number of image features (filters) in the Conv2D layers
    • the number of neurons in the Dense layer
    • the number of training repetitions, i.e. the epochs value
# hyperparameters we can change
n_channel_1=16
n_channel_2=32
n_dense=32
n_train_epoch=10

model=keras.models.Sequential()
model.add(keras.layers.Conv2D(n_channel_1, (3,3), activation='relu', input_shape=(28,28,1)))
model.add(keras.layers.MaxPooling2D((2,2)))
model.add(keras.layers.Conv2D(n_channel_2, (3,3), activation='relu'))
model.add(keras.layers.MaxPooling2D((2,2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(n_dense, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))

model.summary()
model.compile(optimizer='adam',
             loss='sparse_categorical_crossentropy',
             metrics=['accuracy'])

# train the model
model.fit(x_train_reshaped, y_train, epochs=n_train_epoch)

# evaluate the model
test_loss, test_accuracy = model.evaluate(x_test_reshaped, y_test, verbose=2)
print("test_loss: {} ".format(test_loss))
print("test_accuracy: {}".format(test_accuracy))
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_2 (Conv2D)            (None, 26, 26, 16)        160       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 16)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 11, 11, 32)        4640      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 5, 5, 32)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 800)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 32)                25632     
_________________________________________________________________
dense_3 (Dense)              (None, 10)                330       
=================================================================
Total params: 30,762
Trainable params: 30,762
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1991 - accuracy: 0.9392
Epoch 2/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0726 - accuracy: 0.9779
Epoch 3/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0535 - accuracy: 0.9834
Epoch 4/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0434 - accuracy: 0.9864
Epoch 5/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0366 - accuracy: 0.9884
Epoch 6/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0311 - accuracy: 0.9903
Epoch 7/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0256 - accuracy: 0.9916
Epoch 8/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0215 - accuracy: 0.9930
Epoch 9/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0183 - accuracy: 0.9940
Epoch 10/10
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0162 - accuracy: 0.9945
313/313 - 1s - loss: 0.0400 - accuracy: 0.9881
test_loss: 0.04003562033176422 
test_accuracy: 0.988099992275238

ํŒŒ๋ผ๋ฏธํ„ฐ ๊ฐ’์„ ๋ฐ”๊ฟ” ์ตœ๊ณ ๋กœ ๋†’์€ ์ ์ˆ˜๋ฅผ ์–ป๋Š” ๋„คํŠธ์›Œํฌ ๋ชจ๋ธ ์ฝ”๋“œ์™€ ๊ทธ ๋–„์˜ ์‹œํ—˜์šฉ ๋ฐ์ดํ„ฐ ์ธ์‹๋ฅ ์„ ํ™•์ธ
