[Deep Learning Specialization] Neural Networks and Deep Learning - Shallow neural networks

Carvin · October 29, 2020

1st course: Neural Networks and Deep Learning

🗂 Table of Contents

  • Chapter 1: Introduction to Deep Learning

  • Chapter 2: Neural Networks Basics

  • Chapter 3: Shallow neural networks

  • Chapter 4: Deep Neural Networks

3. Shallow neural networks

A shallow neural network is, as the name suggests, a network that is shallow rather than deep: it typically consists of just an input layer, a hidden layer, and an output layer.

This week works directly through how data is passed from layer to layer and node to node, and how learning proceeds, in a neural network built from these three shallow layers; it also examines the activation function applied at each node.

1. Neural Networks Overview

  • Earlier, the week 2 lecture Neural Networks Basics covered logistic regression in detail

  • The logistic regression computation is exactly what each node applies; nesting these per-node computations is how the neural network's computation graph unfolds

  • Before working through the network's computation in earnest, the course provides detailed notation conventions (deep-learning-notation)

2. Neural Network Representation

  • neural network์˜ ๋ช…์นญ ๋ฐ ๊ตฌ์กฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Œ

    • ๊ฐ€์žฅ ์ขŒ์ธก์˜ x1, x2, x3๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ๋Š” ๋ถ€๋ถ„์€ neural network์˜ ์ž…๋ ฅ๊ฐ’์œผ๋กœ Input layer๋ผ๊ณ  ํ•จ

    • ๊ฐ€์ž‘ ์šฐ์ธก์˜ ํ•˜๋‚˜์˜ ๋…ธ๋“œ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š” ๋ถ€๋ถ„์€ Output layer๋กœ, ๊ฒฐ๊ณผ์ธ y๊ฐ’์„ ์ƒ์„ฑํ•˜๋Š” ์—ญํ• ์„ ํ•จ

    • ์ค‘์•™์— ์œ„์น˜ํ•ด ์žˆ๋Š” ๋ถ€๋ถ„์€ Hidden layer๋ผ๊ณ  ํ•˜๋ฉฐ, ์ง€๋„ํ•™์Šต ๊ณผ์ • ์ค‘, ์ž…๋ ฅ๊ฐ’๊ณผ ์ถœ๋ ฅ๊ฐ’์ด ๋ช…์‹œ์ ์œผ๋กœ ์กด์žฌํ•˜๋Š” ์ƒํ™ฉ๊ณผ ์ž…๋ ฅ๊ฐ’์œผ๋กœ ์ถœ๋ ฅ๊ฐ’์„ ํ•™์Šตํ•˜๋Š” ๊ณผ์ •์—์„œ ๊ฐ’๋“ค์ด ๋ช…์‹œ์ ์œผ๋กœ ๋ณด์ด์ง€ ์•Š๋Š” ์ธต(hidden)์„ ์˜๋ฏธํ•จ

  • neural network์˜ ๊ตฌ์กฐ์— ๋Œ€ํ•œ ์ƒˆ๋กœ์šด ํ‘œ๊ธฐ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Œ

    • ๋ณดํ†ต input layer๋ฅผ ์ œ์™ธํ•œ ๋‚˜๋จธ์ง€ layer๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ์‹ ๊ฒฝ๋ง์„ ์ •์˜ํ•˜๋ฉฐ, ๋‹ค์Œ๊ฐ™์€ ๊ฒฝ์šฐ๋Š” 2-layer nerual network์ž„

    • ๊ฐ ๋…ธ๋“œ์™€ ์ธต์—์„œ๋Š” logistic regression์˜ ๊ณ„์‚ฐ ๊ณผ์ •์ด ์ ์šฉ๋˜๋Š”๋ฐ, parameter์ธ w, b์˜ ์ฐจ์›์ด hidden layer์™€ ๋…ธ๋“œ์˜ ์ˆ˜์˜ ์˜ํ•ด ๊ฒฐ์ •๋จ

    • ์ฒซ๋ฒˆ์งธ layer์—์„œ w, b๋Š” ๊ฐ๊ฐ (4, 3), (4, 1)์˜ ์ฐจ์›์„ ๊ฐ€์ง€๊ณ  ์žˆ๊ณ , ๋‘๋ฒˆ์งธ layer์—์„œ w, b๋Š” (1, 4), (1, 1)์˜ ์ฐจ์›์„ ๊ฐ€์ง€๊ณ  ์žˆ์Œ
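Those dimensions can be verified with a quick numpy sketch (the layer sizes n_x = 3, n_h = 4, n_y = 1 come from the example network; the convention is that the W of layer l has shape (nodes in layer l, nodes in layer l-1)):

```python
import numpy as np

# Layer sizes from the example network: 3 inputs, 4 hidden nodes, 1 output
n_x, n_h, n_y = 3, 4, 1

# W of layer l: (nodes in l, nodes in l-1); b of layer l: (nodes in l, 1)
W1 = np.random.randn(n_h, n_x)  # (4, 3)
b1 = np.zeros((n_h, 1))         # (4, 1)
W2 = np.random.randn(n_y, n_h)  # (1, 4)
b2 = np.zeros((n_y, 1))         # (1, 1)

print(W1.shape, b1.shape, W2.shape, b2.shape)
```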

3. Computing a Neural Network's Output

  • The path a single input example x(i) takes through the two layers to produce a result at the output layer can be organized into the following computation graph

  • This graph covers a single example x(i); to produce the output, the same process is repeated and nested for every node of the hidden layer

  • Applying the logistic regression computation at every hidden-layer node yields a z value per node, and applying the activation function to all the z values yields the 4 values of a that form the first hidden layer's output

  • This whole per-node computation for one example can, however, be reduced to just 4 lines of vectorized code
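A sketch of those 4 lines for a single column-vector example x, assuming sigmoid activations in both layers (the sigmoid helper and the random parameters here are illustrative, not from the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
n_x, n_h, n_y = 3, 4, 1
W1, b1 = rng.standard_normal((n_h, n_x)), np.zeros((n_h, 1))
W2, b2 = rng.standard_normal((n_y, n_h)), np.zeros((n_y, 1))
x = rng.standard_normal((n_x, 1))  # one example as a (3, 1) column vector

# The 4 lines: every hidden node at once, then the output node
z1 = np.dot(W1, x) + b1   # (4, 1): one z per hidden node
a1 = sigmoid(z1)          # (4, 1): the hidden layer's 4 activations
z2 = np.dot(W2, a1) + b2  # (1, 1)
a2 = sigmoid(z2)          # (1, 1): the prediction
```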

4. Vectorizing across multiple examples

  • The computation so far still handles a single example, x(i); vectorization lets the same computation be applied across many examples far more quickly

  • With m input examples, one could iterate from x(1) to x(m) and compute the result for each example in turn

  • Vectorizing instead computes everything in a single pass and is much faster; the key is lining the dimensions up correctly
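Concretely, the m examples become the columns of a matrix X of shape (n_x, m), the same 4 lines then handle every example at once, and numpy broadcasting adds b to every column. A sketch checking this against the explicit loop:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
n_x, n_h, m = 3, 4, 5
X = rng.standard_normal((n_x, m))  # column i is example x(i)
W1, b1 = rng.standard_normal((n_h, n_x)), rng.standard_normal((n_h, 1))

# One matrix product processes all m examples; b1 broadcasts across columns
Z1 = np.dot(W1, X) + b1            # (4, 5): column i is z1 for x(i)
A1 = sigmoid(Z1)                   # (4, 5)

# Same result as iterating over the examples one by one
Z1_loop = np.hstack([np.dot(W1, X[:, i:i+1]) + b1 for i in range(m)])
print(np.allclose(Z1, Z1_loop))  # True
```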

6. Activation functions

  • ๋ฐ์ดํ„ฐ๊ฐ€ ํ•™์Šต๋˜๋Š” ๊ณผ์ •์ธ ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„๋ฅผ ๋ณด๊ฒŒ๋˜๋ฉด layer์˜ ๊ฒฐ๊ณผ์ธ z์— activation function์„ ์ ์šฉํ•œ a๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Œ

  • ๋‹ค์–‘ํ•œ ์ข…๋ฅ˜์˜ activation function์ด ์กด์žฌํ•˜๋ฉฐ ๊ฐ ํ•จ์ˆ˜๋งˆ๋‹ค ๋‹ค๋ฅธ ํŠน์ง•์„ ๊ฐ€์ง€๊ณ  ์žˆ์Œ

    • sigmoid: {0 <= a <= 1}, z๊ฐ’์ด ๋„ˆ๋ฌด ์ž‘๊ฑฐ๋‚˜ ํฌ๊ฒŒ ๋˜๋ฉด, ๊ธฐ์šธ๊ธฐ(slope)๊ฐ€ 0์— ๊ฐ€๊นŒ์›Œ์ง์œผ๋กœ gradient descent๊ฐ€ ๋งค์šฐ ์ฒœ์ฒœํžˆ ์ง„ํ–‰๋˜์–ด ์„ฑ๋Šฅ์ด ์ข‹์ง€ ์•Š์•„, output layer์—์„œ ์ฃผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ํ•จ์ˆ˜์ž„

    • tanh: {-1 <= a <= 1}, ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์— sigmoid๋ณด๋‹ค ์ข‹์€ ์„ฑ๋Šฅ์„ ๋ณด์ด๋ฉฐ ๊ฐ’์˜ ๋ฒ”์œ„๊ฐ€ -1๊ณผ +1 ์‚ฌ์ด์— ์œ„์น˜ํ•˜๊ฒŒ ๋˜๋ฉด์„œ, ๋ฐ์ดํ„ฐ ํ‰๊ท ๊ฐ’์ด 0์— ๊ฐ€๊น๊ฒŒ ์œ ์ง€๋˜์–ด ๋‹ค์Œ ์ธต์—์„œ์˜ ํ•™์Šต์ด ๋ณด๋‹ค ์ˆ˜์›”ํ•จ

    • Relu: max(0, z), z๊ฐ€ 0๋ณด๋‹ค ํด ๋•Œ, ๋ณธ๋ž˜์˜ ๊ธฐ์šธ๊ธฐ๋ฅผ ๊ฐ€์ง€๋Š” ํŠน์ง•์œผ๋กœ ๋น ๋ฅด๊ฒŒ gradient descent๋กœ ํ•™์Šตํ•ด ๋‚˜๊ฐˆ ์ˆ˜ ์žˆ๊ธฐ์—, ๊ฐ€์žฅ ๋ณดํŽธ์ ์œผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ํ•จ์ˆ˜์ž„

    • Leaky ReLU: a variant of ReLU designed to avoid the performance loss that occurs where ReLU outputs 0

  • activation function์ด ์ค‘์š”ํ•œ ์ด์œ ๋Š”, ๊ธฐ์šธ๊ธฐ(slope)๋ฅผ ํ†ตํ•ด global optimum์„ ์ฐพ์•„๊ฐ€๋Š” gradient descent๊ฐ€ ๋ฐ˜๋ณต๋˜๋Š” ๊ณผ์ •์—์„œ, ๊ธฐ์šธ๊ธฐ๋ฅผ ๋ฐ˜ํ™˜ํ•ด์ฃผ๋Š” activation function์˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ, ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ ํŠน์ง•์— ๋”ฐ๋ผ ๊ฒฐ๊ณผ๊ฐ€ ๋‹ฌ๋ผ์ง€๊ธฐ ๋•Œ๋ฌธ์— ์ตœ์ ์˜ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋‹ค์–‘ํ•œ ์ ‘๊ทผ์ด ํ•„์š”ํ•จ

7. Why do you need non-linear activation functions?

  • The reason a non-linear activation function is applied at each layer is to let the network extract non-linear features while learning the mapping from the input layer to the output layer

    • Without a non-linear activation (or with a linear one), the composition of linear functions is itself just a linear function, no matter how many layers are stacked
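A quick numeric check of that claim: with a linear (identity) activation a(z) = z, two layers collapse into a single affine map Wx + b, where W = W2 W1 and b = W2 b1 + b2:

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal((4, 1))
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal((1, 1))
x = rng.standard_normal((3, 1))

# Two layers whose "activation" is the identity function
two_layer = np.dot(W2, np.dot(W1, x) + b1) + b2

# ...equal exactly one affine layer, regardless of depth
W = np.dot(W2, W1)
b = np.dot(W2, b1) + b2
one_layer = np.dot(W, x) + b

print(np.allclose(two_layer, one_layer))  # True
```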

8. Derivatives of activation functions

  • To train a neural network with backpropagation, the derivative of each activation function is needed

  • That derivative can be understood as the slope of the activation function's graph at a particular point

    • sigmoid: g'(z) = a(1 - a); the slope shrinks as a approaches 0 or 1

    • tanh: g'(z) = 1 - a^2; the slope shrinks as a approaches -1 or 1

    • ReLU: g'(z) = 0 if z < 0, 1 if z >= 0; Leaky ReLU returns 0.01 instead of 0 for z < 0
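Written in numpy, taking advantage of the fact that the sigmoid and tanh derivatives can be expressed through the activation a itself (setting g'(0) = 1 for ReLU is a convention; the derivative is technically undefined at z = 0):

```python
import numpy as np

def d_sigmoid(a):
    return a * (1 - a)             # g'(z) = a(1 - a), with a = sigmoid(z)

def d_tanh(a):
    return 1 - a ** 2              # g'(z) = 1 - a^2, with a = tanh(z)

def d_relu(z):
    return (z >= 0).astype(float)  # 0 for z < 0, 1 for z >= 0

def d_leaky_relu(z, slope=0.01):
    return np.where(z >= 0, 1.0, slope)

# The slope shrinks toward the saturated ends of sigmoid and tanh
print(d_sigmoid(np.array([0.5, 0.99])))  # largest at a = 0.5, tiny near 1
print(d_tanh(np.array([0.0, 0.95])))     # largest at a = 0, tiny near +/-1
```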

9. Gradient descent for Neural Networks

  • We can now see how gradient descent, applied iteratively to the 2-layer neural network, updates the parameters w and b

    • Backpropagation (shown on the right) starts from the gap between the forward propagation output and the true value, and uses each layer's gradients to drive the loss down

  • Expressed as code for readability, the process is as follows
  • forward propagation
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
  • backpropagation
dZ2 = A2 - Y
dW2 = np.dot(dZ2, A1.T) / m
db2 = np.sum(dZ2, axis = 1, keepdims = True) / m
dZ1 = np.multiply(np.dot(W2.T, dZ2), 1 - np.power(A1, 2))
dW1 = np.dot(dZ1, X.T) / m
db1 = np.sum(dZ1, axis = 1, keepdims = True) / m
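Combining the two snippets with the update rule w := w - alpha * dw gives a complete training loop. A minimal self-contained sketch on toy data (the learning rate alpha = 0.1, the 0.01 initialization scale, and the toy labels are illustrative choices, not from the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(3)
n_x, n_h, n_y, m = 3, 4, 1, 8
X = rng.standard_normal((n_x, m))
Y = (X[0:1, :] > 0).astype(float)  # toy labels: sign of the first feature

W1, b1 = rng.standard_normal((n_h, n_x)) * 0.01, np.zeros((n_h, 1))
W2, b2 = rng.standard_normal((n_y, n_h)) * 0.01, np.zeros((n_y, 1))
alpha = 0.1  # learning rate

for _ in range(1000):
    # forward propagation (tanh hidden layer, sigmoid output)
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)
    # backpropagation, starting from A2 - Y
    dZ2 = A2 - Y
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    # gradient descent: move each parameter against its gradient
    W1 -= alpha * dW1; b1 -= alpha * db1
    W2 -= alpha * dW2; b2 -= alpha * db2
```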

10. Random Initialization

  • Designing a neural network involves an iterative search for good parameters, and initializing those parameters randomly is extremely important

    • If the initial weights are set to 0, every node in a layer returns the same result, so they all receive identical updates and the network produces nothing meaningful

    • Every node performing the same computation and producing the same result is called 'symmetry'; initializing the weights randomly achieves symmetry breaking

    • If the random weights are set very large, however, the slope vanishes toward 0, so they are usually set to very small values
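A typical initialization routine following this advice (the 0.01 scale is a common choice for keeping z in the high-slope region of tanh/sigmoid; biases can start at zero because the random weights already break symmetry):

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y, scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Small random weights: different per node (breaks symmetry),
    # small in magnitude (avoids the flat regions of tanh/sigmoid)
    W1 = rng.standard_normal((n_h, n_x)) * scale
    b1 = np.zeros((n_h, 1))  # zeros are fine for biases
    W2 = rng.standard_normal((n_y, n_h)) * scale
    b2 = np.zeros((n_y, 1))
    return W1, b1, W2, b2

W1, b1, W2, b2 = initialize_parameters(3, 4, 1)
print(W1.shape, float(np.abs(W1).max()))  # small, distinct values per node
```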
