แ„‚ ๐Ÿ˜„ [9 ์ผ์ฐจ] : FUNDAMENTAL 12. ์‚ฌ์ดํ‚ท๋Ÿฐ์œผ๋กœ ๊ตฌํ˜„ํ•ด๋ณด๋Š” ๋จธ์‹ ๋Ÿฌ๋‹

๋ฐฑ๊ฑด · January 21, 2022

Learning Objectives


  • Get introduced to a variety of machine learning algorithms.
  • Learn how to use the scikit-learn library.
  • Understand how scikit-learn represents data, and how to split data into
    training and test datasets.

Table of Contents


  • Various machine learning algorithms

  • Machine learning algorithms as organized by scikit-learn

  • Hello Scikit-learn

  • Key modules of scikit-learn

    • 4.1. Data representation
    • 4.2. Regression model practice
    • 4.3. The datasets module
    • 4.4. Classification practice with a scikit-learn dataset
    • 4.5. Estimator
  • Splitting training and test data

๋จธ์‹ ๋Ÿฌ๋‹ ์•Œ๊ณ ๋ฆฌ์ฆ˜

์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์ข…๋ฅ˜

์•„๋ž˜์˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํ•ฉ์ณ์„œ ์‚ฌ์šฉํ•˜๊ธฐ๋„ ํ•จ
์ง€๋„ํ•™์Šต์œผ๋กœ ์ง„ํ–‰ํ•˜๋‹ค๊ฐ€ ์ฐจ์›๊ณผ ํŠน์ง•(Feature)์˜ ์ˆ˜๊ฐ€ ๋งŽ์œผ๋ฉด ๋น„์ง€๋„ ํ•™์Šต์œผ๋กœ ์ „ํ™˜

์ง€๋„ํ•™์Šต (Supervised Learning)

  • ์ง€๋„ ํ•™์Šต ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ํ•œ ์„ธํŠธ์˜ ์‚ฌ๋ก€๋“ค์„(examples) ๊ธฐ๋ฐ˜์œผ๋กœ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰
  • ์ง€๋„ ํ•™์Šต์—๋Š” ๊ธฐ์กด์— ์ด๋ฏธ ๋ถ„๋ฅ˜๋œ ํ•™์Šต์šฉ ๋ฐ์ดํ„ฐ(labeled training data)๋กœ ๊ตฌ์„ฑ๋œ ์ž…๋ ฅ ๋ณ€์ˆ˜์™€ ์›ํ•˜๋Š” ์ถœ๋ ฅ ๋ณ€์ˆ˜๊ฐ€ ์ˆ˜๋ฐ˜
  • ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์ด์šฉํ•ด ํ•™์Šต์šฉ ๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„์„ํ•˜์—ฌ ์ž…๋ ฅ ๋ณ€์ˆ˜๋ฅผ ์ถœ๋ ฅ ๋ณ€์ˆ˜์™€ ๋งคํ•‘์‹œํ‚ค๋Š” ํ•จ์ˆ˜๋ฅผ ์ฐพ์Œ
  • ํ•™์Šต์šฉ ๋ฐ์ดํ„ฐ๋กœ๋ถ€ํ„ฐ ์ผ๋ฐ˜ํ™”(generalizing)๋ฅผ ํ†ตํ•ด ์•Œ๋ ค์ง€์ง€ ์•Š์€ ์ƒˆ๋กœ์šด ์‚ฌ๋ก€๋“ค์„ ๋งคํ•‘
  • ๋ˆˆ์— ๋ณด์ด์ง€ ์•Š๋Š” ์ƒํ™ฉ(unseen situations) ์†์—์„œ ๊ฒฐ๊ณผ๋ฅผ ์˜ˆ์ธก
  • ์ฐจ์›๊ณผ ํŠน์ง•์˜ ์ˆ˜๊ฐ€ ์ ์„ ๊ฒฝ์šฐ
  • ์•ŒํŒŒ๊ณ  ์ดˆ๊ธฐ ํ•™์Šต์‹œ ์‚ฌ์šฉ
  • ๋‹จ์ 
    • ๋ฐ์ดํ„ฐ ๋ถ„๋ฅ˜(๋ ˆ์ด๋ธ”๋ง) ์ž‘์—…์— ๋งŽ์€ ๋น„์šฉ๊ณผ ์‹œ๊ฐ„์ด ์†Œ์š”

Classification

  • Used when the data is meant to predict a categorical variable
  • e.g., assigning a label or indicator such as "dog" or "cat" to an image
  • With two labels, it is called binary classification
  • With more than two categories, it is multi-class classification

Regression

  • When predicting continuous values, the problem is a regression problem

Forecasting

  • The process of predicting the future based on past and present data
  • Most commonly used to analyze trends
  • e.g., estimating next year's sales based on this year's and last year's sales

์ค€์ง€๋„ํ•™์Šต(Semi-Supervised Learning or Weakly Supervised Learning)

  • ๋‚ด์šฉ
    • ๋ถ„๋ฅ˜๋œ ์ž๋ฃŒ๊ฐ€ ํ•œ์ •์ ์ผ ๋•Œ
    • ์ง€๋„ ํ•™์Šต์„ ๊ฐœ์„ ํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ๋ถ„๋ฅ˜(unlabeled) ์‚ฌ๋ก€๋ฅผ ์ด์šฉ
    • ๊ธฐ๊ณ„(machine)๋Š” ์˜จ์ „ํžˆ ์ง€๋„ ๋ฐ›์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— โ€œ๊ธฐ๊ณ„๊ฐ€ ์ค€์ง€๋„(semi-supervised)๋ฅผ ๋ฐ›๋Š”๋‹คโ€๋ผ๊ณ  ํ‘œํ˜„
    • ํ•™์Šต ์ •ํ™•์„ฑ์„ ๊ฐœ์„ ํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ๋ถ„๋ฅ˜ ์‚ฌ๋ก€์™€ ํ•จ๊ป˜ ์†Œ๋Ÿ‰์˜ ๋ถ„๋ฅ˜(labeled) ๋ฐ์ดํ„ฐ๋ฅผ ์ด์šฉ

๋น„์ง€๋„ํ•™์Šต (Unsupervised Learning)

  • ์ฐจ์›๊ณผ ํŠน์ง•์˜ ์ˆ˜๊ฐ€ ๋งŽ์„ ๊ฒฝ์šฐ
  • ์ˆ˜ํ–‰ํ•  ๋•Œ ๊ธฐ๊ณ„๋Š” ๋ฏธ๋ถ„๋ฅ˜ ๋ฐ์ดํ„ฐ๋งŒ์„ ์ œ๊ณต
  • ๋ฐ์ดํ„ฐ์˜ ๊ธฐ์ €๋ฅผ ์ด๋ฃจ๋Š” ๊ณ ์œ  ํŒจํ„ด์„ ๋ฐœ๊ฒฌํ•˜๋„๋ก ์„ค์ •
    • ํด๋Ÿฌ์Šคํ„ฐ๋ง ๊ตฌ์กฐ(clustering structure)
    • ์ €์ฐจ์› ๋‹ค์–‘์ฒด(low-dimensional manifold)
    • ํฌ์†Œ ํŠธ๋ฆฌ ๋ฐ ๊ทธ๋ž˜ํ”„(a sparse tree and graph)

ํด๋Ÿฌ์Šคํ„ฐ๋ง(Clustering)

  • ํŠน์ • ๊ธฐ์ค€์— ๋”ฐ๋ผ ์œ ์‚ฌํ•œ ๋ฐ์ดํ„ฐ ์‚ฌ๋ก€๋“ค์„ ํ•˜๋‚˜์˜ ์„ธํŠธ๋กœ ๊ทธ๋ฃนํ™”
  • ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์—ฌ๋Ÿฌ ๊ทธ๋ฃน์œผ๋กœ ๋ถ„๋ฅ˜ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ
  • ์‚ฌ์šฉ์ž๋Š” ๊ณ ์œ ํ•œ ํŒจํ„ด์„ ์ฐพ๊ธฐ ์œ„ํ•ด ๊ฐœ๋ณ„ ๊ทธ๋ฃน ์ฐจ์›์—์„œ ๋ถ„์„์„ ์ˆ˜ํ–‰

์ฐจ์› ์ถ•์†Œ(Dimension Reduction)

  • ๊ณ ๋ ค ์ค‘์ธ ๋ณ€์ˆ˜์˜ ๊ฐœ์ˆ˜๋ฅผ ์ค„์ด๋Š” ์ž‘์—…
  • ์ฐจ์›์ˆ˜(dimensionality)๋ฅผ ์ค„์ด๋ฉด ์ž ์žฌ๋œ ์ง„์ •ํ•œ ๊ด€๊ณ„๋ฅผ ๋„์ถœํ•˜๊ธฐ ์šฉ์ด

Reinforcement Learning

  • The learning method used to optimize AlphaGo
  • Reinforcement learning analyzes and optimizes an agent's behavior based on feedback from the environment
  • Rather than being told which actions to take, it tries different scenarios to discover the actions that yield the highest reward
  • Characteristics
    • Trial and error
    • Delayed reward
  • Terminology
    • Agent: the learning subject (also called an actor or controller)
    • Environment: the environment, situation, and conditions given to the agent
    • Action: the action the agent decides on, based on information from the environment
    • Reward: the reward for an action, designed by the machine learning engineer
  • See also
    • Monte Carlo methods
    • Q-Learning
    • Policy Gradient methods

Reference
A cheat key for choosing the best algorithm

Cheat keys for choosing an algorithm

The only sure way to find the best algorithm is to try all of them.
1. If [path label] then use [algorithm].
2. If you want to perform dimension reduction, then use principal component analysis.
3. If you need a numeric prediction quickly, use decision trees or logistic regression.
4. If you need a hierarchical result, use hierarchical clustering.

์•Œ๊ณ ๋ฆฌ์ฆ˜ ์„ ํƒ ์‹œ ๊ณ ๋ ค ์‚ฌํ•ญ

  • ์ •ํ™•์„ฑ, ํ•™์Šต ์‹œ๊ฐ„, ์‚ฌ์šฉ ํŽธ์˜์„ฑ์„ ๊ณ ๋ ค
  • ์šฐ์„  ์‚ฌํ•ญ : โ€˜์–ด๋–ค ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์˜ฌ ๊ฒƒ์ธ์ง€์— ์ƒ๊ด€์—†์ด ์–ด๋–ป๊ฒŒ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ๊ฒƒ์ธ๊ฐ€โ€™

ํŠน์ • ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‚ฌ์šฉํ•˜๋Š” ์‹œ์ 

์„ ํ˜• ํšŒ๊ท€(Linear regression)์™€ ๋กœ์ง€์Šคํ‹ฑ ํšŒ๊ท€(Logistic regression)

  • ์„ ํ˜• ํšŒ๊ท€(Linear regression)
    • ์—ฐ์†์ ์ธ ์ข…์† ๋ณ€์ˆ˜ y ์™€ ํ•œ ๊ฐœ ์ด์ƒ์˜ ์˜ˆ์ธก ๋ณ€์ˆ˜์ธ x ์‚ฌ์ด์˜ ๊ด€๊ณ„๋ฅผ ๋ชจ๋ธ๋งํ•˜๋Š” ์ ‘๊ทผ๋ฒ•
  • ๋กœ์ง€์Šคํ‹ฑ ํšŒ๊ท€(Logistic regression)
    - ์ข…์† ๋ณ€์ˆ˜๊ฐ€ ์—ฐ์†ํ˜•์ด ์•„๋‹ˆ๋ผ ๋ฒ”์ฃผํ˜•์ด๋ผ๋ฉด
    - ์„ ํ˜• ํšŒ๊ท€๋Š” ๋กœ์ง“ ์—ฐ๊ฒฐ(logit link) ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•ด ๋กœ์ง€์Šคํ‹ฑ ํšŒ๊ท€๋กœ ๋ณ€ํ™˜
    2
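To make the contrast concrete, here is a minimal sketch (on made-up synthetic data, not from the original notes): the same scikit-learn pattern covers both cases, with LinearRegression for a continuous target and LogisticRegression for a categorical one.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.RandomState(0)
X = 10 * rng.rand(100, 1)                  # one predictor variable x

y_cont = 2 * X.ravel() + rng.randn(100)    # continuous dependent variable -> linear regression
y_cat = (X.ravel() > 5).astype(int)        # categorical (0/1) dependent variable -> logistic regression

reg = LinearRegression().fit(X, y_cont)                # models E[y] as a linear function of x
clf = LogisticRegression(max_iter=1000).fit(X, y_cat)  # models P(y=1) through the logit link

print(reg.predict([[7.0]]))        # a continuous value (roughly 14 for this data)
print(clf.predict_proba([[7.0]]))  # a probability for each class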

Linear SVM and kernel SVM

  • The kernel trick is used to map a non-linearly separable function into a higher-dimensional, linearly separable one
  • The support vector machine (SVM) learning algorithm
    - finds a classifier represented by the normal vector 'w' and bias value 'b' of a hyperplane
    - the hyperplane (boundary) separates the classes with the widest possible margin
    → which turns the problem into a constrained optimization problem (a linear vs. kernel comparison is sketched below)
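As a small illustration (using the toy make_moons data, which is not linearly separable), the kernel trick is what lets an SVM draw a non-linear boundary:

from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC, SVC

X, y = make_moons(noise=0.1, random_state=0)      # two interleaved half-moons

linear_svm = LinearSVC(max_iter=10000).fit(X, y)  # straight-line decision boundary
kernel_svm = SVC(kernel='rbf').fit(X, y)          # kernel trick: maps into a higher-dimensional space

print(linear_svm.score(X, y))   # limited by the linear boundary
print(kernel_svm.score(X, y))   # close to 1.0 on this data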

ํŠธ๋ฆฌ์™€ ์•™์ƒ๋ธ” ํŠธ๋ฆฌ(ensemble tree)

  • ์˜์‚ฌ๊ฒฐ์ • ํŠธ๋ฆฌ, ๋žœ๋ค ํฌ๋ ˆ์ŠคํŠธ(random forest), ๊ทธ๋ž˜๋””์–ธํŠธ ๋ถ€์ŠคํŒ…(gradient boosting)์€ ๋ชจ๋‘ ์˜์‚ฌ๊ฒฐ์ • ํŠธ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ˜
  • ํŠน์ง• ๊ณต๊ฐ„(feature space)์„ ๊ฑฐ์˜ ๊ฐ™์€ ๋ ˆ์ด๋ธ”๋กœ ๊ตฌ๋ณ„๋˜๋„๋ก ๋ถ„๋ฆฌ
  • ์˜์‚ฌ๊ฒฐ์ • ํŠธ๋ฆฌ๋Š” ์ดํ•ด์™€ ๊ตฌํ˜„์ด ์‰ฝ์ง€๋งŒ ๊ฐ€์ง€๋ฅผ ๋‹ค ์ณ๋‚ด๊ณ  ํŠธ๋ฆฌ์˜ ๊นŠ์ด๊ฐ€ ๋„ˆ๋ฌด ๊นŠ์–ด์งˆ ๊ฒฝ์šฐ ๋ฐ์ดํ„ฐ๋ฅผ ๊ณผ์ ํ•ฉ(overfit)ํ•˜๋Š” ๊ฒฝํ–ฅ
  • ๋žœ๋ค ํฌ๋ ˆ์ŠคํŠธ์™€ ๊ทธ๋ž˜๋””์–ธํŠธ ๋ถ€์ŠคํŒ…์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋†’์€ ์ •ํ™•์„ฑ์„ ๋‹ฌ์„ฑํ•˜๊ณ  ๊ณผ์ ํ•ฉ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ํŠธ๋ฆฌ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‚ฌ์šฉ
    1
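A quick sketch of this trade-off, using the wine data that appears later in these notes (the exact scores will vary, but a perfect training fit paired with a weaker test score is typical of a single deep tree):

from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)    # grows until its leaves are pure
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # an ensemble of randomized trees

print(tree.score(X_train, y_train), tree.score(X_test, y_test))      # 1.0 on train, typically lower on test
print(forest.score(X_train, y_train), forest.score(X_test, y_test))  # the ensemble usually generalizes better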

์‹ ๊ฒฝ๋ง๊ณผ ๋”ฅ๋Ÿฌ๋‹

f
โ†’์ปจ๋ณผ๋ฃจ์…˜ ์‹ ๊ฒฝ๋ง(convolution neural network) ์•„ํ‚คํ…์ฒ˜(์ด๋ฏธ์ง€ ์ถœ์ฒ˜: wikipedia creative commons)

  • ๊ตฌ์„ฑ
    • ์ž…๋ ฅ ๊ณ„์ธต(input layer)
    • ์€๋‹‰ ๊ณ„์ธต(hidden layers)
    • ์ถœ๋ ฅ ๊ณ„์ธต(output layer)
  • ํ•™์Šต ํ‘œ๋ณธ(training samples)
    - ์ž…๋ ฅ , ์ถœ๋ ฅ ๊ณ„์ธต์„ ์ •์˜
    - ์ถœ๋ ฅ ๊ณ„์ธต์ด ๋ฒ”์ฃผํ˜• ๋ณ€์ˆ˜์ผ ๋•Œ ์‹ ๊ฒฝ๋ง์€ ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐ
    - ์ถœ๋ ฅ ๊ณ„์ธต์ด ์—ฐ์† ๋ณ€์ˆ˜์ผ ๋•Œ ์‹ ๊ฒฝ๋ง์€ ํšŒ๊ท€ ์ž‘์—…์„ ์œ„ํ•ด ์‚ฌ์šฉ
    - ์ถœ๋ ฅ ๊ณ„์ธต์ด ์ž…๋ ฅ ๊ณ„์ธต๊ณผ ๋™์ผํ•  ๋•Œ ์‹ ๊ฒฝ๋ง์€ ๊ณ ์œ ํ•œ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ
    - ์€๋‹‰ ๊ณ„์ธต์˜ ์ˆ˜๋Š” ๋ชจ๋ธ ๋ณต์žก์„ฑ๊ณผ ๋ชจ๋ธ๋ง ์ˆ˜์šฉ๋ ฅ(capacity)์„ ๊ฒฐ์ •
    2
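Scikit-learn itself ships a simple feed-forward network; a minimal sketch of the input → hidden → output structure, assuming the bundled digits data and a single hidden layer:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)        # 8x8 digit images flattened into 64 input features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# input layer (64 features) -> one hidden layer (64 units) -> output layer (10 classes)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))           # classification, because the output is categorical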

K-ํ‰๊ท /K-๋ชจ๋“œ(k-means/k-modes), ๊ฐ€์šฐ์‹œ์•ˆ ํ˜ผํ•ฉ ๋ชจ๋ธ(GMM; Gaussian mixture model) ํด๋Ÿฌ์Šคํ„ฐ๋ง

fe

  • ๋ชฉํ‘œ
    -n๊ฐœ์˜ ๊ด€์ธก์น˜(observations)๋ฅผ k๊ฐœ์˜ ํด๋Ÿฌ์Šคํ„ฐ๋กœ ๋‚˜๋ˆ„๋Š” ๊ฒƒ
  • K-ํ‰๊ท ์€ ํ‘œ๋ณธ์„ ํ•˜๋‚˜์˜ ํด๋Ÿฌ์Šคํ„ฐ์—๋งŒ ๊ฐ•ํ•˜๊ฒŒ ๊ฒฐ์†์‹œํ‚ค๋Š” โ€˜ํ•˜๋“œ ํ• ๋‹น(hard assignment)โ€™๋ฅผ ์ •์˜
  • GMM์€ ๊ฐ ํ‘œ๋ณธ์ด ํ™•๋ฅ  ๊ฐ’์„ ๊ฐ€์ง์œผ๋กœ์จ ์–ด๋Š ํ•œ ํด๋Ÿฌ์Šคํ„ฐ์—๋งŒ ๊ฒฐ์†๋˜์ง€ ์•Š๋Š” โ€˜์†Œํ”„ํŠธ ํ• ๋‹น(soft assignment)โ€™์„ ์ •์˜
  • ํด๋Ÿฌ์Šคํ„ฐ k์˜ ์ˆ˜๊ฐ€ ์ฃผ์–ด์งˆ ๋•Œ ํด๋Ÿฌ์Šคํ„ฐ๋ง์„ ๋น ๋ฅด๊ณ  ๋‹จ์ˆœํ•˜๊ฒŒ ์ˆ˜ํ–‰
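The hard/soft distinction is easy to see in code; a short sketch on made-up blob data:

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # n observations, k=3 clusters

kmeans = KMeans(n_clusters=3, random_state=0).fit(X)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

print(kmeans.predict(X[:1]))     # hard assignment: exactly one cluster index
print(gmm.predict_proba(X[:1]))  # soft assignment: a probability for every cluster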

DBSCAN

  • When the number of clusters k is not given, samples are connected through density diffusion
    → use DBSCAN (density-based spatial clustering), sketched below

[Figure: fu12-08]
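A minimal sketch (again on made-up data): note that DBSCAN takes no n_clusters argument, since k is discovered from the density of the data:

from sklearn.datasets import make_moons
from sklearn.cluster import DBSCAN

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

labels = DBSCAN(eps=0.3).fit_predict(X)  # eps sets the neighborhood radius used to connect samples
print(set(labels))                       # cluster indices found by density; -1 would mark noise points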

๊ณ„์ธต์  ๊ตฐ์ง‘ํ™”(Hierarchical clustering)

  • ๊ณ„์ธต์  ๋ถ„ํ• ์€ ํŠธ๋ฆฌ ๊ตฌ์กฐ์ธ ๋ด๋“œ๋กœ๊ทธ๋žจ(dendrogram)๋ฅผ ์ด์šฉํ•ด ์‹œ๊ฐํ™”
  • ๊ฐ๊ธฐ ๋‹ค๋ฅธ K๋ฅผ ์‚ฌ์šฉํ•ด ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ์ •์ œํ•˜๊ฑฐ๋‚˜ ์กฐ๋Œ€ํ™”ํ•  ์ˆ˜ ์žˆ๋Š”
    ๊ฐ๊ธฐ ๋‹ค๋ฅธ ์„ธ๋ถ„ํ™”(granularities) ์ˆ˜์ค€์—์„œ ์ž…๋ ฅ๊ณผ ๋ถ„ํ•  ๊ฒฐ๊ณผ๋ฅผ
    ํ™•์ธํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ํด๋Ÿฌ์Šคํ„ฐ์˜ ๊ฐœ์ˆ˜๊ฐ€ ํ•„์š” ์—†์Œ
    fu12-09.png
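A brief sketch with scikit-learn's bottom-up (agglomerative) implementation: the same data can be partitioned at several granularities simply by cutting the hierarchy at different k:

from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering

X, _ = make_blobs(n_samples=60, centers=3, random_state=0)

for k in (2, 3, 4):  # coarser or finer partitions of the same data
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
    print(k, sorted(set(labels)))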

PCA, SVD, LDA

  • Feeding a large number of features directly into a machine learning algorithm is not preferred
    → some features may be irrelevant, or the 'intrinsic' dimensionality may be smaller than the number of features
    - principal component analysis (PCA)
    - singular value decomposition (SVD)
    - latent Dirichlet allocation (LDA)
    → these perform dimension reduction (a PCA sketch follows this list)
  • PCA
    • PCA is an unsupervised method that maps the original data space
      to a lower-dimensional space while preserving as much information as possible
    • PCA essentially finds the subspace
      that preserves the most data variance
    • The subspace is defined by the dominant eigenvectors
      of the data's covariance matrix
  • SVD

    • The SVD of the centered data matrix (features vs. samples) is
      related to PCA in that its dominant left singular vectors
      define the same subspace that PCA finds
    • SVD is a far more versatile technique, since it can do things PCA cannot
    • e.g., the SVD of a user-versus-movie matrix extracts
      user profiles and movie profiles that can be used in a recommender system
    • It is also widely used in natural language processing (NLP) as a
      topic-modeling tool, where it is known as
      latent semantic analysis
  • Latent Dirichlet allocation (LDA)

    • A technique related to natural language processing (NLP)
    • A probabilistic topic model: it separates documents
      by topic, much as a Gaussian mixture model (GMM)
      decomposes continuous data into Gaussian densities
    • Unlike GMM, it models discrete data (the words in documents)
    • Topics are constrained to be distributed a priori
      according to a Dirichlet distribution

Summary

  • Define the problem. What problem do you want to solve?
  • Start simple.
  • Know your data and your baseline results well.
  • Only then try more complex things.

์‚ฌ์ดํ‚ท๋Ÿฐ์—์„œ ๊ฐ€์ด๋“œํ•˜๋Š” ๋จธ์‹ ๋Ÿฌ๋‹ ์•Œ๊ณ ๋ฆฌ์ฆ˜

Scikit-Learn์—์„œ๋Š” ์–ด๋–ป๊ฒŒ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ๋ถ„๋ฅ˜?

Choosing the right estimator

fu12-10.jpg

์‚ฌ์ดํ‚ท๋Ÿฐ์—์„œ ์•Œ๊ณ ๋ฆฌ์ฆ˜ Task 4๊ฐ€์ง€

Classification (7 entries)

  • SGD Classifier
  • KNeighborsClassifier
  • LinearSVC
  • NaiveBayes
  • SVC
  • Kernel approximation
  • EnsembleClassifiers

Regression (7 entries)

  • SGD Regressor
  • Lasso
  • ElasticNet
  • RidgeRegression
  • SVR(kernel='linear')
  • SVR(kernel='rbf')
  • EnsembleRegressors

Clustering (6 entries)

  • Spectral
  • GMM
  • KMeans
  • MiniBatch KMeans
  • MeanShift
  • VBGMM

Dimensionality Reduction (5 entries)

  • Randomized PCA
  • Isomap
  • Spectral Embedding
  • kernel approximation
  • LLE

์‚ฌ์ดํ‚ท๋Ÿฐ ์•Œ๊ณ ๋ฆฌ์ฆ˜ ๋ถ„๋ฅ˜ ๊ธฐ์ค€

  • ๋ฐ์ดํ„ฐ ์ˆ˜๋Ÿ‰
  • ๋ผ๋ฒจ์˜ ์œ ๋ฌด(์ •๋‹ต์˜ ์œ ๋ฌด)
  • ๋ฐ์ดํ„ฐ์˜ ์ข…๋ฅ˜ (์ˆ˜์น˜ํ˜• ๋ฐ์ดํ„ฐ(quantity)
  • ๋ฒ”์ฃผํ˜• ๋ฐ์ดํ„ฐ(category)

Hello Scikit-learn

Installation

pip install scikit-learn

import sklearn
print(sklearn.__version__)
1.0.2

์‚ฌ์ดํ‚ท๋Ÿฐ ์‚ดํŽด๋ณด๊ธฐ.

์‚ฌ์ดํ‚ท๋Ÿฐ ์†Œ๊ฐœ ์˜์ƒ

์‚ฌ์ดํ‚ค๋Ÿฐ์—์„œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ๋‚˜๋ˆ„๋Š” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์€
train_test_split

์‚ฌ์ดํ‚ท๋Ÿฐ์˜ ์‚ฌ์šฉ๋ฒ•

์‚ฌ์ดํ‚ท๋Ÿฐ์—์„œ

  • ETL(Extrac Transform Load) ๊ธฐ๋Šฅ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜
    • transformer()
  • Model๋กœ ํ‘œํ˜„๋˜๋Š” ํด๋ž˜์Šค
    • Estimator
      • ๋ฉ”์†Œ๋“œ
        • fit()
        • predict()
  • Estimator์™€ transformer() 2๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ์ˆ˜ํ–‰ํ•˜๋Š” scikit-learn์˜ API
    • Pipeline
    • meta-estimator

Summary

The fit() and predict() methods of transformer and Estimator objects appear to be the essentials. The train_test_split() function in the model selection module randomly shuffles the data and splits it into training and test sets.

Scikit-learn is a Python-based machine learning library with data representations and math-related functions similar to SciPy and NumPy. A typical machine learning workflow processes the data (ETL), then trains a model and makes predictions. Scikit-learn provides transformer() for the ETL part; training and prediction are performed through Estimator objects, each of which has a fit() (training) and a predict() (prediction) method. Once training and prediction are done, these two kinds of steps can be bundled with Pipeline() and validated, as sketched below.

๋ชจ๋“ˆ : ๋ฐ์ดํ„ฐ ํ‘œํ˜„๋ฒ•

๋ฐ์ดํ„ฐ์…‹

  • NumPy์˜ ndarray
  • Pandas์˜ DataFrame
  • SciPy์˜ Sparse Matrix
  • ํ›ˆ๋ จ๊ณผ ์˜ˆ์ธก ๋“ฑ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ๋‹ค๋ฃฐ ๋•Œ
    • CoreAPI๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ๋‹ค์Œ์˜ ํ•จ ์ˆ˜ ์ด์šฉ
      • fit()
      • transfomer()
      • predict()

์ž์ฃผ์‚ฌ์šฉํ•˜๋Š” API

[Figure: fu12-11.png]

๋ฐ์ดํ„ฐ ํ‘œํ˜„๋ฒ•

  • ํŠน์„ฑ ํ–‰๋ ฌ(Feature Matrix)
    • ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๋ฅผ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค.
    • ํŠน์„ฑ(feature)
      • ๋ฐ์ดํ„ฐ์—์„œ ์ˆ˜์น˜ ๊ฐ’, ์ด์‚ฐ ๊ฐ’, ๋ถˆ๋ฆฌ์–ธ ๊ฐ’์œผ๋กœ ํ‘œํ˜„๋˜๋Š” ๊ฐœ๋ณ„ ๊ด€์ธก์น˜๋ฅผ ์˜๋ฏธ
      • ํŠน์„ฑ ํ–‰๋ ฌ์—์„œ๋Š” ์—ด์— ํ•ด๋‹นํ•˜๋Š” ๊ฐ’
    • ํ‘œ๋ณธ(sample): ๊ฐ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ, ํŠน์„ฑ ํ–‰๋ ฌ์—์„œ๋Š” ํ–‰์— ํ•ด๋‹นํ•˜๋Š” ๊ฐ’
    • n_samples: ํ–‰์˜ ๊ฐœ์ˆ˜(ํ‘œ๋ณธ์˜ ๊ฐœ์ˆ˜)
    • n_features: ์—ด์˜ ๊ฐœ์ˆ˜(ํŠน์„ฑ์˜ ๊ฐœ์ˆ˜)
    • X: ํ†ต์ƒ ํŠน์„ฑ ํ–‰๋ ฌ์€ ๋ณ€์ˆ˜๋ช… X๋กœ ํ‘œ๊ธฐํ•ฉ๋‹ˆ๋‹ค.
    • [n_samples, n_features]์€ [ํ–‰, ์—ด] ํ˜•ํƒœ์˜ 2์ฐจ์› ๋ฐฐ์—ด ๊ตฌ์กฐ๋ฅผ ์‚ฌ์šฉ
    • NumPy์˜ ndarray, Pandas์˜ DataFrame, SciPy์˜ Sparse Matrix๋ฅผ ์‚ฌ์šฉ
  • ํƒ€๊ฒŸ ๋ฒกํ„ฐ(Target Vector)
    • ํƒ€๊ฒŸ ๋ฒกํ„ฐ (Target Vector)
    • ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์˜ ๋ผ๋ฒจ(์ •๋‹ต) ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค.
    • ๋ชฉํ‘œ(Target)
      • ๋ผ๋ฒจ, ํƒ€๊ฒŸ๊ฐ’, ๋ชฉํ‘œ๊ฐ’
      • ํŠน์„ฑ ํ–‰๋ ฌ(Feature Matrix)๋กœ๋ถ€ํ„ฐ ์˜ˆ์ธกํ•˜๊ณ ์ž ํ•˜๋Š” ๊ฒƒ
    • n_samples: ๋ฒกํ„ฐ์˜ ๊ธธ์ด(๋ผ๋ฒจ์˜ ๊ฐœ์ˆ˜)
    • ํƒ€๊ฒŸ ๋ฒกํ„ฐ์—์„œ n_features๋Š” ์—†์Œ
    • y: ๋ณ€์ˆ˜๋ช… y๋กœ ํ‘œ๊ธฐ
    • ํƒ€๊ฒŸ ๋ฒกํ„ฐ๋Š” ๋ณดํ†ต 1์ฐจ์› ๋ฒกํ„ฐ
    • NumPy์˜ ndarray, Pandas์˜ Series๋ฅผ ์‚ฌ์šฉ
    • ํƒ€๊ฒŸ ๋ฒกํ„ฐ๋Š” ๊ฒฝ์šฐ์— ๋”ฐ๋ผ 1์ฐจ์›์œผ๋กœ ๋‚˜ํƒ€๋‚ด์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Œ

โ—๏ธํŠน์„ฑ ํ–‰๋ ฌ X์˜ n_samples์™€ ํƒ€๊ฒŸ ๋ฒกํ„ฐ y์˜ n_samples๋Š” ๋™์ผํ•ด์•ผ ํ•จ
fu12-12.png
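A quick check of these conventions on a bundled toy dataset (iris here, purely as an example):

from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

print(X.shape)  # (150, 4): a [n_samples, n_features] feature matrix
print(y.shape)  # (150,): a 1-D target vector with the same n_samples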

Module: Regression Model Practice

# predicting data with a regression model

import numpy as np
import matplotlib.pyplot as plt
r = np.random.RandomState(10)
x = 10 * r.rand(100)
y = 2 * x - 3 * r.rand(100)
plt.scatter(x,y)
<matplotlib.collections.PathCollection at 0x11bde3110>

# ์ž…๋ ฅ๋ฐ์ดํ„ฐ x์˜ ๋ชจ์–‘
x.shape
(100,)
# ์ •๋‹ต ๋ฐ์ดํ„ฐ y์˜ ๋ชจ์–‘
y.shape
(100,)

x์™€ y์˜ ๋ชจ์–‘์€ (100,)์œผ๋กœ 1์ฐจ์› ๋ฒกํ„ฐ

๋ชจ๋ธ ๊ฐ์ฒด๋ฅผ ์ƒ์„ฑ

  • ์‚ฌ์šฉํ•  ๋ชจ๋ธ์˜ ์ด๋ฆ„์€ LinearRegression
  • sklearn.linear_model ์•ˆ์— ์žˆ์Œ
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model
LinearRegression()

๋ชจ๋ธ์„ ํ›ˆ๋ จ

ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฉ”์„œ๋“œ๋Š” fit()

  • fit ๋ฉ”์„œ๋“œ์— ์ธ์ž๋กœ ํŠน์„ฑํ–‰๋ ฌ๊ณผ ํƒ€๊ฒŸ ๋ฒกํ„ฐ๋ฅผ ๋„ฃ์–ด์คŒ
  • ํ–‰๋ ฌ ํ˜•ํƒœ์˜ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์™€ 1์ฐจ์› ๋ฒกํ„ฐ ํ˜•ํƒœ์˜ ์ •๋‹ต(๋ผ๋ฒจ)์„ ๋„ฃ์–ด์คŒ
  • ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์ธ x๋ฅผ ๊ทธ๋Œ€๋กœ ๋„ฃ์œผ๋ฉด, ์—๋Ÿฌ๊ฐ€ ๋ฐœ์ƒ
  • x๋Š” numpy์˜ ndarrayํƒ€์ž…์ด๋‹ˆ reshape()๋ฅผ ์‚ฌ์šฉ
# ! ์—๋Ÿฌ ๋ฐœ์ƒ
model.fit(x, y)
---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

/var/folders/59/gjb3x8rx30s2cxwfl3zh2m040000gn/T/ipykernel_725/3325953541.py in <module>
      1 # ! raises an error
----> 2 model.fit(x, y)


~/opt/anaconda3/envs/dev/lib/python3.7/site-packages/sklearn/linear_model/_base.py in fit(self, X, y, sample_weight)
    661 
    662         X, y = self._validate_data(
--> 663             X, y, accept_sparse=accept_sparse, y_numeric=True, multi_output=True
    664         )
    665 


~/opt/anaconda3/envs/dev/lib/python3.7/site-packages/sklearn/base.py in _validate_data(self, X, y, reset, validate_separately, **check_params)
    579                 y = check_array(y, **check_y_params)
    580             else:
--> 581                 X, y = check_X_y(X, y, **check_params)
    582             out = X, y
    583 


~/opt/anaconda3/envs/dev/lib/python3.7/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, estimator)
    974         ensure_min_samples=ensure_min_samples,
    975         ensure_min_features=ensure_min_features,
--> 976         estimator=estimator,
    977     )
    978 


~/opt/anaconda3/envs/dev/lib/python3.7/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator)
    771                     "Reshape your data either using array.reshape(-1, 1) if "
    772                     "your data has a single feature or array.reshape(1, -1) "
--> 773                     "if it contains a single sample.".format(array)
    774                 )
    775 


ValueError: Expected 2D array, got 1D array instead:
array=[7.71320643 0.20751949 6.33648235 7.48803883 4.98507012 2.24796646
 1.98062865 7.60530712 1.69110837 0.88339814 6.85359818 9.53393346
 0.03948266 5.12192263 8.12620962 6.12526067 7.21755317 2.91876068
 9.17774123 7.14575783 5.42544368 1.42170048 3.7334076  6.74133615
 4.41833174 4.34013993 6.17766978 5.13138243 6.50397182 6.01038953
 8.05223197 5.21647152 9.08648881 3.19236089 0.90459349 3.00700057
 1.13984362 8.28681326 0.46896319 6.26287148 5.47586156 8.19286996
 1.9894754  8.56850302 3.51652639 7.54647692 2.95961707 8.8393648
 3.25511638 1.65015898 3.92529244 0.93460375 8.21105658 1.5115202
 3.84114449 9.44260712 9.87625475 4.56304547 8.26122844 2.51374134
 5.97371648 9.0283176  5.34557949 5.90201363 0.39281767 3.57181759
 0.7961309  3.05459918 3.30719312 7.73830296 0.39959209 4.29492178
 3.14926872 6.36491143 3.4634715  0.43097356 8.79915175 7.63240587
 8.78096643 4.17509144 6.05577564 5.13466627 5.97836648 2.62215661
 3.00871309 0.25399782 3.03062561 2.42075875 5.57578189 5.6550702
 4.75132247 2.92797976 0.64251061 9.78819146 3.39707844 4.95048631
 9.77080726 4.40773825 3.18272805 5.19796986].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
#  put the feature matrix into a variable named X
X = x.reshape(100,1)
#  pass X as the argument to fit()
model.fit(X,y)
LinearRegression()

→ Training is complete, using the input data and its labels

์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋ฅผ ๋„ฃ๊ณ  ์˜ˆ์ธก

  • ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋Š” np.linspace()๋ฅผ ์ด์šฉํ•ด์„œ ์ƒ์„ฑ
  • ์˜ˆ์ธก์€ predict()๋ฅผ ์‚ฌ์šฉ
  • predict()์˜ ์ธ์ž ์—ญ์‹œ ํ–‰๋ ฌ๋กœ ๋„ฃ์–ด ์ฃผ์–ด์•ผ ํ•จ
x_new = np.linspace(-1, 11, 100)
X_new = x_new.reshape(100,1)
y_new = model.predict(X_new)

Passing -1 as one dimension of reshape() makes NumPy compute that dimension automatically.
Since x_new has 100 elements, it can be reshaped into forms like (100, 1) or (2, 50);
passing (2, -1) automatically produces the (2, 50) shape.

X_ = x_new.reshape(-1,1)
X_.shape
(100, 1)

์„ฑ๋Šฅ ํ‰๊ฐ€ : ํ•™์Šต๋œ ํšŒ๊ท€ ๋ชจ๋ธ์ด ์ž˜ ์˜ˆ์ธกํ–ˆ๋Š”์ง€

  • ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ ํ‰๊ฐ€ ๊ด€๋ จ ๋ชจ๋“ˆ์€ sklearn.metrics์— ์ €์žฅ
  • ํšŒ๊ท€ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ RMSE(Root Mean Square Error) ๋ฅผ ์‚ฌ์šฉํ•ด ์„ฑ๋Šฅ์„ ํ‰๊ฐ€

Scikit-learn: Mean Squared Error

# mean_squared_error ํ•จ์ˆ˜์˜ ๊ณต์‹ /  np.sqrt๋ฅผ ํ™œ์šฉ
from sklearn.metrics import mean_squared_error

error = np.sqrt(mean_squared_error(y,y_new))

print(error)
9.299028215052262

Putting it all together

# 1. Create the model object
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model

# 2. ๋ชจ๋ธ์„ ํ›ˆ๋ จ

#  ๋ณ€์ˆ˜๋ช… X์— ํŠน์„ฑ ํ–‰๋ ฌ์„ ๋„ฃ๊ธฐ
X = x.reshape(100,1)
#  X๋ฅผ fit()์˜ ์ธ์ž๋กœ ๋„ฃ๊ธฐ
model.fit(X,y)

# 3. Predict on new data
x_new = np.linspace(-1, 11, 100) # new data generated with np.linspace()
X_new = x_new.reshape(100,1)
y_new = model.predict(X_new)

# 4 .๋ชจ๋ธ ์„ฑ๋Šฅ ํ‰๊ฐ€
# mean_squared_error ํ•จ์ˆ˜์˜ ๊ณต์‹ /  np.sqrt๋ฅผ ํ™œ์šฉ
from sklearn.metrics import mean_squared_error
error = np.sqrt(mean_squared_error(y,y_new))
print(error)

# 5. Plot it for easy reading
plt.scatter(x, y, label='input data')
plt.plot(X_new, y_new, color='red', label='regression line')
9.299028215052262





[<matplotlib.lines.Line2D at 0x11bdc3dd0>]

๊ทธ๋ž˜ํ”„์˜ ์ ๋“ค๊ณผ ํšŒ๊ท€์„ ์ด ๊ฑฐ์˜ ์ผ์น˜

Module: The datasets module

The sklearn.datasets module
is divided into:

  • dataset loaders
    • provide the small Toy datasets bundled with scikit-learn
  • dataset fetchers
    • download the larger Real World datasets

Examples of Toy datasets

  • datasets.load_boston(): regression problem, predicting Boston house prices (scheduled for removal in version 1.2)
  • datasets.load_breast_cancer(): classification problem, diagnosing breast cancer
  • datasets.load_digits(): classification problem, classifying the digits 0–9
  • datasets.load_iris(): classification problem, classifying iris species
  • datasets.load_wine(): classification problem, classifying wines

datasets.load_wine()

Load the wine classification data and assign it to a variable named data

from sklearn.datasets import load_wine
data = load_wine()

์ž๋ฃŒํ˜• ํ™•์ธ

type(data)
sklearn.utils.Bunch

The type is sklearn.utils.Bunch
→ A Bunch is a data type similar to a Python dictionary

print(data)
{'data': array([[1.423e+01, 1.710e+00, 2.430e+00, ..., 1.040e+00, 3.920e+00,
        1.065e+03],
       [1.320e+01, 1.780e+00, 2.140e+00, ..., 1.050e+00, 3.400e+00,
        1.050e+03],
       [1.316e+01, 2.360e+00, 2.670e+00, ..., 1.030e+00, 3.170e+00,
        1.185e+03],
       ...,
       [1.327e+01, 4.280e+00, 2.260e+00, ..., 5.900e-01, 1.560e+00,
        8.350e+02],
       [1.317e+01, 2.590e+00, 2.370e+00, ..., 6.000e-01, 1.620e+00,
        8.400e+02],
       [1.413e+01, 4.100e+00, 2.740e+00, ..., 6.100e-01, 1.600e+00,
        5.600e+02]]), 'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2]), 'frame': None, 'target_names': array(['class_0', 'class_1', 'class_2'], dtype='<U7'), 'DESCR': '.. _wine_dataset:\n\nWine recognition dataset\n------------------------\n\n**Data Set Characteristics:**\n\n    :Number of Instances: 178 (50 in each of three classes)\n    :Number of Attributes: 13 numeric, predictive attributes and the class\n    :Attribute Information:\n \t\t- Alcohol\n \t\t- Malic acid\n \t\t- Ash\n\t\t- Alcalinity of ash  \n \t\t- Magnesium\n\t\t- Total phenols\n \t\t- Flavanoids\n \t\t- Nonflavanoid phenols\n \t\t- Proanthocyanins\n\t\t- Color intensity\n \t\t- Hue\n \t\t- OD280/OD315 of diluted wines\n \t\t- Proline\n\n    - class:\n            - class_0\n            - class_1\n            - class_2\n\t\t\n    :Summary Statistics:\n    \n    ============================= ==== ===== ======= =====\n                                   Min   Max   Mean     SD\n    ============================= ==== ===== ======= =====\n    Alcohol:                      11.0  14.8    13.0   0.8\n    Malic Acid:                   0.74  5.80    2.34  1.12\n    Ash:                          1.36  3.23    2.36  0.27\n    Alcalinity of Ash:            10.6  30.0    19.5   3.3\n    Magnesium:                    70.0 162.0    99.7  14.3\n    Total Phenols:                0.98  3.88    2.29  0.63\n    Flavanoids:                   0.34  5.08    2.03  1.00\n    Nonflavanoid Phenols:         0.13  0.66    0.36  0.12\n    Proanthocyanins:              0.41  3.58    1.59  0.57\n    Colour Intensity:              1.3  13.0     5.1   2.3\n    Hue:                          0.48  1.71    0.96  0.23\n    OD280/OD315 of diluted wines: 1.27  4.00    2.61  0.71\n    Proline:                       278  1680     746   315\n    ============================= ==== ===== ======= =====\n\n    :Missing Attribute Values: None\n    :Class Distribution: class_0 (59), class_1 (71), class_2 (48)\n    :Creator: R.A. Fisher\n    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)\n    :Date: July, 1988\n\nThis is a copy of UCI ML Wine recognition datasets.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data\n\nThe data is the results of a chemical analysis of wines grown in the same\nregion in Italy by three different cultivators. There are thirteen different\nmeasurements taken for different constituents found in the three types of\nwine.\n\nOriginal Owners: \n\nForina, M. et al, PARVUS - \nAn Extendible Package for Data Exploration, Classification and Correlation. \nInstitute of Pharmaceutical and Food Analysis and Technologies,\nVia Brigata Salerno, 16147 Genoa, Italy.\n\nCitation:\n\nLichman, M. (2013). UCI Machine Learning Repository\n[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,\nSchool of Information and Computer Science. \n\n.. topic:: References\n\n  (1) S. Aeberhard, D. Coomans and O. de Vel, \n  Comparison of Classifiers in High Dimensional Settings, \n  Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of  \n  Mathematics and Statistics, James Cook University of North Queensland. \n  (Also submitted to Technometrics). \n\n  The data was used with many others for comparing various \n  classifiers. The classes are separable, though only RDA \n  has achieved 100% correct classification. \n  (RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data)) \n  (All results using the leave-one-out technique) \n\n  (2) S. Aeberhard, D. Coomans and O. de Vel, \n  "THE CLASSIFICATION PERFORMANCE OF RDA" \n  Tech. Rep. no. 92-01, (1992), Dept. 
of Computer Science and Dept. of \n  Mathematics and Statistics, James Cook University of North Queensland. \n  (Also submitted to Journal of Chemometrics).\n', 'feature_names': ['alcohol', 'malic_acid', 'ash', 'alcalinity_of_ash', 'magnesium', 'total_phenols', 'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity', 'hue', 'od280/od315_of_diluted_wines', 'proline']}

Printing data

  • the values are wrapped in curly braces {}
  • and separated by colons :
    → keys and values
    • the Bunch data type also supports the Python dictionary method keys()
data.keys()
dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR', 'feature_names'])

๋ฐ์ดํ„ฐ ํ‚ค๊ฐ’ ์˜๋ฏธ ํ™•์ธ

data

  • ํŠน์„ฑ ํ–‰๋ ฌ
  • ํ‚ค์— ์ ‘๊ทผํ•˜๊ธฐ ์œ„ํ•ด . ์‚ฌ์šฉ
data.data
array([[1.423e+01, 1.710e+00, 2.430e+00, ..., 1.040e+00, 3.920e+00,
        1.065e+03],
       [1.320e+01, 1.780e+00, 2.140e+00, ..., 1.050e+00, 3.400e+00,
        1.050e+03],
       [1.316e+01, 2.360e+00, 2.670e+00, ..., 1.030e+00, 3.170e+00,
        1.185e+03],
       ...,
       [1.327e+01, 4.280e+00, 2.260e+00, ..., 5.900e-01, 1.560e+00,
        8.350e+02],
       [1.317e+01, 2.590e+00, 2.370e+00, ..., 6.000e-01, 1.620e+00,
        8.400e+02],
       [1.413e+01, 4.100e+00, 2.740e+00, ..., 6.100e-01, 1.600e+00,
        5.600e+02]])
  • the feature matrix is 2-D
  • rows hold the number of samples (n_samples)
  • columns hold the number of features (n_features)
data.data.shape
(178, 13)

โ†’ ํŠน์„ฑ์ด 13๊ฐœ, ๋ฐ์ดํ„ฐ๊ฐ€ 178๊ฐœ์ธ ํŠน์„ฑ ํ–‰๋ ฌ

nidm ์„ ์ด์šฉํ•˜์—ฌ ์ฐจ์› ํ™•์ธ

data.data.ndim
2

target

  • the target vector
  • the target vector is 1-D
data.target
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       2, 2])

ํƒ€๊ฒŸ ๋ฒกํ„ฐ์˜ ๊ธธ์ด๋Š” ํŠน์„ฑ ํ–‰๋ ฌ์˜ ๋ฐ์ดํ„ฐ ๊ฐœ์ˆ˜์™€ ์ผ์น˜ํ•ด์•ผ ํ•จ

data.target.shape
(178,)

ํŠน์„ฑ ํ–‰๋ ฌ์˜ ํ…Œ์ดํ„ฐ ์ˆ˜์™€ ์ผ์น˜

feature_names

  • inspecting the data key showed how many features there are
  • the feature_names key stores the names of those features
data.feature_names
['alcohol',
 'malic_acid',
 'ash',
 'alcalinity_of_ash',
 'magnesium',
 'total_phenols',
 'flavanoids',
 'nonflavanoid_phenols',
 'proanthocyanins',
 'color_intensity',
 'hue',
 'od280/od315_of_diluted_wines',
 'proline']

Check the number of features
→ use the built-in len()

len(data.feature_names)
13

The number of entries in feature_names matches n_features (the number of columns) of the feature matrix

target_names

  • target_names holds the classes we want to predict
data.target_names
array(['class_0', 'class_1', 'class_2'], dtype='<U7')

๋ฐ์ดํ„ฐ๋ฅผ ๊ฐ๊ฐ class_0๊ณผ class_1, class_2๋กœ ๋ถ„๋ฅ˜ํ•œ๋‹ค๋Š” ๋œป

DESCR

  • DESCR is short for describe: a description of the dataset
print(data.DESCR)
.. _wine_dataset:

Wine recognition dataset
------------------------

**Data Set Characteristics:**

    :Number of Instances: 178 (50 in each of three classes)
    :Number of Attributes: 13 numeric, predictive attributes and the class
    :Attribute Information:
 		- Alcohol
 		- Malic acid
 		- Ash
		- Alcalinity of ash  
 		- Magnesium
		- Total phenols
 		- Flavanoids
 		- Nonflavanoid phenols
 		- Proanthocyanins
		- Color intensity
 		- Hue
 		- OD280/OD315 of diluted wines
 		- Proline

    - class:
            - class_0
            - class_1
            - class_2
		
    :Summary Statistics:
    
    ============================= ==== ===== ======= =====
                                   Min   Max   Mean     SD
    ============================= ==== ===== ======= =====
    Alcohol:                      11.0  14.8    13.0   0.8
    Malic Acid:                   0.74  5.80    2.34  1.12
    Ash:                          1.36  3.23    2.36  0.27
    Alcalinity of Ash:            10.6  30.0    19.5   3.3
    Magnesium:                    70.0 162.0    99.7  14.3
    Total Phenols:                0.98  3.88    2.29  0.63
    Flavanoids:                   0.34  5.08    2.03  1.00
    Nonflavanoid Phenols:         0.13  0.66    0.36  0.12
    Proanthocyanins:              0.41  3.58    1.59  0.57
    Colour Intensity:              1.3  13.0     5.1   2.3
    Hue:                          0.48  1.71    0.96  0.23
    OD280/OD315 of diluted wines: 1.27  4.00    2.61  0.71
    Proline:                       278  1680     746   315
    ============================= ==== ===== ======= =====

    :Missing Attribute Values: None
    :Class Distribution: class_0 (59), class_1 (71), class_2 (48)
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML Wine recognition datasets.
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data

The data is the results of a chemical analysis of wines grown in the same
region in Italy by three different cultivators. There are thirteen different
measurements taken for different constituents found in the three types of
wine.

Original Owners: 

Forina, M. et al, PARVUS - 
An Extendible Package for Data Exploration, Classification and Correlation. 
Institute of Pharmaceutical and Food Analysis and Technologies,
Via Brigata Salerno, 16147 Genoa, Italy.

Citation:

Lichman, M. (2013). UCI Machine Learning Repository
[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science. 

.. topic:: References

  (1) S. Aeberhard, D. Coomans and O. de Vel, 
  Comparison of Classifiers in High Dimensional Settings, 
  Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of  
  Mathematics and Statistics, James Cook University of North Queensland. 
  (Also submitted to Technometrics). 

  The data was used with many others for comparing various 
  classifiers. The classes are separable, though only RDA 
  has achieved 100% correct classification. 
  (RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data)) 
  (All results using the leave-one-out technique) 

  (2) S. Aeberhard, D. Coomans and O. de Vel, 
  "THE CLASSIFICATION PERFORMANCE OF RDA" 
  Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of 
  Mathematics and Statistics, James Cook University of North Queensland. 
  (Also submitted to Journal of Chemometrics).

๋ชจ๋“ˆ : ์‚ฌ์ดํ‚ท๋Ÿฐ์˜ ๋ฐ์ดํ„ฐ์…‹์„ ์ด์šฉํ•œ ๋ถ„๋ฅ˜ ๋ฌธ์ œ ์‹ค์Šต

DataFrame์œผ๋กœ ๋‚˜ํƒ€๋‚ด๊ธฐ


ํŠน์„ฑ ํ–‰๋ ฌ์„ Pandas์˜ DataFrame์œผ๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค

import pandas as pd

pd.DataFrame(data.data, columns=data.feature_names)
alcohol malic_acid ash alcalinity_of_ash magnesium total_phenols flavanoids nonflavanoid_phenols proanthocyanins color_intensity hue od280/od315_of_diluted_wines proline
0 14.23 1.71 2.43 15.6 127.0 2.80 3.06 0.28 2.29 5.64 1.04 3.92 1065.0
1 13.20 1.78 2.14 11.2 100.0 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050.0
2 13.16 2.36 2.67 18.6 101.0 2.80 3.24 0.30 2.81 5.68 1.03 3.17 1185.0
3 14.37 1.95 2.50 16.8 113.0 3.85 3.49 0.24 2.18 7.80 0.86 3.45 1480.0
4 13.24 2.59 2.87 21.0 118.0 2.80 2.69 0.39 1.82 4.32 1.04 2.93 735.0
... ... ... ... ... ... ... ... ... ... ... ... ... ...
173 13.71 5.65 2.45 20.5 95.0 1.68 0.61 0.52 1.06 7.70 0.64 1.74 740.0
174 13.40 3.91 2.48 23.0 102.0 1.80 0.75 0.43 1.41 7.30 0.70 1.56 750.0
175 13.27 4.28 2.26 20.0 120.0 1.59 0.69 0.43 1.35 10.20 0.59 1.56 835.0
176 13.17 2.59 2.37 20.0 120.0 1.65 0.68 0.53 1.46 9.30 0.60 1.62 840.0
177 14.13 4.10 2.74 24.5 96.0 2.05 0.76 0.56 1.35 9.20 0.61 1.60 560.0

178 rows ร— 13 columns

Shown as a DataFrame, the data is much easier to read at a glance.
This makes EDA (Exploratory Data Analysis) very convenient — for example:
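The usual one-line EDA helpers become available (df here is just an illustrative name for the frame built above):

import pandas as pd
from sklearn.datasets import load_wine

data = load_wine()
df = pd.DataFrame(data.data, columns=data.feature_names)
print(df.describe())  # per-feature count, mean, std, min, quartiles, and max at a glance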

Machine learning


Creating the feature matrix

By convention, the feature matrix is stored in the variable X and the target vector in y

X = data.data
y = data.target

๋ชจ๋ธ์„ ์ƒ์„ฑ

  • ๋ถ„๋ฅ˜ ๋ฌธ์ œ์ž„์œผ๋กœ RandomForestClassifier๋ฅผ ์‚ฌ์šฉ
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()

Training

model.fit(X, y)
RandomForestClassifier()

Prediction

y_pred = model.predict(X)

์„ฑ๋Šฅ์„ ํ‰๊ฐ€

from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report

#pass the target vector (the labels) y and the predictions y_pred as arguments
print(classification_report(y, y_pred))
#print the accuracy
print("accuracy = ", accuracy_score(y, y_pred))
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        59
           1       1.00      1.00      1.00        71
           2       1.00      1.00      1.00        48

    accuracy                           1.00       178
   macro avg       1.00      1.00      1.00       178
weighted avg       1.00      1.00      1.00       178

accuracy =  1.0

Module: Estimator

The Estimator object


  • An object that estimates the parameters of a machine learning model from a dataset is called an Estimator
  • All of scikit-learn's machine learning models are implemented as Estimator Python classes
  • The estimation process, i.e. training, is the Estimator's fit() method
  • Prediction is the predict() method

    LinearRegression() and RandomForestClassifier() are both Estimator objects

Diagram of solving the wine classification problem:

[Figure: fu12-13.png]

Diagram of solving a linear regression problem:

[Figure: fu12-14.png]

What if there is no target vector?

  • For unsupervised learning, where the data has no labels (answers), no Target Vector is passed as an argument to the fit() method
    [Figure: fu12-15.png]
  • With scikit-learn's Estimator objects, training and prediction work the same way whether the learning is supervised or unsupervised (see the sketch below)

ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ๋ถ„๋ฆฌํ•˜๊ธฐ

Estimator ๊ฐ์ฒด์— fit()๊ณผ prediction() ๋ฉ”์„œ๋“œ์— ์ธ์ž๋กœ ๊ฐ๊ธฐ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ๊ฐ€ ๋“ค์–ด๊ฐ€์•ผ ํ•จ
fu12-16.png
ํ•˜์ง€๋งŒ ์•„๋ž˜ ๊ทธ๋ฆผ๊ณผ ๊ฐ™์ด ํ›ˆ๋ จ์— ์“ฐ์ด๋Š” ๋ฐ์ดํ„ฐ์™€ ์˜ˆ์ธก์— ์“ฐ์ด๋Š” ๋ฐ์ดํ„ฐ๋Š” ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•จ
fu12-17.png

ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ์ง์ ‘ ๋ถ„๋ฆฌํ•˜๊ธฐ


ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ๋น„์œจ์€ 8:2๋กœ ์„ค์ •

from sklearn.datasets import load_wine
data = load_wine()
print(data.data.shape)
print(data.target.shape)
(178, 13)
(178,)

์ „์ฒด ๋ฐ์ดํ„ฐ์˜ ๊ฐœ์ˆ˜๋Š” 178๊ฐœ์ž…๋‹ˆ๋‹ค.

  • 8 ๋Œ€ 2๋กœ ํŠน์„ฑ ํ–‰๋ ฌ๊ณผ ํƒ€๊ฒŸ ๋ฒกํ„ฐ๋ฅผ ๋‚˜๋ˆ„์–ด ๋ณด๋„๋ก
  • ๋ฐ์ดํ„ฐ์˜ ๊ฐœ์ˆ˜์ด๋ฏ€๋กœ ์ •์ˆ˜๋งŒ ๊ฐ€๋Šฅ
  • 178๊ฐœ์˜ 80%๋ฉด 142.4์ด์ง€๋งŒ
  • ์ •์ˆ˜๋กœ ํ‘œํ˜„ํ•ด 142๊ฐœ,
  • ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋Š” ๋‚˜๋จธ์ง€ 36๊ฐœ๋กœ

fu12-18.png

ํŠน์„ฑ ํ–‰๋ ฌ๊ณผ ํƒ€๊ฒŸ ๋ฒกํ„ฐ๋Š” ndarray type์ด๋‹ˆ numpy์˜ ์Šฌ๋ผ์ด์‹ฑ์„ ์‚ฌ์šฉ

ํ›ˆ๋ จ๋ฐ์ดํ„ฐ

# ํŠน์„ฑ ํ–‰๋ ฌ๊ณผ ํƒ€๊ฒŸ ๋ฒกํ„ฐ๋Š” ndarray type์ด๋‹ˆ numpy์˜ ์Šฌ๋ผ์ด์‹ฑ์„ ์‚ฌ์šฉ

X_train = data.data[:142]
X_test = data.data[142:]
print(X_train.shape, X_test.shape)
(142, 13) (36, 13)

ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ

y_train = data.target[:142]
y_test = data.target[142:]
print(y_train.shape, y_test.shape)
(142,) (36,)

Training

# the training/test split is done; now train and predict again
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)
RandomForestClassifier()

Prediction

y_pred = model.predict(X_test)

์ •ํ™•๋„ ํ‰๊ฐ€.

from sklearn.metrics import accuracy_score

print("accuracy =", accuracy_score(y_test, y_pred))
accuracy = 0.9444444444444444

Splitting with train_test_split()

Separating training data from test data is an essential step: if you predict on the very data you trained on, accuracy will always come out at 100%. Naturally, scikit-learn provides this essential feature as an API: the train_test_split() function in model_selection.

from sklearn.model_selection import train_test_split

result = train_test_split(X, y, test_size=0.2, random_state=42)

์ธ์ž๋กœ ํŠน์„ฑ ํ–‰๋ ฌ X์™€ ํƒ€๊ฒŸ ๋ฒกํ„ฐ y๋ฅผ ๋„ฃ๊ณ  ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์˜ ๋น„์œจ์„ ๋„ฃ์–ด ํ‚ค์›Œ๋“œ ์ธ์ž๋กœ ์ง€์ •ํ•ด ์ค๋‹ˆ๋‹ค. 20%๋กœ ํ•ด ๋ณผ๊ฒŒ์š”. ๊ทธ๋ฆฌ๊ณ  ์šฐ๋ฆฌ๋Š” 0๋ฒˆ๋ถ€ํ„ฐ ์ˆœ์ฐจ์ ์œผ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„ํ• ํ–ˆ์ฃ ? ์‚ฌ์ดํ‚ท๋Ÿฐ์€ ๋žœ๋คํ•˜๊ฒŒ ๋ฐ์ดํ„ฐ๋ฅผ ์„ž์–ด์ฃผ๋Š” ๊ธฐ๋Šฅ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. random_state ์ธ์ž์— seed ๋ฒˆํ˜ธ๋ฅผ ์ž…๋ ฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. seed ๋ฒˆํ˜ธ๋Š” ์ž„์˜๋กœ ๊ฒฐ์ •ํ•  ์ˆ˜ ์žˆ๊ณ , ๊ฐ™์€ seed ๋ฒˆํ˜ธ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์–ธ์ œ๋“  ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

train_test_split()์€ ๋ฐ˜ํ™˜๊ฐ’์œผ๋กœ 4๊ฐœ์˜ ์›์†Œ๋กœ ์ด๋ฃจ์–ด์ง„ list๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. (*๋ฆฌ์ŠคํŠธ ์›์†Œ์˜ ๋ฐ์ดํ„ฐ ํƒ€์ž…์€ array์ž…๋‹ˆ๋‹ค.)

print(type(result))
print(len(result))
<class 'list'>
4

๊ฐ๊ฐ์˜ ๋ชจ์–‘ ํ™•์ธ

result[0].shape
(142, 13)
result[1].shape
(36, 13)
result[2].shape
(142,)
result[3].shape
(36,)

๋ชจ์–‘์„ ๋ณด๋‹ˆ ๊ฐ์ด ์žกํžˆ์‹œ๋‚˜์š”? ๋„ค 0๋ฒˆ ์›์†Œ๋ถ€ํ„ฐ ์ˆœ์„œ๋Œ€๋กœ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์šฉ ํŠน์„ฑ ํ–‰๋ ฌ, ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์šฉ ํŠน์„ฑ ํ–‰๋ ฌ, ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์šฉ ํƒ€๊ฒŸ ๋ฒกํ„ฐ, ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์šฉ ํƒ€๊ฒŸ ๋ฒกํ„ฐ์ž…๋‹ˆ๋‹ค.

์šฐ๋ฆฌ๋Š” ์ด ํ•จ์ˆ˜๋ฅผ ์ด๋Ÿฐ ์‹์œผ๋กœ unpacking ํ•ด์„œ ์‚ฌ์šฉ

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Practice

  • Write the complete code yourself: split the wine classification data into training and test datasets, then train and predict
# Load the dataset
# [[your code]]

# Split off the training dataset
# [[your code]]

# Train
# [[your code]]

# Predict
# [[your code]]

# Print the accuracy
# [[your code]]
# ๋ฐ์ดํ„ฐ์…‹ ๋กœ๋“œํ•˜๊ธฐ
data = load_wine()
# ํ›ˆ๋ จ์šฉ ๋ฐ์ดํ„ฐ์…‹ ๋‚˜๋ˆ„๊ธฐ
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2)
# ํ›ˆ๋ จํ•˜๊ธฐ
model = RandomForestClassifier()
model.fit(X_train, y_train)
# ์˜ˆ์ธกํ•˜๊ธฐ
y_pred = model.predict(X_test)
# ์ •๋‹ต๋ฅ  ์ถœ๋ ฅํ•˜๊ธฐ
print("์ •๋‹ต๋ฅ =", accuracy_score(y_test, y_pred))
accuracy = 0.9166666666666666

์ด์ •๋ฆฌ

data = load_wine()
# ํ›ˆ๋ จ์šฉ ๋ฐ์ดํ„ฐ์…‹ ๋‚˜๋ˆ„๊ธฐ
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2)
# ํ›ˆ๋ จํ•˜๊ธฐ
model = RandomForestClassifier()
model.fit(X_train, y_train)
# ์˜ˆ์ธกํ•˜๊ธฐ
y_pred = model.predict(X_test)
# ์ •๋‹ต๋ฅ  ์ถœ๋ ฅํ•˜๊ธฐ
print("์ •๋‹ต๋ฅ =", accuracy_score(y_test, y_pred))