💠 AIchemist 4th Session | Classification (2)

yellowsubmarine372 · October 9, 2023

AIchemist

๋ชฉ๋ก ๋ณด๊ธฐ
6/14
post-thumbnail

05. GBM(Gradient Boosting Machine)

Boosting algorithms
Train multiple weak learners sequentially, assigning higher weights to misclassified data at each step so that errors are progressively corrected.

🅰 AdaBoost (Adaptive Boosting)
🅱 Gradient Boost

  • AdaBoost

  1. A weak learner performs classification with decision criterion 1.
  2. Higher weights are assigned to the misclassified data.
  3. Steps 1–2 are repeated.
  4. AdaBoost makes its final prediction by combining all of the decision criteria produced sequentially by the weak learners, with weights applied to the errors.
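The steps above can be sketched with scikit-learn's AdaBoostClassifier — a minimal sketch, assuming the Wisconsin breast cancer data used later in this post; the parameter values are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Each boosting round fits a weak learner (a depth-1 decision tree by default)
# and raises the weights of the samples the previous round misclassified.
ada_clf = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
ada_clf.fit(X_train, y_train)
print('AdaBoost accuracy: {:.4f}'.format(ada_clf.score(X_test, y_test)))
```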
  • GBM

Similar to AdaBoost, but the key difference is that GBM updates the weights using gradient descent.

A technique that derives the weight updates through repeated iterations so as to minimize the error.
GBM generally achieves better predictive performance than random forest,
but it takes longer to train and demands more hyperparameter tuning effort.

  • GBM hyperparameters
loss : the loss function (used as the evaluation criterion)
learning_rate : the learning rate GBM applies at each training step
n_estimators : the number of weak learners
subsample : the fraction of data each weak learner samples for training
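Assuming scikit-learn's GradientBoostingClassifier as the implementation, the parameters above map onto its constructor like this (a minimal sketch; the values are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# loss is left at its default (log-loss for classification);
# the other three parameters correspond to the list above.
gb_clf = GradientBoostingClassifier(learning_rate=0.05,  # learning rate per boosting step
                                    n_estimators=400,    # number of weak learners
                                    subsample=0.8,       # sampling fraction per weak learner
                                    random_state=0)
gb_clf.fit(X_train, y_train)
print('GBM accuracy: {:.4f}'.format(gb_clf.score(X_test, y_test)))
```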

06. XGBoost (eXtreme Gradient Boosting)

One of the most popular algorithms in tree-ensemble learning
✔ Excellent predictive performance
✔ Faster execution time than GBM
✔ Overfitting regularization
✔ Tree pruning
✔ Built-in cross validation
✔ Built-in handling of missing values

Whoa... it's just the strongest, isn't it???

▶ The early XGBoost, built on the standalone XGBoost framework, is the Python wrapper XGBoost module; the module that integrates with scikit-learn is the scikit-learn wrapper XGBoost module.


  • XGBoost hyperparameters

General parameters : default parameters whose values are rarely changed
Booster parameters : parameters for tree optimization, boosting, and so on (the main tuning parameters)
Learning task parameters : parameters set when running the learning task

[์ฃผ์š” ์ผ๋ฐ˜ ํŒŒ๋ผ๋ฏธํ„ฐ]
booster
silent 
nthread

[Main booster parameters]
eta[default=0.3, alias: learning_rate]
num_boost_rounds
min_child_weight[default=1]
gamma[default=0, alias: min_split_loss]
max_depth[default=6]
subsample[default=1]
colsample_bytree[default=1]
lambda[default=1, alias: reg_lambda]
alpha[default=0, alias: reg_alpha]
scale_pos_weight[default=1]

[Learning task parameters]
objective
binary:logistic
multi:softmax
multi:softprob
eval_metric

๊ณผ์ ํ•ฉ ํ•ด๊ฒฐ ํŒŒ๋ผ๋ฏธํ„ฐ

  • Lower eta (when eta is lowered, raise num_boost_rounds)
  • Lower max_depth
  • Raise min_child_weight
  • Raise gamma
  • Adjust subsample and colsample_bytree
  • ํŒŒ์ด์ฌ ๋ž˜ํผ XGBoost ์ ์šฉ - ์œ„์Šค์ฝ˜์‹  ์œ ๋ฐฉ์•” ์˜ˆ์ธก

Early stopping

A feature for improving execution speed.
If the prediction error no longer improves, the iterations stop before running to completion, reducing execution time.

(1) Early stopping requires a separate validation data set

Split 80% of the Wisconsin breast cancer data set into training data and 20% into test data, then split the training data again into 90% for final training and 10% for validation.

(2) DMatrix

Create the training, validation, and test data sets as DMatrix objects, XGBoost's dedicated data structure

# If an older XGBoost version cannot create a DMatrix from a DataFrame, convert to NumPy with X_train.values
# Create the training, validation, and test DMatrix objects
dtr = xgb.DMatrix(data=X_tr, label=y_tr)
dval = xgb.DMatrix(data=X_val, label=y_val)
dtest = xgb.DMatrix(data=X_test, label= y_test)

(3) Setting the hyperparameters

params = {'max_depth':3,
         'eta': 0.05,
         'objective': 'binary:logistic',
          'eval_metric': 'logloss'
         }
num_rounds = 400

(4) Running with early stopping

The eval_metric evaluation metric (logloss for classification) measures the prediction error on the evaluation data set

# Label the training data set 'train' and the evaluation data set 'eval'
eval_list = [(dtr,'train'),(dval, 'eval')] # listing only eval_list = [(dval, 'eval')] also works

# Pass the hyperparameters and the early-stopping parameter to the train() function
xgb_model = xgb.train(params = params, dtrain=dtr, num_boost_round=num_rounds, early_stopping_rounds=50, evals=eval_list)
[Output]

[0]	train-logloss:0.65016	eval-logloss:0.66183
[1]	train-logloss:0.61131	eval-logloss:0.63609
[2]	train-logloss:0.57563	eval-logloss:0.61144
[3]	train-logloss:0.54310	eval-logloss:0.59204
[4]	train-logloss:0.51323	eval-logloss:0.57329
[5]	train-logloss:0.48447	eval-logloss:0.55037
[6]	train-logloss:0.45796	eval-logloss:0.52930
[7]	train-logloss:0.43436	eval-logloss:0.51534
[8]	train-logloss:0.41150	eval-logloss:0.49718
[9]	train-logloss:0.39027	eval-logloss:0.48154
[10]	train-logloss:0.37128	eval-logloss:0.46990
[11]	train-logloss:0.35254	eval-logloss:0.45474
[12]	train-logloss:0.33528	eval-logloss:0.44229
[13]	train-logloss:0.31892	eval-logloss:0.42961
[14]	train-logloss:0.30439	eval-logloss:0.42065
[15]	train-logloss:0.29000	eval-logloss:0.40958
[16]	train-logloss:0.27651	eval-logloss:0.39887
...
  • Making predictions with the xgboost model

xgboost์˜ predict()๋Š” ์˜ˆ์ธก ๊ฒฐ๊ด๊ฐ’์ด ์•„๋‹Œ ์˜ˆ์ธก ๊ฒฐ๊ณผ๋ฅผ ์ถ”์ •ํ•  ์ˆ˜ ์žˆ๋Š” ํ™•๋ฅ  ๊ฐ’์„ ๋ฐ˜ํ™˜
์˜ˆ์ธก ํ™•๋ฅ ์ด 0.5๋ณด๋‹ค ํฌ๋ฉด 1, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด 0์œผ๋กœ ์˜ˆ์ธก๊ฐ’์„ ๊ฒฐ์ •ํ•˜๋Š” ๋กœ์ง์„ ์ถ”๊ฐ€ํ•˜๋ฉด ๋จ

pred_probs = xgb_model.predict(dtest)
print('Showing the first 10 predict() results (probability values)')
print(np.round(pred_probs[:10], 3))

# Set the prediction to 1 if the probability > 0.5, else 0, and store in the list object preds
preds = [1 if x > 0.5 else 0 for x in pred_probs]
print('First 10 predictions:', preds[:10])
[Output]

Showing the first 10 predict() results (probability values)
[0.845 0.008 0.68  0.081 0.975 0.999 0.998 0.998 0.996 0.001]
First 10 predictions: [1, 0, 1, 0, 1, 1, 1, 1, 1, 0]
  • xgboost visualization

plot_importance() visualizes the feature importance graph

The F score indicates how frequently the feature was used in tree splits

  • Finding optimal xgboost parameters

The cv() API performs cross validation on the data set, after which the optimal parameters can be determined

07. LightGBM

LightGBM's biggest advantage is that it takes far less time to train than XGBoost. Its memory usage is also comparatively low.

  • Leaf-wise tree growth

Most implementations use level-wise (balanced) tree growth to prevent overfitting.
LightGBM's leaf-wise growth does not balance the tree; instead, it keeps splitting the leaf node with the largest loss

→ Expected to reduce the prediction error loss more than level-wise growth

LightGBM์˜ XGBoost ๋Œ€๋น„ ์žฅ์ 

  • ๋” ๋น ๋ฅธ ํ•™์Šต๊ณผ ์˜ˆ์ธก ์ˆ˜ํ–‰ ์‹œ๊ฐ„
  • ๋” ์ž‘์€ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰
  • ์นดํ…Œ๊ณ ๋ฆฌํ˜• ํ”ผ์ฒ˜์˜ ์ž๋™ ๋ณ€ํ™˜
  • LightGBM ํ•˜์ดํผ ํŒŒ๋ผ๋ฏธํ„ฐ
[Main parameters]
num_iterations : the number of trees to iterate over
learning_rate : the learning-rate value for each update
max_depth
min_data_in_leaf
num_leaves : the maximum number of leaves a single tree can have
boosting
bagging_fraction : the fraction of data to sample
feature_fraction : the fraction of features selected at random
lambda_l2 : value controlling L2 regularization
lambda_l1 : value controlling L1 regularization


[Learning Task parameters]
objective : defines the loss function to be minimized

๊ณผ์ ํ•ฉ ๋ฐฉ์ง€ ํŒŒ๋ผ๋ฏธํ„ฐ

  • num_leaves ์ตœ๋Œ€ ๋ฆฌํ”„ ๊ฐœ์ˆ˜ ์ œํ•œ
  • min_child_samples
  • mat_depth ๋ช…์‹œ์ ์œผ๋กœ ๊นŠ์ด ํฌ๊ธฐ๋ฅผ ์ œํ•œ

ํŒŒ์ด์ฌ ๋ž˜ํผ LightGBM๊ณผ ์‚ฌ์ดํ‚ท๋Ÿฐ ๋ž˜ํผ XGBoost, LightGBM ํ•˜์ดํผ ํŒŒ๋ฆฌ๋ฏธํ„ฐ ๋น„๊ต
<ํ‘œ ์‚ฌ์ง„>

  • Applying LightGBM – predicting Wisconsin breast cancer
# Import LGBMClassifier from lightgbm, LightGBM's Python package
from lightgbm import LGBMClassifier

import pandas as pd
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

dataset = load_breast_cancer()

cancer_df = pd.DataFrame(data=dataset.data, columns=dataset.feature_names)
cancer_df['target']=dataset.target
X_features = cancer_df.iloc[:, :-1]
y_label = cancer_df.iloc[:,-1]

# Extract 80% of the whole data set as training data and 20% as test data
X_train, X_test, y_train, y_test = train_test_split(X_features, y_label, test_size=0.2, random_state=156)

# Split the X_train, y_train created above again: 90% for training, 10% for validation
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=156)

# Set n_estimators to 400, the same as for XGBoost.
lgbm_wrapper = LGBMClassifier(n_estimators=400, learning_rate=0.05)

# LightGBM๋„ XGBoost์™€ ๋™์ผํ•˜๊ฒŒ ์กฐ๊ธฐ์ค‘๋‹จ ์ˆ˜ํ–‰ ๊ฐ€๋Šฅ 
evals = [(X_tr, y_tr), (X_val, y_val)]
lgbm_wrapper.fit(X_tr, y_tr, early_stopping_rounds=50, eval_metric="logloss", eval_set=evals, verbose=True)
preds = lgbm_wrapper.predict(X_test)
pred_proba = lgbm_wrapper.predict_proba(X_test)[:,1]
[Output]

[1]	training's binary_logloss: 0.625671	valid_1's binary_logloss: 0.628248
[2]	training's binary_logloss: 0.588173	valid_1's binary_logloss: 0.601106
[3]	training's binary_logloss: 0.554518	valid_1's binary_logloss: 0.577587
[4]	training's binary_logloss: 0.523972	valid_1's binary_logloss: 0.556324
[5]	training's binary_logloss: 0.49615	valid_1's binary_logloss: 0.537407
[6]	training's binary_logloss: 0.470108	valid_1's binary_logloss: 0.519401
[7]	training's binary_logloss: 0.446647	valid_1's binary_logloss: 0.502637
[8]	training's binary_logloss: 0.425055	valid_1's binary_logloss: 0.488311
[9]	training's binary_logloss: 0.405125	valid_1's binary_logloss: 0.474664
[10]	training's binary_logloss: 0.386526	valid_1's binary_logloss: 0.461267

LightGBM๋„ ์กฐ๊ธฐ ์ค‘๋‹จ์ด ๊ฐ€๋Šฅํ•˜๋‹ค

08. Hyperparameter tuning with HyperOpt, based on Bayesian optimization

Up to now we have applied scikit-learn's Grid Search approach for hyperparameter tuning

  • ๋ฒ ์ด์ง€์•ˆ ์ตœ์ ํ™” ๊ฐœ์š”

๋ฒ ์ด์ง€์•ˆ ์ตœ์ ํ™”๋Š” ๋ชฉ์ ํ•จ์ˆ˜ ์‹์„ ์ œ๋Œ€๋กœ ์•Œ ์ˆ˜ ์—†๋Š” ๋ธ”๋ž™ ๋ฐ•์Šค ํ˜•ํƒœ์˜ ํ•จ์ˆ˜์—์„œ (์ตœ์ข… ํ•จ์ˆ˜๋ฅผ ๋ชจ๋ฆ„) ์ตœ๋Œ€ ๋˜๋Š” ์ตœ์†Œ ํ•จ์ˆ˜ ๋ฐ˜ํ™˜ ๊ฐ’์„ ๋งŒ๋“œ๋Š” ์ตœ์  ์ž…๋ ฅ๊ฐ’์„ ๊ฐ€๋Šฅํ•œ ์ ์€ ์‹œ๋„๋ฅผ ํ†ตํ•ด ๋น ๋ฅด๊ณ  ํšจ๊ณผ์ ์œผ๋กœ ์ฐพ์•„์ฃผ๋Š” ๋ฐฉ์‹

๋ฒ ์ด์ง€์•ˆ ์ตœ์ ํ™”

๋ฒ ์ด์ง€์•ˆ ํ™•๋ฅ ์— ๊ธฐ๋ฐ˜์„ ๋‘๊ณ  ์žˆ๋Š” ์ตœ์ ํ™” ๊ธฐ๋ฒ•.
๋Œ€์ฒด ๋ชจ๋ธ์€ ํš๋“ ํ•จ์ˆ˜๋กœ๋ถ€ํ„ฐ ์ตœ์ ํ•จ์ˆ˜๋ฅผ ์˜ˆ์ธกํ•  ์ˆ˜ ์žˆ๋Š” ์ž…๋ ฅ๊ฐ’์„ ์ถ”์ฒœ๋ฐ›์€ ๋’ค ์ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ตœ์  ํ•จ์ˆ˜ ๋ชจ๋ธ์„ ๊ฐœ์„ ํ•ด ๋‚˜๊ฐ€๋ฉฐ, ํš๋“ํ•จ์ˆ˜๋Š” ๊ฐœ์„ ๋œ ์˜ˆ์ธกํ•  ์ˆ˜ ์žˆ๋Š” ์ž…๋ ฅ๊ฐ’์„ ์ถ”์ฒœ๋ฐ›์€ ๋’ค ์ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ตœ์  ์ž…๋ ฅ๊ฐ’์„ ๊ณ„์‚ฐ

  • Using HyperOpt

HyperOpt's main logic ⭐

1. Set up the search space of input variable names and values
2. Define the objective function
3. Infer the optimal input values that yield the minimum return value of the objective function

(1) Setting up the search space of input variable names and values

{'input_variable_name': hp.quniform(label, low, high, q)}

from hyperopt import hp

# Input variable x over -10 to 10 with step 1, and input variable y over -15 to 15 with step 1
search_space ={'x':hp.quniform('x', -10, 10, 1), 'y':hp.quniform('y', -15, 15, 1)}

(2) Creating the objective function

The objective function takes a dictionary as its argument and returns a specific value

from hyperopt import STATUS_OK

# Create the objective function: it takes a dictionary holding the variable values and search space as its argument and returns a specific value
def objective_func(search_space):
    x= search_space['x']
    y= search_space['y']
    retval = x**2 - 20*y
    
    return retval

๋ฐ˜ํ™˜๊ฐ’์ด ์ตœ์†Œ๊ฐ€ ๋  ์ˆ˜ ์žˆ๋Š” ์ตœ์ ์˜ ์ž…๋ ฅ๊ฐ’์„ ์ฐพ์•„์•ผ ๋จ

📌 fmin()

fmin(objective, space, algo, max_evals, trials)

space : the search-space dictionary
algo : the Bayesian optimization algorithm to apply
max_evals : the number of input trials
trials : (important) used to store the input values tried while searching for the optimum, together with each input's objective-function return value

✔ The Trials object

Has results and vals attributes.
Through the results and vals attributes you can inspect, for each run of HyperOpt's fmin() function, the function return values and the input variable values as the optimization progresses.

  1. results
    results is returned on every iteration:
    {'loss': function return value, 'status': return status}

  2. vals
    Holds the input variable values fed in on each iteration:
    {'input_variable_name': list of values used in each run}



  • Optimizing XGBoost hyperparameters with HyperOpt

The difference from before is that the objective function now trains an XGBoost model.

Points to watch:

  1. The objective arguments passed from the search space into the objective function are floats, so cast them to int when setting XGBClassifier's integer hyperparameters.
  2. HyperOpt's objective function must be optimized so that it returns a minimum, so for metrics where larger is better, such as accuracy, multiply by -1 before returning.

(1) Creating the objective function

from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from hyperopt import STATUS_OK

# Every value passed in from the search_space given to fmin() is a float.
# XGBClassifier's integer hyperparameters must be cast to int.
# Accuracy is better the higher it is, so multiply by -1 so that larger accuracy values become smaller (minimized).
def objective_func(search_space):
    # To save execution time, n_estimators is reduced to 100
    xgb_clf = XGBClassifier(n_estimators=100, max_depth=int(search_space['max_depth']),
                            min_child_weight=int(search_space['min_child_weight']),
                            learning_rate=search_space['learning_rate'],
                            colsample_bytree=search_space['colsample_bytree'],
                            eval_metric='logloss')
    accuracy = cross_val_score(xgb_clf, X_train, y_train, scoring='accuracy', cv=3)
    
    # accuracy holds cv=3 accuracy scores in a list; average them and multiply by -1 before returning.
    return {'loss':-1 * np.mean(accuracy), 'status': STATUS_OK}

(2) Deriving the optimal hyperparameters with fmin()

from hyperopt import fmin, tpe, Trials

trial_val = Trials()
best = fmin(fn=objective_func,
            space=xgb_search_space,
            algo=tpe.suggest,
            max_evals=50, # maximum number of evaluations
            trials=trial_val, rstate=np.random.default_rng(seed=9))
print('best:', best)

(3) ๋„์ถœ๋œ ์ตœ์  ํ•˜์ดํผ ํŒŒ๋ผ๋ฏธํ„ฐ๋“ค์„ ์ด์šฉํ•ด XGBClassifier ์žฌํ•™์Šต

xgb_wrapper = XGBClassifier(n_estimators=400,
                            learning_rate=round(best['learning_rate'], 5),
                            max_depth=int(best['max_depth']),
                            min_child_weight=int(best['min_child_weight']),
                            colsample_bytree=round(best['colsample_bytree'], 5)
                           )

evals = [(X_tr, y_tr), (X_val, y_val)]
xgb_wrapper.fit(X_tr, y_tr, early_stopping_rounds=50, eval_metric='logloss',
                eval_set=evals, verbose=True)

preds = xgb_wrapper.predict(X_test)
pred_proba = xgb_wrapper.predict_proba(X_test)[:, 1]

get_clf_eval(y_test, preds, pred_proba)
[Output]

[0]	validation_0-logloss:0.58942	validation_1-logloss:0.62048
[1]	validation_0-logloss:0.50801	validation_1-logloss:0.55913
[2]	validation_0-logloss:0.44160	validation_1-logloss:0.50928
[3]	validation_0-logloss:0.38734	validation_1-logloss:0.46815
[4]	validation_0-logloss:0.34224	validation_1-logloss:0.43913
[5]	validation_0-logloss:0.30425	validation_1-logloss:0.41570
[6]	validation_0-logloss:0.27178	validation_1-logloss:0.38953
[7]	validation_0-logloss:0.24503	validation_1-logloss:0.37317
[8]	validation_0-logloss:0.22050	validation_1-logloss:0.35628
[9]	validation_0-logloss:0.19873	validation_1-logloss:0.33798
[10]	validation_0-logloss:0.17945	validation_1-logloss:0.32463
[11]	validation_0-logloss:0.16354	validation_1-logloss:0.31384
[12]	validation_0-logloss:0.15032	validation_1-logloss:0.30607

11. Stacking ensemble

Stacking ensemble
A method that makes the final prediction based on the predictions of individual models

Difference from other ensembles
A meta model that learns from the predictions of the individual models to derive the final result

  • Structure

Individual base models
Final meta model : learns from the predictions of the individual base models to produce the final result

-> The individual models' predictions are combined in stacked form and used as the input to the final meta model

  • CV-set-based stacking

1) Step 1 : train the individual base models and produce predictions

Split the training data into K folds.
Train the base model with K-1 folds as training data (repeated K times).
The predictions on the single held-out validation fold (across the K folds) -> training data for the final meta model.
The average of the K predictions on the test data -> test data for the final meta model.

2) Step 2 : train the final meta model

Stack the training data produced by each base model -> the final meta model's training data set.
Stack the test data produced by each base model -> the final meta model's test data set.
Train on the final training data together with the original training labels.
Predict on the final test data -> evaluate against the original test labels.
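The two steps can be sketched for a single base model; extending to several models just stacks more columns. The helper name get_stacking_datasets is made up for illustration:

```python
import numpy as np
from sklearn.model_selection import KFold

def get_stacking_datasets(model, X_train_n, y_train_n, X_test_n, n_folds=5):
    # Hypothetical helper: builds the meta model's train/test features
    # from one base model, following Step 1 above.
    kf = KFold(n_splits=n_folds)
    train_fold_pred = np.zeros((X_train_n.shape[0], 1))  # meta-model training data
    test_pred = np.zeros((X_test_n.shape[0], n_folds))   # one test prediction per fold

    for fold, (tr_idx, val_idx) in enumerate(kf.split(X_train_n)):
        model.fit(X_train_n[tr_idx], y_train_n[tr_idx])  # train on K-1 folds
        # predict the held-out validation fold -> meta training rows
        train_fold_pred[val_idx, 0] = model.predict(X_train_n[val_idx])
        test_pred[:, fold] = model.predict(X_test_n)     # predict the test data

    # Average the K test predictions -> meta-model test data
    test_pred_mean = np.mean(test_pred, axis=1).reshape(-1, 1)
    return train_fold_pred, test_pred_mean
```

Step 2 then concatenates these columns from all base models with np.concatenate(..., axis=1) and fits the meta model on the stacked training data against the original y_train.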
