from transformers import TrainingArguments

# First run: batch size 64, lr 5e-5, mixed precision (fp16) enabled
training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,            # keep at most 5 checkpoints on disk
    save_steps=100,                # checkpoint every 100 steps
    num_train_epochs=10,
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_steps=300,              # linear LR warmup over the first 300 steps
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',   # evaluate on a step schedule, not per epoch
    eval_steps=100,
    fp16=True,
    dataloader_num_workers=4,
    label_smoothing_factor=0.5,
)
# Second run: same schedule, but batch size halved to 32
training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,
    save_steps=100,
    num_train_epochs=10,
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_steps=300,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',
    eval_steps=100,
    fp16=True,
    dataloader_num_workers=4,
    label_smoothing_factor=0.5,
)
# Third run: lower learning rate (1e-5), longer training (15 epochs), fp16 disabled
training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,
    save_steps=100,
    num_train_epochs=15,
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_steps=300,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',
    eval_steps=100,
    dataloader_num_workers=4,
    label_smoothing_factor=0.5,
)
I hope to have grown a little more by tomorrow than I have today. See you then.
Thank you for reading!