from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,
    save_steps=1000,
    num_train_epochs=10,
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=128,
    warmup_steps=1000,
    weight_decay=0.1,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',
    eval_steps=1000,
    fp16=True,
    dataloader_num_workers=4,
    # label_smoothing_factor=0.2
)
# Second run: eval batch size raised to 256, weight decay lowered to 0.01.
training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,
    save_steps=1000,
    num_train_epochs=10,
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    warmup_steps=1000,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',
    eval_steps=1000,
    fp16=True,
    dataloader_num_workers=4,
    # label_smoothing_factor=0.2
)
# Third run: learning rate lowered to 1e-5, train batch size reduced to 4,
# label smoothing enabled.
training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,
    save_steps=1000,
    num_train_epochs=10,
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=256,
    warmup_steps=1000,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',
    eval_steps=1000,
    fp16=True,
    dataloader_num_workers=4,
    label_smoothing_factor=0.2
)
# Fourth run: train batch size reduced further, to 2.
training_args = TrainingArguments(
    output_dir='./results',
    save_total_limit=5,
    save_steps=1000,
    num_train_epochs=10,
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=256,
    warmup_steps=1000,
    weight_decay=0.01,
    logging_dir='./logs',
    logging_steps=100,
    evaluation_strategy='steps',
    eval_steps=1000,
    fp16=True,
    dataloader_num_workers=4,
    label_smoothing_factor=0.2
)
Hoping that tomorrow's me will have grown beyond today's, I'll see you tomorrow.
Thank you for reading!