Pre-training (content-rich domains) → fine-tuning (target, content-insufficient domains)
Review Encoder: multi-layer Transformer / User Encoder: fusion attention
Combines (1) user representations, (2) item representations, and (3) review interaction information
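The fusion-attention step above can be sketched roughly as follows. This is a minimal NumPy illustration under assumptions not stated in the notes: the query is formed from the user and item vectors, and attention runs over the review encoder's token representations. All names and shapes are hypothetical, not U-BERT's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fusion_attention(user_vec, item_vec, review_tokens):
    """Attend over review-token representations with a fused user+item query.

    Assumption for illustration: the fused query is simply user + item.
    """
    query = user_vec + item_vec
    scores = review_tokens @ query / np.sqrt(len(query))  # scaled dot-product
    weights = softmax(scores)                             # one weight per token
    return weights @ review_tokens                        # weighted sum -> fused rep

rng = np.random.default_rng(0)
d, n_tokens = 8, 5
u = rng.normal(size=d)                 # user representation (hypothetical)
v = rng.normal(size=d)                 # item representation (hypothetical)
R = rng.normal(size=(n_tokens, d))     # review-token reps from the review encoder
fused = fusion_attention(u, v, R)
print(fused.shape)
```

The fused vector then feeds the downstream rating-prediction head in stage 2.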
[stage 1] pre-training: content-rich domains
[stage 2] fine-tuning (rating prediction): target content-insufficient domains
Performance evaluation: without pre-training (the original BERT's weights) vs. pre-trained U-BERT
This was an informative article.