Training Methodology
a. Transformer Encoder backbone
x_i : each review sentence
y_i : the sentiment label of each sentence
D={x_1, ..., x_n}
I_i=[CLS]+x_i+[SEP]
h̄_i = TransEnc(I_i)
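A minimal sketch of the two steps above: building the encoder input I_i = [CLS] + x_i + [SEP] and taking the sentence representation h̄_i from the encoder output. The tokenizer and `stub_trans_enc` here are hypothetical stand-ins for the actual Transformer encoder, not the paper's implementation:

```python
import numpy as np

def build_input(tokens):
    # I_i = [CLS] + x_i + [SEP]
    return ["[CLS]"] + tokens + ["[SEP]"]

def stub_trans_enc(input_tokens, dim=8, seed=0):
    # Stand-in for TransEnc: one hidden vector per input token.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((len(input_tokens), dim))

tokens = "the pizza was great".split()   # x_i, already tokenized
I_i = build_input(tokens)
H = stub_trans_enc(I_i)
h_bar_i = H[0]   # sentence representation, read off at the [CLS] position
```

Taking the [CLS] hidden state as h̄_i is one common convention; pooling over all token states would be an alternative.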
b. Supervised Contrastive Learning
h̄_i : sentence representation of x_i
W_s : trainable sentiment perceptron
s_i = W_s h̄_i : sentiment representation of x_i
P(c | i) : likelihood that s_c is most similar to s_i
τ : temperature of the softmax
sim(·, ·) : similarity metric
L_SCL : supervised contrastive loss
N_{y_i} : number of samples in B with the same category y_i
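The definitions above can be sketched as a standard supervised contrastive loss: for each sample i, every other in-batch sample with the same label y_i is a positive, the similarities are softmax-normalized with temperature τ, and the positive terms are averaged over N_{y_i} − 1. A toy numpy version, using dot product as the (assumed) similarity metric:

```python
import numpy as np

def supcon_loss(S, y, tau=0.1):
    """Supervised contrastive loss over a batch.

    S : (n, d) sentiment representations s_i = W_s h̄_i
    y : length-n list of sentiment labels
    """
    n = len(y)
    sim = S @ S.T / tau   # pairwise similarity scaled by temperature
    total = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and y[j] == y[i]]
        if not pos:
            continue   # no positives for this sample in the batch
        others = [k for k in range(n) if k != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # average log-likelihood of the positives, len(pos) = N_{y_i} - 1
        total += -sum(sim[i, j] - log_denom for j in pos) / len(pos)
    return total / n
```

Pulling same-label representations together lowers the loss: a batch whose representations cluster by label scores lower than one where labels are mixed across clusters.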
c. Review Reconstruction
d. Masked Aspect Prediction
e. Joint Training
x_ab: sentences in D_ab
w_a : one of the aspects appearing in x_ab
y_ab : the sentiment for that aspect predicted by the model
h̄^ab_a : aspect-based representation
s^ab : sentiment representation
a. Aspect-based Representation
b. Representation Combination
To obtain the sentiment representation s^ab from the sentence representation, the same sentiment perceptron W_s is reused. s^ab is then concatenated with the aspect-based representation h̄^ab_a to predict the aspect-level sentiment polarity.
W_a: trainable parameter matrix.
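The combination step can be sketched as follows: apply the shared W_s to the sentence representation, concatenate the result with h̄^ab_a, and classify with W_a followed by a softmax. All dimensions and the random vectors below are toy stand-ins, and the 3-class output (assumed positive/neutral/negative) is an illustrative choice:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8                                  # toy hidden size
h_bar = rng.standard_normal(d)         # sentence representation
h_bar_a = rng.standard_normal(d)       # aspect-based representation h̄^ab_a
W_s = rng.standard_normal((d, d))      # sentiment perceptron, shared with pre-training
W_a = rng.standard_normal((3, 2 * d))  # trainable matrix over the concatenation

s_ab = W_s @ h_bar                         # sentiment representation s^ab
combined = np.concatenate([s_ab, h_bar_a]) # [s^ab ; h̄^ab_a]
p = softmax(W_a @ combined)                # aspect-level polarity distribution
```

Concatenation lets the classifier weight sentence-level sentiment and aspect-specific context independently via the two halves of W_a.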
a. ABSA Dataset
b. Retrieved External Corpora