A residual connection is applied around each of the two sub-layers, followed by layer normalization.
The output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself.
To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{\text{model}} = 512$.
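As a rough NumPy sketch of this post-norm residual arrangement (the helper names `layer_norm` and `residual_sublayer`, and the dummy sub-layer, are illustrative and not from the paper):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    # Normalize each position over the feature dimension (d_model),
    # then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def residual_sublayer(x, sublayer, gamma, beta):
    # Post-norm residual connection: LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x), gamma, beta)

# Usage: wrap any sub-layer that maps (seq_len, d_model) -> (seq_len, d_model).
d_model = 512
x = np.random.randn(10, d_model)
gamma, beta = np.ones(d_model), np.zeros(d_model)
out = residual_sublayer(x, lambda h: h * 0.5, gamma, beta)   # dummy sub-layer
```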
2-2. Decoder
The decoder is also composed of a stack of $N = 6$ identical layers.
The decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack.
The self-attention sub-layer in the decoder stack is also modified to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
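A minimal sketch of such a causal (look-ahead) mask, assuming an additive-mask convention where blocked positions receive $-\infty$ before the softmax:

```python
import numpy as np

def causal_mask(seq_len):
    # Additive mask: 0 where attention is allowed, -inf above the diagonal,
    # so position i can only attend to positions <= i.
    upper = np.triu(np.ones((seq_len, seq_len)), k=1)
    return np.where(upper == 1, -np.inf, 0.0)

print(causal_mask(4))   # row i shows which positions position i may attend to
```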
3. Attention
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors.
The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
3-1. Scaled Dot-Product Attention
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$$
For large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean 0 and variance 1. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean 0 and variance $d_k$.
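Putting the formula together, a minimal NumPy sketch of scaled dot-product attention (single sequence, no batch dimension; the `mask` argument follows the additive convention sketched above):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # compatibility scores, scaled by 1/sqrt(d_k)
    if mask is not None:
        scores = scores + mask             # e.g. the additive causal mask above
    weights = softmax(scores, axis=-1)     # attention weights over the keys
    return weights @ V                     # weighted sum of the values
```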
3-2. Multi-Head Attention
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
In this work we employ $h = 8$ parallel attention layers, or heads. For each of these we use $d_k = d_v = d_{\text{model}}/h = 64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
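A simplified NumPy sketch of multi-head attention, reusing the `scaled_dot_product_attention` function above. Each head's projection is taken as a $d_k$-wide slice of a single $d_{\text{model}} \times d_{\text{model}}$ matrix, which is equivalent to concatenating per-head projection matrices; the parameter names are assumptions, not from the paper:

```python
import numpy as np

def multi_head_attention(x_q, x_kv, W_Q, W_K, W_V, W_O, h=8):
    # x_q: (seq_q, d_model) inputs for the queries; x_kv: (seq_k, d_model) for keys/values.
    # W_Q, W_K, W_V, W_O: (d_model, d_model); each head uses a d_k-wide slice.
    d_model = x_q.shape[-1]
    d_k = d_model // h                      # 512 / 8 = 64
    heads = []
    for i in range(h):
        s = slice(i * d_k, (i + 1) * d_k)
        Q = x_q @ W_Q[:, s]                 # (seq_q, d_k)
        K = x_kv @ W_K[:, s]                # (seq_k, d_k)
        V = x_kv @ W_V[:, s]                # (seq_k, d_v), with d_v = d_k
        heads.append(scaled_dot_product_attention(Q, K, V))
    return np.concatenate(heads, axis=-1) @ W_O   # concat heads, project back to d_model
```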
4. Position-wise Feed-Forward Networks
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
$$\mathrm{FFN}(x) = \max(0,\; xW_1 + b_1)\,W_2 + b_2$$
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $d_{\text{model}} = 512$, and the inner layer has dimensionality $d_{ff} = 2048$.
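A one-function NumPy sketch of the position-wise feed-forward network; the weight shapes are assumptions consistent with $d_{\text{model}} = 512$ and $d_{ff} = 2048$:

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # x: (seq_len, d_model); W1: (d_model, d_ff), W2: (d_ff, d_model).
    # The same two linear maps are applied to every position independently,
    # with a ReLU (max(0, .)) in between.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

# Usage with the paper's dimensions:
d_model, d_ff = 512, 2048
x = np.random.randn(10, d_model)
W1, b1 = np.random.randn(d_model, d_ff) * 0.02, np.zeros(d_ff)
W2, b2 = np.random.randn(d_ff, d_model) * 0.02, np.zeros(d_model)
out = position_wise_ffn(x, W1, b1, W2, b2)   # (10, 512)
```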
5. Embeddings and Softmax
We use learned embeddings to convert the input tokens and output tokens to vectors of dimension $d_{\text{model}}$.
We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities.
The same weight matrix is shared between the two embedding layers and the pre-softmax linear transformation, similar to prior work on weight tying.
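A toy NumPy sketch of this weight tying; the vocabulary size and initialization scale are illustrative, not taken from this section:

```python
import numpy as np

d_model = 512
vocab_size = 32000          # illustrative vocabulary size

E = np.random.randn(vocab_size, d_model) * 0.02   # single shared embedding matrix

def embed(token_ids):
    # Both the input and output embedding layers look up rows of the shared E.
    return E[token_ids]

def output_logits(decoder_out):
    # The pre-softmax linear transformation reuses E transposed (weight tying).
    return decoder_out @ E.T   # (seq_len, vocab_size)
```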
6. Positional Encoding
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence.
In this work, we use sine and cosine functions of different frequencies:

$$PE_{(pos,\,2i)} = \sin\!\left(pos / 10000^{2i/d_{\text{model}}}\right)$$
$$PE_{(pos,\,2i+1)} = \cos\!\left(pos / 10000^{2i/d_{\text{model}}}\right)$$

where $pos$ is the position and $i$ is the dimension.
We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
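A NumPy sketch of the sinusoidal encoding (the function name `positional_encoding` is mine; the resulting table is what gets combined with the token embeddings):

```python
import numpy as np

def positional_encoding(max_len, d_model=512):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]            # (max_len, 1)
    two_i = np.arange(0, d_model, 2)[None, :]    # even indices 0, 2, ..., i.e. 2i
    angles = pos / np.power(10000.0, two_i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # sine on even dimensions
    pe[:, 1::2] = np.cos(angles)                 # cosine on odd dimensions
    return pe

pe = positional_encoding(100)                    # (100, 512)
```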