Understanding Positional Embeddings in Transformers

Introduction

Positional embeddings are crucial in transformer architectures for capturing sequential information. They are added to the input embeddings to give the model information about where each token sits in a sequence.

Transformers do not inherently track the order of tokens: the self-attention mechanism is permutation-equivariant, so on its own it treats the input as an unordered set. To overcome this limitation, positional embeddings assign each token a unique positional encoding based on its position in the input sequence.

The Need for Positional Information

Unlike recurrent neural networks (RNNs) or convolutional neural networks (CNNs), which have built-in mechanisms to capture sequential information, the transformer architecture processes all tokens in parallel. While this parallelization leads to significant computational efficiency gains, it comes at a cost: the model has no inherent notion of token order.

Consider the two sentences:

"The dog chased the cat."
"The cat chased the dog."

Both contain exactly the same tokens. Without positional information, a transformer would treat these sentences identically, as it would only see an unordered set of tokens. Positional embeddings solve this problem by encoding the position of each token directly into its representation.

Mathematical Formulation

The original transformer paper ("Attention is All You Need" by Vaswani et al., 2017) introduced sinusoidal positional encodings. These embeddings are typically created using sine and cosine functions with different frequencies to represent different positions in the sequence.

Sinusoidal Positional Encoding

The formula for positional encoding in transformers can be represented as:

For even dimensions (2i):

\[PE(pos, 2i) = \sin\left(\frac{pos}{10000^{\frac{2i}{d_{model}}}}\right)\]

For odd dimensions (2i+1):

\[PE(pos, 2i+1) = \cos\left(\frac{pos}{10000^{\frac{2i}{d_{model}}}}\right)\]

Where:

  - \(pos\) is the position of the token in the sequence
  - \(i\) indexes the embedding dimension pairs
  - \(d_{model}\) is the dimensionality of the model's embeddings
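
To make the formulas concrete, here is a minimal NumPy sketch of the encoding matrix; the function name and the example sizes (max_len = 128, d_model = 512) are illustrative choices, not values from the original paper.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """Return a (max_len, d_model) matrix of sinusoidal positional encodings."""
    pos = np.arange(max_len)[:, None]              # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]           # (1, d_model / 2)
    angle = pos / 10000 ** (2 * i / d_model)       # pos / 10000^(2i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)                    # even dimensions: sine
    pe[:, 1::2] = np.cos(angle)                    # odd dimensions: cosine
    return pe

pe = sinusoidal_positional_encoding(max_len=128, d_model=512)
print(pe.shape)  # (128, 512)
```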

Why Sinusoidal Functions?

The choice of sinusoidal functions is not arbitrary. This design has several important properties:

  1. Unique Encodings: Each position receives a unique encoding vector across all dimensions.
  2. Linear Relationships: The encoding allows the model to learn relative positions easily. For any fixed offset \(k\), \(PE(pos + k)\) can be represented as a linear function of \(PE(pos)\) (a short numerical check follows this list), because:

    \[\sin(\alpha + \beta) = \sin(\alpha)\cos(\beta) + \cos(\alpha)\sin(\beta)\]

    \[\cos(\alpha + \beta) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)\]

  3. Bounded Values: Sine and cosine functions always produce values in the range \([-1, 1]\), which helps with numerical stability during training.
  4. Extrapolation to Longer Sequences: The model can potentially generalize to sequence lengths longer than those seen during training, as the positional encoding function is defined for any position value.
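
As a quick sanity check of the linear-relationship property (item 2 above), the snippet below verifies numerically that \(PE(pos + k)\) for one (sine, cosine) pair is a fixed rotation of \(PE(pos)\); the pair index, position, and offset are arbitrary illustrative choices.

```python
import numpy as np

# PE(pos + k) for one (sin, cos) pair equals a fixed 2x2 linear map (a rotation)
# applied to PE(pos), as implied by the angle-addition identities above.
d_model, i = 512, 3                        # arbitrary dimension pair
theta = 1.0 / 10000 ** (2 * i / d_model)   # frequency of that pair
pos, k = 17, 5                             # arbitrary position and offset
pe_pos = np.array([np.sin(pos * theta), np.cos(pos * theta)])
pe_shifted = np.array([np.sin((pos + k) * theta), np.cos((pos + k) * theta)])
rotation = np.array([[np.cos(k * theta), np.sin(k * theta)],
                     [-np.sin(k * theta), np.cos(k * theta)]])
print(np.allclose(rotation @ pe_pos, pe_shifted))  # True
```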

Alternative Approaches

While sinusoidal encodings are elegant and effective, researchers have explored several alternatives:

Learned Positional Embeddings

Instead of using fixed sinusoidal functions, some models (like BERT) use learned positional embeddings. These are trainable parameters initialized randomly and optimized during training. The main trade-offs are:

  - Flexibility: the model can learn whatever positional patterns best fit the training data, rather than being constrained to a fixed functional form.
  - Limited extrapolation: positions beyond the maximum sequence length seen during training have no trained embedding, so the model cannot naturally handle longer sequences.
  - Extra parameters: the embedding table adds \(\text{max\_len} \times d_{model}\) parameters to the model.
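
A minimal PyTorch sketch of a learned positional embedding layer, assuming the usual additive scheme; the class and attribute names are illustrative.

```python
import torch
import torch.nn as nn

class LearnedPositionalEmbedding(nn.Module):
    """Trainable position embeddings in the style of BERT (a minimal sketch)."""
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)   # one trainable vector per position

    def forward(self, token_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: (batch, seq_len, d_model); seq_len must not exceed max_len
        positions = torch.arange(token_emb.size(1), device=token_emb.device)
        return token_emb + self.pos_emb(positions)      # broadcast over the batch dimension
```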

Relative Positional Encodings

Models like Transformer-XL and T5 use relative positional encodings, which encode the relative distance between tokens rather than their absolute positions. This approach has shown improved performance on tasks requiring understanding of relative positions.
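
The sketch below illustrates the general idea with a simplified learned bias over clipped relative distances; it is not the exact bucketing scheme of T5 or the formulation used by Transformer-XL.

```python
import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    """Simplified relative-position bias added to attention logits (illustrative only)."""
    def __init__(self, num_heads: int, max_distance: int = 128):
        super().__init__()
        self.max_distance = max_distance
        # one learned scalar per (clipped relative distance, head)
        self.bias = nn.Embedding(2 * max_distance + 1, num_heads)

    def forward(self, seq_len: int) -> torch.Tensor:
        pos = torch.arange(seq_len)
        rel = pos[None, :] - pos[:, None]                   # relative distance j - i
        rel = rel.clamp(-self.max_distance, self.max_distance) + self.max_distance
        # (num_heads, seq_len, seq_len), added to the attention scores
        return self.bias(rel).permute(2, 0, 1)
```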

Rotary Position Embedding (RoPE)

Rotary Position Embedding (RoPE), introduced by Su et al. (2021) and adopted by modern LLMs such as GPT-NeoX, GPT-J, LLaMA, and PaLM, represents a significant advancement in positional encoding. Rather than adding a position vector to the input, RoPE encodes position by rotating pairs of query and key dimensions through position-dependent angles.

Mathematical Foundation of RoPE

RoPE applies a rotation matrix to encode absolute position while maintaining relative position information. For a query vector \(\mathbf{q}\) and key vector \(\mathbf{k}\) at positions \(m\) and \(n\) respectively, RoPE applies:

\[\mathbf{q}_m = \mathbf{R}_{\Theta, m} \mathbf{q}\]

\[\mathbf{k}_n = \mathbf{R}_{\Theta, n} \mathbf{k}\]

Where \(\mathbf{R}_{\Theta, m}\) is a block-diagonal rotation matrix parameterized by position \(m\) and the frequency set \(\Theta = \{\theta_i = 10000^{-2i/d},\ i \in \{0, 1, \ldots, d/2 - 1\}\}\).

The key insight is that the inner product \(\mathbf{q}_m^T \mathbf{k}_n\) only depends on the relative position \((m-n)\), enabling the model to learn relative positional relationships naturally:

\[\mathbf{q}_m^T \mathbf{k}_n = \mathbf{q}^T \mathbf{R}_{\Theta, n-m} \mathbf{k}\]
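
A NumPy sketch of this rotation, followed by a numerical check that the inner product depends only on the offset \(m - n\); pairing adjacent dimensions and the 64-dimensional vectors are illustrative choices.

```python
import numpy as np

def rope_rotate(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Rotate consecutive dimension pairs of x by position-dependent angles (d must be even)."""
    d = x.shape[-1]
    theta = base ** (-2.0 * np.arange(d // 2) / d)   # theta_i = 10000^(-2i/d)
    cos, sin = np.cos(pos * theta), np.sin(pos * theta)
    x1, x2 = x[0::2], x[1::2]                        # each (x1, x2) pair lives in one 2-D plane
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin                  # standard 2-D rotation
    out[1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=64), rng.normal(size=64)
# the inner product depends only on the relative offset m - n (here 3 in both cases)
a = rope_rotate(q, 5) @ rope_rotate(k, 2)
b = rope_rotate(q, 105) @ rope_rotate(k, 102)
print(np.isclose(a, b))  # True
```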

Advantages of RoPE

  - It encodes absolute positions while making attention scores depend only on relative offsets, as shown above.
  - It introduces no additional trainable parameters and is applied directly to queries and keys rather than to the input embeddings.
  - The contribution of a token pair naturally decays as their relative distance grows.
  - It composes well with the context-extension techniques described in the next section.

Context Length Extension Methods

A critical challenge with positional embeddings is extending models to handle longer contexts than they were trained on. Recent research has introduced several interpolation and extrapolation techniques to address this.

Position Interpolation (PI)

Position Interpolation, introduced by Chen et al. (2023), extends context length by linearly downscaling position indices. Instead of extrapolating to positions beyond the training range \([0, L]\), PI interpolates within the trained range:

\[f'(x, m) = f\left(x, m \cdot \frac{L}{L'}\right)\]

Where \(L\) is the original context length, \(L'\) is the extended length, and \(m\) is the position index.

This approach leverages the model's strong interpolation capabilities while avoiding the distribution shift that occurs during extrapolation. Fine-tuning with PI requires significantly less compute (as little as 1000 steps) compared to training from scratch.
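
A minimal sketch of the rescaling, assuming a hypothetical original context length of 2048 positions; the rescaled (possibly fractional) indices would then be fed to the model's positional encoding.

```python
import numpy as np

def interpolated_positions(seq_len: int, trained_len: int = 2048) -> np.ndarray:
    """Linearly rescale position indices into the trained range, per Position Interpolation."""
    positions = np.arange(seq_len, dtype=np.float64)
    if seq_len <= trained_len:
        return positions                            # within the trained range: no change
    return positions * (trained_len / seq_len)      # m * L / L'

# doubling the context maps position m to m / 2, keeping every index inside [0, L)
print(interpolated_positions(4096)[-1])  # 2047.5
```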

YaRN: Yet another RoPE extensioN method

YaRN (Peng et al., 2023) represents a state-of-the-art approach for extending context windows in RoPE-based models. It introduces several key innovations:

1. NTK-Aware Interpolation:

YaRN builds on Neural Tangent Kernel (NTK)-aware scaling, which adjusts the RoPE frequency base so that different dimensions are interpolated by different amounts. With a context scale factor \(s = L'/L\), the base \(b = 10000\) is enlarged to \(b' = b \cdot s^{d/(d-2)}\), giving:

\[\theta'_i = (b')^{-2i/d} = \theta_i \cdot s^{-\frac{2i}{d-2}}\]

Where \(L\) is the original context length and \(L'\) is the extended length.

This preserves high-frequency components (important for local attention) while scaling low-frequency components (critical for long-range dependencies).
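
A sketch of the adjusted frequencies under this base change; the head dimension and scale factor below are illustrative.

```python
import numpy as np

def ntk_aware_frequencies(d: int, scale: float, base: float = 10000.0) -> np.ndarray:
    """RoPE frequencies after the NTK-aware base adjustment for a context scale s = L'/L."""
    new_base = base * scale ** (d / (d - 2))           # b' = b * s^(d/(d-2))
    return new_base ** (-2.0 * np.arange(d // 2) / d)  # theta'_i = (b')^(-2i/d)

theta = ntk_aware_frequencies(d=128, scale=4.0)
# the highest frequency (i = 0) is unchanged; the lowest ends up divided by exactly s
print(theta[0], theta[-1])
```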

2. Attention Temperature Scaling:

YaRN also introduces a temperature factor \(\sqrt{t}\) into the denominator of the attention logits to counteract the shift in the attention-score distribution that interpolation causes over longer contexts:

\[\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k \cdot t}}\right)V\]
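
A NumPy sketch of the formula above; with \(t = 1\) it reduces to standard scaled dot-product attention, and the helper names are illustrative.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_temperature(Q, K, V, t: float = 1.0) -> np.ndarray:
    """Scaled dot-product attention with the extra temperature factor t from the formula above."""
    d_k = Q.shape[-1]
    logits = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k * t)
    return softmax(logits, axis=-1) @ V
```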

3. Dynamic Scaling:

YaRN applies different scaling factors to different frequency bands, using a ramp function to smoothly transition between interpolation (low frequencies) and extrapolation (high frequencies).
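
The exact YaRN ramp is defined in terms of rotation wavelengths relative to the original context window; the sketch below only illustrates the blending idea, using an arbitrary linear ramp over the normalized dimension index.

```python
import numpy as np

def blended_frequencies(theta: np.ndarray, scale: float,
                        low: float = 0.25, high: float = 0.75) -> np.ndarray:
    """Illustrative blend: keep high frequencies, interpolate low ones (not the exact YaRN ramp)."""
    r = np.linspace(0.0, 1.0, len(theta))                # normalized dimension index
    gamma = np.clip((r - low) / (high - low), 0.0, 1.0)  # 0 = extrapolate (keep), 1 = interpolate
    return (1.0 - gamma) * theta + gamma * (theta / scale)
```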

Performance and Results

Empirical results reported in the respective papers demonstrate the effectiveness of these methods: Position Interpolation extends LLaMA models to 32k-token contexts with only a brief fine-tuning run, and YaRN extends LLaMA 2 models to context windows of 64k and 128k tokens.

Other Modern Extensions

ALiBi (Attention with Linear Biases)

ALiBi (Press et al., 2022) eliminates positional embeddings entirely by adding a bias directly to attention scores:

\[\text{score}(q_i, k_j) = q_i^T k_j - \lambda |i - j|\]

Where \(\lambda\) is a head-specific slope. This simple approach enables strong extrapolation capabilities without position embeddings.
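
A sketch of the resulting bias matrix, using the paper's geometric slope schedule for a power-of-two number of heads; the function name is illustrative.

```python
import numpy as np

def alibi_bias(seq_len: int, num_heads: int) -> np.ndarray:
    """ALiBi distance penalties: one slope per head, added directly to attention scores."""
    # slopes 2^(-8h/n) for h = 1..n (the paper's schedule when n is a power of two)
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    pos = np.arange(seq_len)
    distance = np.abs(pos[None, :] - pos[:, None])       # |i - j|
    return -slopes[:, None, None] * distance             # (num_heads, seq_len, seq_len)
```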

xPos (Extrapolatable Position Embedding)

xPos builds on rotary-style position encoding and applies an exponential decay to the position-dependent component, helping models generalize to sequences longer than those seen during training.

Implementation Details

In practice, positional embeddings are added element-wise to the input embeddings before they are fed into the transformer layers:

\[\text{Input to Transformer} = \text{Token Embedding} + \text{Positional Embedding}\]

This simple addition allows the model to use both the semantic information from the token embeddings and the positional information from the positional embeddings simultaneously. The self-attention mechanism can then learn to attend to tokens based on both their content and their position in the sequence.
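
Putting this together, here is a minimal PyTorch input layer that adds a fixed sinusoidal table to the token embeddings; the class name, maximum length, and buffer handling are illustrative choices.

```python
import torch
import torch.nn as nn

class TransformerInput(nn.Module):
    """Token embeddings plus fixed sinusoidal positional encodings (a minimal sketch)."""
    def __init__(self, vocab_size: int, d_model: int, max_len: int = 4096):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        pos = torch.arange(max_len, dtype=torch.float32)[:, None]
        i = torch.arange(d_model // 2, dtype=torch.float32)[None, :]
        angle = pos / 10000 ** (2 * i / d_model)
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(angle)
        pe[:, 1::2] = torch.cos(angle)
        self.register_buffer("pe", pe)                   # fixed table, not trained

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> (batch, seq_len, d_model)
        return self.token_emb(token_ids) + self.pe[: token_ids.size(1)]
```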

Impact on Model Performance

Positional embeddings are critical for transformer performance across a wide range of NLP tasks.

Studies have shown that removing positional embeddings dramatically degrades transformer performance, confirming their importance in capturing sequential information.

Conclusion

Positional embeddings have evolved significantly since the original transformer paper. From the elegant simplicity of sinusoidal encodings to the sophisticated rotation-based mechanisms of RoPE, and now to advanced context extension methods like YaRN and Position Interpolation, the field continues to innovate.

Key takeaways from this evolution:

  - Sinusoidal encodings offer a simple, parameter-free way to inject absolute position information.
  - Learned embeddings trade that simplicity for flexibility, at the cost of a hard limit on sequence length.
  - Relative and rotary schemes (Transformer-XL, T5, RoPE, ALiBi) make attention depend on relative offsets, which generalizes better across sequence lengths.
  - Context extension methods such as Position Interpolation and YaRN adapt pretrained RoPE models to much longer contexts with only modest fine-tuning.

Understanding these positional encoding mechanisms is crucial for anyone working with modern language models, whether fine-tuning existing models, extending context lengths, or designing new architectures. As models continue to scale and tackle longer contexts, innovations in positional encoding will remain a critical research frontier.

About the Author: Gordi (Ghodrat) Aalipour is a Principal Applied Scientist at Gusto, specializing in agentic AI systems, LLM orchestration, and knowledge graph integration. With expertise in both theoretical foundations and practical implementations of NLP systems, he regularly explores the inner workings of modern language models.