Attention Mechanism Math, Part 1: Intuition to Core Mechanics
1. Intuition
Intuition explains how transformer layers route information across sequence positions using differentiable, mask-aware retrieval.
1.1 Attention as soft retrieval
Purpose. Attention as soft retrieval focuses on why each token reads from other token states. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Attention lets each token form a query, compare it against key vectors, and read a weighted mixture of value vectors.
Worked reading.
A token representing the pronoun "it" can assign high weight to an earlier noun phrase, causing the next hidden state to mix in information from that earlier position.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- pronoun resolution.
- copying from context.
- retrieved document use.
Non-examples:
- fixed convolution window only.
- one hidden state with no content-based mixing.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
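As a concrete illustration of soft retrieval, the NumPy sketch below has one query read from four stored positions; all sizes and values are toy numbers chosen for the example, not anything taken from a real model.

```python
import numpy as np

# One query vector reading from 4 stored positions (toy numbers).
d_k = 8
rng = np.random.default_rng(0)
q = rng.normal(size=(d_k,))          # query: what this token is looking for
K = rng.normal(size=(4, d_k))        # keys: what each stored position offers
V = rng.normal(size=(4, d_k))        # values: the content that gets mixed

scores = K @ q / np.sqrt(d_k)        # compatibility of the query with each key
weights = np.exp(scores - scores.max())
weights /= weights.sum()             # soft retrieval: a distribution over positions

output = weights @ V                 # weighted mixture of value vectors
print(weights, output.shape)         # weights sum to 1; output has shape (d_k,)
```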
1.2 Queries, keys, and values as roles
Purpose. Queries, keys, and values as roles focuses on the query as a search vector, the key as an address vector, and the value as a payload vector. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Each token's hidden state is projected into three roles: a query that describes what the token is looking for, a key that advertises what the token can offer, and a value that carries the content actually handed over.
Worked reading.
Compatibility is measured between queries and keys, but what gets mixed into the output are the values, so matching and content are kept in separate vector spaces.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
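The sketch below, with made-up sizes, projects the same hidden states into the three roles; only the weight matrices W_Q, W_K, and W_V distinguish a query from a key from a value.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_model, d_k = 5, 16, 8           # toy sizes, chosen only for illustration

X = rng.normal(size=(n, d_model))    # hidden states entering the layer
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

Q = X @ W_Q   # search vectors: what each token asks for
K = X @ W_K   # address vectors: what each token advertises
V = X @ W_V   # payload vectors: what each token hands over when selected

print(Q.shape, K.shape, V.shape)     # all (n, d_k); the roles differ only by the weights
```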
1.3 Why scaling is needed
Purpose. Why scaling is needed focuses on variance control for dot products. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
The dot product of two d_k-dimensional vectors with roughly unit-variance components has variance that grows linearly with d_k, so raw scores are divided by √d_k to keep their scale roughly constant.
Worked reading.
Without this scaling, large head dimensions push softmax into a saturated regime where one weight is nearly one, the rest are nearly zero, and gradients through the attention weights become very small.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
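A quick empirical check of the variance argument, using random vectors with unit-variance components (a simplifying assumption for the experiment, not a claim about trained weights):

```python
import numpy as np

# Toy experiment: the standard deviation of q·k grows like sqrt(d_k),
# so dividing by sqrt(d_k) keeps the score scale roughly constant.
rng = np.random.default_rng(0)
for d_k in (16, 64, 256):
    q = rng.normal(size=(10_000, d_k))
    k = rng.normal(size=(10_000, d_k))
    raw = (q * k).sum(axis=1)
    scaled = raw / np.sqrt(d_k)
    print(d_k, raw.std().round(2), scaled.std().round(2))
# raw std ≈ sqrt(d_k); scaled std stays near 1 regardless of d_k
```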
1.4 Why masks are needed
Purpose. Why masks are needed focuses on causality padding and visibility constraints. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
A mask changes which key positions a query is allowed to see by adding large negative values to forbidden logits before softmax.
Worked reading.
In decoder-only language modeling, token i may attend to positions j ≤ i but not to future positions j > i.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- causal masks.
- padding masks.
- structured prompt masks.
Non-examples:
- zeroing output after softmax.
- trusting data order without a mask.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
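A minimal sketch of an additive causal mask applied before softmax; the sequence length and score values are arbitrary toy numbers.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

n = 4
rng = np.random.default_rng(0)
scores = rng.normal(size=(n, n))                      # raw query-key scores

causal = np.triu(np.ones((n, n), dtype=bool), k=1)    # True above the diagonal = future
masked_scores = np.where(causal, -1e9, scores)        # large negative values, added pre-softmax

A = softmax(masked_scores)
print(np.triu(A, k=1).max())   # ~0: no weight lands on future positions
print(A.sum(axis=1))           # each row still sums to 1 over visible keys
```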
1.5 Why attention replaced recurrence in LLMs
Purpose. Why attention replaced recurrence in LLMs focuses on parallel sequence mixing. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Self-attention computes all pairwise interactions in a sequence with a few matrix multiplications, so every position can be processed in parallel during training, whereas a recurrent network must update its hidden state one position at a time.
Worked reading.
The price is that attention cost grows quadratically with sequence length, which is why masking, caching, and efficient kernels matter so much for long-context LLMs.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
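The following sketch mimics decode-time attention with a growing key-value cache; the names K_cache and V_cache and all sizes are invented for illustration and do not refer to any particular serving framework.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d_k = 8

# Prefill: keys and values for all 5 prompt tokens are computed in one batched pass.
K_cache = rng.normal(size=(5, d_k))
V_cache = rng.normal(size=(5, d_k))

# Decode: one new query at a time reads the cache, then appends its own key/value.
for step in range(3):
    q_new = rng.normal(size=(d_k,))
    k_new = rng.normal(size=(d_k,))
    v_new = rng.normal(size=(d_k,))
    K_cache = np.vstack([K_cache, k_new])
    V_cache = np.vstack([V_cache, v_new])
    weights = softmax(K_cache @ q_new / np.sqrt(d_k))   # one row over the cache length
    out = weights @ V_cache       # no causal mask needed: the cache holds only current and earlier positions
    print(step, K_cache.shape, out.shape)
```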
2. Formal Definitions
Formal Definitions explains how transformer layers route information across sequence positions using differentiable, mask-aware retrieval.
2.1 Input hidden-state matrix
Purpose. Input hidden-state matrix focuses on the sequence matrix. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
The input hidden-state matrix X stacks the n token representations of the sequence as rows, giving a matrix of shape n × d_model.
Worked reading.
Every other quantity in the layer is derived from X by linear maps, so writing down the shape of X first makes the remaining shapes easy to check.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
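As a toy sketch of where X comes from at the first layer (deeper layers instead take the previous block's output), the snippet below builds X from a made-up embedding table; the vocabulary size, model width, and token ids are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 100, 16        # toy sizes
embedding = rng.normal(size=(vocab_size, d_model))

token_ids = np.array([12, 7, 55, 3]) # a 4-token toy sequence
X = embedding[token_ids]             # stack token states as rows

print(X.shape)                       # (n, d_model) = (4, 16): one row per position
```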
2.2 Linear Q K V projections
Purpose. Linear Q K V projections focuses on Q = X W_Q, K = X W_K, and V = X W_V. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Three learned weight matrices project the same hidden states into queries, keys, and values, so every token plays all three roles with different parameters.
Worked reading.
With X of shape n × d_model, the projections give Q and K of shape n × d_k and V of shape n × d_v; the roles differ only because the weights differ.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
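A shape-checking sketch of the three projections, with d_v deliberately different from d_k so the roles of the dimensions are visible; all sizes are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_model, d_k, d_v = 4, 16, 8, 12  # d_v deliberately different from d_k

X = rng.normal(size=(n, d_model))
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_v))

Q, K, V = X @ W_Q, X @ W_K, X @ W_V
assert Q.shape == (n, d_k) and K.shape == (n, d_k) and V.shape == (n, d_v)

S = Q @ K.T / np.sqrt(d_k)           # scores are n × n; d_k only enters through the scaling
print(S.shape)                       # (4, 4)
```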
2.3 Scaled dot-product attention
Purpose. Scaled dot-product attention focuses on Attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Scaled dot-product attention computes pairwise query-key scores, normalizes each row with softmax, and uses the resulting weights to average values.
Worked reading.
The 1/√d_k factor keeps random dot products from growing too large as the head dimension increases.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- transformer self-attention.
- cross-attention.
- decoder attention.
Non-examples:
- nearest neighbor with hard argmax only.
- unscaled scores with unstable softmax.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
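A self-contained sketch of the full formula, assuming single-head attention and an optional additive mask; it is meant to mirror the definition, not any production kernel.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """softmax(Q Kᵀ / sqrt(d_k)) V with an optional additive mask (0 = visible, large negative = hidden)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (n_q, n_k) compatibility scores
    if mask is not None:
        scores = scores + mask               # mask applied before softmax
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights              # mixed values and the attention map

rng = np.random.default_rng(0)
n, d_k, d_v = 4, 8, 8
Q = rng.normal(size=(n, d_k))
K = rng.normal(size=(n, d_k))
V = rng.normal(size=(n, d_v))
out, A = scaled_dot_product_attention(Q, K, V)
print(out.shape, A.sum(axis=1))              # (4, 8) and rows summing to 1
```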
2.4 Attention weights
Purpose. Attention weights focuses on row-stochastic probability-like matrices. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
The attention weight matrix A, obtained by applying softmax to the masked scores, is row-stochastic: every entry is nonnegative and each row sums to one over the visible keys.
Worked reading.
Row i of A describes how token i splits a single unit of reading budget across the positions it is allowed to see.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
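A small check that the weight matrix stays row-stochastic even when some keys are hidden; which keys count as padding here is an arbitrary choice for the example.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n = 5
scores = rng.normal(size=(n, n))
visible = np.ones((n, n), dtype=bool)
visible[:, 3:] = False                     # pretend the last two keys are padding

A = softmax(np.where(visible, scores, -1e9))
assert np.all(A >= 0)
assert np.allclose(A.sum(axis=1), 1.0)     # row-stochastic
assert np.allclose(A[:, 3:], 0, atol=1e-6) # no weight leaks onto hidden keys
print(A.round(3))
```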
2.5 Causal and padding masks
Purpose. Causal and padding masks focuses on additive masks before softmax. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
A mask changes which key positions a query is allowed to see by adding large negative values to forbidden logits before softmax.
Worked reading.
In decoder-only language modeling, token i may attend to positions j ≤ i but not to future positions j > i.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- causal masks.
- padding masks.
- structured prompt masks.
Non-examples:
- zeroing output after softmax.
- trusting data order without a mask.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
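A sketch of building one additive mask that combines causal and padding constraints; the sequence length and the number of real tokens (here called lengths) are made-up example values.

```python
import numpy as np

n = 6
lengths = 4                                          # real tokens; the rest is padding

causal = np.triu(np.full((n, n), -1e9), k=1)         # block future positions
key_is_pad = np.arange(n) >= lengths                 # True for padded key positions
padding = np.where(key_is_pad, -1e9, 0.0)[None, :]   # broadcast over query rows

mask = causal + padding                              # additive mask, applied before softmax
print(mask)
```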
3. Core Mechanics
Core Mechanics explains how transformer layers route information across sequence positions using differentiable, mask-aware retrieval.
3.1 Softmax normalization
Purpose. Softmax normalization focuses on turning scores into weights. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Softmax maps each row of scores to weights A_ij = exp(S_ij) / Σ_k exp(S_ik), which are strictly positive and sum to one.
Worked reading.
Because softmax is applied row by row, each query gets its own normalized distribution over keys, independent of every other query.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
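A direct transcription of the row-wise softmax definition, deliberately omitting the max-subtraction trick discussed in 3.5; the score matrix is random toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 5))                             # toy score matrix

A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)    # A_ij = exp(S_ij) / Σ_k exp(S_ik)

print(A.min() > 0)            # all weights strictly positive
print(A.sum(axis=1))          # every row sums to 1, independently of the others
```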
3.2 Weighted value aggregation
Purpose. Weighted value aggregation focuses on convex combinations of value vectors. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
The output for token i is O_i = Σ_j A_ij V_j, a convex combination of the value vectors at the positions the token can see.
Worked reading.
Because the weights are nonnegative and sum to one, each output lies in the convex hull of the visible values, which bounds how far a single attention layer can move a representation.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
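A sketch of value aggregation that also checks the convex-combination property; the Dirichlet draw is just a convenient way to fabricate a valid attention matrix for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_v = 4, 3
A = rng.dirichlet(np.ones(n), size=n)   # a valid attention matrix: rows are convex weights
V = rng.normal(size=(n, d_v))

O = A @ V                                # O_i = Σ_j A_ij V_j

# Each output coordinate lies between the min and max of the corresponding value
# coordinates, because every output row is a convex combination of the value rows.
print((O.min(axis=0) >= V.min(axis=0) - 1e-12).all(),
      (O.max(axis=0) <= V.max(axis=0) + 1e-12).all())
```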
3.3 Attention entropy
Purpose. Attention entropy focuses on sharp versus diffuse attention. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Attention diagnostics inspect weights, entropy, masks, and head importance, but they do not by themselves prove causal explanations.
Worked reading.
The entropy of row i is H_i = -Σ_j A_ij log A_ij; a low-entropy row means one or a few keys dominate, while a high-entropy row means information is mixed broadly.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- attention heatmaps.
- head ablations.
- entropy dashboards.
Non-examples:
- claiming attention weight equals explanation.
- inspecting only one prompt.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
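A small entropy probe on two extreme attention matrices, one perfectly sharp and one perfectly uniform; the helper name row_entropy is invented for this example.

```python
import numpy as np

def row_entropy(A, eps=1e-12):
    """Shannon entropy of each attention row, in nats."""
    return -(A * np.log(A + eps)).sum(axis=-1)

n = 5
sharp = np.eye(n)                        # each query locks onto a single key
diffuse = np.full((n, n), 1.0 / n)       # each query averages uniformly

print(row_entropy(sharp))                # ~0: one key dominates
print(row_entropy(diffuse))              # ~log(n): information is mixed broadly
```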
3.4 Temperature and score scale
Purpose. Temperature and score scale focuses on how scaling changes concentration. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
Dividing scores by a temperature before softmax controls concentration: smaller divisors sharpen the weights toward the best-matching keys, larger divisors flatten them toward a uniform average.
Worked reading.
The √d_k divisor in scaled dot-product attention acts as a fixed temperature, chosen so that score magnitudes stay in a well-conditioned range as the head dimension grows.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
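A toy demonstration of how dividing scores by a temperature changes concentration; the score values and temperatures are arbitrary.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5, -1.0])       # toy scores for one query row

for temperature in (0.25, 1.0, 4.0):
    weights = softmax(scores / temperature)
    print(temperature, weights.round(3))
# Small temperature concentrates weight on the top key;
# large temperature pushes the row toward a uniform average.
```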
3.5 Numerical stability
Purpose. Numerical stability focuses on subtracting row maxima before exponentiation. This is a core part of how transformer layers turn a sequence of embeddings into context-aware hidden states.
Operational definition.
A numerically stable softmax subtracts each row's maximum score before exponentiation, which leaves the weights unchanged because softmax is invariant to adding a constant to a whole row.
Worked reading.
Without this shift, large positive scores overflow in low-precision arithmetic, and a fully masked row can turn into zero divided by zero.
| Object | Shape | Meaning |
|---|---|---|
| X | n × d_model | hidden states entering the layer |
| Q, K | n × d_k | query and key address vectors |
| V | n × d_v | value payload vectors |
| S = Q Kᵀ / √d_k | n × n | compatibility scores |
| A = softmax(S) | n × n | attention weights |
| O = A V | n × d_v | mixed output values |
Examples:
- self-attention.
- decoder attention.
- attention over retrieved context.
Non-examples:
- independent token processing.
- fixed averaging with no learned scores.
Derivation habit.
- Write the shapes of X, Q, K, V, S, A, and O.
- Add masks before softmax, not after.
- Check every attention row sums to one over visible keys.
- Separate mathematical attention from kernel implementation details.
- For LLM serving, distinguish prefill attention from decode attention with a KV cache.
Implementation lens.
A correct attention implementation is mostly a shape and masking discipline. The bug that hurts language modeling most is often not the matrix multiplication; it is allowing a token to see future positions or padding tokens.
For efficient inference, the formula stays the same but the workload changes. During prefill, the model processes a full prompt. During decode, it adds one query at a time while reading cached keys and values from previous tokens.
For interpretation, attention weights are useful traces of information flow, but they are not the whole model explanation. Residual connections, MLPs, layer norms, and later layers can change or override what a single attention map appears to show.
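A sketch contrasting the naive and shifted softmax on deliberately huge scores; the naive version is expected to emit overflow warnings and produce NaNs, which is the point of the example.

```python
import numpy as np

scores = np.array([1000.0, 999.0, 998.0])       # large scores, e.g. unscaled logits

naive = np.exp(scores) / np.exp(scores).sum()   # overflows: exp(1000) is inf in float64

shifted = scores - scores.max()                  # softmax is invariant to a constant shift
stable = np.exp(shifted) / np.exp(shifted).sum()

print(naive)    # array of nan (inf / inf)
print(stable)   # a valid distribution, roughly [0.665, 0.245, 0.090]
```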