Embedding Space Math: Part 3 - Attention Bridge to References

7. Attention Bridge

Attention Bridge connects token ids to continuous vectors and prepares the exact geometry used by attention, language-model logits, and dense retrieval.

7.1 Query, key, and value projections

Purpose. The query, key, and value projections focus on the geometry of embedding space after learned linear maps. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\operatorname{RoPE}(\mathbf{q},p)=R_p\mathbf{q},\qquad (R_p\mathbf{q})^{\top}(R_s\mathbf{k})=\mathbf{q}^{\top}R_{s-p}\mathbf{k}.$$

Operational definition.

Embeddings become contextual through projections, attention, MLPs, and residual additions. Attention compares projected vectors, not raw token ids.

Worked reading.

The query and key projections turn hidden states into vectors whose dot products define attention weights.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. QKV projections.
  2. residual stream states.
  3. dense retrieval vectors.

Non-examples:

  1. nearest neighbors over integer ids.
  2. one fixed meaning for a token in every context.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
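
To ground 7.1, here is a minimal PyTorch sketch (random weights and a hypothetical head dimension of 4, not any particular model) that projects one hidden state into a query and a key and then checks the RoPE identity above numerically: rotating q at position p and k at position s leaves a dot product that depends only on the offset s - p, and the rotation preserves norms.

import torch

torch.manual_seed(0)
d = 4                                  # head dimension (hypothetical; must be even)
W_q = torch.randn(d, d) / d ** 0.5     # random stand-ins for learned projections
W_k = torch.randn(d, d) / d ** 0.5
h = torch.randn(d)                     # one hidden-state vector
q, k = W_q @ h, W_k @ h                # projected query and key

def rope(x, pos, base=10000.0):
    # Rotate consecutive pairs (x0, x1), (x2, x3), ... by pos * theta_i.
    half = x.shape[-1] // 2
    theta = base ** (-2.0 * torch.arange(half) / x.shape[-1])
    ang = pos * theta
    out = torch.empty_like(x)
    out[0::2] = x[0::2] * torch.cos(ang) - x[1::2] * torch.sin(ang)
    out[1::2] = x[0::2] * torch.sin(ang) + x[1::2] * torch.cos(ang)
    return out

a = torch.dot(rope(q, 3.0), rope(k, 7.0))      # positions 3 and 7: offset 4
b = torch.dot(rope(q, 10.0), rope(k, 14.0))    # positions 10 and 14: offset 4 again
print(torch.allclose(a, b, atol=1e-5))         # True: the score depends only on the offset
print(torch.allclose(q.norm(), rope(q, 5.0).norm(), atol=1e-5))  # True: rotation preserves norm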

7.2 Attention as soft nearest-neighbor lookup

Purpose. Attention as soft nearest-neighbor lookup focuses on dot products over projected embeddings. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\alpha_{ts}=\operatorname{softmax}_s\!\left(\frac{\mathbf{q}_t^{\top}\mathbf{k}_s}{\sqrt{d_k}}\right),\qquad \mathbf{o}_t=\sum_{s}\alpha_{ts}\,\mathbf{v}_s.$$

Operational definition.

Embeddings become contextual through projections, attention, MLPs, and residual additions. Attention compares projected vectors, not raw token ids.

Worked reading.

The query and key projections turn hidden states into vectors whose dot products define attention weights.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. QKV projections.
  2. residual stream states.
  3. dense retrieval vectors.

Non-examples:

  1. nearest neighbors over integer ids.
  2. one fixed meaning for a token in every context.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
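
The "soft nearest-neighbor" reading of 7.2 can be verified in a few lines. This sketch (PyTorch, hypothetical shapes) scores one query against T keys with scaled dot products; softmax turns the scores into a convex combination of values, and sharpening the scores collapses the lookup toward a hard argmax.

import torch

torch.manual_seed(0)
T, d = 5, 8                              # sequence length and head dim (hypothetical)
q = torch.randn(d)                       # one projected query
K = torch.randn(T, d)                    # projected keys, shape (T, d)
V = torch.randn(T, d)                    # projected values, shape (T, d)

scores = K @ q / d ** 0.5                # scaled dot products, shape (T,)
weights = torch.softmax(scores, dim=0)   # soft nearest-neighbor weights, sum to 1
out = weights @ V                        # convex combination of value vectors

# Sharpening the scores collapses the soft lookup toward a hard nearest neighbor.
hard = torch.softmax(scores * 100.0, dim=0)
print([round(w, 2) for w in weights.tolist()])
print([round(w, 2) for w in hard.tolist()])          # nearly one-hot
print(int(hard.argmax()) == int(scores.argmax()))    # True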

7.3 Layer-wise contextualization

Purpose. Layer-wise contextualization focuses on how representations change through residual blocks. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\mathbf{h}^{(\ell+1)}=\mathbf{h}^{(\ell)}+\operatorname{Attn}\big(\mathbf{h}^{(\ell)}\big)+\operatorname{MLP}\big(\mathbf{h}^{(\ell)}\big),\qquad \mathbf{h}^{(0)}_t=\mathbf{x}_t.$$

Operational definition.

This concept explains how discrete language symbols become continuous vectors with trainable geometry.

Worked reading.

The operational question is what shape the vector has, how it is compared, and how training changes it.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. embedding rows.
  2. hidden states.
  3. similarity search.

Non-examples:

  1. raw text in linear algebra.
  2. ids treated as distances.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
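
A toy residual stream makes 7.3 concrete. In this sketch, simple Linear+Tanh blocks stand in for real attention and MLP sublayers (an assumption made for brevity): the static embedding rows never change, while each position's hidden state drifts further from its row at every layer.

import torch

torch.manual_seed(0)
V, d, T, L = 100, 16, 6, 4            # vocab, width, sequence length, layers (hypothetical)
E = torch.randn(V, d) * 0.02          # static embedding table
ids = torch.randint(0, V, (T,))       # one token sequence
h = E[ids]                            # first residual-stream state, shape (T, d)

# Stand-in blocks; a real transformer uses attention + MLP with normalization.
blocks = [torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.Tanh()) for _ in range(L)]
for layer, blk in enumerate(blocks, start=1):
    h = h + blk(h)                    # residual update: h_{l+1} = h_l + f(h_l)
    drift = (h - E[ids]).norm(dim=-1).mean()
    print(f"layer {layer}: mean distance from static row = {drift:.3f}")
# E itself never changed; only the per-position hidden states did.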

7.4 Dimensionality reduction diagnostics

Purpose. Dimensionality reduction diagnostics focus on PCA views of local geometry. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\tilde{X}=\big(X-\mathbf{1}\bar{\mathbf{x}}^{\top}\big)V_k\in\mathbb{R}^{N\times k},$$ where the columns of $V_k$ are the top-$k$ principal directions of the centered cloud.

Operational definition.

This concept explains how discrete language symbols become continuous vectors with trainable geometry.

Worked reading.

The operational question is what shape the vector has, how it is compared, and how training changes it.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. embedding rows.
  2. hidden states.
  3. similarity search.

Non-examples:

  1. raw text in linear algebra.
  2. ids treated as distances.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
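
A PCA diagnostic for 7.4 is a few lines of linear algebra. This sketch builds a synthetic embedding cloud with one injected dominant axis (purely illustrative data), centers it, and uses an SVD to read off how much variance the 2D scatter-plot view actually explains before that view is trusted.

import torch

torch.manual_seed(0)
N, d = 500, 64                                    # points and dimension (hypothetical)
axis = torch.randn(d)                             # the direction we make dominant
X = torch.randn(N, d) + torch.randn(N, 1) * 5.0 * axis

Xc = X - X.mean(dim=0, keepdim=True)              # always center before PCA
U, S, Vh = torch.linalg.svd(Xc, full_matrices=False)
var_ratio = S ** 2 / (S ** 2).sum()               # variance explained per component
print(var_ratio[:3])                              # the injected axis should dominate

coords_2d = Xc @ Vh[:2].T                         # the 2D view a scatter plot shows
print(coords_2d.shape,
      f"2D view explains {var_ratio[:2].sum().item():.0%} of the variance")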

7.5 Embedding geometry in RAG

Purpose. Embedding geometry in RAG focuses on dense retrieval and semantic neighborhoods. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\operatorname{cos}(\mathbf{u},\mathbf{v})=\frac{\langle\mathbf{u},\mathbf{v}\rangle}{\lVert\mathbf{u}\rVert_2\,\lVert\mathbf{v}\rVert_2}.$$

Operational definition.

Embeddings become contextual through projections, attention, MLPs, and residual additions. Attention compares projected vectors, not raw token ids.

Worked reading.

The query and key projections turn hidden states into vectors whose dot products define attention weights.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. QKV projections.
  2. residual stream states.
  3. dense retrieval vectors.

Non-examples:

  1. nearest neighbors over integer ids.
  2. one fixed meaning for a token in every context.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
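
The normalization advice above is easy to see in numbers: cosine ranking and raw dot-product ranking disagree whenever document norms vary. A small sketch with random vectors standing in for a real retriever's outputs (hypothetical corpus size and dimension):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, d = 1000, 32                                      # corpus size, dim (hypothetical)
docs = torch.randn(N, d) * (torch.rand(N, 1) * 3.0)  # documents with varied norms
query = torch.randn(d)

# Cosine search: normalize first, then similarity is a plain matmul.
cos_top = (F.normalize(docs, dim=-1) @ F.normalize(query, dim=0)).topk(5).indices

# Raw dot product mixes document norms into the ranking.
dot_top = (docs @ query).topk(5).indices
print("cosine top-5:", cos_top.tolist())
print("dot    top-5:", dot_top.tolist())             # often a different ordering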

8. Scale and Diagnostics

Scale and Diagnostics treats the embedding table as a systems object: it counts parameters, estimates memory, monitors geometry during training, and tests compatibility when the tokenizer or vocabulary changes.

8.1 Parameter counting

Purpose. Parameter counting focuses on vocabulary size times model width. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\operatorname{params}_{\mathrm{embed}}=|\mathcal{V}|\,d_{\mathrm{model}}.$$

Operational definition.

Embedding tables are systems objects too: they consume memory, depend on tokenizer ids, and must handle special rows carefully.

Worked reading.

Changing a tokenizer changes which row each token id selects, so old weights no longer mean the same thing without migration.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. vocabulary resizing.
  2. special token initialization.
  3. embedding quantization.

Non-examples:

  1. renaming token ids without changing weights.
  2. ignoring padding row behavior.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
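
The count in 8.1 is one multiplication, plus a factor of two when input and output embeddings are untied. A quick sketch; the vocabulary/width pairs are illustrative (loosely GPT-2-like and LLaMA-like), not exact model configs:

def embedding_params(vocab_size: int, d_model: int, tied: bool) -> int:
    # Input table is |V| x d; an untied LM head adds a second |V| x d matrix.
    table = vocab_size * d_model
    return table if tied else 2 * table

for vocab, d in [(50_257, 768), (32_000, 4096)]:     # illustrative sizes
    print(f"|V|={vocab:>6}, d={d:>5}: "
          f"tied={embedding_params(vocab, d, True):>13,}  "
          f"untied={embedding_params(vocab, d, False):>13,}")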

8.2 Memory and quantization

Purpose. Memory and quantization focus on the storage cost of embedding rows. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$E\in\mathbb{R}^{|\mathcal{V}|\times d_{\mathrm{model}}},\qquad \mathbf{x}_t=E_{i_t,:}.$$

Operational definition.

Embedding tables are systems objects too: they consume memory, depend on tokenizer ids, and must handle special rows carefully.

Worked reading.

Changing a tokenizer changes which row each token id selects, so old weights no longer mean the same thing without migration.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. vocabulary resizing.
  2. special token initialization.
  3. embedding quantization.

Non-examples:

  1. renaming token ids without changing weights.
  2. ignoring padding row behavior.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
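
Storage cost follows directly from the table shape: bytes = |V| x d x bytes-per-weight. A minimal sketch for one hypothetical large table, ignoring the small overhead quantization adds for scales and zero points:

def table_gib(vocab_size: int, d_model: int, bytes_per_weight: float) -> float:
    # Raw weight storage only; quantization metadata is not counted here.
    return vocab_size * d_model * bytes_per_weight / 2 ** 30

vocab, d = 128_000, 8192                             # hypothetical sizes
for name, nbytes in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name:<9} {table_gib(vocab, d, nbytes):6.2f} GiB")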

8.3 Norm and similarity dashboards

Purpose. Norm and similarity dashboards focus on monitoring geometry during training. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\bar{c}=\frac{1}{N(N-1)}\sum_{i\neq j}\operatorname{cos}(\mathbf{x}_i,\mathbf{x}_j).$$

Operational definition.

Embedding spaces have geometry: norms, directions, subspaces, clusters, and dominant components. These structures can encode useful features and dataset artifacts.

Worked reading.

Centering an embedding cloud removes the mean direction; whitening rescales dominant axes so cosine neighborhoods are less dominated by global components.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. feature probes.
  2. bias directions.
  3. PCA diagnostics.

Non-examples:

  1. assuming every axis has semantic meaning.
  2. judging geometry from one 2D plot only.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
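
The dashboard quantities in 8.3 are cheap to compute. This sketch fabricates a table with a shared mean offset (illustrative data) and reports norm statistics plus the mean pairwise cosine defined above, before and after removing the mean direction:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
E = torch.randn(2000, 64) + 0.5              # shared offset makes the cloud anisotropic

norms = E.norm(dim=-1)
print(f"norms: mean={norms.mean():.2f}, std={norms.std():.2f}, max={norms.max():.2f}")

def mean_pairwise_cosine(X):
    Xn = F.normalize(X, dim=-1)
    G = Xn @ Xn.T                            # all pairwise cosines, incl. diagonal
    n = X.shape[0]
    return (G.sum() - n) / (n * (n - 1))     # drop the n diagonal ones

print(f"mean cosine, raw:      {mean_pairwise_cosine(E).item():.3f}")
print(f"mean cosine, centered: {mean_pairwise_cosine(E - E.mean(dim=0)).item():.3f}")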

8.4 Outlier tokens and special tokens

Purpose. Outlier tokens and special tokens focus on why control rows need inspection. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\operatorname{cos}(\mathbf{u},\mathbf{v})=\frac{\langle\mathbf{u},\mathbf{v}\rangle}{\lVert\mathbf{u}\rVert_2\,\lVert\mathbf{v}\rVert_2}.$$

Operational definition.

Embedding tables are systems objects too: they consume memory, depend on tokenizer ids, and must handle special rows carefully.

Worked reading.

Changing a tokenizer changes which row each token id selects, so old weights no longer mean the same thing without migration.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. vocabulary resizing.
  2. special token initialization.
  3. embedding quantization.

Non-examples:

  1. renaming token ids without changing weights.
  2. ignoring padding row behavior.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
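
One concrete control-row check for 8.4 is a robust outlier scan over row norms. This sketch plants a zero padding row and an inflated row in a synthetic table (both hypothetical conventions) and flags them with a median/MAD z-score; a plain mean/std z-score can miss the padding row because the inflated row itself blows up the std:

import torch

torch.manual_seed(0)
V, d = 1000, 64
E = torch.randn(V, d)
E[0] = 0.0                                   # hypothetical zero-initialized padding row
E[1] *= 20.0                                 # hypothetical runaway control-token row

norms = E.norm(dim=-1)
med = norms.median()
mad = (norms - med).abs().median()           # median absolute deviation
robust_z = (norms - med) / (1.4826 * mad)    # 1.4826 rescales MAD to a std estimate
suspects = torch.nonzero(robust_z.abs() > 6.0).flatten()
print("rows to inspect:", suspects.tolist()) # flags ids 0 and 1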

8.5 Migration and compatibility tests

Purpose. Migration and compatibility tests focus on why the tokenizer and the embedding table are coupled. This matters because every later transformer operation starts from these vectors or from hidden states derived from them.

$$\ell=-\log\frac{\exp(\mathbf{h}^{\top}\mathbf{w}_y)}{\sum_{j\in\mathcal{V}}\exp(\mathbf{h}^{\top}\mathbf{w}_j)}.$$

Operational definition.

Embedding tables are systems objects too: they consume memory, depend on tokenizer ids, and must handle special rows carefully.

Worked reading.

Changing a tokenizer changes which row each token id selects, so old weights no longer mean the same thing without migration.

Object          | Shape or formula                         | Role
token ids       | $B\times T$                              | discrete sequence from tokenizer
embedding table | $|\mathcal{V}|\times d_{\mathrm{model}}$ | one trainable row per token id
hidden states   | $B\times T\times d$                      | contextual vectors after lookup and layers
LM head         | $d\times|\mathcal{V}|$                   | maps hidden states to vocabulary logits
position signal | vector, rotation, or bias                | injects order into attention

Examples:

  1. vocabulary resizing.
  2. special token initialization.
  3. embedding quantization.

Non-examples:

  1. renaming token ids without changing weights.
  2. ignoring padding row behavior.

Derivation habit.

  1. Write the tensor shape before writing the operation.
  2. State whether vectors are raw input embeddings, hidden states, output rows, or retrieval embeddings.
  3. Choose dot product, cosine similarity, or Euclidean distance deliberately.
  4. Check whether position information is additive, rotary, learned, or an attention bias.
  5. Track whether input and output embeddings are tied.

Implementation lens.

In code, embedding lookup is simple indexing. Conceptually, it is the step where the model stops seeing symbolic ids and starts seeing trainable vectors. That is why tokenizer changes, vocabulary resizing, and special-token handling are not superficial.

When debugging model behavior, inspect embedding norms, nearest neighbors, and the mean direction. Large norms or dominant components can affect logits and similarity search. For retrieval models, normalize vectors before cosine search unless the training objective explicitly uses norm as signal.

For transformer internals, remember that input embeddings are only the first residual-stream state. After attention and MLP layers, a token position's hidden state reflects surrounding context. Static nearest neighbors and contextual hidden-state probes answer different questions.
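
A minimal migration sketch for 8.5, assuming the only safe join key is the token string, never the raw id: carry trained rows over for shared tokens, initialize genuinely new tokens near the cloud mean, and assert that shared tokens keep their vectors. The tiny vocabularies are hypothetical.

import torch

torch.manual_seed(0)
old_vocab = {"<pad>": 0, "hello": 1, "world": 2}                 # hypothetical
new_vocab = {"<pad>": 0, "world": 1, "hello": 2, "new_tok": 3}   # ids reshuffled, 1 new

d = 8
old_E = torch.randn(len(old_vocab), d)       # "trained" table for the old tokenizer
new_E = torch.empty(len(new_vocab), d)
mean_row = old_E.mean(dim=0)

for tok, new_id in new_vocab.items():
    if tok in old_vocab:
        new_E[new_id] = old_E[old_vocab[tok]]    # join on the token string, not the id
    else:
        new_E[new_id] = mean_row                 # start unseen tokens near the cloud mean

# Compatibility test: every shared token must keep its trained vector.
for tok in old_vocab:
    assert torch.equal(new_E[new_vocab[tok]], old_E[old_vocab[tok]])
print("migrated; fresh rows:", [t for t in new_vocab if t not in old_vocab])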

9. Common Mistakes

#  | Mistake | Why it is wrong | Fix
1  | Treating token ids as numeric magnitudes | Ids are arbitrary labels, not ordered measurements. | Use embedding lookup or one-hot selection.
2  | Using dot product and cosine interchangeably | Dot product includes norm effects. | State whether magnitude should matter.
3  | Assuming static token rows are final meaning | Transformer layers contextualize representations. | Distinguish input embeddings from hidden states.
4  | Ignoring position information | Self-attention alone is permutation-equivariant. | Add or rotate positional information before attention.
5  | Changing vocabulary without resizing embeddings | New ids need rows and output logits. | Resize, initialize, and train new rows explicitly.
6  | Interpreting PCA plots too literally | Two dimensions can hide high-dimensional structure. | Use PCA as a diagnostic, not proof.
7  | Forgetting anisotropy | Dominant directions can distort cosine neighbors. | Inspect mean vector, norm distribution, and centered similarities.
8  | Assuming analogies always work | Linear offsets are empirical, domain-dependent approximations. | Validate with held-out relations.
9  | Confusing retrieval embeddings with LM token embeddings | Retriever embeddings are usually pooled sequence vectors. | Name the embedding type and training objective.
10 | Ignoring tied embeddings | Input and output tables may share parameters. | Check whether the LM head is tied to token embeddings.

10. Exercises

  1. (*) Perform embedding lookup for a batch of token ids.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  2. (*) Show one-hot lookup equivalence.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  3. (*) Compute cosine similarity and nearest neighbors.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  4. (**) Verify a synthetic analogy direction.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  5. (**) Measure anisotropy before and after centering.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  6. (**) Compute a softmax output-row gradient.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  7. (**) Build sinusoidal position encodings.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  8. (***) Apply a RoPE rotation and check norm preservation.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  9. (***) Build an ALiBi bias matrix.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.
  10. (***) Count embedding parameters and explain tied embeddings.

    • (a) State the shape of every object.
    • (b) Compute the numeric result.
    • (c) Explain the LLM architecture consequence.

11. Why This Matters for AI

Concept | AI impact
Embedding lookup | Transforms token ids into vectors that the transformer can optimize over.
Similarity metrics | Support nearest neighbors, retrieval, clustering, probing, and semantic diagnostics.
Analogy directions | Reveal when relational information is approximately linear.
Anisotropy | Explains why raw embedding spaces can have poor neighborhood structure.
Training gradients | Show how token frequency and prediction errors move embedding rows.
Position encodings | Let attention use sequence order and relative distance.
QKV projections | Turn embeddings into attention queries, keys, and values.
Parameter counts | Tie vocabulary, tokenizer choice, model width, and serving memory together.

12. Conceptual Bridge

The backward bridge is tokenization. Token ids are arbitrary labels until an embedding table gives them trainable vectors. The tokenizer and embedding table therefore form one coupled interface.

The forward bridge is attention. Queries, keys, and values are learned projections of hidden states that begin as embeddings plus position information. Attention is not separate from embedding geometry; it is built on top of it.

+------------+      +------------------+      +----------------------+
| token ids  | ---> | embedding rows   | ---> | contextual hidden    |
| B x T      |      | B x T x d        |      | states and attention |
+------------+      +------------------+      +----------------------+

A strong mental model is to treat embeddings as the model's input coordinate system. If that coordinate system is distorted, anisotropic, incompatible with the tokenizer, or poorly initialized for new tokens, every downstream layer inherits the problem.
