"The matrix is not the territory. It is a coordinate representation of a linear map - and the map exists independently of any basis you choose to describe it."
Overview
Linear transformations are the language in which all of machine learning is written. Every forward pass through a neural network is a sequence of linear maps punctuated by nonlinearities. Every attention head projects queries, keys, and values through learned linear transformations. Every gradient computation traces the chain rule backward through compositions of Jacobians - which are themselves linear maps. Understanding linear transformations at the abstract level - beyond matrices, beyond coordinates - is what separates a practitioner who can apply formulas from one who can reason about why those formulas work.
This section develops the full theory of linear transformations between vector spaces. We begin with the geometric intuition: what does a linear map do to space? We then build the formal machinery - kernel, image, rank-nullity theorem, matrix representation in arbitrary bases, change of basis, isomorphisms, and composition - that makes this intuition precise. We extend to affine maps (which include bias terms), to the Jacobian as the canonical linear approximation of a nonlinear function, and to the dual space, where gradients live.
Throughout, the AI connections are not decorative but structural: LoRA is a rank-$r$ linear map composition; attention is a sequence of linear projections; backpropagation is reverse-mode accumulation of Jacobians. By the end of this section, you will see matrix multiplication not as arithmetic, but as function composition - the most powerful organizing principle in applied mathematics.
Prerequisites
- Vector spaces, subspaces, basis, and dimension - Chapter 2 06: Vector Spaces and Subspaces
- Matrix multiplication, inverse, transpose - Chapter 2 02: Matrix Operations
- Rank and null space - Chapter 2 05: Matrix Rank
- Eigenvalues and eigenvectors (used in 3.4, 5.1) - 01: Eigenvalues and Eigenvectors
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive exploration of linear maps: kernel/image computation, change-of-basis visualization, Jacobian experiments, LoRA rank analysis |
| exercises.ipynb | 8 graded exercises from rank-nullity proofs through softmax Jacobian and LoRA fine-tuning analysis |
Learning Objectives
After completing this section, you will be able to:
- State the two axioms of a linear transformation and derive all immediate consequences
- Compute the kernel and image of any linear map and verify the rank-nullity theorem
- Construct the matrix representation of a linear map in any pair of bases
- Derive the change-of-basis formula and use it to diagonalize operators
- Identify and characterize projection, rotation, reflection, shear, and low-rank maps geometrically
- Extend linear maps to affine maps via homogeneous coordinates
- Compute the Jacobian of a vector-valued function and interpret backpropagation as Jacobian composition
- Explain why LoRA, attention, and linear probing are instances of linear map theory
Table of Contents
- 1. Intuition
- 2. Formal Definitions
- 3. Matrix Representation of a Linear Map
- 4. Composition, Invertibility, and Isomorphisms
- 5. Special Classes of Linear Transformations
- 6. Dual Spaces and Transposes
- 7. Affine Transformations
- 8. The Jacobian as Linear Approximation
- 9. Applications in Machine Learning
- 10. Common Mistakes
- 11. Exercises
- 12. Why This Matters for AI (2026 Perspective)
- 13. Conceptual Bridge
1. Intuition
1.1 Three Views of a Matrix
A matrix admits three distinct but equivalent interpretations, and fluent practitioners shift between them effortlessly.
View 1: Data container. The matrix is an $m \times n$ rectangular array of numbers - $m$ rows, $n$ columns. This is the computational view. We use it when we care about entries: $a_{ij}$, the element in row $i$, column $j$.
View 2: Column geometry. The matrix is a collection of column vectors $a_1, \dots, a_n$ in $\mathbb{R}^m$. The product $Ax$ is then a linear combination: $Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$, where $a_j$ is the $j$-th column. This view makes the column space visible and illuminates when $Ax = b$ has solutions.
View 3: Linear transformation. The matrix $A \in \mathbb{R}^{m \times n}$ defines a function $T: \mathbb{R}^n \to \mathbb{R}^m$ by $T(x) = Ax$. This is the abstract, coordinate-free view. The map exists independently of any coordinate representation - the matrix is merely its description in a particular pair of bases (standard basis for both $\mathbb{R}^n$ and $\mathbb{R}^m$).
THREE VIEWS OF THE MATRIX A
========================================================================
      [2  1]    Data container:   Column geometry:    Linear map:
  A = [0  3]    A_21 = 0          A = [a_1 | a_2]     T: \mathbb{R}^2 -> \mathbb{R}^3
      [1 -1]    A_32 = -1         Span{a_1, a_2}      T(x) = Ax
                                  = column space      Sends basis vectors:
                                                      e_1 -> col 1 of A
                                                      e_2 -> col 2 of A
All three describe the same mathematical object.
========================================================================
For AI: In a transformer, the query weight matrix $W_Q$ is simultaneously all three: raw parameters to be optimized (View 1), a basis for the query subspace (View 2), and a linear projection from the residual stream to query space (View 3). Understanding which view you're using prevents confusion about what "the attention head is doing."
1.2 Geometric Action of Linear Maps
What does a linear map $T$ do to space? The two axioms - additivity and homogeneity - impose strong geometric constraints:
- The origin is fixed. $T(0) = 0$ always. A linear map cannot translate; it cannot move the origin. This is the most important geometric fact about linear transformations.
- Lines through the origin stay lines. If $L = \{tv : t \in \mathbb{R}\}$ is a line through the origin, then $T(L) = \{tT(v) : t \in \mathbb{R}\}$ is a line through the origin in the output space (or collapses to $\{0\}$).
- Parallel lines stay parallel. If $L_1$ and $L_2$ have the same direction vector $v$, then $T(L_1)$ and $T(L_2)$ have the same direction vector $T(v)$ (still parallel).
- Grid lines go to grid lines, equally spaced. This is the classic 3Blue1Brown visualization: apply $T$ to the integer grid of $\mathbb{R}^2$ and you get a (possibly skewed) grid in $\mathbb{R}^2$.
- The unit square maps to a parallelogram. The parallelogram spanned by the columns of $A$ is the image of the unit square under $T$.
What can a linear map do? Depending on the matrix:
- Rotate space (orthogonal matrix, $\det = +1$)
- Reflect space (orthogonal matrix, $\det = -1$)
- Scale dimensions (diagonal matrix)
- Shear space (triangular matrix)
- Project onto a subspace (idempotent: $P^2 = P$)
- Collapse space to a lower dimension (rank-deficient matrix)
What a linear map cannot do: translate (no origin-shifting), apply nonlinear distortion (no bending, folding, or curving of lines).
1.3 Why Linearity is Special
The superposition principle is the reason linear algebra is tractable. For a linear map $T$:
$$T(c_1 v_1 + c_2 v_2 + \cdots + c_k v_k) = c_1 T(v_1) + c_2 T(v_2) + \cdots + c_k T(v_k).$$
This means: to know $T$ on all of $V$, you only need to know $T$ on a basis. A basis has $\dim V$ elements. The map on an infinite space is encoded in finitely many column vectors. This is extraordinary compression: infinite -> finite.
Linearity vs nonlinearity in neural networks. A deep neural network with no nonlinearities is just a single linear transformation: $W_L \cdots W_2 W_1 x = (W_L \cdots W_1)x$. Multiple linear layers collapse to one. The nonlinearities (ReLU, GELU, SiLU) are what allow composition to create genuinely new representational power. The linear layers provide the parameterized directions; the nonlinear activations provide the expressive capacity.
Linearity in analysis. Many of the hardest problems in mathematics and ML become tractable when restricted to linear functions: linear regression has a closed-form solution, linear systems have complete theory, spectral analysis of linear operators is well-understood. The strategy of "linearize, solve, interpret" recurs throughout calculus, optimization, and signal processing.
For AI: The linear representation hypothesis (Elhage et al. 2022, Park et al. 2023) conjectures that many high-level features in LLMs are encoded as directions in representation space - i.e., they are linear features. If true, this means the crucial structure of LLM computation is linear, and all the machinery of this section applies directly to understanding what models are doing.
1.4 Historical Timeline
| Year | Person | Contribution |
|---|---|---|
| 1844 | Hermann Grassmann | Die lineale Ausdehnungslehre - first abstract treatment of linear spaces |
| 1855 | Arthur Cayley | Matrix algebra as formal system; composition of matrices |
| 1888 | Giuseppe Peano | First rigorous axiomatization of vector spaces |
| 1902 | Henri Lebesgue | Integration as a linear functional on function spaces |
| 1904-1910 | Hilbert, Riesz, Fischer | Infinite-dimensional linear algebra; Hilbert spaces; spectral theory |
| 1929 | John von Neumann | Operator theory; linear transformations on Hilbert spaces |
| 1940s | Alan Turing, von Neumann | Linear algebra for numerical computation; matrix algorithms |
| 1986 | Rumelhart, Hinton, Williams | Backpropagation - chains of Jacobians as the training algorithm |
| 2017 | Vaswani et al. | Attention = linear projections + scaled dot product; transformers |
| 2021 | Hu et al. (LoRA) | Low-rank linear map updates for efficient fine-tuning |
| 2022- | Elhage, Park et al. | Linear representation hypothesis; mechanistic interpretability |
2. Formal Definitions
2.1 The Two Axioms and All Their Consequences
Definition (Linear Transformation). Let $V$ and $W$ be vector spaces over the same field $\mathbb{F}$ (typically $\mathbb{R}$ or $\mathbb{C}$). A function $T: V \to W$ is a linear transformation (also called a linear map or homomorphism) if it satisfies:
- Additivity: $T(u + v) = T(u) + T(v)$ for all $u, v \in V$
- Homogeneity: $T(cv) = cT(v)$ for all $c \in \mathbb{F}$, $v \in V$
These two axioms are often combined into the single condition:
$$T(au + bv) = aT(u) + bT(v) \quad \text{for all } a, b \in \mathbb{F},\ u, v \in V.$$
Immediate consequences (each follows directly from the axioms):
Proposition 2.1.1 (Zero maps to zero). $T(0) = 0$.
Proof: $T(0) = T(0 \cdot v) = 0 \cdot T(v) = 0$ for any $v \in V$.
This is a universal test: if $T(0) \neq 0$, then $T$ is not linear. Translation ($T(x) = x + b$, $b \neq 0$) fails immediately.
Proposition 2.1.2 (Negatives are preserved). $T(-v) = -T(v)$.
Proof: $T(-v) = T((-1)v) = (-1)T(v) = -T(v)$.
Proposition 2.1.3 (General linear combinations). For any $v_1, \dots, v_k \in V$ and $c_1, \dots, c_k \in \mathbb{F}$:
$$T\Bigl(\sum_{i=1}^k c_i v_i\Bigr) = \sum_{i=1}^k c_i T(v_i).$$
Proof: By induction using additivity and homogeneity.
Proposition 2.1.4 (Determined by basis images). If $\{v_1, \dots, v_n\}$ is a basis for $V$, then $T$ is completely determined by the images $T(v_1), \dots, T(v_n)$. Moreover, for any choice of vectors $w_1, \dots, w_n \in W$, there exists a unique linear map $T$ with $T(v_i) = w_i$.
This is the fundamental construction theorem. It means the matrix of $T$ (in standard coordinates) is:
$$A = \bigl[\, T(e_1) \mid T(e_2) \mid \cdots \mid T(e_n) \,\bigr].$$
The set of all linear maps $V \to W$ is denoted $\mathcal{L}(V, W)$ or $\operatorname{Hom}(V, W)$. It is itself a vector space under pointwise operations: $(S + T)(v) = S(v) + T(v)$ and $(cT)(v) = c\,T(v)$.
2.2 Kernel of a Linear Map
Definition (Kernel). The kernel (or null space) of a linear map $T: V \to W$ is:
$$\ker(T) = \{v \in V : T(v) = 0\}.$$
It is the set of all inputs that $T$ maps to zero - the "lost information" of the map.
Theorem 2.2.1. $\ker(T)$ is a subspace of $V$.
Proof:
- Zero: $T(0) = 0$, so $0 \in \ker(T)$.
- Closure under addition: If $u, v \in \ker(T)$, then $T(u + v) = T(u) + T(v) = 0 + 0 = 0$.
- Closure under scaling: If $v \in \ker(T)$, then $T(cv) = cT(v) = c \cdot 0 = 0$.
Geometric meaning. $\ker(T)$ is the subspace that "collapses to zero." If $T$ is the projection of $\mathbb{R}^3$ onto the $xy$-plane, then $\ker(T)$ is the $z$-axis. If $T$ is differentiation of polynomials, $\ker(T)$ is the constant polynomials.
Theorem 2.2.2 (Injectivity criterion). $T$ is injective (one-to-one) if and only if $\ker(T) = \{0\}$.
Proof. ($\Rightarrow$) If $T$ is injective and $T(v) = 0 = T(0)$, then $v = 0$. ($\Leftarrow$) If $\ker(T) = \{0\}$ and $T(u) = T(v)$, then $T(u - v) = 0$, so $u - v = 0$, i.e., $u = v$.
The dimension of the kernel, $\dim \ker(T)$, is called the nullity of $T$.
2.3 Image of a Linear Map
Definition (Image). The image (or range) of a linear map $T: V \to W$ is:
$$\operatorname{im}(T) = \{T(v) : v \in V\} \subseteq W.$$
It is the set of all possible outputs - the "reachable" part of $W$.
Theorem 2.3.1. $\operatorname{im}(T)$ is a subspace of $W$.
Proof:
- Zero: $0 = T(0) \in \operatorname{im}(T)$.
- Closure under addition: $T(u) + T(v) = T(u + v) \in \operatorname{im}(T)$.
- Closure under scaling: $cT(v) = T(cv) \in \operatorname{im}(T)$.
Theorem 2.3.2. $\operatorname{im}(T)$ is the column space of the matrix $A$ representing $T$.
Proof: $T(x) = Ax = x_1 a_1 + \cdots + x_n a_n$, which is exactly the span of the columns of $A$.
Theorem 2.3.3 (Surjectivity criterion). $T$ is surjective (onto) if and only if $\operatorname{im}(T) = W$.
The dimension of the image, $\dim \operatorname{im}(T)$, is called the rank of $T$, denoted $\operatorname{rank}(T)$.
2.4 The Rank-Nullity Theorem
This is one of the most elegant and useful results in linear algebra, connecting the three fundamental dimensions of a linear map.
Theorem 2.4.1 (Rank-Nullity Theorem). Let $T: V \to W$ be a linear map with $V$ finite-dimensional. Then:
$$\dim V = \operatorname{rank}(T) + \operatorname{nullity}(T).$$
Proof. Let $\{u_1, \dots, u_k\}$ be a basis for $\ker(T)$ (so $\operatorname{nullity}(T) = k$). Extend this to a basis for all of $V$: $\{u_1, \dots, u_k, v_1, \dots, v_r\}$ where $k + r = \dim V$.
We claim $\{T(v_1), \dots, T(v_r)\}$ is a basis for $\operatorname{im}(T)$.
Spanning: For any $w = T(x) \in \operatorname{im}(T)$, write $x = \sum_i a_i u_i + \sum_j b_j v_j$ for some scalars. Then $T(x) = \sum_j b_j T(v_j)$, since each $T(u_i) = 0$.
Linear independence: If $\sum_j b_j T(v_j) = 0$, then $T\bigl(\sum_j b_j v_j\bigr) = 0$, so $\sum_j b_j v_j \in \ker(T)$, hence $\sum_j b_j v_j = \sum_i a_i u_i$ for some scalars $a_i$. But $\{u_1, \dots, u_k, v_1, \dots, v_r\}$ is linearly independent, so all $b_j = 0$.
Intuition: The rank-nullity theorem says $T$ "uses" its input dimensions in two ways: some dimensions are collapsed to zero (nullity), and the rest are faithfully transmitted to the output (rank). Total in = lost + kept.
RANK-NULLITY THEOREM
========================================================================
T: V -----------------------------------> W
| |
+-- ker(T) ---> {0} |
| (nullity) collapsed |
| |
+-- "rest" ---> im(T) \subseteq W |
(rank) transmitted |
dim(V) = nullity(T) + rank(T)
[total] [collapsed dims] [surviving dims]
Example: T: \mathbb{R}^5 -> \mathbb{R}^3, rank 2
-> nullity = 5 - 2 = 3
-> 3 dimensions collapse, 2 survive
========================================================================
Example. $T: \mathbb{R}^5 \to \mathbb{R}^3$ defined by $T(x) = Ax$ with $A$ of full row rank. Rank = 3 (full row rank). Nullity = $5 - 3 = 2$. The null space is a 2-dimensional subspace (a plane through the origin) of $\mathbb{R}^5$.
For AI: In LoRA, a weight update $\Delta W = BA$ with $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times d}$ has rank at most $r$. By rank-nullity, its kernel has dimension at least $d - r$. When $r \ll d$, the update leaves most of the input space unchanged - it only "speaks to" an $r$-dimensional subspace.
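A quick numerical confirmation of the theorem - a minimal NumPy sketch (the random $3 \times 5$ example and the seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x5 matrix has full row rank with probability 1: rank 3, nullity 2.
A = rng.standard_normal((3, 5))

rank = np.linalg.matrix_rank(A)
n = A.shape[1]
nullity = n - rank  # rank-nullity: dim(domain) = rank + nullity

print(f"rank = {rank}, nullity = {nullity}, rank + nullity = {rank + nullity} = dim of R^{n}")

# Basis for the null space via SVD: right singular vectors beyond the rank.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]          # shape (nullity, 5): rows span ker(A)
print("A @ null_basis.T ~ 0:", np.allclose(A @ null_basis.T, 0))
```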
2.5 Examples and Non-Examples
Linear transformations:

| Map | Domain -> Codomain | Kernel | Image |
|---|---|---|---|
| $T(x) = Ax$ (matrix mult.) | $\mathbb{R}^n \to \mathbb{R}^m$ | null space of $A$ | column space of $A$ |
| $D(p) = p'$ (differentiation) | $P_n \to P_{n-1}$ | constants | all polynomials of degree $\le n - 1$ |
| $I(f)(x) = \int_0^x f(t)\,dt$ (integration) | $C[0,1] \to C[0,1]$ | $\{0\}$ | functions vanishing at 0 |
| $T(v) = 0$ (zero map) | $V \to W$ | all of $V$ | $\{0\}$ |
| $T(v) = v$ (identity) | $V \to V$ | $\{0\}$ | all of $V$ |
| $T(x, y) = (x, 0)$ (projection) | $\mathbb{R}^2 \to \mathbb{R}^2$ | $y$-axis | $x$-axis |
| $T(A) = \operatorname{tr}(A)$ (trace) | $\mathbb{R}^{n \times n} \to \mathbb{R}$ | trace-zero matrices | $\mathbb{R}$ |
Non-linear maps (and why they fail):

| Map | Linearity failure | Test |
|---|---|---|
| $f(x) = x + b$ ($b \neq 0$) | $f(0) = b \neq 0$ | Zero test |
| $f(x) = x^2$ | $(x + y)^2 \neq x^2 + y^2$ | Additivity |
| $f(x) = \|x\|$ (norm) | $f(-x) = \|x\| \neq -f(x)$ | Homogeneity |
| $\operatorname{softmax}(x)$ | $\operatorname{softmax}(cx) \neq c\,\operatorname{softmax}(x)$; softmax is shift-invariant, not homogeneous | Homogeneity |
| $\operatorname{ReLU}(x)$ | $\operatorname{ReLU}(x + y) \neq \operatorname{ReLU}(x) + \operatorname{ReLU}(y)$ in general | Additivity |
| $f(x) = x \odot x$ (elementwise square) | $f(cx) = c^2 f(x) \neq c f(x)$ | Homogeneity |
Note on ReLU: Though ReLU is not linear, it is piecewise linear - linear on each orthant. This means neural networks with ReLU are piecewise linear functions, which is a key fact for understanding their behavior.
3. Matrix Representation of a Linear Map
3.1 The Standard Basis Construction
Over $\mathbb{R}^n$ and $\mathbb{R}^m$ with standard bases, every linear map $T: \mathbb{R}^n \to \mathbb{R}^m$ corresponds to a unique matrix $A \in \mathbb{R}^{m \times n}$.
Construction. The $j$-th column of $A$ is $T(e_j)$, the image of the $j$-th standard basis vector:
$$A = \bigl[\, T(e_1) \mid T(e_2) \mid \cdots \mid T(e_n) \,\bigr].$$
Why this works: For any $x = x_1 e_1 + \cdots + x_n e_n$:
$$T(x) = x_1 T(e_1) + \cdots + x_n T(e_n) = Ax.$$
The matrix is the complete, coordinate-encoded description of $T$.
Example. Find the matrix for the counterclockwise rotation of $\mathbb{R}^2$ by angle $\theta$.
$T(e_1) = (\cos\theta, \sin\theta)$ and $T(e_2) = (-\sin\theta, \cos\theta)$, giving:
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Example. Find the matrix for differentiation $D: P_3 \to P_2$ using the bases $\{1, x, x^2, x^3\}$ and $\{1, x, x^2\}$.
$D(1) = 0$, $D(x) = 1$, $D(x^2) = 2x$, $D(x^3) = 3x^2$. In the output basis:
$$[D] = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
This is a $3 \times 4$ matrix, reflecting $\dim P_2 = 3$ and $\dim P_3 = 4$.
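The construction translates directly into code: build the matrix column by column from the images of the standard basis vectors. A minimal NumPy sketch (the helper `matrix_of` and the angle are illustrative choices):

```python
import numpy as np

def matrix_of(T, n):
    """Assemble the matrix of a linear map T: R^n -> R^m column by column:
    column j is T(e_j), the image of the j-th standard basis vector."""
    return np.column_stack([T(np.eye(n)[:, j]) for j in range(n)])

theta = np.pi / 6

def rotate(x):
    # Counterclockwise rotation by theta, defined pointwise (no matrix used).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

R = matrix_of(rotate, 2)
print(np.round(R, 3))   # [[cos -sin], [sin cos]] for theta = pi/6
```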
3.2 Representation in Arbitrary Bases
When and have non-standard bases, the matrix of depends on those bases.
Setup. Let be a basis for and be a basis for .
Coordinate vectors. For , write . The coordinate vector is .
The matrix of in bases . Express each in the basis :
The matrix satisfies:
The commutative diagram:
v \in V ----------------T-----------------> T(v) \in W
| |
[*]_B (coord in B) [*]_C (coord in C)
| |
down down
[v]_B ---------[T]_B^C (matrix mult.)-----> [T(v)]_C
The matrix $[T]_B^C$ is the bridge between coordinates: it takes $B$-coordinates of the input to $C$-coordinates of the output.
Example. Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ have eigenvectors $b_1, b_2$ with eigenvalues $\lambda_1, \lambda_2$, and take the non-standard basis $B = \{b_1, b_2\}$:
$T(b_1) = \lambda_1 b_1$, so column 1 is $(\lambda_1, 0)^\top$. $T(b_2) = \lambda_2 b_2$, so column 2 is $(0, \lambda_2)^\top$.
This is diagonal: in the basis $B$, $T$ acts as a simple scaling on each basis vector - exactly the diagonalization idea.
3.3 The Change-of-Basis Matrix
Definition. Let $B$ and $B'$ be two bases for the same space $V$. The change-of-basis matrix from $B'$ to $B$ is:
$$P = \bigl[\, [b'_1]_B \mid [b'_2]_B \mid \cdots \mid [b'_n]_B \,\bigr].$$
Each column is the $B$-coordinate vector of the corresponding new basis vector.
Key property: If $[v]_{B'}$ are the $B'$-coordinates of $v$, then the $B$-coordinates are:
$$[v]_B = P\, [v]_{B'}.$$
The change-of-basis formula for $T$. If $A = [T]_B$ is the matrix of $T$ in the basis $B$, and $P$ is the change-of-basis matrix from $B'$ to $B$, then:
$$[T]_{B'} = P^{-1} A P.$$
Derivation:
$$[T(v)]_{B'} = P^{-1}[T(v)]_B = P^{-1} A [v]_B = P^{-1} A P [v]_{B'}.$$
Worked example. Take $B$ to be the standard basis, so the columns of $P$ are simply the $B'$ vectors written in standard coordinates; then the matrix of $T$ in the new basis is $P^{-1} A P$, computed numerically in the sketch below.
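A minimal numerical check of $[T]_{B'} = P^{-1} A P$; the symmetric matrix and the eigenvector basis below are illustrative choices:

```python
import numpy as np

# Illustrative choice: A has eigenvectors (1,1) and (1,-1), eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
P = np.array([[1.0,  1.0],     # columns = new basis vectors in standard coords
              [1.0, -1.0]])

A_new = np.linalg.inv(P) @ A @ P   # [T]_B' = P^{-1} A P
print(np.round(A_new, 10))         # diag(3, 1): T scales each new basis vector
```

In the eigenvector basis the map is diagonal - the change-of-basis formula and diagonalization are the same computation.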
3.4 Similarity Transformations
Definition. Two matrices $A, B \in \mathbb{R}^{n \times n}$ are similar ($A \sim B$) if there exists an invertible $P$ such that $B = P^{-1} A P$.
Geometrically: $A$ and $B$ represent the same linear map in different bases.
Invariants under similarity (properties that don't change when you change basis):

| Invariant | Formula | Meaning |
|---|---|---|
| Eigenvalues | $\operatorname{spec}(P^{-1}AP) = \operatorname{spec}(A)$ | Spectrum is basis-independent |
| Determinant | $\det(P^{-1}AP) = \det(A)$ | Volume scaling is basis-independent |
| Trace | $\operatorname{tr}(P^{-1}AP) = \operatorname{tr}(A)$ | Sum of eigenvalues |
| Rank | $\operatorname{rank}(P^{-1}AP) = \operatorname{rank}(A)$ | Dimension of image |
| Characteristic polynomial | $\det(\lambda I - P^{-1}AP) = \det(\lambda I - A)$ | Entire spectrum |

Diagonalization is change of basis. When $A$ is diagonalizable with eigenvectors $v_1, \dots, v_n$, the matrix $P = [v_1 \mid \cdots \mid v_n]$ gives $P^{-1} A P = \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$. We are choosing the basis of eigenvectors, in which the map acts by simple scaling.
Forward reference: Eigenvalues and Eigenvectors
The full theory of diagonalization, the spectral theorem, and when a matrix is diagonalizable is developed in 01: Eigenvalues and Eigenvectors. Here we note only that diagonalization is a special case of change of basis.
4. Composition, Invertibility, and Isomorphisms
4.1 Composition is Matrix Multiplication
Let $T: U \to V$ and $S: V \to W$ be linear maps. Their composition $S \circ T: U \to W$ defined by $(S \circ T)(u) = S(T(u))$ is also linear.
Theorem 4.1.1. If $T$ has matrix $A$ (in standard bases) and $S$ has matrix $B$, then $S \circ T$ has matrix $BA$.
Proof: $(S \circ T)(x) = S(T(x)) = S(Ax) = B(Ax) = (BA)x$.
This is the fundamental theorem connecting composition of functions to matrix multiplication. It explains:
- Non-commutativity: $S \circ T \neq T \circ S$ in general (different domains/codomains, or $AB \neq BA$ for square matrices).
- Associativity: $(S \circ T) \circ R = S \circ (T \circ R)$ - matches matrix associativity $(AB)C = A(BC)$.
- Deep learning: A network $f = f_L \circ \cdots \circ f_1$ is a composition. The forward pass computes this product. The backward pass (backprop) uses the chain rule, which is exactly the composition of Jacobians.
Collapse of linear-only networks. If $f(x) = W_L W_{L-1} \cdots W_1 x$ (all layers linear, no activations):
$$f(x) = W_{\text{eff}}\, x, \qquad W_{\text{eff}} = W_L \cdots W_1,$$
where $W_{\text{eff}}$ is a single matrix. No depth benefit without nonlinearity.
4.2 Injectivity, Surjectivity, and Bijectivity
Definitions. A linear map $T: V \to W$ is:
- Injective (one-to-one) if $T(u) = T(v) \Rightarrow u = v$
- Surjective (onto) if for every $w \in W$, there exists $v \in V$ with $T(v) = w$
- Bijective if both injective and surjective
Criteria via rank-nullity (for $T: \mathbb{R}^n \to \mathbb{R}^m$ with matrix $A$):

| Property | Condition | Matrix equivalent |
|---|---|---|
| Injective | $\ker(T) = \{0\}$, i.e., nullity = 0 | Columns of $A$ are linearly independent (full column rank) |
| Surjective | $\operatorname{im}(T) = W$, i.e., rank = $m$ | Columns of $A$ span $\mathbb{R}^m$; full row rank |
| Bijective | Both; requires $n = m$ and full rank | $A$ is square and invertible |

Dimension constraints:
- If $\dim V < \dim W$: $T$ cannot be surjective (rank $\le \dim V < \dim W$).
- If $\dim V > \dim W$: $T$ cannot be injective (nullity $\ge \dim V - \dim W > 0$).
- If $\dim V = \dim W$: injective $\Leftrightarrow$ surjective $\Leftrightarrow$ bijective.
4.3 Isomorphisms
Definition. A bijective linear map $T: V \to W$ is called an isomorphism. If such a map exists, $V$ and $W$ are isomorphic, written $V \cong W$.
Isomorphic spaces are "the same" as vector spaces - they have the same algebraic structure. An isomorphism is a relabeling of elements that preserves all linear operations.
Theorem 4.3.1 (Fundamental Classification). Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.
Consequence: Every $n$-dimensional vector space over $\mathbb{R}$ is isomorphic to $\mathbb{R}^n$. The space $P_n$ of polynomials of degree $\le n$ (dimension $n + 1$) is isomorphic to $\mathbb{R}^{n+1}$. The space $\mathbb{R}^{2 \times 2}$ (dimension 4) is isomorphic to $\mathbb{R}^4$.
For AI: The residual stream of a transformer (a vector in $\mathbb{R}^{d_{\text{model}}}$) is isomorphic to any other $d_{\text{model}}$-dimensional space. Features are not inherently "in $\mathbb{R}^{d_{\text{model}}}$" - they live in an abstract vector space, and $\mathbb{R}^{d_{\text{model}}}$ is just one coordinate representation. The linear representation hypothesis says this space has meaningful geometric structure independent of the coordinate system.
Properties of isomorphisms:
- The inverse $T^{-1}$ is also an isomorphism.
- Isomorphism preserves all vector space structure: subspaces, linear independence, bases, dimension.
- Isomorphisms form a group under composition.
4.4 The Inverse Map
Theorem 4.4.1. If $T: V \to W$ is an isomorphism (bijective linear map), then $T^{-1}: W \to V$ is also a linear map.
Proof: For $w_1, w_2 \in W$, let $v_i = T^{-1}(w_i)$, so $T(v_i) = w_i$. Then:
$$T(v_1 + v_2) = T(v_1) + T(v_2) = w_1 + w_2.$$
So $T^{-1}(w_1 + w_2) = v_1 + v_2 = T^{-1}(w_1) + T^{-1}(w_2)$; homogeneity is analogous.
Matrix inverse. If $T$ has matrix $A$ (square, full rank), then $T^{-1}$ has matrix $A^{-1}$.
Left and right inverses for non-square maps. When $T$ is injective but not surjective (tall matrix, $m > n$), there is no full inverse. However (both cases are verified numerically in the sketch below):
- Left inverse: $L$ such that $LA = I_n$. Exists iff $A$ is injective (full column rank). Formula: $L = (A^\top A)^{-1} A^\top$ (left pseudo-inverse).
- Right inverse: $R$ such that $AR = I_m$. Exists iff $A$ is surjective (full row rank). Formula: $R = A^\top (A A^\top)^{-1}$ (right pseudo-inverse).
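A minimal NumPy sketch with random full-rank matrices (shapes are illustrative); `np.linalg.pinv` recovers both special cases:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tall matrix (5x3): injective (full column rank), left inverse exists.
A = rng.standard_normal((5, 3))
L = np.linalg.inv(A.T @ A) @ A.T          # left pseudo-inverse
print(np.allclose(L @ A, np.eye(3)))      # L A = I_3

# Wide matrix (3x5): surjective (full row rank), right inverse exists.
B = rng.standard_normal((3, 5))
R = B.T @ np.linalg.inv(B @ B.T)          # right pseudo-inverse
print(np.allclose(B @ R, np.eye(3)))      # B R = I_3

# The Moore-Penrose pseudo-inverse agrees with both formulas here:
print(np.allclose(np.linalg.pinv(A), L), np.allclose(np.linalg.pinv(B), R))
```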
Forward reference: Moore-Penrose Pseudo-Inverse
The general pseudo-inverse $A^+$, defined via SVD as $A^+ = V \Sigma^+ U^\top$, handles all cases uniformly and gives the least-squares solution when an exact solution doesn't exist. Full treatment in 02: Singular Value Decomposition.
4.5 The Four Fundamental Subspaces via Linear Maps
Every linear map $T: \mathbb{R}^n \to \mathbb{R}^m$ with matrix $A$ of rank $r$ defines four fundamental subspaces:

| Subspace | Definition | Lives in | Dimension |
|---|---|---|---|
| Column space | $\operatorname{im}(A) = \{Ax : x \in \mathbb{R}^n\}$ | $\mathbb{R}^m$ | $r$ |
| Null space | $\ker(A) = \{x : Ax = 0\}$ | $\mathbb{R}^n$ | $n - r$ |
| Row space | $\operatorname{im}(A^\top)$ | $\mathbb{R}^n$ | $r$ |
| Left null space | $\ker(A^\top) = \{y : A^\top y = 0\}$ | $\mathbb{R}^m$ | $m - r$ |

Orthogonality relations (proven using the dual map - see 6):
$$\ker(A) = \operatorname{im}(A^\top)^\perp, \qquad \ker(A^\top) = \operatorname{im}(A)^\perp.$$
This splits $\mathbb{R}^n = \operatorname{im}(A^\top) \oplus \ker(A)$ and $\mathbb{R}^m = \operatorname{im}(A) \oplus \ker(A^\top)$.
THE FOUR FUNDAMENTAL SUBSPACES
========================================================================
\mathbb{R}^n (domain) \mathbb{R}^m (codomain)
+--------------------+ +--------------------+
| row space |--T---> | column space |
| (dim r) | | (dim r) |
+--------------------+ +--------------------+
| null space |--T---> {0} | left null space |
| (dim n-r) | | (dim m-r) |
+--------------------+ +--------------------+
<-> \perp <-> \perp
orthogonal complement orthogonal complement
========================================================================
5. Special Classes of Linear Transformations
5.1 Projection Operators
Definition. A linear map $P: V \to V$ is a projection if it is idempotent: $P^2 = P$.
Idempotency means "doing it twice is the same as doing it once." Once you project onto a subspace, you're already there - projecting again does nothing.
Theorem 5.1.1. If $P$ is a projection, then:
- $\operatorname{im}(P) = \{v : Pv = v\}$: the image of $P$ is the fixed-point set.
- $\ker(P) = \operatorname{im}(I - P)$: the kernel of $P$ is the image of the complementary projection.
- $V = \operatorname{im}(P) \oplus \ker(P)$ (direct sum decomposition).
Proof of last claim: Any $v$ decomposes as $v = Pv + (v - Pv)$, where $Pv \in \operatorname{im}(P)$ and $v - Pv \in \ker(P)$ (since $P(v - Pv) = Pv - P^2 v = 0$). Uniqueness follows from: if $v = w_1 + w_2$ with $w_1 \in \operatorname{im}(P)$, $w_2 \in \ker(P)$, then $Pv = Pw_1 + Pw_2 = w_1$, so the decomposition is forced.
Rank and trace. For a projection: $\operatorname{rank}(P) = \operatorname{tr}(P)$ (eigenvalues are only 0 and 1; trace = sum of eigenvalues = number of 1s = rank).
Orthogonal vs oblique projections. An orthogonal projection satisfies additionally $P = P^\top$ (i.e., $P$ is symmetric). In this case:
- $\ker(P) = \operatorname{im}(P)^\perp$ - the kernel is the orthogonal complement of the image.
- Formula: $P = U(U^\top U)^{-1} U^\top$ for projection onto $\operatorname{col}(U)$.
An oblique projection has $P^2 = P$ but $P \neq P^\top$ - it projects along a subspace that is not orthogonal to the target.
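A minimal NumPy sketch verifying the projection properties for a random subspace (the $4 \times 2$ shape is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)

# Orthogonal projection onto col(U) for a random 4x2 U with independent columns.
U = rng.standard_normal((4, 2))
P = U @ np.linalg.inv(U.T @ U) @ U.T

print(np.allclose(P @ P, P))          # idempotent: P^2 = P
print(np.allclose(P, P.T))            # symmetric: orthogonal projection
print(np.linalg.matrix_rank(P), round(np.trace(P)))  # rank = trace = 2

# I - P is the complementary projection, onto the orthogonal complement:
Q = np.eye(4) - P
print(np.allclose(Q @ Q, Q), np.allclose(P @ Q, np.zeros((4, 4))))
```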
Forward reference: Orthogonal Projections
The full theory of orthogonal projections - including the projection formula, orthogonal decompositions, and the relationship to least squares - is in 05: Orthogonality and Orthonormality.
For AI: Attention heads in transformers apply projections onto query/key/value subspaces. The head-specific projection matrices define low-dimensional subspaces of the residual stream. The attention mechanism then computes weighted combinations within these projected spaces. Projection is also central to layer normalization (projecting onto the unit sphere in the mean-subtracted subspace).
5.2 Rotations and Reflections
Orthogonal transformations are linear maps $T: \mathbb{R}^n \to \mathbb{R}^n$ that preserve inner products:
$$\langle T(u), T(v) \rangle = \langle u, v \rangle \quad \text{for all } u, v.$$
Equivalently, their matrices satisfy $Q^\top Q = I$, i.e., $Q^{-1} = Q^\top$.
The set of all $n \times n$ orthogonal matrices forms the orthogonal group $O(n)$.
Determinant splits them into two classes:
- $\det Q = +1$: proper rotations - preserve orientation. These form $SO(n)$, the special orthogonal group.
- $\det Q = -1$: improper rotations (reflections, or rotation + reflection) - reverse orientation.
2D rotation by angle $\theta$:
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Properties: $R_\alpha R_\beta = R_{\alpha + \beta}$, $R_\theta^{-1} = R_{-\theta} = R_\theta^\top$.
3D rotations are parametrized by a unit axis $k$ and angle $\theta$ (Rodrigues' formula):
$$R = I + (\sin\theta)\, K + (1 - \cos\theta)\, K^2,$$
where $K$ is the skew-symmetric matrix with $Kv = k \times v$.
Householder reflections. Given a unit vector $v$, the reflection through the hyperplane orthogonal to $v$ is:
$$H = I - 2vv^\top.$$
Properties: $Hv = -v$, $H = H^\top = H^{-1}$ (so $H^2 = I$), $\det H = -1$.
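A short numerical check of the Householder properties (random unit normal; the dimension is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.standard_normal(4)
v /= np.linalg.norm(v)               # unit normal of the reflecting hyperplane

H = np.eye(4) - 2.0 * np.outer(v, v)

print(np.allclose(H @ v, -v))                   # the normal is flipped
u = rng.standard_normal(4); u -= (u @ v) * v    # a vector orthogonal to v
print(np.allclose(H @ u, u))                    # the hyperplane is fixed
print(np.allclose(H @ H, np.eye(4)))            # H^2 = I (involution)
print(round(np.linalg.det(H), 6))               # det = -1: orientation reversed
```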
For AI: Rotary Position Embedding (RoPE), used in LLaMA, GPT-NeoX, and Gemma, encodes positional information via rotation matrices applied blockwise to query and key vectors. The rotation angle depends on position, and relative positions appear as relative rotations, which interact cleanly with the dot-product attention score.
5.3 Shear and Scaling Maps
Diagonal scaling: $T(x) = Dx$ where $D = \operatorname{diag}(d_1, \dots, d_n)$. Scales each coordinate independently. Kernel $= \{0\}$ if all $d_i \neq 0$. $\det D = d_1 d_2 \cdots d_n$.
Shear maps in $\mathbb{R}^2$: the horizontal shear by factor $k$ is:
$$S_k = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}.$$
$\det S_k = 1$ - shear preserves area. $S_k^{-1} = S_{-k}$.
Geometrically: the $x$-axis is fixed; each horizontal line slides by a distance proportional to its height. Parallelograms are tilted but area is preserved.
Elementary matrices (from row operations) are all either shears, row swaps, or scalings:
- Row scaling by $c \neq 0$: $E = I$ with $E_{ii} = c$ - multiply row $i$ by $c$
- Row swap: $E = I$ with rows $i$ and $j$ exchanged - swap rows $i$ and $j$
- Row addition: $E = I$ with $E_{ji} = c$ - add $c$ times row $i$ to row $j$ (this is a shear)
Every invertible matrix is a product of elementary matrices - Gaussian elimination as composition of linear maps.
5.4 The Geometry of Low-Rank Maps
A rank-$r$ linear map $T: \mathbb{R}^n \to \mathbb{R}^m$ with $r < \min(m, n)$ collapses the domain: it maps all of $\mathbb{R}^n$ onto an $r$-dimensional subspace of $\mathbb{R}^m$, while sending an $(n - r)$-dimensional subspace (the null space) to zero.
Decomposition via SVD preview. Any rank-$r$ matrix admits:
$$A = \sum_{i=1}^r \sigma_i\, u_i v_i^\top.$$
This is a sum of rank-1 outer products. Each rank-1 term maps everything in the direction $v_i$ to the direction $u_i$, scaled by $\sigma_i$.
Forward reference: SVD and Low-Rank Approximation
The full theory of SVD - including the Eckart-Young theorem (best rank- approximation) and the geometric interpretation of singular values - is in 02: Singular Value Decomposition.
LoRA preview. A rank-$r$ update $\Delta W = BA$ (with $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, $r \ll \min(d, k)$) is a composition of two low-rank linear maps:
- $A: \mathbb{R}^k \to \mathbb{R}^r$ compresses the input to $r$ dimensions.
- $B: \mathbb{R}^r \to \mathbb{R}^d$ expands back to $d$ dimensions.
The effective map $BA$ has rank at most $r$, so it only modifies the network's behavior along $r$ directions in the input space. The hypothesis underlying LoRA is that the task-relevant weight changes lie in a low-dimensional subspace - a statement directly about the geometry of linear maps.
6. Dual Spaces and Transposes
6.1 The Dual Space
Definition. Given a vector space $V$ over $\mathbb{F}$, the dual space $V^*$ is the set of all linear maps from $V$ to $\mathbb{F}$:
$$V^* = \mathcal{L}(V, \mathbb{F}).$$
Elements of $V^*$ are called linear functionals or covectors or one-forms.
$V^*$ is itself a vector space under pointwise addition $(\varphi + \psi)(v) = \varphi(v) + \psi(v)$ and scalar multiplication $(c\varphi)(v) = c\,\varphi(v)$.
The dual basis. If $\{v_1, \dots, v_n\}$ is a basis for $V$, then the dual basis $\{\varphi_1, \dots, \varphi_n\}$ is defined by:
$$\varphi_i(v_j) = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
The dual basis is a basis for $V^*$, so $\dim V^* = \dim V$.
Key example: Row vectors. In $\mathbb{R}^n$, a linear functional is any map $x \mapsto a^\top x$ for some fixed $a \in \mathbb{R}^n$. So $(\mathbb{R}^n)^* \cong \mathbb{R}^n$, but the identification (row vector) $\leftrightarrow$ (column vector) is coordinate-dependent. A row vector is intrinsically an element of $(\mathbb{R}^n)^*$, not of $\mathbb{R}^n$.
Why the distinction matters. When you write $a^\top x$, you're applying the dual vector $a^\top$ to the vector $x$. This is a pairing between a space and its dual - not a dot product of two vectors in the same space. In Riemannian geometry and general relativity, this distinction is essential. In ML, it matters for understanding gradients.
6.2 The Dual Map and the Transpose
Definition. Given $T: V \to W$, the dual map (or transpose) $T^\top: W^* \to V^*$ is defined by:
$$T^\top(\psi) = \psi \circ T.$$
In words: to apply $T^\top(\psi)$ to $v$, first apply $T$ to $v$, then apply $\psi$ to the result.
Matrix of the dual map. If $T$ has matrix $A$ in standard coordinates, then $T^\top$ has matrix $A^\top$ (the matrix transpose). The notation "$T^\top$" for the dual map is intentional and consistent.
Domain and codomain switch. $T: V \to W$ means $T^\top: W^* \to V^*$. The transpose reverses the direction. This is why:
- Composing: $(ST)^\top = T^\top S^\top$ (order reversal).
- In backpropagation: if the forward pass multiplies by $W$, the backward pass multiplies by $W^\top$.
Kernel of the transpose = left null space:
$$\ker(A^\top) = \{y : A^\top y = 0\} = \{y : y^\top A = 0^\top\}.$$
This is the left null space of $A$, which is orthogonal to $\operatorname{im}(A)$.
Annihilators. The annihilator of a subspace $U \subseteq V$ is $U^0 = \{\varphi \in V^* : \varphi(u) = 0 \text{ for all } u \in U\}$. Key identities:
$$\ker(T^\top) = (\operatorname{im} T)^0, \qquad \operatorname{im}(T^\top) = (\ker T)^0.$$
This connection between linear maps and gradients is one of the deepest in all of applied mathematics, and is directly relevant to understanding backpropagation.
The gradient as a linear functional. For a differentiable function $f: \mathbb{R}^n \to \mathbb{R}$, the derivative at $x$ is a linear functional $Df(x): \mathbb{R}^n \to \mathbb{R}$, defined by:
$$Df(x)[h] = \lim_{t \to 0} \frac{f(x + th) - f(x)}{t}.$$
The derivative is an element of $(\mathbb{R}^n)^*$ - a covector, represented as a row vector $\bigl(\partial f / \partial x_1, \dots, \partial f / \partial x_n\bigr)$.
The gradient vector $\nabla f(x)$ is obtained by identifying $(\mathbb{R}^n)^*$ with $\mathbb{R}^n$ via the standard inner product. This identification is coordinate-dependent: on a curved manifold (like a constraint surface or the space of probability distributions), gradients and vectors must be treated differently.
Backpropagation via dual maps. Consider a composed network $\ell = L(Wx)$ (one linear layer, loss $L$). The gradient with respect to $x$ is:
$$\nabla_x \ell = W^\top \nabla_z L, \qquad z = Wx.$$
This is exactly the dual map $W^\top$ applied to the incoming gradient $\nabla_z L$. Backpropagation propagates gradients backward through the transpose of each weight matrix. The chain of transposes in the backward pass is the dual map composition:
$$\nabla_x \ell = W_1^\top W_2^\top \cdots W_L^\top \nabla_y \ell \quad \text{(for a chain of linear layers)}.$$
For AI: Modern deep learning frameworks (PyTorch, JAX) compute gradients using automatic differentiation, which is precisely the evaluation of dual maps via the chain rule. Every .backward() call accumulates contributions via transpose weight matrices - it is applying the adjoint (dual) of the forward linear map.
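A minimal sketch of this identity, assuming a single linear layer with the illustrative loss $\ell = \frac{1}{2}\|Wx\|^2$, and checking the transpose rule against finite differences:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

# Forward: z = Wx, loss = 0.5 ||z||^2, so the incoming gradient dL/dz = z.
z = W @ x
grad_z = z

# Backward through the linear layer: apply the dual map, i.e. multiply by W^T.
grad_x = W.T @ grad_z

# Check against central finite differences.
def loss(x):
    return 0.5 * np.sum((W @ x) ** 2)

eps = 1e-6
fd = np.array([(loss(x + eps * e) - loss(x - eps * e)) / (2 * eps)
               for e in np.eye(4)])
print(np.allclose(grad_x, fd, atol=1e-5))   # True
```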
7. Affine Transformations
7.1 Beyond Linearity: Affine Maps
A linear map fixes the origin: $T(0) = 0$. But many practical transformations in geometry and ML need to shift the origin - they include a translation component.
Definition. A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is an affine transformation if it has the form:
$$f(x) = Ax + b,$$
where $A$ is an $m \times n$ matrix (the linear part) and $b \in \mathbb{R}^m$ is a vector (the translation).
Affine maps are linear maps composed with a translation. They are NOT linear (unless $b = 0$) - they fail the zero test: $f(0) = b \neq 0$ when $b \neq 0$.
Affine subspaces. The image of $\mathbb{R}^n$ under an affine map is an affine subspace - a translate of a linear subspace. Solutions to $Ax = b$ (when they exist) form an affine subspace: $x_0 + \ker(A)$ for any particular solution $x_0$.
Composition of affine maps. If $f(x) = Ax + b$ and $g(y) = Cy + d$, then:
$$(g \circ f)(x) = C(Ax + b) + d = (CA)x + (Cb + d).$$
The composition is affine: linear part $CA$, translation $Cb + d$.
7.2 Homogeneous Coordinates
The elegant trick to make affine maps linear is to lift to one higher dimension.
Definition. The homogeneous coordinates of $x \in \mathbb{R}^n$ are $\tilde{x} = \begin{pmatrix} x \\ 1 \end{pmatrix} \in \mathbb{R}^{n+1}$.
The augmented matrix. The affine map $f(x) = Ax + b$ becomes:
$$\tilde{f}(\tilde{x}) = \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix} = \begin{pmatrix} Ax + b \\ 1 \end{pmatrix}.$$
Now $\tilde{f}$ is a linear map in $\mathbb{R}^{n+1}$. The last coordinate is always 1 for "proper" points.
Key benefit: composition becomes matrix multiplication. Two affine maps compose as:
$$\begin{pmatrix} C & d \\ 0 & 1 \end{pmatrix} \begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} CA & Cb + d \\ 0 & 1 \end{pmatrix}.$$
No special cases needed - composition is just matrix multiplication.
Inverse of an affine map:
$$\begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} & -A^{-1}b \\ 0 & 1 \end{pmatrix}.$$
(Valid when $A$ is invertible.)
In computer graphics: Every rigid body transformation (rotation + translation) is an affine map, and sequences of transformations are composed by matrix multiplication in homogeneous coordinates. This is the foundation of 3D graphics pipelines (OpenGL, Vulkan).
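A minimal sketch of composition in homogeneous coordinates (the helper `augmented`, the angle, and the translations are illustrative choices):

```python
import numpy as np

def augmented(A, b):
    """Embed the affine map x -> Ax + b as an (n+1)x(n+1) linear map."""
    n = A.shape[1]
    M = np.eye(n + 1)
    M[:n, :n] = A
    M[:n, n] = b
    return M

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M1 = augmented(R, np.array([1.0, 0.0]))                  # rotate, then translate
M2 = augmented(2.0 * np.eye(2), np.array([0.0, -1.0]))   # scale, then translate

x = np.array([1.0, 1.0])
x_h = np.append(x, 1.0)                   # homogeneous coordinates (x, 1)

composed = M2 @ M1                        # apply M1 first, then M2
direct = 2.0 * (R @ x + np.array([1.0, 0.0])) + np.array([0.0, -1.0])
print(np.allclose((composed @ x_h)[:2], direct))  # True: one matrix product
```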
7.3 Neural Network Layers as Affine Maps
A single fully-connected layer computes:
$$h = \sigma(Wx + b).$$
The pre-activation $z = Wx + b$ is an affine map from $\mathbb{R}^{n_{\text{in}}}$ to $\mathbb{R}^{n_{\text{out}}}$. In homogeneous coordinates:
$$z = \begin{pmatrix} W & b \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix}.$$
Why bias matters. Without bias ($b = 0$), each layer is linear and the hyperplanes separating classes must pass through the origin. With bias, the decision boundary can be placed anywhere. This is the geometric reason bias terms dramatically increase expressive power.
Multi-layer without activations. A network $f(x) = W_2(W_1 x + b_1) + b_2$ is still affine:
$$f(x) = (W_2 W_1)\,x + (W_2 b_1 + b_2).$$
Multiple affine layers compose to a single affine layer. Depth without nonlinearity gives no expressive benefit.
BatchNorm as affine rescaling. After normalizing to zero mean and unit variance, BatchNorm applies a learned affine transformation $y = \gamma \hat{x} + \beta$ (elementwise). This is an affine map on the normalized activations, restoring the capacity to represent any desired scale and shift.
Embedding layers. An embedding layer maps token indices to vectors. Selecting the $i$-th embedding is equivalent to multiplying by the one-hot vector $e_i$: $E^\top e_i$ is the $i$-th row of $E$. This is linear in the one-hot representation. The unembedding maps representations to logits: $z = W_U h$, a pure linear map.
8. The Jacobian as Linear Approximation
8.1 Linearization of Nonlinear Maps
Every differentiable function is "locally linear" - at any point, it looks like a linear map to first order. This principle underlies calculus, optimization, and all of numerical analysis.
Definition (Total Derivative). A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is differentiable at $x_0$ if there exists a linear map $Df(x_0): \mathbb{R}^n \to \mathbb{R}^m$ such that:
$$\lim_{h \to 0} \frac{\|f(x_0 + h) - f(x_0) - Df(x_0)\,h\|}{\|h\|} = 0.$$
The linear map $Df(x_0)$ is the total derivative or Frechet derivative of $f$ at $x_0$. The first-order approximation is:
$$f(x_0 + h) \approx f(x_0) + Df(x_0)\,h.$$
For small $h$, the function looks like an affine map: constant $f(x_0)$ plus linear correction $Df(x_0)\,h$.
Geometric meaning. At each point $x_0$, the total derivative is the best linear approximation to $f$ near $x_0$. It maps input directions (tangent vectors at $x_0$) to output directions (tangent vectors at $f(x_0)$). This is the pushforward of vectors.
8.2 The Jacobian Matrix
Definition. The matrix representation of $Df(x_0)$ in standard coordinates is the Jacobian matrix:
$$J_f(x_0) = \begin{pmatrix} \partial f_1/\partial x_1 & \cdots & \partial f_1/\partial x_n \\ \vdots & \ddots & \vdots \\ \partial f_m/\partial x_1 & \cdots & \partial f_m/\partial x_n \end{pmatrix}.$$
Entry $(i, j)$: $\partial f_i / \partial x_j$ - how the $i$-th output changes with the $j$-th input.
Shape mnemonic: output dimension $\times$ input dimension: $m \times n$ for $f: \mathbb{R}^n \to \mathbb{R}^m$.
Special cases:
- $m = 1$ (scalar function): Jacobian is the row gradient $(\nabla f)^\top$.
- $n = 1$ (curve): Jacobian is the column tangent vector $f'(t)$.
- $m = n$ (same dim): $J$ is square; $\det J$ is the local volume scaling.
The Jacobian of softmax. For $s = \operatorname{softmax}(z)$ where $s_i = e^{z_i} / \sum_k e^{z_k}$:
$$\frac{\partial s_i}{\partial z_j} = s_i(\delta_{ij} - s_j), \qquad J = \operatorname{diag}(s) - s s^\top.$$
This is a rank-deficient matrix (rank $n - 1$) because softmax outputs sum to 1: the Jacobian has the constant vector $\mathbf{1}$ in its null space.
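A short numerical check of the softmax Jacobian formula, its kernel, and its rank (the input vector is an illustrative choice):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # shift for numerical stability
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5, -1.0])
s = softmax(z)

J = np.diag(s) - np.outer(s, s)  # J_ij = s_i (delta_ij - s_j)

print(np.allclose(J @ np.ones(4), 0))   # constant vector is in the kernel
print(np.linalg.matrix_rank(J))         # 3 = n - 1

# Finite-difference check of one column of the Jacobian:
eps = 1e-6
col0 = (softmax(z + eps * np.eye(4)[0]) - softmax(z - eps * np.eye(4)[0])) / (2 * eps)
print(np.allclose(J[:, 0], col0, atol=1e-6))
```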
The Jacobian of ReLU. For $\operatorname{ReLU}(x)$ applied elementwise:
$$J = \operatorname{diag}\bigl(\mathbf{1}[x_1 > 0], \dots, \mathbf{1}[x_n > 0]\bigr).$$
A diagonal matrix with 1s where $x_i > 0$ and 0s elsewhere. ReLU's Jacobian is a projection - it zeroes out the gradient for dead neurons.
The Jacobian of layer normalization. More complex: layer norm applies mean subtraction, variance normalization, and a learned affine transformation. Its Jacobian involves a projection onto the hyperplane orthogonal to the constant vector, composed with scaling from the normalization and affine parameters.
8.3 Chain Rule = Composition of Jacobians
The chain rule for vector-valued functions states: if $f: \mathbb{R}^n \to \mathbb{R}^m$ and $g: \mathbb{R}^m \to \mathbb{R}^p$, then:
$$J_{g \circ f}(x) = J_g(f(x))\, J_f(x).$$
This is matrix multiplication of Jacobians. The composition of two differentiable maps has a Jacobian equal to the matrix product of their individual Jacobians (evaluated at the appropriate points).
Backpropagation is reverse-mode Jacobian accumulation. For a network $y = (f_L \circ \cdots \circ f_1)(x)$ with loss $\ell$:
$$\nabla_x \ell = J_1^\top J_2^\top \cdots J_L^\top \nabla_y \ell.$$
Reading right to left: start with $\nabla_y \ell$, multiply by $J_L^\top$, then $J_{L-1}^\top$, etc. At each step, we multiply by the transpose Jacobian of a layer - which is the dual map of that layer's linear approximation.
Computational graph view. Each node in the computation graph stores its local Jacobian. Forward pass evaluates the functions; backward pass multiplies the transpose Jacobians (right to left). This is the mathematical content of autograd.
BACKPROPAGATION AS JACOBIAN CHAIN
========================================================================
Forward:  x --J_1---> z_1 --J_2---> z_2 -- ... --J_L---> y ---> loss
Backward: dL/dx <--J_1^T-- dL/dz_1 <--J_2^T-- dL/dz_2 <-- ... <--J_L^T-- dL/dy
Each backward step: (grad at input) = J^T x (grad at output)
                                    = (dual map) applied to incoming gradient
========================================================================
Why Jacobians matter for training dynamics:
- Large Jacobian singular values -> exploding gradients
- Small Jacobian singular values -> vanishing gradients
- The spectral norm of $J$ measures how much the linear approximation of $f$ can amplify inputs
For AI: Gradient clipping, careful weight initialization (Xavier, He), and residual connections all address the Jacobian conditioning problem. Residual connections add an identity Jacobian contribution: $J = I + J_{\text{block}}$, which prevents the singular values from collapsing to zero (vanishing gradients).
9. Applications in Machine Learning
9.1 Attention as Linear Projections
The scaled dot-product attention mechanism (Vaswani et al., 2017) is built entirely from linear transformations applied to a sequence of token representations.
Setup. Given an input sequence $X \in \mathbb{R}^{L \times d}$ (L tokens, d-dimensional residual stream), three learned projection matrices $W_Q$, $W_K$, and $W_V$ define:
$$Q = XW_Q, \qquad K = XW_K, \qquad V = XW_V.$$
Each is a linear transformation: $W_Q$ projects each token into "query space", $W_K$ into "key space", $W_V$ into "value space".
Attention scores. The attention pattern is a soft selection matrix. For a fixed query vector $q_i$, the attention scores are dot products $q_i \cdot k_j / \sqrt{d_k}$ in key space - a linear functional applied to each key.
Output. $\operatorname{Attention}(Q, K, V) = \operatorname{softmax}(QK^\top / \sqrt{d_k})\, V$. The output is a weighted linear combination of value vectors - a linear operation parameterized by the attention weights. The output projection $W_O$ maps back to the residual stream: another linear map.
Multi-head attention. Each head applies its own projection matrices $W_Q^{(h)}, W_K^{(h)}, W_V^{(h)}$, computes attention independently, and the results are concatenated and projected:
$$\operatorname{MultiHead}(X) = \operatorname{Concat}(\operatorname{head}_1, \dots, \operatorname{head}_H)\, W_O.$$
This is a composition and concatenation of linear maps. The expressivity comes from the multiplicative interaction $QK^\top$ (which is bilinear, not linear) - but conditioned on fixed attention weights, the rest is linear.
For AI: The "OV circuit" and "QK circuit" decomposition (Elhage et al., 2021) analyzes transformer attention by studying the linear maps $W_V W_O$ (value writing) and $W_Q W_K^\top$ (key-query matching) separately. This is possible precisely because attention is compositionally linear.
9.2 LoRA: Fine-Tuning via Low-Rank Composition
Low-Rank Adaptation (Hu et al., 2021) is one of the most important parameter-efficient fine-tuning methods, and its design is a direct application of linear map theory.
Setup. Freeze the pretrained weight $W_0 \in \mathbb{R}^{d \times k}$. Add a trainable low-rank update:
$$W = W_0 + \Delta W = W_0 + BA,$$
where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and $r \ll \min(d, k)$.
Rank-nullity interpretation. The map $\Delta W = BA$ has rank at most $r$. By rank-nullity:
$$\operatorname{nullity}(\Delta W) \ge k - r.$$
The update only changes the network's behavior along at most $r$ directions in the $k$-dimensional input space. When $k = 4096$ and $r = 8$, only 8 of 4096 input directions are affected - the update is extremely sparse in "direction space."
The two-layer interpretation. The update is a composition of two maps:
- $A: \mathbb{R}^k \to \mathbb{R}^r$ - compression to rank-$r$ space
- $B: \mathbb{R}^r \to \mathbb{R}^d$ - expansion back to full dimension
Initializing $A$ randomly and $B = 0$ ensures $\Delta W = 0$ at the start of fine-tuning, so the model starts from the pretrained weights.
Parameter count. A full update $\Delta W \in \mathbb{R}^{d \times k}$ would have $dk$ parameters. LoRA uses $r(d + k)$ parameters. Savings ratio: $dk / r(d + k)$. For $d = k = 4096$, $r = 8$: savings factor of $4096^2 / (8 \cdot 8192) = 256$.
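A minimal numerical check of the rank bound and the parameter count (dimensions scaled down from 4096 to 256 so the check runs instantly):

```python
import numpy as np

rng = np.random.default_rng(7)
d = k = 256
r = 8

B = rng.standard_normal((d, r))
A = rng.standard_normal((r, k))
dW = B @ A                                # the LoRA update, as one matrix

print(np.linalg.matrix_rank(dW))          # 8: rank at most r
s = np.linalg.svd(dW, compute_uv=False)
print(int(np.sum(s > 1e-10)))             # 8 nonzero singular values

full_params = d * k
lora_params = r * (d + k)
print(full_params, lora_params, full_params / lora_params)  # savings factor 16 here
```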
Extensions: DoRA (Weight-Decomposed Low-Rank Adaptation, 2024) decomposes $W$ into magnitude and direction, applying LoRA only to the direction component. GaLore (2024) applies low-rank projections to the gradients rather than the weights.
9.3 Linear Probes and the Linear Representation Hypothesis
Linear probing tests whether a feature is linearly decodable from a model's representations. Given representations $h_i \in \mathbb{R}^d$ and labels $y_i$, train a linear classifier:
$$\hat{y} = \sigma(w^\top h + b).$$
If the probe achieves high accuracy, the feature is linearly represented - it corresponds to a direction in representation space.
The Linear Representation Hypothesis (Mikolov et al., 2013; Elhage et al., 2022; Park et al., 2023) states that high-level features (sentiment, syntax, factual attributes, world models) are encoded as directions in the residual stream - i.e., as linear features.
Evidence:
- Word2vec arithmetic: $\text{king} - \text{man} + \text{woman} \approx \text{queen}$. Semantic relationships are linear offsets.
- Steering vectors: adding a scaled feature direction $\alpha v$ to all residual stream activations controls model behavior (e.g., "banana" direction, sentiment directions).
- Probing studies: most tested syntactic and semantic features are linearly decodable.
Superposition. When there are more features than dimensions, the model stores features in near-orthogonal directions that partially overlap (superposition). This is still linear representation - just with interference.
For AI: If the linear representation hypothesis holds broadly, then:
- Linear algebra provides the right toolkit for model interpretability.
- Interventions on model behavior reduce to vector addition in representation space.
- Feature extraction is a linear map - PCA/SVD on activations finds meaningful directions.
9.4 Embedding and Unembedding
Token embeddings. The embedding matrix $E \in \mathbb{R}^{|V| \times d}$ maps vocabulary indices to $d$-dimensional vectors. Indexing row $i$ of $E$ is equivalent to the linear map $e_i \mapsto E^\top e_i$ (one-hot selection). The embedding layer is linear in the one-hot representation.
Unembedding. The unembedding matrix $W_U \in \mathbb{R}^{|V| \times d}$ maps residual stream vectors to logits over the vocabulary:
$$z = W_U h.$$
This is a pure linear map. The logit for token $t$ is $z_t = (W_U)_{t,:} \cdot h$ - a dot product (linear functional) between the unembedding direction and the residual stream.
Logit lens. Applying $W_U$ to intermediate residual stream states (before the final layer) gives "early predictions" - showing what the model is computing at each layer. This technique (Nostalgebraist, 2020) is possible because unembedding is linear.
Tied embeddings. Many models (GPT-2, LLaMA variants) use $W_U = E$ - the same matrix for both embedding and unembedding. This enforces consistency: the most likely next token after seeing context is the one whose embedding has the highest dot product with the final residual state $h$ - i.e., $\arg\max_t\, e_t \cdot h$.
10. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Assuming $T(0) \neq 0$ is possible for a linear map | Homogeneity immediately gives $T(0) = T(0 \cdot v) = 0$. Any map with $T(0) \neq 0$ is affine or nonlinear. | The zero-test is the fastest way to rule out linearity. |
| 2 | Treating translation as linear | $f(x) = x + b$ fails additivity: $f(u + v) = u + v + b \neq f(u) + f(v) = u + v + 2b$ | Neural network layers are affine ($Wx + b$), not linear. This matters for composing and inverting. |
| 3 | Confusing rank of a map with rank of a matrix | Rank is basis-independent (it's the dimension of the image) but the matrix depends on the basis. | $\operatorname{rank}([T]_B^C) = \operatorname{rank}(T)$ for any bases - rank is a map invariant, not a matrix property. |
| 4 | Misapplying rank-nullity | Forgetting that rank-nullity applies to the domain dimension, not the codomain. For $T: \mathbb{R}^5 \to \mathbb{R}^3$, rank + nullity = 5, not 3. | Identify $\dim V$ (the domain) explicitly before applying the theorem. |
| 5 | Assuming $AB = BA$ | Matrix multiplication (= composition of linear maps) is generally not commutative. Even for square matrices, $AB \neq BA$ in general. | Non-commutativity is the default. Commutativity (e.g., diagonal matrices, polynomial functions of a matrix) is the special case. |
| 6 | Confusing similar matrices with equal matrices | $A \sim B$ means they represent the same map in different bases - same eigenvalues, trace, determinant. But $A \neq B$ in general. | Similar matrices are equal only if the change-of-basis matrix is the identity. |
| 7 | Thinking the kernel is trivial when $A$ is tall | A tall matrix ($m > n$) can still have a non-trivial null space if its columns are linearly dependent. | Compute $\operatorname{rank}(A)$. Nullity = $n - \operatorname{rank}(A)$, regardless of whether $m \ge n$. |
| 8 | Applying the inverse when only a one-sided inverse exists | A non-square $A$ ($m \neq n$) cannot have a two-sided inverse. Attempting $A^{-1}$ for such matrices is undefined. | Use the pseudo-inverse $A^+$ for the least-squares solution. |
| 9 | Forgetting that the Jacobian is a linear map, not just partial derivatives | The Jacobian matrix is the coordinate representation of the total derivative $Df(x_0)$. Partial derivatives individually give the columns but don't by themselves prove differentiability. | Differentiability requires the linear approximation error to go to zero - this requires the partials to exist AND be continuous. |
| 10 | Treating the gradient as a vector in the same space | Strictly, $Df(x) \in (\mathbb{R}^n)^*$ (the dual space). In Euclidean space with the standard basis, the identification makes this invisible. But for optimization on manifolds or with non-Euclidean metrics, treating gradients as primal vectors gives wrong answers. | Use the gradient as a covector when working with Fisher information, natural gradient, or Riemannian optimization. |
| 11 | Assuming all bijective functions are linear isomorphisms | A function can be bijective but not linear. E.g., $f(x) = x^3$ on $\mathbb{R}$ is bijective but not linear ($f(2x) = 8x^3 \neq 2f(x)$). | Isomorphisms must be both bijective AND linear. Check both conditions. |
| 12 | Forgetting that $\operatorname{im}(T)$ is in $W$, not in $V$ | The image is a subspace of the codomain $W$, not the domain $V$. The null space is in $V$. | Draw the diagram: $T: V \to W$. $\ker(T) \subseteq V$. $\operatorname{im}(T) \subseteq W$. |
11. Exercises
Exercise 1 * - Kernel and Image Computation
Goal: Given explicit matrices, compute kernel and image and verify rank-nullity.
(a) For a given $3 \times 3$ matrix $A$: find a basis for $\ker(A)$, a basis for $\operatorname{im}(A)$, and verify rank + nullity = 3.
(b) For $T: \mathbb{R}^3 \to \mathbb{R}^2$, $T(x, y, z) = (x, y)$ (projection to first two coordinates): identify kernel and image without computation.
(c) For the differentiation map $D: P_3 \to P_2$ with matrix as in 3.1: find kernel and image dimensions.
Exercise 2 * - Matrix of a Linear Map in Non-Standard Basis
Goal: Construct the matrix of a given transformation in a specified basis.
Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ be reflection across the line $y = x$ (i.e., $T(x, y) = (y, x)$). Basis $B = \{(1, 1), (1, -1)\}$.
(a) Write the standard matrix $A$ of $T$ in the standard basis.
(b) Compute the change-of-basis matrix $P$ from $B$ to the standard basis.
(c) Compute $[T]_B = P^{-1} A P$. What is notable about this result?
(d) Explain geometrically why $T$ is diagonal in the basis $B$.
Exercise 3 * - Rank-Nullity and Linear Systems
Goal: Use rank-nullity to understand solution sets of linear systems.
(a) $A \in \mathbb{R}^{m \times n}$ has rank 3. What is the nullity? How many free variables are there in $Ax = 0$?
(b) $A \in \mathbb{R}^{4 \times n}$ has rank 4 (full row rank). Is $Ax = b$ always solvable? What is the nullity of $A$?
(c) Prove: if $T: V \to V$ is a linear map on a finite-dimensional space, then $T$ is injective if and only if $T$ is surjective.
Exercise 4 ** - Projection Operator Construction
Goal: Build an orthogonal projection onto a given subspace and verify idempotency.
Let $U = \operatorname{span}\{u_1, u_2\} \subset \mathbb{R}^3$ for two given linearly independent vectors $u_1, u_2$.
(a) Orthonormalize the spanning set using Gram-Schmidt.
(b) Construct the projection matrix $P = QQ^\top$ where $Q$ has the orthonormal basis as columns.
(c) Verify: $P^2 = P$, $P = P^\top$, and $Pu = u$ for all $u \in U$.
(d) Compute $I - P$ and verify it is also a projection. What does it project onto?
Exercise 5 ** - Jacobian of Softmax
Goal: Derive and verify the Jacobian of the softmax function.
(a) Derive $\partial s_i / \partial z_j$ for $s = \operatorname{softmax}(z)$, handling the cases $i = j$ and $i \neq j$ separately.
(b) Show that the Jacobian is $J = \operatorname{diag}(s) - s s^\top$.
(c) Verify that $J \mathbf{1} = 0$ (the Jacobian kills constant vectors). Why does this make sense?
(d) Show that the rank of $J$ is at most $n - 1$.
Exercise 6 ** - Affine Map Composition via Homogeneous Coordinates
Goal: Compose affine transformations using augmented matrices.
In $\mathbb{R}^2$, let:
- $T_1$: rotation by a given angle $\theta$, then translation by a vector $t_1$
- $T_2$: scaling by a factor $s$ in both dimensions, then translation by a vector $t_2$
(a) Write augmented matrices $M_1$ and $M_2$ for $T_1$ and $T_2$.
(b) Compute $M_2 M_1$ (apply $T_1$ then $T_2$). What are the effective rotation, scale, and translation?
(c) Apply the composition to a test point $p$. Verify by applying $T_1$ then $T_2$ directly.
Exercise 7 *** - LoRA Rank Analysis
Goal: Analyze the geometry of low-rank weight updates.
Let $W_0 \in \mathbb{R}^{d \times k}$ be a pretrained weight. A LoRA update with rank $r$ is $\Delta W = BA$ where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$.
(a) What is $\operatorname{rank}(\Delta W)$ at most? What is the nullity of $\Delta W$ at least?
(b) Generate random $B$ and $A$ and numerically verify: $\operatorname{rank}(BA) \le r$.
(c) For a fixed input $x$, compute $\Delta W x = B(Ax)$. Show this is in $\operatorname{im}(B)$.
(d) Compute the singular values of $\Delta W$. How many are nonzero? What does this tell you about the effective dimension of the update?
(e) Compare the number of trainable parameters: $dk$ vs $r(d + k)$ for $d = k = 4096$, $r = 8$.
Exercise 8 *** - Linear Probing and the Linear Representation Hypothesis
Goal: Empirically test whether a concept is linearly represented in embedding space.
(a) Generate synthetic embeddings for binary sentiment: positive embeddings clustered near $+v$, negative near $-v$ for a fixed direction $v$, plus noise.
(b) Train a linear probe (logistic regression on top of embeddings) and measure accuracy.
(c) Apply PCA to the embeddings. Show that the first principal component aligns with $v$.
(d) Add a "superposition" condition: embed two independent binary features along non-orthogonal directions in $d$ dimensions. Show that both features can be linearly decoded but with interference.
(e) Compute the mutual coherence of the feature directions. How does it relate to probe accuracy?
12. Why This Matters for AI (2026 Perspective)
| Concept | AI Application | Why It Matters |
|---|---|---|
| Linear map axioms | Neural layer computation | Every forward pass is a composition of linear maps + activations; understanding linearity separates what the layer does from what nonlinearity adds |
| Kernel and image | Information compression | The null space of a weight matrix is "dead information" - inputs in produce no signal. Attention heads have low-rank structure that defines their effective null space |
| Rank-nullity theorem | LoRA, model compression | LoRA exploits rank-nullity: a rank-$r$ update has a null space of dimension at least $k - r$, leaving most of the input space unaffected - this is why it works with few parameters |
| Change of basis | Diagonalization, eigenfeatures | The eigenvalue decomposition of weight matrices (studied in mechanistic interpretability) is a change to the "natural basis" in which the layer acts by simple scaling |
| Composition = multiplication | Deep network analysis | The effective weight of an $L$-layer linear network is one matrix $W_L \cdots W_1$ - depth without nonlinearity has zero benefit. All depth benefits require nonlinearity |
| Projection operators | Attention heads, layer norm | Attention heads project onto query/key/value subspaces; layer norm projects onto the hyperplane of mean-zero vectors; understanding projections clarifies what information is preserved |
| Affine maps + bias | Universal approximation | Bias terms are essential for shifting decision boundaries. Without bias, the model cannot represent affine functions - only linear ones. Universal approximation requires affine layers |
| Jacobian and chain rule | Backpropagation | Every .backward() call is Jacobian-matrix multiplication via the chain rule. Gradient explosion/vanishing is about Jacobian singular value growth/decay through layers |
| Dual maps and transposes | Gradient computation | The backward pass uses transpose weight matrices - these are the dual maps. Natural gradient and Fisher information matrix methods exploit the geometry of the dual space |
| Linear representation hypothesis | Mechanistic interpretability | If features are linear, activation patching, steering vectors, and linear probing all work. This is why "linear algebra for interpretability" (e.g., representation engineering, logit lens) is a coherent research program |
13. Conceptual Bridge
Looking Back
This section builds on two foundational pillars from earlier in the curriculum.
From Chapter 2 06: Vector Spaces and Subspaces: We studied the axioms of abstract vector spaces (closure, associativity, identity, inverse, distributivity) and their subspaces (spans, null spaces, column spaces). Linear transformations are the maps between these abstract structures - the morphisms of the category of vector spaces. The four fundamental subspaces (column space, null space, row space, left null space) that were defined for matrices are now understood as $\operatorname{im}(T)$, $\ker(T)$, $\operatorname{im}(T^\top)$, and $\ker(T^\top)$ - intrinsic properties of the map, not the matrix.
From 01: Eigenvalues and Eigenvectors and 02: SVD: Both eigendecomposition and SVD are studied as decompositions of linear maps into simple pieces. Eigendecomposition finds a basis in which the map acts by scaling. SVD finds two orthonormal bases (for domain and codomain) in which the map acts by scaling. These are special cases of the general change-of-basis machinery developed here in 3.
From 03: PCA: PCA uses the linear structure of the data covariance matrix - the covariance is built from linear maps - to find the principal directions via SVD. The whitening transform and PCA projection are linear maps, and their geometry (dimension reduction, preserved variance) follows directly from the rank-nullity theorem.
Looking Forward
The abstract machinery of linear transformations will appear throughout the rest of the curriculum in concrete technical forms.
05: Orthogonality and Orthonormality: Gram-Schmidt is an algorithm that constructs a specific linear map (change of basis to an orthonormal basis). QR decomposition factors a linear map as an orthogonal map $Q$ followed by a triangular map $R$: $A = QR$. Orthogonal projections (5.1 here) are studied in depth there.
06: Matrix Norms: The spectral norm $\|A\|_2 = \sigma_{\max}(A)$ measures how much a linear map can amplify vectors. The nuclear norm $\|A\|_* = \sum_i \sigma_i(A)$ measures the "effective rank." These norms quantify properties of the linear map that are invisible from the matrix entries.
Chapter 4: Calculus Fundamentals: The Jacobian (8 here) is the bridge between linear algebra and calculus. Multivariable calculus is essentially the study of how linear maps (Jacobians) approximate smooth nonlinear maps. The Hessian is the second-order analogue - a bilinear map that measures curvature. The implicit function theorem, inverse function theorem, and change-of-variables formula all rely on the Jacobian being an invertible linear map at the point of interest.
Curriculum Position
POSITION IN THE MATH FOR LLMS CURRICULUM
========================================================================
Chapter 1: Mathematical Foundations
Chapter 2: Linear Algebra Basics
+-- Vectors, Matrix Operations, Systems of Equations
+-- Determinants, Matrix Rank
+-- Vector Spaces ----------------------------+
| prerequisite
Chapter 3: Advanced Linear Algebra |
+-- 01-Eigenvalues --------------------------+
+-- 02-SVD ---------------------------------+
+-- 03-PCA ---------------------------------+
| |
+-- 04-Linear Transformations <--------------+
| (YOU ARE HERE)
| Kernel, Image, Rank-Nullity
| Matrix Representation, Change of Basis
| Composition, Isomorphisms, Jacobian
| Affine Maps, Dual Spaces, AI Applications
| |
+-- 05-Orthogonality <----------------------+
+-- 06-Matrix Norms <-----------------------+
+-- 07-Positive Definite Matrices |
+-- 08-Matrix Decompositions |
down
Chapter 4: Calculus Fundamentals
(Jacobians and chain rule developed further)
========================================================================
The unifying theme: Every major algorithm in deep learning is a composition of linear maps and nonlinearities. Understanding this section means understanding the language in which every neural network, every attention mechanism, every gradient computation, and every interpretability method is written. The matrix is not the territory - but knowing how to move between coordinate representations (change of basis), how to measure what a map collapses (kernel and rank), how to compose maps (matrix multiplication), and how to invert them (isomorphisms and pseudo-inverses) gives you the full algebraic toolkit to reason about any linear system you encounter in machine learning.
<- PCA | Back to Advanced Linear Algebra | Orthogonality ->
Appendix A: Extended Examples and Computations
A.1 Computing the Matrix of Differentiation
We work out the differentiation operator in full detail to build intuition for linear maps on function spaces.
Setting. Let $V = P_3$ (polynomials of degree $\le 3$) with basis $B = \{1, x, x^2, x^3\}$ and $W = P_2$ with basis $C = \{1, x, x^2\}$.
Define $D: V \to W$ by $D(p) = p'$ (differentiation).
Step 1: Apply $D$ to each basis vector of $V$.
- $D(1) = 0$ -> coordinate vector $(0, 0, 0)^\top$
- $D(x) = 1$ -> coordinate vector $(1, 0, 0)^\top$
- $D(x^2) = 2x$ -> coordinate vector $(0, 2, 0)^\top$
- $D(x^3) = 3x^2$ -> coordinate vector $(0, 0, 3)^\top$
Step 2: Assemble into matrix.
$$[D]_B^C = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}$$
Step 3: Use the matrix. Differentiate $p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$. In $B$-coordinates: $[p]_B = (a_0, a_1, a_2, a_3)^\top$.
So $[D]_B^C\, [p]_B = (a_1, 2a_2, 3a_3)^\top$, i.e., $p'(x) = a_1 + 2a_2 x + 3a_3 x^2$. Check: direct differentiation gives the same. OK
Kernel of $D$: The null space is $\operatorname{span}\{1\}$, the constant polynomials. Dimension 1.
Image of $D$: All polynomials of degree $\le 2$ (since every $q \in P_2$ equals $D(p)$ for some $p \in P_3$). Dimension 3.
Rank-nullity check: $\dim P_3 = 4 = 3 + 1 = \operatorname{rank}(D) + \operatorname{nullity}(D)$. OK
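The same computation in NumPy - the matrix $[D]_B^C$ acting on coordinate vectors (the sample coefficients are an illustrative choice):

```python
import numpy as np

# [D]_B^C for D: P_3 -> P_2 in the monomial bases {1, x, x^2, x^3} and {1, x, x^2}.
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])

# p(x) = 5 + 4x + 3x^2 + 2x^3 as a coordinate vector:
p = np.array([5.0, 4.0, 3.0, 2.0])
print(D @ p)                        # [4, 6, 6] = coordinates of p'(x) = 4 + 6x + 6x^2

print(np.linalg.matrix_rank(D))     # rank 3, so nullity = 4 - 3 = 1 (the constants)
```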
A.2 Change of Basis: Full Worked Example
Problem. Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ rotate vectors by angle $\theta$ counterclockwise. Express $T$ in the rotated basis $B = \{R_\varphi e_1, R_\varphi e_2\}$ (the standard basis rotated by $\varphi$).
Step 1: Standard matrix of $T$.
$$A = R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
Step 2: Change-of-basis matrix. The new basis vectors, in standard coordinates, are the columns of $P = R_\varphi$.
Step 3: New matrix.
$$[T]_B = P^{-1} A P = R_{-\varphi}\, R_\theta\, R_\varphi$$
Since rotation matrices commute (they all rotate around the same axis in 2D): $[T]_B = R_\theta\, R_{-\varphi}\, R_\varphi = R_\theta$.
Key insight: A rotation in 2D has the same matrix in every positively oriented orthonormal basis (since all such change-of-basis matrices are themselves rotations $R_\varphi$). This is because $R_\theta$ commutes with all rotations: $R_{-\varphi} R_\theta R_\varphi = R_\theta$.
A.3 The Null Space as a Subspace: Visualization
For the rank-1 matrix $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}$ (rows are multiples of each other), let's find $\ker(A)$.
Row reduce: $\begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{pmatrix}$:
Free variables: $y$, $z$ (free). Back-substitute: $x = -2y - 3z$, so
$$\ker(A) = \{(-2y - 3z,\ y,\ z)\} = \operatorname{span}\{(-2, 1, 0),\ (-3, 0, 1)\}.$$
This is a plane through the origin in $\mathbb{R}^3$. The map collapses this entire plane to zero. By rank-nullity: rank $= 1$, nullity $= 2$, and $1 + 2 = 3$. OK
The image of $A$ is $\operatorname{span}\{(1, 2)\}$ - a line in $\mathbb{R}^2$ - since every column of $A$ is a multiple of $(1, 2)^\top$, so rank = 1.
Appendix B: Abstract Linear Algebra Perspective
B.1 The Category of Vector Spaces
In the language of category theory (which provides a unifying framework for all of mathematics):
- Objects: Vector spaces over a field $\mathbb{F}$
- Morphisms: Linear maps between them
- Composition: Function composition (= matrix multiplication)
- Identity morphism: The identity map $\operatorname{id}_V$ (= identity matrix $I$)
This forms a category denoted $\mathbf{Vect}_{\mathbb{F}}$.
Isomorphisms in the category are exactly the invertible linear maps - this matches our definition of isomorphism in 4.3.
Functors are maps between categories that preserve the categorical structure. The "matrix representation" is a functor from $\mathbf{Vect}_{\mathbb{F}}$ (with chosen bases) to the category of matrices. Changing bases corresponds to a natural transformation.
This perspective matters for ML because: neural network architectures are themselves categorical structures (composition of morphisms), and categorical thinking helps reason about when two architectures are "equivalent" (related by isomorphisms).
B.2 Infinite-Dimensional Extensions
The theory extends to infinite-dimensional spaces, where the familiar finite-dimensional results require modification.
Bounded linear operators. On Hilbert spaces (infinite-dimensional inner product spaces), the right notion of "linear transformation" is a bounded linear operator $T$ satisfying $\|T(v)\| \le C\|v\|$ for some constant $C$. Unbounded operators (like differentiation on $L^2$) require careful domain specification.
The spectral theorem for compact operators. A compact self-adjoint operator on a Hilbert space has a countable set of eigenvalues and an orthonormal basis of eigenvectors. This is the infinite-dimensional analogue of diagonalization.
Functional analysis. The study of linear maps on infinite-dimensional spaces is called functional analysis. Key results (Hahn-Banach, open mapping theorem, closed graph theorem) parallel finite-dimensional results but require additional technical hypotheses.
For AI: Neural network function classes are subsets of infinite-dimensional function spaces ($L^2$ or Sobolev spaces). Understanding the "size" and "complexity" of these classes uses infinite-dimensional linear algebra - e.g., kernel methods and Gaussian processes operate in reproducing kernel Hilbert spaces (RKHS), and neural tangent kernel theory analyzes infinitely wide networks using spectral theory of linear operators.
B.3 The Tensor Product and Multilinear Maps
Bilinear maps. A map $B: V \times W \to \mathbb{R}$ is bilinear if it is linear in each argument separately. The attention score $q^\top k$ is bilinear (linear in $q$, linear in $k$, but not linear jointly: $B(q_1 + q_2, k_1 + k_2) \neq B(q_1, k_1) + B(q_2, k_2)$ in general).
The tensor product $V \otimes W$ is the universal space for bilinear maps: every bilinear map $V \times W \to U$ factors through a unique linear map $V \otimes W \to U$. This is why tensors (in the ML sense: multi-dimensional arrays) are called tensors - they represent multilinear maps.
For AI: The key operation in self-attention, $q^\top k$ with $q = W_Q x_i$ and $k = W_K x_j$, is a bilinear form. The matrix $W_Q W_K^\top$ in the "weight matrix" formulation of attention ($\text{score}_{ij} = x_i^\top W_Q W_K^\top x_j$) makes this explicit: $W_Q W_K^\top$ is the matrix of the bilinear form. This is the reason attention is more expressive than standard linear transformations - it computes a bilinear (quadratic) function of the input.
Appendix C: The Geometry of Linear Maps - A Deep Dive
C.1 How Linear Maps Deform Space
To deeply understand a linear map $T(x) = Ax$, we track how it deforms geometric objects.
Ellipsoids to ellipsoids. The image of the unit sphere under a full-rank map $A$ is an ellipsoid whose semi-axes have lengths equal to the singular values $\sigma_i$ of $A$, pointing in the directions of the left singular vectors $u_i$.
This is the geometric content of the SVD: $A = U \Sigma V^\top$ means:
- $V^\top$: rotate the input so the "natural input directions" $v_i$ align with the coordinate axes.
- $\Sigma$: stretch each axis by $\sigma_i$.
- $U$: rotate the output to place the stretched axes along the $u_i$ directions.
The sphere becomes an ellipsoid. The "shape" of the ellipsoid is completely described by the singular values.
Volume scaling. The volume of the image of a set under $T(x) = Ax$ is $|\det A|$ times the original volume (when $A$ is square). More precisely, $|\det A|$ = product of all singular values = volume of the image of the unit cube.
For a rank-deficient map ($\mathrm{rank}\,A = r < n$), the image has lower dimension - and its $n$-dimensional volume is 0. The map "collapses" $n$-dimensional space to a lower-dimensional flat object.
Angles. Unless $A$ is orthogonal, linear maps change angles. The angle $\theta'$ between $Ax$ and $Ay$ satisfies:
$$\cos\theta' = \frac{(Ax)^\top (Ay)}{\|Ax\|\,\|Ay\|} = \frac{x^\top A^\top A\, y}{\|Ax\|\,\|Ay\|},$$
but in general $\theta' \ne \theta$, the original angle between $x$ and $y$.
The matrix $A^\top A$ (the Gram matrix) determines how $A$ distorts inner products. Eigenvalues of $A^\top A$ are $\sigma_i^2$ (squared singular values).
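The sphere-to-ellipsoid picture is easy to verify numerically: the longest and shortest image vectors of the unit circle have lengths equal to the singular values. A minimal sketch in 2D (random matrix for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))        # any (generically full-rank) 2x2 map

# Image of the unit circle under x -> Ax.
theta = np.linspace(0, 2 * np.pi, 2000)
circle = np.stack([np.cos(theta), np.sin(theta)])   # shape (2, 2000)
image = A @ circle

# Longest / shortest image vectors approximate the singular values of A.
norms = np.linalg.norm(image, axis=0)
sigma = np.linalg.svd(A, compute_uv=False)
print(norms.max(), sigma[0])       # approximately equal
print(norms.min(), sigma[1])       # approximately equal
```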
C.2 Interpreting the Four Fundamental Subspaces Geometrically
Given $T: \mathbb{R}^n \to \mathbb{R}^m$ with matrix $A$ and rank $r$:
The row space $C(A^\top)$: This is the "input directions that survive" - the $r$-dimensional subspace of $\mathbb{R}^n$ that maps faithfully (injectively) onto the column space. Any nonzero $x \in C(A^\top)$ is "noticed" by $T$.
The null space $N(A)$: This is the "input directions that are killed" - the $(n - r)$-dimensional subspace of $\mathbb{R}^n$ that maps to zero. Any $x \in N(A)$ is "invisible" to $T$.
The decomposition $\mathbb{R}^n = C(A^\top) \oplus N(A)$: Every input splits uniquely as $x = x_r + x_n$, where $x_r$ is in the row space (the "signal" part) and $x_n$ is in the null space (the "noise" invisible to $T$).
The column space $C(A)$: The $r$-dimensional subspace of $\mathbb{R}^m$ that $T$ can actually reach. Solutions to $Ax = b$ exist iff $b \in C(A)$.
The left null space $N(A^\top)$: The $(m - r)$-dimensional orthogonal complement of $C(A)$ in $\mathbb{R}^m$. Directions in the left null space are unreachable by $T$.
```text
COMPLETE PICTURE OF THE FOUR FUNDAMENTAL SUBSPACES
========================================================================
       R^n (domain)                          R^m (codomain)
  ---------------------------------------------------------
  +---------------------+    T(x) = Ax    +---------------------+
  |     row space       | --------------> |    column space     |
  |     (dim = r)       |   isomorphism   |     (dim = r)       |
  |                     |                 |                     |
  |   -------------     |                 |   -------------     |
  |                     |                 |                     |
  |     null space      | --------------> |   left null space   |
  |     (dim = n-r)     |    maps to 0    |     (dim = m-r)     |
  +---------------------+                 +---------------------+
   ^ orthogonal complement                 ^ orthogonal complement

  Every x = (row space part) + (null space part)
  T sees only the row space part.
========================================================================
```
For AI (linear systems / least squares): When fitting a model with more constraints than parameters ($m > n$), the system $Ax = b$ is overdetermined. A solution exists only if $b \in C(A)$. If not, the least-squares solution minimizes $\|Ax - b\|^2$ - finding the projection of $b$ onto $C(A)$ and then solving in the row space.
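A minimal sketch of this projection property: the least-squares residual is orthogonal to every column of $A$, which is exactly the statement that $Ax_{\text{ls}}$ is the projection of $b$ onto $C(A)$ (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))       # overdetermined: m = 10 > n = 3
b = rng.normal(size=10)            # a generic b is NOT in C(A)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# The fitted value A @ x_ls is the orthogonal projection of b onto C(A):
# the residual is orthogonal to the column space.
residual = b - A @ x_ls
print(np.allclose(A.T @ residual, 0, atol=1e-10))   # True
```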
C.3 Linear Maps and Information Theory
The rank-nullity theorem has an information-theoretic interpretation.
Rank = information preserved. A linear map of rank $r$ preserves at most $r$ "dimensions" of information from the input. The remaining $n - r$ dimensions are destroyed.
Mutual information. For a Gaussian input $x \sim \mathcal{N}(0, I)$ and output $y = Ax + \varepsilon$ with noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the mutual information
$$I(x; y) = \frac{1}{2} \sum_i \log\!\left(1 + \frac{\sigma_i^2}{\sigma^2}\right)$$
depends only on the singular values $\sigma_i$ of $A$ - not on the specific directions. The null space of $A$ contributes zero mutual information.
Compression. If we want to compress $x \in \mathbb{R}^n$ to $z = Px \in \mathbb{R}^k$ via a linear map $P$, the information retained is maximized by projecting onto the directions of largest variance. For structured data with covariance $\Sigma$, the optimal linear compression is PCA - projecting onto the top $k$ eigenvectors of $\Sigma$.
This is why PCA is the optimal linear compressor under mean-squared error: it maximizes the retained variance (information) for any fixed rank $k$.
C.4 Generalization of Linear Maps: Tensors
The concept of a linear map generalizes to multilinear maps and tensors in ways that are directly relevant to deep learning.
Bilinear maps and matrices of bilinear forms. A bilinear form $B: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ can be written as $B(x, y) = x^\top M y$ for a matrix $M$. The bilinear form is:
- Symmetric if $M = M^\top$: $B(x, y) = B(y, x)$.
- Positive definite if $B(x, x) > 0$ for $x \ne 0$: inner products are positive definite symmetric bilinear forms.
Multilinear maps. A $k$-linear map is linear in each argument separately. The space of $k$-linear maps on $\mathbb{R}^n$ is the space of tensors of order $k$.
For AI: The per-head attention score $s(x_q, x_k) = x_q^\top W_Q^\top W_K\, x_k$ is a bilinear form parameterized by $M = W_Q^\top W_K$. Understanding bilinear forms via the SVD ($M = U\Sigma V^\top$) reveals what "patterns" each attention head is sensitive to: the left singular vectors are "what queries to look for" and the right singular vectors are "what keys are being matched against."
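A minimal sketch of this identity with randomly initialized (hypothetical) projection weights: the two ways of computing the score agree, and the bilinear-form matrix has rank at most the head dimension:

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, d_head = 64, 8
W_Q = rng.normal(size=(d_head, d_model)) / np.sqrt(d_model)  # illustrative weights
W_K = rng.normal(size=(d_head, d_model)) / np.sqrt(d_model)

# The score is the bilinear form s(x_q, x_k) = x_q^T M x_k with M = W_Q^T W_K.
M = W_Q.T @ W_K                    # (d_model, d_model), rank <= d_head

x_q, x_k = rng.normal(size=d_model), rng.normal(size=d_model)
score_direct = (W_Q @ x_q) @ (W_K @ x_k)
score_bilinear = x_q @ M @ x_k
print(np.allclose(score_direct, score_bilinear))   # True

# SVD of M: singular vectors = query/key directions this head matches.
S = np.linalg.svd(M, compute_uv=False)
print(np.linalg.matrix_rank(M), S[:d_head])        # rank 8; 8 nonzero singular values
```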
Appendix D: Computational Methods and Numerical Considerations
D.1 Computing the Kernel via Row Reduction
Given $A \in \mathbb{R}^{m \times n}$, finding a basis for $N(A)$ requires solving $Ax = 0$.
Algorithm (Gaussian Elimination to RREF):
- Apply row operations to reduce $A$ to reduced row echelon form (RREF).
- Identify pivot columns (columns with leading 1s in RREF) and free columns (all other columns).
- For each free variable, set it to 1 and all other free variables to 0, then solve for the pivot variables.
- Each such solution is one basis vector for $N(A)$.
Example. $A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 6 & 9 \end{pmatrix}$.
RREF: $R_2 \leftarrow R_2 - 2R_1$: $\begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
$R_1 \leftarrow R_1 - 4R_2$: $\begin{pmatrix} 1 & 2 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
This is RREF: already done.
Pivot columns: 1 and 4. Free variables: $x_2$ and $x_3$.
Setting $x_2 = 1, x_3 = 0$: $x_1 = -2$, $x_4 = 0$. Basis vector: $(-2, 1, 0, 0)^\top$.
Setting $x_3 = 1, x_2 = 0$: $x_1 = -3$, $x_4 = 0$. Basis vector: $(-3, 0, 1, 0)^\top$.
Null space basis: $\{(-2, 1, 0, 0)^\top, (-3, 0, 1, 0)^\top\}$. Nullity = 2.
Rank-nullity: $n = 4$, rank = 2 (two pivots), nullity = 2. Check: $2 + 2 = 4$. OK
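The same computation can be done exactly with SymPy - a minimal sketch verifying the worked example above (the matrix is the reconstruction used there):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 9]])

R, pivots = A.rref()
print(R)        # Matrix([[1, 2, 3, 0], [0, 0, 0, 1]])
print(pivots)   # (0, 3): pivot columns 1 and 4, 0-indexed

for v in A.nullspace():
    print(v.T)  # [-2, 1, 0, 0] and [-3, 0, 1, 0]
```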
D.2 Numerical Stability of Basis Computations
Computing the null space or column space numerically requires care because floating-point arithmetic can introduce small errors.
The SVD approach (recommended). Instead of row reduction, compute the SVD: $A = U \Sigma V^\top$. Then:
- $N(A)$ = span of the columns of $V$ corresponding to zero singular values (or singular values below a threshold $\tau$).
- $C(A)$ = span of the columns of $U$ corresponding to nonzero singular values.
The SVD-based approach is numerically stable because the orthonormal bases (the columns of $U$ and $V$) are well-conditioned.
Numerical rank. For floating-point matrices, "zero" singular values appear as small but nonzero values. The numerical rank with threshold $\tau$ is:
$$\mathrm{rank}_\tau(A) = \#\{i : \sigma_i > \tau\}$$
A common choice is $\tau = \max(m, n)\,\varepsilon\,\sigma_{\max}$, where $\varepsilon$ is machine epsilon (about $2.2 \times 10^{-16}$ for double precision). numpy.linalg.matrix_rank uses a default threshold of this form, based on machine epsilon.
Why this matters: In practice, a matrix with theoretical rank $r$ may appear to have higher rank due to measurement noise. The SVD reveals the "intrinsic" rank through the gap in singular values.
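A minimal sketch of this effect: a rank-2 matrix plus tiny noise is full rank by the default (machine-precision) threshold, but the singular-value gap makes the intrinsic rank obvious:

```python
import numpy as np

rng = np.random.default_rng(3)
# An exactly rank-2 matrix, corrupted by small "measurement noise".
B = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 50))
noisy = B + 1e-8 * rng.normal(size=B.shape)

sigma = np.linalg.svd(noisy, compute_uv=False)
print(sigma[:4])                      # two O(10) values, then a drop to the noise floor

print(np.linalg.matrix_rank(noisy))             # 50: default threshold counts the noise
print(np.linalg.matrix_rank(noisy, tol=1e-3))   # 2: a gap-informed threshold recovers it
```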
D.3 Efficient Change of Basis Computations
Naive approach: Compute $P^{-1} A P$ directly. For $n \times n$ matrices, this costs $O(n^3)$ (two matrix multiplications plus one matrix inversion).
Better approach when $P$ is orthogonal: If $P$ is orthogonal ($P^{-1} = P^\top$), then $P^\top A P$ still costs $O(n^3)$ but with better constants than the general case (no matrix inversion needed).
Eigendecomposition case: When $A = P D P^{-1}$, computing $A^k x = P D^k (P^{-1} x)$ requires only $O(n)$ operations to compute $D^k$ (raise each diagonal entry to the $k$-th power), plus two matrix-vector multiplications for $P^{-1}x$ and the final multiplication by $P$.
For AI: Computing $e^{At}$ (matrix exponential, important for continuous-time state space models like Mamba/S4) is done by diagonalizing: $e^{At} = P e^{Dt} P^{-1}$, where $e^{Dt}$ is diagonal with entries $e^{\lambda_i t}$.
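A minimal sketch of the diagonalization route, checked against SciPy's general-purpose expm (random matrix for illustration; a generic real matrix diagonalizes over the complex numbers, so we take the real part at the end):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))
t = 0.7

lam, P = np.linalg.eig(A)                 # A = P diag(lam) P^{-1}
# P * exp(lam*t) scales column j of P by e^{lambda_j t}, i.e. P @ diag(e^{lam t}).
expAt = (P * np.exp(lam * t)) @ np.linalg.inv(P)

print(np.allclose(expAt.real, expm(A * t), atol=1e-8))   # True; imaginary parts ~ 0
```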
D.4 The Rank-Revealing QR Decomposition
Standard QR decomposition doesn't directly reveal rank. The rank-revealing QR (RRQR) uses column pivoting:
$$A \Pi = Q R = Q \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}$$
where $\Pi$ is a permutation matrix, $R_{11}$ is $r \times r$ and well-conditioned, and $R_{22}$ is "small" (its Frobenius norm bounds how far $A$ is from rank $r$). The first $r$ columns of $Q$ form a basis for $C(A)$.
RRQR is preferred over the SVD when only a basis for the column space (not the singular values themselves) is needed, as it is substantially faster.
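A minimal sketch using SciPy's pivoted QR: the diagonal of $R$ drops sharply after the true rank, and the leading columns of $Q$ span the column space (rank-3 matrix built for illustration):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(5)
A = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))   # rank 3 by construction

Q, R, piv = qr(A, pivoting=True)          # A[:, piv] = Q @ R
diag = np.abs(np.diag(R))
print(diag[:5])                           # three O(1) entries, then a sharp drop

r = int(np.sum(diag > 1e-10 * diag[0]))   # revealed numerical rank
print(r)                                  # 3
# Q[:, :r] is an orthonormal basis for C(A).
```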
Appendix E: Connections to Other Fields
E.1 Linear Maps in Physics
In quantum mechanics, operators on Hilbert spaces are infinite-dimensional linear maps. The Hamiltonian $\hat{H}$, momentum $\hat{p}$, and position $\hat{x}$ are linear operators. The Schrödinger equation $i\hbar\,\partial_t \psi = \hat{H}\psi$ is a linear ODE on the Hilbert space of quantum states.
The spectral theorem for self-adjoint operators (the quantum generalization of symmetric matrix diagonalization) guarantees that observables have real eigenvalues (the possible measurement outcomes) and that the eigenfunctions form a complete orthonormal basis.
For AI: Transformers share surprising mathematical parallels with quantum mechanics: both involve attention-like mechanisms (inner products of states), superposition (linear combinations of basis states), and entanglement-like correlations. The linear algebra of quantum mechanics and of transformers both live in the framework of linear maps on Hilbert spaces.
E.2 Linear Maps in Topology: Homomorphisms
In algebraic topology, chain complexes are sequences of vector spaces connected by linear maps:
$$\cdots \xrightarrow{\;\partial_{k+2}\;} C_{k+1} \xrightarrow{\;\partial_{k+1}\;} C_k \xrightarrow{\;\partial_k\;} C_{k-1} \xrightarrow{\;\partial_{k-1}\;} \cdots$$
where $\partial_k \circ \partial_{k+1} = 0$ (the boundary of a boundary is zero - exactly the condition $\mathrm{im}\,\partial_{k+1} \subseteq \ker \partial_k$). The homology groups $H_k = \ker \partial_k / \mathrm{im}\,\partial_{k+1}$ measure "holes" in topological spaces.
Persistent homology, used in topological data analysis (TDA), applies this to point cloud data to find features that persist across scales. It's used in ML for analyzing data manifolds, protein structure prediction, and understanding neural network loss landscapes.
E.3 Linear Maps in Signal Processing
The Discrete Fourier Transform (DFT) is a linear map $F: \mathbb{C}^N \to \mathbb{C}^N$ with matrix entries $F_{jk} = \frac{1}{\sqrt{N}} e^{-2\pi i jk/N}$. The DFT matrix is unitary ($F^* F = I$).
Convolution is linear - convolving a signal with a kernel is a linear map - and in the Fourier domain it becomes pointwise multiplication. This is the key to making CNNs efficient: convolution is a structured linear map with shared weights (translation equivariance), and the Fourier transform diagonalizes the convolution operator.
For AI: The fast Fourier transform (FFT) computes the DFT in $O(N \log N)$ instead of the $O(N^2)$ of a full DFT matrix multiply, by exploiting the structure (sparsity in a different basis) of the DFT linear map. Similarly, FlashAttention speeds up attention by exploiting the structure of the attention computation to minimize memory bandwidth.
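A minimal sketch of the convolution theorem: circular convolution computed directly agrees with pointwise multiplication in the Fourier domain (the direct loop is the $O(N^2)$ linear map, the FFT route is $O(N \log N)$):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 256
x = rng.normal(size=N)     # signal
k = rng.normal(size=N)     # kernel (same length -> circular convolution)

# Circular convolution directly: (x * k)[i] = sum_j x[j] k[(i - j) mod N]
direct = np.array([sum(x[j] * k[(i - j) % N] for j in range(N)) for i in range(N)])

# ...and via the DFT: pointwise multiplication in the Fourier basis.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

print(np.allclose(direct, via_fft))   # True
```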
Appendix F: The Algebra of Linear Maps - Structural Results
F.1 The Space of Linear Maps is a Vector Space
We noted briefly that $\mathcal{L}(V, W)$, the set of linear maps from $V$ to $W$, is a vector space. Let's make this precise and compute its dimension.
Operations: For $S, T \in \mathcal{L}(V, W)$ and scalar $c$:
$$(S + T)(v) = S(v) + T(v), \qquad (cT)(v) = c\,T(v)$$
- The zero element is the zero map $Z(v) = 0$ for all $v$.
Dimension. If $\dim V = n$ and $\dim W = m$, then:
$$\dim \mathcal{L}(V, W) = mn$$
Proof: Every linear map is determined by $n$ vectors in $W$ (the images of the basis vectors), each in an $m$-dimensional space. The natural isomorphism is $\mathcal{L}(V, W) \cong W^n$ (as vector spaces), which corresponds to the identification with $m \times n$ matrices.
Basis for $\mathcal{L}(V, W)$. The standard basis consists of the maps $E_{ij}$ defined by $E_{ij}(v_j) = w_i$ and $E_{ij}(v_k) = 0$ for $k \ne j$. In matrix form, $E_{ij}$ is the matrix with 1 in position $(i, j)$ and 0 elsewhere.
F.2 Composition Gives an Algebra Structure
When $V = W$, linear maps can be composed. The space $\mathcal{L}(V) = \mathcal{L}(V, V)$ (endomorphisms of $V$) is a ring under composition (it is also a vector space - together, an algebra).
Properties of composition in $\mathcal{L}(V)$:
- Associative: $(R \circ S) \circ T = R \circ (S \circ T)$
- Identity: $\mathrm{id} \circ T = T \circ \mathrm{id} = T$
- Distributive: $R \circ (S + T) = R \circ S + R \circ T$ and $(R + S) \circ T = R \circ T + S \circ T$
- NOT commutative: $S \circ T \ne T \circ S$ in general
Matrix polynomials. For $T \in \mathcal{L}(V)$, we can form $p(T) = a_k T^k + \cdots + a_1 T + a_0 I$ for any polynomial $p$. This is well-defined because we can add and compose linear maps.
Cayley-Hamilton theorem. Every linear operator satisfies its own characteristic polynomial: $p_T(T) = 0$, where $p_T(\lambda) = \det(\lambda I - T)$.
For AI: The spectral approach to recurrent networks analyzes the long-run behavior of $W^t$ as $t \to \infty$. If $W$ has eigenvalues with $|\lambda_i| < 1$ for all $i$, then $W^t \to 0$ (stable memory decay). If any $|\lambda_i| > 1$, the recurrence explodes. This spectral stability analysis is the foundation of designing stable RNNs (LSTM, GRU use gating to control the effective eigenvalue spectrum of the recurrence).
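A minimal sketch of this stability dichotomy: scaling a random recurrence matrix moves its spectral radius below or above 1, and the iterated state decays or explodes accordingly:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
for scale in (0.7, 1.3):
    W = scale * rng.normal(size=(n, n)) / np.sqrt(n)   # spectral radius ~ scale
    rho = np.max(np.abs(np.linalg.eigvals(W)))

    h = np.ones(n)
    for _ in range(200):
        h = W @ h                                       # h_{t+1} = W h_t

    print(f"spectral radius {rho:.2f} -> ||h_200|| = {np.linalg.norm(h):.3e}")
    # radius < 1: norm ~ 0 (memory decays); radius > 1: norm is huge (explosion)
```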
F.3 Quotient Maps and Projections
The quotient space. Given $T: V \to W$ with kernel $K = \ker T$, the quotient space $V/K$ consists of the equivalence classes $v + K$.
$V/K$ is a vector space of dimension $\dim V - \dim K$.
The first isomorphism theorem. Every linear map $T: V \to W$ factors as:
$$T = \tilde{T} \circ \pi$$
where $\pi: V \to V/\ker T$ is the quotient map ($\pi(v) = v + \ker T$) and $\tilde{T}$ is an isomorphism from $V/\ker T$ to $\mathrm{im}\,T$.
This is the coordinate-free statement of the rank-nullity theorem: $\dim(V/\ker T) = \dim V - \dim \ker T = \dim \mathrm{im}\,T$.
Geometric meaning. The quotient map "collapses" the null space to a point, then acts faithfully (injectively) on the resulting space. Any linear map splits into: collapse (project out the null space) + inject faithfully into the codomain.
For AI: In contrastive learning (SimCLR, MoCo), the projection head maps representations to a lower-dimensional space. This is a linear (or nonlinear) quotient map - it deliberately collapses some dimensions (those corresponding to nuisance factors like image augmentation) while preserving the semantically meaningful directions. The first isomorphism theorem says: what survives in the image is exactly what was not collapsed.
F.4 Dual Bases and the Canonical Isomorphism
We saw that $\dim V^* = \dim V$, so $V \cong V^*$. But this isomorphism is non-canonical - it depends on the choice of basis.
With an inner product. When $V$ is an inner product space (like $\mathbb{R}^n$ with the standard dot product), there is a canonical isomorphism $V \to V^*$ defined by:
$$v \mapsto \langle v, \cdot \rangle$$
That is, $v$ is sent to the linear functional that takes $w \mapsto \langle v, w \rangle$.
This isomorphism is canonical because it doesn't depend on any choice of basis - it uses only the inner product structure. When we identify the gradient $\nabla f$ as a column vector (primal) rather than a row vector (dual), we are implicitly using this canonical isomorphism via the standard inner product.
For AI: On non-Euclidean spaces (manifolds of probability distributions, manifolds of neural network weights under the Fisher metric), the identification $V \cong V^*$ is no longer trivial - gradients and velocity vectors live in different spaces. The natural gradient method corrects for this by using the Fisher information matrix $F$ as the metric: $\tilde{\nabla} L = F^{-1} \nabla L$. This maps the gradient (a covector) to a tangent vector using the Riemannian metric instead of the Euclidean metric.
Appendix G: Linear Maps in Practice - Worked Problems
G.1 Verifying Linearity: Systematic Approach
Problem: Is $T: \mathbb{R}^{2\times 2} \to \mathbb{R}$ defined by $T(A) = \mathrm{tr}(A)$ linear?
Check additivity: $T(A + B) = \mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B) = T(A) + T(B)$. OK
Check homogeneity: $T(cA) = \mathrm{tr}(cA) = c\,\mathrm{tr}(A) = c\,T(A)$. OK
Conclusion: $T$ is linear. Its matrix (viewing $\mathbb{R}^{2\times 2}$ with basis $\{E_{11}, E_{12}, E_{21}, E_{22}\}$):
$[T] = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}$ - so $T$ acts as a $1 \times 4$ matrix. Kernel = $\{A : \mathrm{tr}(A) = 0\}$ (trace-zero matrices), dimension 3.
Problem: Is $T: \mathbb{R}^n \to \mathbb{R}$ defined by $T(x) = \|x\|$ linear?
Check homogeneity: $T(cx) = \|cx\| = |c|\,\|x\|$. For $c < 0$: $|c| \ne c$, but linearity requires $T(cx) = c\,T(x)$.
For $c = -1$ and $x \ne 0$: $T(-x) = \|x\| > 0$ but $-T(x) = -\|x\| < 0$. So $T(-x) \ne -T(x)$.
Conclusion: $T$ is NOT linear (the norm fails homogeneity due to the absolute value).
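This check is mechanical enough to automate: random probes of additivity and homogeneity can refute linearity (they can never prove it, but a single failing pair is a valid disproof). A minimal sketch:

```python
import numpy as np

def looks_linear(T, n, trials=100, seed=8):
    """Randomly test additivity and homogeneity of T on R^n."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=n), rng.normal(size=n)
        c = rng.normal()
        if not np.allclose(T(x + y), T(x) + T(y)):
            return False          # additivity fails
        if not np.allclose(T(c * x), c * T(x)):
            return False          # homogeneity fails
    return True                   # no counterexample found

print(looks_linear(lambda x: np.array([x.sum()]), 4))            # True: a linear functional
print(looks_linear(lambda x: np.array([np.linalg.norm(x)]), 4))  # False: the norm
```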
G.2 Finding the Kernel: Four Approaches
For $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}$ (the same rank-1 matrix as in A.3, rows are multiples), find $N(A)$:
Approach 1: Row reduction. RREF: $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{pmatrix}$. Free variables $x_2$, $x_3$. Then $x_1 = -2x_2 - 3x_3$.
$N(A) = \mathrm{span}\{(-2, 1, 0)^\top, (-3, 0, 1)^\top\}$.
Approach 2: Inspection. The columns are $c_1 = (1, 2)^\top$, $c_2 = (2, 4)^\top = 2c_1$, and $c_3 = (3, 6)^\top = 3c_1$.
$c_2 = 2c_1$ means $-2c_1 + c_2 = 0$, so $(-2, 1, 0)^\top \in N(A)$. And since $c_3 = 3c_1$, we get $-3c_1 + c_3 = 0$, so $(-3, 0, 1)^\top \in N(A)$ - two independent null vectors, which span all of $N(A)$ since the nullity is 2.
Approach 3: SVD. Compute the SVD of $A$; null space vectors are the right singular vectors with zero (or near-zero) singular values.
Approach 4: Random sampling + orthogonalization. Sample many vectors, project out the row space, keep the components with zero image (useful when $A$ is very large).
G.3 Composition of Transforms in a Graphics Pipeline
A 3D object is processed through a graphics pipeline using compositions of affine maps:
- Model matrix $M$: transform from object coordinates to world coordinates (rotation, scale, translation).
- View matrix $V$: transform from world coordinates to camera coordinates (rotation + translation).
- Projection matrix $P$: from camera coordinates to clip coordinates (perspective projection).
The combined transform: $v' = P\,V\,M\,v$ (in homogeneous coordinates).
Composition order matters. Reading right to left: first apply $M$, then $V$, then $P$. The matrix product $PVM$ can be precomputed once per frame (not per vertex), saving two matrix multiplications per vertex.
This is the same principle as "avoid recomputing shared prefixes" in transformer KV-caching: the cache stores the projected keys and values $W_K x_t$, $W_V x_t$ for all past tokens $t$, so they don't need to be recomputed when generating each new token.
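A minimal sketch of the pipeline in homogeneous coordinates (the specific matrices are illustrative; the projection is a stand-in identity):

```python
import numpy as np

def translation(t):
    T = np.eye(4); T[:3, 3] = t; return T

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]; return R

M = translation([1.0, 0.0, 0.0]) @ rotation_z(np.pi / 2)  # model: rotate, then move
V = translation([0.0, 0.0, -5.0])                         # view: hypothetical camera shift
P = np.eye(4)                                             # stand-in projection matrix

PVM = P @ V @ M                      # precompute once per frame
v = np.array([0.0, 1.0, 0.0, 1.0])   # a vertex in homogeneous coordinates

# Composition = matrix product: one matrix-vector multiply per vertex
print(np.allclose(PVM @ v, P @ (V @ (M @ v))))   # True
```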
Appendix H: Linear Maps in Optimization and Training
H.1 The Gradient as a Linear Map
In optimization, we minimize a loss function $L: \mathbb{R}^n \to \mathbb{R}$. The gradient $\nabla L(\theta)$ tells us the direction of steepest ascent. But more precisely:
The directional derivative of $L$ at $\theta$ in direction $v$ is:
$$D_v L(\theta) = \lim_{h \to 0} \frac{L(\theta + hv) - L(\theta)}{h} = \nabla L(\theta)^\top v$$
This is a linear functional in $v$: it is the dual vector (covector) $dL_\theta \in (\mathbb{R}^n)^*$.
Gradient descent in its pure form: $\theta_{t+1} = \theta_t - \eta\,\nabla L(\theta_t)$. This uses the Euclidean identification of the gradient (covector) with a primal vector.
Natural gradient descent: $\theta_{t+1} = \theta_t - \eta\,F^{-1}\nabla L(\theta_t)$, where $F$ is the Fisher information matrix. This uses the correct metric on the manifold of probability distributions (the Fisher-Rao metric) to convert the covector gradient to a tangent vector.
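The "linear functional" claim is easy to check numerically: the finite-difference directional derivative agrees with the inner product $\nabla L(\theta)^\top v$. A minimal sketch with a hand-coded toy loss (the function and its gradient are illustrative):

```python
import numpy as np

def L(theta):                        # a sample smooth loss
    return np.sum(theta ** 2) + np.sin(theta[0])

def grad_L(theta):
    g = 2 * theta
    g[0] += np.cos(theta[0])
    return g

rng = np.random.default_rng(9)
theta, v = rng.normal(size=5), rng.normal(size=5)
h = 1e-6

fd = (L(theta + h * v) - L(theta)) / h   # finite-difference directional derivative
print(fd, grad_L(theta) @ v)             # agree: D_v L = <grad L, v>, linear in v
```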
H.2 The Hessian as a Bilinear Map
The Hessian $H$ is the matrix of second derivatives:
$$H_{ij} = \frac{\partial^2 L}{\partial \theta_i\,\partial \theta_j}$$
But more abstractly, the Hessian is a bilinear form $H(u, v) = u^\top H v$ - the second-order term in the Taylor expansion:
$$L(\theta + v) \approx L(\theta) + \nabla L(\theta)^\top v + \tfrac{1}{2}\, v^\top H v$$
The Hessian determines the curvature of the loss landscape:
- Positive definite Hessian ($v^\top H v > 0$ for all $v \ne 0$): the critical point is a local minimum.
- Indefinite Hessian (has both positive and negative eigenvalues): the critical point is a saddle point.
- The ratio of largest to smallest eigenvalue is the condition number $\kappa = \lambda_{\max}/\lambda_{\min}$ - it governs how slowly gradient descent converges.
For AI: Modern optimizers (Adam, AdaGrad) approximate Hessian-related quantities. Adam's second moment estimate $v_t$ tracks squared gradients - a rough diagonal proxy for curvature. Dividing the gradient by $\sqrt{v_t}$ is a crude approximation to Newton's method (which preconditions with $H^{-1}$). This is why Adam often converges much faster than SGD on ill-conditioned problems.
H.3 Weight Matrices as Linear Maps: Training Dynamics
The neural tangent kernel (NTK) theory (Jacot et al., 2018) analyzes infinitely wide neural networks and shows that, for squared-error loss, their training dynamics under gradient flow are governed by a linear system:
$$\frac{d f_t}{dt} = -\Theta\,(f_t - y)$$
where $\Theta$ is the NTK matrix (constant in the infinite-width limit) and $f_t$ is the vector of network outputs on the training set. This is an ODE with a constant linear map - so its solution is $f_t = y + e^{-\Theta t}(f_0 - y)$.
The eigenvalues of $\Theta$ determine which output directions are learned quickly (large eigenvalues -> fast convergence) and which slowly (small eigenvalues -> slow convergence). This is linear algebra - specifically, the spectral decomposition of a positive semidefinite linear map.
H.4 Gradient Flow through Linear Layers
Consider a linear layer $y = Wx$ with loss $L(y)$. The gradient with respect to the weight matrix:
$$\frac{\partial L}{\partial W} = \delta\, x^\top$$
where $\delta = \partial L / \partial y$ is the "error signal" (upstream gradient). The gradient is an outer product - a rank-1 matrix.
This means single-sample gradient updates are always rank-1. For a mini-batch of $B$ samples, the gradient is:
$$\frac{\partial L}{\partial W} = \sum_{i=1}^{B} \delta_i\, x_i^\top$$
A sum of $B$ rank-1 matrices - the gradient has rank at most $B$. For large models with batch size $B \ll \min(m, n)$, the gradient is a very low-rank update to the weight matrix. This low-rank structure of gradients is the empirical justification for gradient low-rank projection methods (GaLore, 2024).
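A minimal sketch verifying the rank bound (random inputs and error signals for illustration):

```python
import numpy as np

rng = np.random.default_rng(10)
m, n, B = 512, 256, 4               # layer sizes and a small batch

X = rng.normal(size=(B, n))         # batch of inputs x_i
Delta = rng.normal(size=(B, m))     # upstream error signals delta_i = dL/dy_i

# Batch gradient for y = W x: sum_i outer(delta_i, x_i) = Delta^T X.
grad_W = Delta.T @ X                # shape (m, n)

print(np.linalg.matrix_rank(grad_W))   # at most B = 4, despite being 512 x 256
```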
Appendix I: Reference Tables
I.1 Linear Map Properties at a Glance
| Property | Condition | Matrix Equivalent | Geometric Meaning |
|---|---|---|---|
| Linear | $T(ax + by) = aT(x) + bT(y)$ | Any matrix $A$ | Preserves addition and scaling |
| Injective | $\ker T = \{0\}$ | Full column rank | No two inputs map to same output |
| Surjective | $\mathrm{im}\,T = W$ | Full row rank | Every output is reachable |
| Bijective (isomorphism) | Both injective and surjective | Square, full rank, invertible | Perfect correspondence |
| Orthogonal | $Q^\top Q = I$ | $Q^{-1} = Q^\top$, orthonormal columns | Preserves lengths and angles |
| Unitary | $U^* U = I$ (complex) | $U^{-1} = U^*$, orthonormal columns | Complex analogue of orthogonal |
| Projection | $P^2 = P$ | Idempotent | Applying twice = applying once |
| Symmetric | $A = A^\top$ | Self-adjoint | Diagonalizable by spectral theorem |
| Positive definite | $x^\top A x > 0$ for $x \ne 0$ | All eigenvalues positive | Curvature at a minimum |
| Normal | $A A^\top = A^\top A$ | Diagonalizable by unitary matrix | Orthogonal axes, possibly complex scalings |
| Nilpotent | $A^k = 0$ for some $k$ | All eigenvalues 0 | Powers eventually vanish |
| Involution | $A^2 = I$ | Self-inverse | Reflection-like (e.g., Householder reflections) |
I.2 Rank and Dimension Formulas
| Formula | Statement |
|---|---|
| $\mathrm{rank}(A) + \mathrm{nullity}(A) = n$ | Rank-nullity theorem for $A \in \mathbb{R}^{m \times n}$ |
| $\mathrm{rank}(A) = \mathrm{rank}(A^\top)$ | Row rank equals column rank |
| $\mathrm{rank}(AB) \le \min(\mathrm{rank}\,A, \mathrm{rank}\,B)$ | Rank cannot increase under composition |
| $\mathrm{rank}(A + B) \le \mathrm{rank}\,A + \mathrm{rank}\,B$ | Subadditivity of rank |
| $\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$ | Inclusion-exclusion for subspaces |
| $\dim(V/U) = \dim V - \dim U$ (for subspace $U \subseteq V$) | Dimension of quotient space |
| $\mathrm{rank}(A^\top A) = \mathrm{rank}(A)$ | Gram matrix has same rank |
| $\mathrm{rank}(P) = \mathrm{tr}(P)$ (for projection $P$) | Rank = trace for idempotent matrices |
I.3 AI Applications Cross-Reference
| Linear Map Concept | Where It Appears in AI | Mathematical Role |
|---|---|---|
| $T(x) = Wx + b$ (affine map) | Every neural layer | Pre-activation computation |
| $Q = W_Q x$ (linear projection) | Attention mechanism | Projects to query subspace |
| $\Delta W = BA$ (low-rank) | LoRA fine-tuning | Rank-$r$ weight update |
| $J_f(x)$ (Jacobian) | Backpropagation | Chain rule at each layer |
| $J^\top$ (transpose map) | Backward pass | Dual map of forward |
| $P^2 = P$ (projection) | Layer norm, attention | Projects onto subspace |
| $W_U h$ (linear map) | Unembedding (logit computation) | Representation to vocabulary |
| $R_\theta$ (rotation in 2D planes) | RoPE positional encoding | Positional rotation |
| $F^{-1} \nabla L$ (metric-adjusted gradient) | Natural gradient / Adam | Riemannian gradient |
| $\Theta = J J^\top$ (Gram matrix of Jacobian) | Neural tangent kernel | Training dynamics |
Appendix J: Proofs of Key Results
J.1 Proof: Row Rank Equals Column Rank
This is a fundamental result that deserves a careful proof.
Theorem. For any matrix $A$, the column rank (dimension of the column space) equals the row rank (dimension of the row space).
Proof (via RREF). Let $A$ have RREF $R$ (obtained by row operations, which don't change the row space but can change the column space). In $R$:
- The nonzero rows are linearly independent (each has a leading 1 not shared by any other row), so row rank = number of nonzero rows.
- Row operations preserve linear relations among columns, so the pivot columns of $A$ form a basis for $C(A)$: column rank = number of pivot columns.
- The number of nonzero rows = the number of pivot columns.
So row rank = column rank = number of pivots in RREF.
Alternative proof (via SVD). The SVD $A = U \Sigma V^\top$ has $r$ nonzero singular values. The column space of $A$ is spanned by $u_1, \dots, u_r$ (the first $r$ left singular vectors), dimension $r$. The row space of $A$ (= column space of $A^\top$) is spanned by $v_1, \dots, v_r$ (the first $r$ right singular vectors), dimension $r$. Both have dimension = number of nonzero singular values.
J.2 Proof: Kernel and Image are Subspaces
Theorem. For any linear map $T: V \to W$, both $\ker T$ and $\mathrm{im}\,T$ are subspaces (of $V$ and $W$ respectively).
Proof for $\ker T$:
- Non-empty: $T(0) = 0$, so $0 \in \ker T$.
- Closed under addition: Let $u, v \in \ker T$. Then $T(u + v) = T(u) + T(v) = 0 + 0 = 0$, so $u + v \in \ker T$.
- Closed under scalar multiplication: Let $v \in \ker T$ and $c$ be a scalar. Then $T(cv) = c\,T(v) = c \cdot 0 = 0$, so $cv \in \ker T$.
Proof for $\mathrm{im}\,T$:
- Non-empty: $0 = T(0) \in \mathrm{im}\,T$.
- Closed under addition: Let $w_1, w_2 \in \mathrm{im}\,T$, so $w_i = T(v_i)$ for some $v_i \in V$. Then $w_1 + w_2 = T(v_1 + v_2) \in \mathrm{im}\,T$.
- Closed under scalar multiplication: Let $w = T(v) \in \mathrm{im}\,T$ and $c$ be a scalar. Then $cw = c\,T(v) = T(cv) \in \mathrm{im}\,T$.
J.3 Proof: Composition of Linear Maps is Linear
Theorem. If $T: U \to V$ and $S: V \to W$ are linear, then $S \circ T: U \to W$ is linear.
Proof:
$$(S \circ T)(au + bv) = S\big(T(au + bv)\big) = S\big(aT(u) + bT(v)\big) = a\,S(T(u)) + b\,S(T(v)) = a\,(S \circ T)(u) + b\,(S \circ T)(v)$$
J.4 Proof: The Dual Map is Linear
Theorem. If $T: V \to W$ is linear, then the dual map $T^*: W^* \to V^*$ defined by $T^*(\varphi) = \varphi \circ T$ is linear.
Proof:
$$T^*(a\varphi + b\psi) = (a\varphi + b\psi) \circ T = a(\varphi \circ T) + b(\psi \circ T)$$
So $T^*(a\varphi + b\psi) = a\,T^*(\varphi) + b\,T^*(\psi)$.
J.5 Proof: Invertibility Criterion for Finite-Dimensional Spaces
Theorem. Let $T: V \to V$ be a linear map on a finite-dimensional space $V$. Then the following are equivalent:
- $T$ is injective (one-to-one)
- $T$ is surjective (onto)
- $T$ is bijective (invertible)
Proof: Injective $\Leftrightarrow \ker T = \{0\}$: $T$ is injective iff its kernel is trivial (standard).
Injective $\Leftrightarrow$ rank $= \dim V$: By rank-nullity, $\mathrm{rank}\,T + \mathrm{nullity}\,T = \dim V$. Nullity = 0 iff rank $= \dim V$.
Rank $= \dim V \Leftrightarrow$ surjective: Since $\mathrm{im}\,T \subseteq V$ and both are finite-dimensional, $\mathrm{im}\,T = V$ iff $\dim \mathrm{im}\,T = \dim V$ (a subspace of equal dimension must be the whole space).
Injective and surjective $\Leftrightarrow$ bijective: by definition of bijective.
Important: This equivalence only holds for maps with the same domain and codomain (or at least equal dimensions). For $T: V \to W$ with $\dim V \ne \dim W$, injective and surjective are NOT equivalent (one of them is impossible given the dimension constraint).
Appendix K: Additional AI Case Studies
K.1 Mechanistic Interpretability via Linear Maps
Mechanistic interpretability (MI) aims to reverse-engineer neural networks by understanding what computation each component performs. Linear map theory is central to this enterprise.
The residual stream as a communication bus. In transformer models, each layer reads from and writes to a shared "residual stream" $h \in \mathbb{R}^{d_{\text{model}}}$. Each attention head and MLP layer contributes an additive update:
$$h \leftarrow h + \mathrm{Head}_i(h), \qquad h \leftarrow h + \mathrm{MLP}(h)$$
Each update is (approximately) a low-rank linear map from the residual stream back to itself. The attention update's linear part is $W_O W_V$ (the "OV circuit"); the MLP's linear part is $W_2\,\mathrm{diag}(\sigma')\,W_1$ after linearizing the activation.
SVD of the OV circuit. The matrix $W_O W_V$ can be analyzed via SVD. Its singular values reveal how strongly the attention head modifies the residual stream, and its singular vectors reveal which directions it reads from and writes to. Heads with near-zero singular values are "inattentive" - they barely modify the residual stream regardless of attention pattern.
Subspace decomposition. The full set of attention heads (for $L$ layers, $H$ heads per layer) collectively form a large linear map from the input to the residual stream updates. The total update is a sum of $LH$ low-rank linear maps. Understanding the structure of this sum - which heads are redundant, which are essential - is a central goal of circuit-level MI.
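A minimal sketch of the OV-circuit analysis on randomly initialized (hypothetical) head weights; in a real model you would load trained weights instead:

```python
import numpy as np

rng = np.random.default_rng(11)
d_model, d_head = 128, 16
W_V = rng.normal(size=(d_head, d_model)) / np.sqrt(d_model)   # illustrative weights
W_O = rng.normal(size=(d_model, d_head)) / np.sqrt(d_head)

OV = W_O @ W_V                      # (d_model, d_model) map on the residual stream
U, S, Vt = np.linalg.svd(OV)

print(np.linalg.matrix_rank(OV))    # <= d_head = 16: the update is low-rank
print(S[:18])                       # 16 nonzero singular values, then ~0
# Rows of Vt[:16] = directions the head "reads" from the residual stream;
# columns of U[:, :16] = directions it "writes" to.
```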
K.2 Linear Algebra of Diffusion Models
Diffusion models (DDPM, score matching) add Gaussian noise to data and learn to denoise. The forward process is an affine map:
$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, I)$$
This is an affine interpolation between the data and pure noise. The coefficient $\sqrt{\bar{\alpha}_t}$ scales the data, and $\sqrt{1 - \bar{\alpha}_t}$ scales the noise.
The denoising objective. The neural network $\varepsilon_\theta(x_t, t)$ estimates $\varepsilon$ (the noise) from the noisy input. Near a data point, this estimator is approximately a linear function of $x_t$ - the Tweedie formula gives the optimal estimator of the clean signal as:
$$\mathbb{E}[x_0 \mid x_t] = \frac{x_t + (1 - \bar{\alpha}_t)\,\nabla_{x_t} \log p(x_t)}{\sqrt{\bar{\alpha}_t}}$$
which is linear in $x_t$ and the score. The diffusion process itself is a composition of affine maps in the forward direction, and the learned reverse process approximately inverts these affine maps.
K.3 State Space Models as Linear Dynamical Systems
Structured State Space Models (S4, Mamba, RWKV) compute their state updates via linear recurrences:
$$h_{t+1} = A h_t + B x_t, \qquad y_t = C h_t + D x_t$$
where $A$, $B$, $C$, $D$ are (possibly input-dependent) matrices.
The state transition $h \mapsto Ah$ is a linear dynamical system - the fundamental object of study in control theory and signal processing.
Key linear algebra results for SSMs (a numerical check of the convolution view follows this list):
- Eigenvalues of $A$ determine memory. If $|\lambda_i| < 1$ for all $i$, the system has bounded memory decay. If any $|\lambda_i| > 1$, the state can grow unboundedly.
- Diagonalization for efficiency. If $A = P \Lambda P^{-1}$ with $\Lambda$ diagonal, the recurrence decouples into independent scalar recurrences - each computable independently. S4 uses a diagonal-plus-low-rank (DPLR) parameterization of $A$ for parallel computation via convolution.
- The convolution view. Unrolling the recurrence: $y_t = \sum_{k=0}^{t-1} C A^{t-1-k} B\, x_k$ (plus the direct term $D x_t$). The impulse response $\{C A^k B\}_{k \ge 0}$ is a sequence of matrix powers - analyzable by the spectral decomposition of $A$.
- Mamba's selectivity. Mamba makes the system matrices input-dependent: $B_t = B(x_t)$, $C_t = C(x_t)$. The recurrence becomes bilinear in $(h, x)$, not purely linear. The linearization (around typical inputs) gives a locally linear system analyzable by the tools of this section.
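A minimal sketch of the recurrence-vs-convolution equivalence with small random matrices (taking $D = 0$ for brevity):

```python
import numpy as np

rng = np.random.default_rng(12)
n, T_len = 4, 50
A = 0.9 * np.eye(n) + 0.05 * rng.normal(size=(n, n))   # a stable-ish transition
B = rng.normal(size=(n, 1)); C = rng.normal(size=(1, n))
x = rng.normal(size=T_len)

# Recurrent view: h_{t+1} = A h_t + B x_t,  y_t = C h_t  (D = 0).
h = np.zeros((n, 1)); y_rec = []
for t in range(T_len):
    y_rec.append(float(C @ h))
    h = A @ h + B * x[t]

# Convolution view: y_t = sum_{k<t} (C A^{t-1-k} B) x_k - impulse response of matrix powers.
kernel = [float(C @ np.linalg.matrix_power(A, k) @ B) for k in range(T_len)]
y_conv = [sum(kernel[t - 1 - k] * x[k] for k in range(t)) for t in range(T_len)]

print(np.allclose(y_rec, y_conv))   # True: the recurrence is a structured linear map
```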
Appendix L: Further Reading
Primary References
- Axler, S. (2015). *Linear Algebra Done Right* (3rd ed.). Springer. - The definitive abstract treatment of linear maps. Goes from axioms to spectral theory, famously deferring determinants to the final chapter. Highly recommended for conceptual depth.
- Strang, G. (2016). *Introduction to Linear Algebra* (5th ed.). Wellesley-Cambridge Press. - Computational and applied focus. Excellent for the four fundamental subspaces and applications.
- Horn, R. & Johnson, C. (2013). *Matrix Analysis* (2nd ed.). Cambridge University Press. - Comprehensive advanced reference. Proofs of all major results, including Cayley-Hamilton, spectral theorems, singular values.
- Trefethen, L. & Bau, D. (1997). *Numerical Linear Algebra*. SIAM. - Gold standard for computational linear algebra and stability.
AI-Focused References
- Vaswani, A. et al. (2017). "Attention Is All You Need." NeurIPS. - The original transformer paper; read the attention mechanism as linear projections.
- Hu, E. et al. (2021). "LoRA: Low-Rank Adaptation of Large Language Models." ICLR 2022. - The LoRA rank-nullity argument.
- Elhage, N. et al. (2021). "A Mathematical Framework for Transformer Circuits." Anthropic. - OV and QK circuits as linear maps; residual stream as communication bus.
- Park, K. et al. (2023). "The Linear Representation Hypothesis and the Geometry of Large Language Models." - Linear features in transformer representations.
- Jacot, A. et al. (2018). "Neural Tangent Kernel: Convergence and Generalization in Neural Networks." NeurIPS. - Training dynamics via linear maps (NTK theory).
- Gu, A. et al. (2022). "Efficiently Modeling Long Sequences with Structured State Spaces." ICLR. - SSMs as linear dynamical systems.
This section is part of the Math for LLMs curriculum - a systematic treatment of the mathematics underlying modern large language models.
Appendix M: Self-Assessment Checklist
After completing this section, you should be able to answer the following questions without notes.
Conceptual Understanding
- Q1. State the two axioms of a linear map. What is the fastest way to show a map is NOT linear?
- Q2. What is the kernel of a linear map? Prove it is a subspace.
- Q3. State the rank-nullity theorem. Give an example where rank = 2, nullity = 3. What can you say about the domain and codomain dimensions?
- Q4. Why is a linear map from $V$ to $V$ (finite-dimensional) injective if and only if it is surjective? Why does this fail for maps between spaces of different dimensions?
- Q5. What is the change-of-basis formula? If $B = P^{-1} A P$, what relationship does that establish between the maps represented by $A$ and $B$?
- Q6. What is an orthogonal projection? How do you verify that a matrix is a projection? What two extra properties make it an orthogonal projection?
- Q7. What is the Jacobian matrix? For $f: \mathbb{R}^n \to \mathbb{R}^m$, what is the shape of $J_f$?
- Q8. In the backward pass of backpropagation, why do we multiply by $J^\top$ rather than $J$?
- Q9. What makes an affine map different from a linear map? How do you represent an affine map as a linear map (in one higher dimension)?
Computational Skills
- C1. Given a matrix $A$, find a basis for $N(A)$ using row reduction.
- C2. Given a linear map defined on a non-standard basis, write its matrix in that basis.
- C3. Given two bases $\mathcal{B}$ and $\mathcal{C}$, compute the change-of-basis matrix and use it to transform the matrix of a map $T$.
- C4. For a rank-$r$ update $\Delta W = BA$, compute the null space dimension and verify it numerically.
- C5. Compute the Jacobian of a given vector-valued function (e.g., softmax, elementwise ReLU, an affine map composed with sigmoid).
AI Connections
- AI1. Explain why LoRA (low-rank adaptation) is more parameter-efficient than full fine-tuning, using the language of rank and nullity.
- AI2. Describe the forward pass of a multi-head attention layer as a sequence of linear maps. Which operations are linear, which are bilinear, and which are nonlinear?
- AI3. What is the linear representation hypothesis? Why does it matter for interpretability research?
- AI4. Why does a purely linear deep network (no activations) collapse to a single linear map, regardless of depth?
- AI5. How does the dual map $T^*$ relate to backpropagation? What mathematical object is the "gradient" in the strict sense?
Appendix N: Notation Summary for This Section
| Symbol | Meaning |
|---|---|
| $T: V \to W$ | Linear transformation from $V$ to $W$ |
| $\ker T$, $N(A)$ | Kernel (null space) of $T$ |
| $\mathrm{im}\,T$, $C(A)$ | Image (range) of $T$ |
| $\mathrm{rank}\,T$ | Dimension of $\mathrm{im}\,T$ |
| $\mathrm{nullity}\,T$ | Dimension of $\ker T$ |
| $[T]_{\mathcal{B} \to \mathcal{C}}$ | Matrix of $T$ from basis $\mathcal{B}$ to basis $\mathcal{C}$ |
| $P$ | Change-of-basis matrix |
| $T^*$ | Dual (transpose) map |
| $V^*$ | Dual space of $V$ |
| $Df(x)$ | Total derivative (Fréchet derivative) of $f$ at $x$ |
| $J_f(x)$ | Jacobian matrix of $f$ at $x$ |
| $\mathcal{L}(V, W)$ | Space of all linear maps from $V$ to $W$ |
| $V \cong W$ | $V$ and $W$ are isomorphic |
| $V/U$ | Quotient space of $V$ modulo subspace $U$ |
| $A \sim B$ | $A$ and $B$ are similar matrices ($B = P^{-1} A P$) |
| $P^2 = P$ | Projection (idempotent) |
| $Q$ | Orthogonal matrix ($Q^\top Q = I$) |
| $T(x) = Ax + b$ | Affine map |
| $\Delta W = BA$ | LoRA low-rank update, $\mathrm{rank} \le r$ |
Appendix O: Linear Maps and Symmetry
O.1 Equivariant Maps
A linear map $T: V \to W$ is equivariant with respect to a group $G$ if, for every $g \in G$ and every $v \in V$:
$$T(\rho_V(g)\, v) = \rho_W(g)\, T(v)$$
where $\rho_V$ and $\rho_W$ are representations of $G$ on $V$ and $W$.
Intuitively: "applying the group action then the map = applying the map then the group action." The map commutes with the symmetry.
Examples:
- Translation equivariance on vector spaces: $T(x + a) = T(x) + T(a)$ ... but this is just additivity - every linear map is equivariant to translations in this weak sense (the output is translated by $T(a)$).
- Rotation equivariance: $TR = RT$ for all rotations $R$. In 3D: commuting with all of $SO(3)$ implies $T = cI$ for some scalar $c$ (Schur's lemma for the rotation representation).
- Permutation equivariance: $T P_\pi = P_\pi T$ for all permutation matrices $P_\pi$. This implies $T = aI + b\,\mathbf{1}\mathbf{1}^\top$ - a sum of a "same-position" term and a "mean-field" term - which is why mean pooling and attention with tied weights are permutation equivariant.
For AI: CNNs achieve translation equivariance by using convolutional (shared-weight) linear maps. Equivariant graph neural networks use permutation-equivariant maps. Geometric deep learning is the systematic study of building neural networks as compositions of equivariant linear maps. Transformer attention (without positional encoding) is permutation equivariant - adding positional encodings explicitly breaks this symmetry.
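A minimal sketch of the permutation-equivariance characterization: a map of the form $aI + b\,\mathbf{1}\mathbf{1}^\top$ commutes with every permutation, while a generic matrix does not:

```python
import numpy as np

rng = np.random.default_rng(13)
n = 6
a, b = rng.normal(), rng.normal()
T = a * np.eye(n) + b * np.ones((n, n))   # "same-position" + "mean-field" terms

perm = rng.permutation(n)
P = np.eye(n)[perm]                        # a random permutation matrix
x = rng.normal(size=n)

print(np.allclose(T @ (P @ x), P @ (T @ x)))   # True: T commutes with permutations

G = rng.normal(size=(n, n))                    # a generic linear map
print(np.allclose(G @ (P @ x), P @ (G @ x)))   # False (almost surely)
```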
O.2 Schur's Lemma and Irreducible Representations
Schur's Lemma. Let $T$ be a linear map that commutes with all maps in an irreducible representation $\rho$ of a group $G$ (i.e., $T\rho(g) = \rho(g)T$ for all $g \in G$). Then $T = \lambda I$ for some scalar $\lambda$.
This powerful result says: the only linear maps that commute with all symmetries of an irreducible representation are scalar multiples of the identity. This constrains the form of equivariant maps.
Application: If attention weights must be equivariant to the representation of a certain symmetry group acting on the heads, Schur's lemma constrains the possible attention patterns.
O.3 Representation Theory Preview
Representation theory studies how groups act on vector spaces via linear maps. Every group representation is a group homomorphism $\rho: G \to GL(V)$ - a map that takes group elements to invertible linear maps, preserving the group structure: $\rho(gh) = \rho(g)\,\rho(h)$.
This is the language in which equivariant neural networks (E(3)-equivariant networks for molecular property prediction, SE(3)-equivariant networks for robotics, permutation-equivariant networks for sets) are designed. The "weights" of an equivariant linear layer are constrained to be equivariant - and representation theory tells you exactly what form these weights can take.
Appendix P: Quick Reference - Common Linear Maps in $\mathbb{R}^2$ and $\mathbb{R}^3$
Common Linear Maps in $\mathbb{R}^2$
| Transformation | Matrix | Properties |
|---|---|---|
| Rotation by $\theta$ | $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ | Orthogonal, $\det = 1$, eigenvalues $e^{\pm i\theta}$ |
| Reflection across $x$-axis | $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ | Symmetric, $\det = -1$, eigenvalues $\pm 1$ |
| Reflection across $y = x$ | $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ | Symmetric, $\det = -1$, eigenvalues $\pm 1$ |
| Horizontal shear by $k$ | $\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ | $\det = 1$, eigenvalue $1$ (double) |
| Scaling by $c$ | $\begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix}$ | Symmetric, $\det = c^2$, eigenvalues $c, c$ |
| Projection onto $x$-axis | $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ | Symmetric, idempotent, eigenvalues $0, 1$, rank 1 |
| Projection onto $y = x$ | $\frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$ | Symmetric, idempotent, eigenvalues $0, 1$ |
| Zero map | $\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ | Rank 0, kernel $= \mathbb{R}^2$, image $= \{0\}$ |
| Identity | $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ | All eigenvalues 1, $\det = 1$ |
All these are linear maps. To make them affine (include translation), append a row and column in homogeneous form.
Common Linear Maps in $\mathbb{R}^3$
| Transformation | Description | Key Properties |
|---|---|---|
| Rotation around $z$-axis | $\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$: rotates $xy$-plane, fixes $z$-axis | Orthogonal, $\det = 1$ |
| Reflection across $xy$-plane | $\mathrm{diag}(1, 1, -1)$ | Symmetric, $\det = -1$ |
| Projection onto $xy$-plane | $\mathrm{diag}(1, 1, 0)$ | Symmetric, idempotent, rank 2 |
| Householder reflection | $H = I - 2vv^\top$ with $\|v\| = 1$ | Symmetric, orthogonal, $H^2 = I$, eigenvalues $1$ (mult. 2) and $-1$ |
| Scaling | $\mathrm{diag}(s_1, s_2, s_3)$ | Diagonal; eigenvalues are $s_1, s_2, s_3$ |
| Shear | $\begin{pmatrix} 1 & k & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ ($k \ne 0$) | $\det = 1$; all eigenvalues 1 |
End of Linear Transformations section. Continue to 05: Orthogonality and Orthonormality.