"The Fourier transform is a mathematical prism: it refracts a signal into its constituent frequencies, revealing structure invisible in time."
- paraphrasing Norbert Wiener
Overview
The Fourier Transform extends Fourier series to non-periodic signals by taking the period to infinity. Where a Fourier series decomposes a periodic function into a discrete sum of harmonics, the Fourier Transform decomposes an aperiodic function into a continuous integral over all frequencies. The result is the spectrum of a signal - a complete description of which frequencies are present, at what amplitude, and with what phase.
This extension is not merely a generalization for mathematical completeness. It is the foundation of modern signal processing, quantum mechanics, partial differential equations, and - crucially for this curriculum - large language models, neural operators, and kernel methods. The Fourier Transform converts convolution (an expensive operation) into pointwise multiplication, converts differentiation into algebraic multiplication by frequency, and provides the mathematical basis for the uncertainty principle that constrains every time-frequency representation from spectrograms to attention windows.
We develop the theory in full: the definition and its limitations, the extension via Plancherel's theorem, the complete set of operational properties, the inversion theorem, the Heisenberg uncertainty principle with proof, the distribution-theoretic extension to handle Dirac deltas and periodic signals, and a thorough treatment of AI applications including FNet, random Fourier features, spectral normalization, and the Fourier Neural Operator.
Prerequisites
- Fourier Series - complex Fourier coefficients $c_n$, Parseval's identity, orthonormality of $\{e^{2\pi i n t/T}\}$ (Section 20-01 Fourier Series)
- Complex analysis - Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$, complex modulus, argument, conjugate (Section 01 Mathematical Foundations)
- Integration - improper integrals, Fubini's theorem, dominated convergence (Section 04 Calculus Fundamentals)
- $L^2$ function spaces - inner product, norm, completeness, Hilbert space (Section 12-02 Hilbert Spaces)
- Linear algebra - unitary operators, isometries (Section 03 Advanced Linear Algebra)
Companion Notebooks
| Notebook | Description |
|---|---|
| theory.ipynb | Interactive derivations: FT of standard functions, properties, uncertainty principle, Plancherel, FNet, random Fourier features |
| exercises.ipynb | 10 graded problems from computing transforms to proving inversion to implementing kernel approximation |
Learning Objectives
After completing this section, you will:
- Derive the Fourier Transform formula as the limit of the Fourier series
- Compute the Fourier Transform of the Gaussian, rectangular pulse, Lorentzian, and triangle function from the definition
- Apply the standard operational properties (linearity, shift, scaling, differentiation, conjugation, duality, convolution) to evaluate transforms without integration
- State and prove Plancherel's theorem and use it to evaluate norms in the frequency domain
- State and prove the Heisenberg uncertainty principle and identify the Gaussian as the unique extremal function
- Define the Schwartz space and tempered distributions; compute the Fourier Transform of $\delta$, $1$, $e^{2\pi i \xi_0 t}$, and the Dirac comb
- Explain the Poisson summation formula and its role in sampling theory
- Describe how FNet (Lee-Thorp et al., 2022) replaces self-attention with Fourier transforms and analyze its computational tradeoffs
- Implement random Fourier features (Rahimi & Recht, 2007) and verify the kernel approximation guarantee
- Explain spectral normalization (Miyato et al., 2018) and how it uses the FT to enforce Lipschitz constraints on discriminators
- Distinguish the three normalization conventions ($e^{-2\pi i \xi t}$; $e^{-i\omega t}$ with $\frac{1}{2\pi}$ on the inverse; symmetric $\frac{1}{\sqrt{2\pi}}$) and convert between them without error
- Explain how the uncertainty principle constrains RoPE context length extension and attention window design
Table of Contents
- 1. Intuition
- 2. Formal Definitions
- 3. Core Properties
- 4. The Fourier Inversion Theorem
- 5. Plancherel's Theorem and $L^2$ Theory
- 6. The Heisenberg Uncertainty Principle
- 7. Tempered Distributions and the Dirac Delta
- 8. Applications in Machine Learning
- 9. Common Mistakes
- 10. Exercises
- 11. Why This Matters for AI (2026 Perspective)
- 12. Conceptual Bridge
1. Intuition
1.1 From Fourier Series to Fourier Transform
Recall from Section 20-01 Fourier Series that a $T$-periodic function $f$ has a Fourier series in complex exponential form:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{2\pi i n t/T}, \qquad c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(t)\, e^{-2\pi i n t/T}\, dt.$$

The frequencies present are the discrete set $\xi_n = n/T$, separated by spacing $\Delta\xi = 1/T$.
Now ask: what happens as $T \to \infty$? The function is no longer required to repeat - it becomes an arbitrary function on all of $\mathbb{R}$. Three things change simultaneously:
- The frequency spacing $\Delta\xi = 1/T \to 0$: the discrete spectrum becomes a continuum.
- The discrete sum $\sum_n$ becomes an integral $\int d\xi$.
- The coefficient $c_n$ (which scales as $1/T$) becomes a density $\hat{f}(\xi_n) \approx T\,c_n$.
Concretely, substitute $c_n = \hat{f}(\xi_n)\,\Delta\xi$ with $\xi_n = n\,\Delta\xi$ and let $T \to \infty$:

$$f(t) = \sum_{n=-\infty}^{\infty} \hat{f}(\xi_n)\, e^{2\pi i \xi_n t}\,\Delta\xi \;\longrightarrow\; \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi,$$

where the Fourier Transform (using the $2\pi$-convention) is:

$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt.$$

This derivation makes the Fourier Transform inevitable: it is not a definition pulled from thin air, but the natural limit of the Fourier series as periodicity is removed. The continuous spectrum $\hat{f}(\xi)$ is the spectral density of $f$ - the amplitude and phase contributed by frequency $\xi$ per unit frequency interval.
Non-example: The derivation above requires that the "coefficients" $T\,c_n$ remain well-defined as $T \to \infty$. For this, we need $\int_{-\infty}^{\infty} |f(t)|\,dt < \infty$, i.e., $f \in L^1(\mathbb{R})$. A constant function is NOT in $L^1(\mathbb{R})$, so it does not have a classical Fourier Transform - we must extend the theory to distributions (Section 7) to handle it.
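The limit $T\,c_n \to \hat{f}(n/T)$ can be checked numerically. A minimal sketch, assuming NumPy and using the Gaussian $e^{-\pi t^2}$ because its transform is known in closed form (grid sizes and tolerance are illustrative):

```python
import numpy as np

# Fourier-series coefficients of a periodized Gaussian approach the
# continuous Fourier Transform as the period T grows: T*c_n -> f^(n/T).
f = lambda t: np.exp(-np.pi * t**2)

T = 20.0                                  # large period: f ~ 0 outside [-T/2, T/2]
t = np.linspace(-T/2, T/2, 4001)
dt = t[1] - t[0]
for n in range(5):
    # T*c_n = integral of f(t) e^{-2 pi i n t / T} over one period
    Tc_n = (f(t) * np.exp(-2j * np.pi * n * t / T)).sum() * dt
    exact = np.exp(-np.pi * (n / T)**2)   # closed-form f^(xi) at xi = n/T
    assert abs(Tc_n - exact) < 1e-6
```

The integrand vanishes at the interval ends, so a plain Riemann sum is already spectrally accurate here.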
1.2 Frequency as a Continuous Variable
In a Fourier series, the spectrum is a discrete set of spikes: the function has energy only at the harmonics $\xi_n = n/T$. In the Fourier Transform, the spectrum is a continuous function $\hat{f}(\xi)$, and the function can have energy spread continuously across all frequencies.
The magnitude spectrum $|\hat{f}(\xi)|$ tells you how much of each frequency is present. The phase spectrum $\arg \hat{f}(\xi)$ tells you the phase offset of each frequency component. Together they completely determine $f$ (via the inversion theorem). Some examples of what the spectrum looks like:
- A pure sinusoid $\cos(2\pi \xi_0 t)$: spectrum is two spikes at $\pm\xi_0$ (as Dirac deltas).
- A rectangular pulse (nonzero on $[-1/2, 1/2]$): spectrum is a sinc function, spreading out in frequency.
- A Gaussian $e^{-\pi t^2}$: spectrum is another Gaussian (this is the self-dual case).
- A chirp (frequency increasing linearly with time): spectrum is spread over a range of frequencies, with the time-frequency tradeoff governed by the uncertainty principle.
- White noise: flat spectrum - equal energy at every frequency.
The key principle is the time-frequency duality: a signal that is concentrated in time (short duration) must have a broad spectrum, and a signal with a narrow spectrum (nearly monochromatic) must extend over a long time. This is not a limitation of our measurement apparatus - it is a mathematical theorem (the Heisenberg uncertainty principle, Section 6).
For AI: This duality directly constrains attention mechanisms. RoPE uses frequencies to encode position, and extending context length (longer time window) requires lower frequencies - which is why YaRN (Peng et al., 2023) and LongRoPE (Ding et al., 2024) rescale the frequency base to accommodate longer sequences. The uncertainty principle is the fundamental reason why this rescaling is necessary.
1.3 Why It Matters for AI (2026 Perspective)
The Fourier Transform is not peripheral to modern AI - it is structurally present in several of the most important systems:
FNet (Lee-Thorp et al., 2022): Replaces the self-attention sublayer in a Transformer with a 2D Fourier transform over the sequence and embedding dimensions. Achieves 92% of BERT's accuracy on GLUE benchmarks at 7× the training speed, because the FFT costs $O(n \log n)$ while attention is $O(n^2)$ in sequence length. The mathematical insight: the FT acts as a "global mixer" that combines all token representations - a weaker but cheaper version of attention.
Random Fourier Features (Rahimi & Recht, 2007, NeurIPS Best Paper): Every shift-invariant kernel is the Fourier Transform of a non-negative measure (Bochner's theorem). By sampling random frequencies from that measure, one gets a randomized feature map $z : \mathbb{R}^d \to \mathbb{R}^D$ such that $z(x)^\top z(y) \approx k(x - y)$. This reduces kernel machine computation from $O(n^2)$ to $O(nD)$.
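As a preview of the full treatment in Section 8, here is a minimal random-Fourier-features sketch for the RBF kernel $k(x, y) = e^{-\|x - y\|^2/2}$, whose spectral measure by Bochner's theorem is the standard Gaussian; the dimensions and feature count below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
d, D = 5, 20000                        # input dim, number of random features

# RBF kernel k(x,y) = exp(-||x-y||^2 / 2); by Bochner's theorem its
# spectral measure is N(0, I_d), so we sample frequencies from it.
w = rng.standard_normal((D, d))        # random frequencies
b = rng.uniform(0.0, 2 * np.pi, D)     # random phases
z = lambda x: np.sqrt(2.0 / D) * np.cos(w @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx = z(x) @ z(y)                   # randomized feature inner product
exact = np.exp(-np.sum((x - y)**2) / 2)
assert abs(approx - exact) < 0.05      # Monte-Carlo error ~ O(1/sqrt(D))
```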
Spectral Normalization (Miyato et al., 2018): To train stable GANs, the discriminator's weight matrices are normalized by their largest singular value (the spectral norm). This enforces a Lipschitz constraint on the discriminator, stabilizing training. The "spectral" in the name refers to the spectrum of the weight matrix viewed as a linear operator - and for convolutional layers that spectrum can be computed via the Fourier Transform, since the FT diagonalizes convolution.
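In practice the spectral norm is estimated by power iteration rather than a full SVD; a minimal NumPy sketch (matrix shape and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))        # a stand-in "discriminator" weight

# Power-iteration estimate of the largest singular value (the spectral norm).
u = rng.standard_normal(64)
for _ in range(200):
    v = W.T @ u
    v /= np.linalg.norm(v)
    u = W @ v
    u /= np.linalg.norm(u)
sigma = u @ W @ v                        # ~ largest singular value of W
W_sn = W / sigma                         # spectrally normalized weight

assert abs(sigma - np.linalg.svd(W, compute_uv=False)[0]) < 1e-8
assert np.linalg.svd(W_sn, compute_uv=False)[0] <= 1.0 + 1e-8
```

The normalized matrix has spectral norm 1, so the layer is 1-Lipschitz as a linear map.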
Fourier Neural Operator (Li et al., 2021): Learns mappings between function spaces by applying a learnable linear transformation in Fourier space, then transforming back. Used for solving PDEs up to 1000× faster than traditional numerical methods. The key is that convolution in space = pointwise multiplication in Fourier space, so global dependencies can be captured efficiently.
RoPE (Su et al., 2021): Used in LLaMA-3, Mistral, Qwen, and virtually every modern LLM. Interprets each position as a rotation in the complex plane, using Fourier basis vectors at different frequencies for different embedding dimensions. The relative-position property follows from the group structure of the complex exponential.
1.4 Historical Timeline
FOURIER TRANSFORM - HISTORICAL MILESTONES
========================================================================
1822 Fourier's "Théorie analytique de la chaleur": introduces the
integral formula that becomes the Fourier Transform
1880s Rayleigh uses Fourier integrals in optics and acoustics
1898 Heaviside operational calculus (early form of the Laplace/FT)
1915 Plancherel proves the L2 isometry theorem (Plancherel's theorem)
1927 Heisenberg formulates the uncertainty principle in quantum
mechanics; Kennard gives the precise mathematical inequality
1933 Norbert Wiener's "The Fourier Integral and Certain of its
Applications" - rigorous L2 theory; lays groundwork for
signal processing as a mathematical discipline
1949 Shannon's sampling theorem (uses Poisson summation formula)
1965 Cooley & Tukey publish the FFT - makes FT computable in O(N log N)
1950s Schwartz distributions: FT extended to δ, constants, sinusoids
1975 FT enters digital signal processing (audio, radar, MRI)
2007 Rahimi & Recht: Random Fourier Features (NeurIPS Best Paper)
2017 Vaswani et al.: sinusoidal positional encodings in Transformers
2018 Miyato et al.: spectral normalization for GANs
2021 Li et al.: Fourier Neural Operator for PDE solving
2021 Su et al.: RoPE - now in LLaMA-3, Mistral, Qwen, GPT-NeoX
2022 Lee-Thorp et al.: FNet - Fourier-based alternative to attention
2023 YaRN (Peng et al.): Fourier-based context length extension
2024 LongRoPE (Ding et al.): progressive frequency rescaling to 2M
context length; RWKV-7 uses state-space FT interpretation
========================================================================
1.5 What the Fourier Transform Does Geometrically
The most useful geometric picture is that the Fourier Transform decomposes $f$ in an orthonormal basis of complex exponentials - just as the Fourier series decomposed a periodic function in the trigonometric basis. The difference is that now the "basis" is uncountably infinite and the "coefficients" form a continuous function rather than a sequence.
Formally, the complex exponentials $e^{2\pi i \xi t}$ do not form a basis of $L^2(\mathbb{R})$ in the usual Hilbert space sense (they are not in $L^2$ themselves - they have constant modulus 1 and are not square-integrable). The rigorous statement is that the Fourier Transform is a unitary operator on $L^2(\mathbb{R})$ (Plancherel's theorem), meaning it preserves inner products and norms:

$$\langle f, g \rangle = \langle \hat{f}, \hat{g} \rangle, \qquad \|f\|_2 = \|\hat{f}\|_2.$$

Think of the Fourier Transform as a rotation in an infinite-dimensional space: it rotates the function into a new coordinate system (the frequency domain) where the same function looks different but has exactly the same "length" (energy). Just as rotating a vector in $\mathbb{R}^n$ preserves its norm but changes its coordinates, the Fourier Transform preserves energy but expresses it in frequency coordinates.
A second geometric picture: the Fourier Transform of the convolution $f * g$ is the pointwise product $\hat{f}\,\hat{g}$. Convolution in time/space corresponds to multiplication in frequency. This is the key theorem for signal processing and the foundation of efficient convolution in CNNs via the FFT. (Full treatment in Section 20-04 Convolution Theorem.)
GEOMETRIC PICTURE OF THE FOURIER TRANSFORM
========================================================================
Time domain Frequency domain
------------- ----------------
f(t): a function of time          f̂(ξ): a function of frequency
Signal duration T            <-->  Bandwidth ~ 1/T   (uncertainty principle)
Convolution f*g              <-->  Pointwise product f̂ ĝ  (Convolution Theorem)
Differentiation d/dt         <-->  Multiplication by 2πiξ
Translation f(t-a)           <-->  Modulation e^{-2πiaξ} f̂(ξ)
Scaling f(at)                <-->  Dilation (1/|a|) f̂(ξ/a)
The FT is a UNITARY OPERATOR on L2(R):
    ||f||_2 = ||f̂||_2     (Plancherel)
    <f,g> = <f̂,ĝ>         (Parseval)
========================================================================
2. Formal Definitions
2.1 The Fourier Transform on $L^1(\mathbb{R})$
Definition 2.1 (Fourier Transform on $L^1$). For $f \in L^1(\mathbb{R})$ (i.e., $\int_{-\infty}^{\infty} |f(t)|\,dt < \infty$), the Fourier Transform of $f$ is the function $\hat{f} : \mathbb{R} \to \mathbb{C}$ defined by:

$$\hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt.$$

We write $\hat{f} = \mathcal{F}[f]$ or $\hat{f}(\xi) = \mathcal{F}[f](\xi)$.
Well-definedness. For $f \in L^1(\mathbb{R})$ and any $\xi$:

$$|\hat{f}(\xi)| \;\le\; \int_{-\infty}^{\infty} \big|f(t)\, e^{-2\pi i \xi t}\big|\, dt \;=\; \int_{-\infty}^{\infty} |f(t)|\, dt \;=\; \|f\|_1.$$

So $\hat{f}$ is bounded: $\|\hat{f}\|_\infty \le \|f\|_1$. Moreover, one can show (by dominated convergence) that $\hat{f}$ is continuous on $\mathbb{R}$, and by the Riemann-Lebesgue lemma, $\hat{f}(\xi) \to 0$ as $|\xi| \to \infty$.
Theorem 2.1 (Riemann-Lebesgue Lemma). If $f \in L^1(\mathbb{R})$, then $\hat{f} \in C_0(\mathbb{R})$ (continuous functions vanishing at infinity):

$$\lim_{|\xi| \to \infty} \hat{f}(\xi) = 0.$$

Proof sketch. For step functions, the integral reduces to a finite sum of terms of the form $\frac{e^{-2\pi i \xi a} - e^{-2\pi i \xi b}}{2\pi i \xi}$, each going to 0 as $|\xi| \to \infty$. Approximate a general $L^1$ function by step functions and use the boundedness $\|\hat{f}\|_\infty \le \|f\|_1$.
What $\hat{f}(\xi)$ tells you. The value $\hat{f}(\xi)$ is a complex number encoding:
- $|\hat{f}(\xi)|$: the amplitude of frequency $\xi$ in $f$
- $\arg \hat{f}(\xi)$: the phase of frequency $\xi$ in $f$
The power spectrum $|\hat{f}(\xi)|^2$ gives the energy density per unit frequency at $\xi$.
2.2 Convention War: $e^{-2\pi i \xi t}$ vs $e^{-i\omega t}$ vs $\frac{1}{\sqrt{2\pi}}\, e^{-i\omega t}$
The Fourier Transform appears in three notational conventions in the literature, all equivalent but related by rescaling:
| Convention | Transform formula | Inverse formula | Used in |
|---|---|---|---|
| Frequency $\xi$ (Hz) | $\hat{f}(\xi) = \int f(t)\, e^{-2\pi i \xi t}\, dt$ | $f(t) = \int \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi$ | Signal processing, probability |
| Angular freq $\omega$ (rad/s), no prefactor | $\hat{f}(\omega) = \int f(t)\, e^{-i\omega t}\, dt$ | $f(t) = \frac{1}{2\pi}\int \hat{f}(\omega)\, e^{i\omega t}\, d\omega$ | Physics, engineering |
| Angular freq $\omega$, symmetric | $\hat{f}(\omega) = \frac{1}{\sqrt{2\pi}}\int f(t)\, e^{-i\omega t}\, dt$ | $f(t) = \frac{1}{\sqrt{2\pi}}\int \hat{f}(\omega)\, e^{i\omega t}\, d\omega$ | Pure mathematics |
This section uses the $2\pi$-convention (row 1). It is self-symmetric (no $2\pi$ prefactors in either direction), maps the Gaussian $e^{-\pi t^2}$ to itself, and is standard in modern ML papers (FNet, FNO, RFF all use it). The relationship between conventions: $\omega = 2\pi\xi$, so $\hat{f}_{2\pi}(\xi) = \hat{f}_\omega(2\pi\xi)$.
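The conversion rule can be sanity-checked numerically; a small sketch assuming NumPy, using the Gaussian as test function (grid sizes illustrative):

```python
import numpy as np

# Same transform, two conventions: f^_{2pi}(xi) = f^_omega(2*pi*xi).
t = np.linspace(-10, 10, 8001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)

def ft_2pi(xi):      # frequency convention: kernel e^{-2 pi i xi t}
    return (f * np.exp(-2j * np.pi * xi * t)).sum() * dt

def ft_omega(w):     # angular-frequency convention: kernel e^{-i omega t}
    return (f * np.exp(-1j * w * t)).sum() * dt

for xi in [0.3, 1.0, 1.7]:
    assert abs(ft_2pi(xi) - ft_omega(2 * np.pi * xi)) < 1e-12
```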
Warning: The most common source of errors in Fourier analysis is mixing conventions. When reading a paper, identify the convention in the first equation before proceeding. Properties like "differentiation multiplies by $2\pi i \xi$" vs "multiplies by $i\omega$" depend entirely on which convention is in use.
2.3 Standard Examples
These four transforms appear constantly in applications and should be memorized.
Example 1: Rectangular Pulse (rect function)

$$\mathrm{rect}(t) = \begin{cases} 1 & |t| \le 1/2 \\ 0 & |t| > 1/2 \end{cases} \qquad\Longrightarrow\qquad \widehat{\mathrm{rect}}(\xi) = \frac{\sin(\pi\xi)}{\pi\xi} = \mathrm{sinc}(\xi).$$

The sinc function oscillates and decays as $O(1/|\xi|)$. The slow decay reflects the sharp discontinuity of $\mathrm{rect}$ - sharp features in time produce slow decay in frequency (this is the spectral analog of the Gibbs phenomenon from Section 20-01).
A general pulse of width $T$: $f(t) = \mathrm{rect}(t/T)$ has transform $\hat{f}(\xi) = T\,\mathrm{sinc}(T\xi)$. Wider pulse -> narrower, taller sinc lobe.
Example 2: Gaussian

$$f(t) = e^{-\pi t^2} \qquad\Longrightarrow\qquad \hat{f}(\xi) = e^{-\pi \xi^2}.$$

The Gaussian is self-dual under the Fourier Transform with the $2\pi$-convention: $\mathcal{F}[e^{-\pi t^2}] = e^{-\pi \xi^2}$. This is unique among "nice" functions and makes the Gaussian the natural test function in Fourier analysis, the ground state in quantum mechanics, and the kernel of the heat equation.
Derivation: Complete the square in the exponent: $-\pi t^2 - 2\pi i \xi t = -\pi(t + i\xi)^2 - \pi\xi^2$. Then:

$$\hat{f}(\xi) = e^{-\pi\xi^2} \int_{-\infty}^{\infty} e^{-\pi(t + i\xi)^2}\, dt = e^{-\pi\xi^2},$$

where the integral equals 1 by contour integration (the Gaussian integrand is analytic and the contour shift from $\mathbb{R}$ to $\mathbb{R} + i\xi$ is justified because the integrand decays rapidly).
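The self-duality can also be verified by direct quadrature; a minimal sketch assuming NumPy (grid and tolerance illustrative):

```python
import numpy as np

# Quadrature check of F[e^{-pi t^2}](xi) = e^{-pi xi^2} in the 2*pi convention.
t = np.linspace(-10, 10, 8001)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)

for xi in [0.0, 0.5, 1.0, 2.0]:
    F_xi = (f * np.exp(-2j * np.pi * xi * t)).sum() * dt
    assert abs(F_xi - np.exp(-np.pi * xi**2)) < 1e-8
```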
Example 3: Lorentzian (Cauchy distribution)

$$f(t) = \frac{1}{\pi(1 + t^2)} \qquad\Longrightarrow\qquad \hat{f}(\xi) = e^{-2\pi|\xi|}.$$

The exponential decay in frequency reflects the moderate smoothness of the Lorentzian (it is $C^\infty$ but not compactly supported). The relationship between smoothness and spectral decay is captured by the general theorem in Section 3.5.
Example 4: One-Sided Exponential

$$f(t) = e^{-t}\,\mathbf{1}_{t > 0} \qquad\Longrightarrow\qquad \hat{f}(\xi) = \frac{1}{1 + 2\pi i \xi}.$$

This has modulus $|\hat{f}(\xi)| = \frac{1}{\sqrt{1 + 4\pi^2\xi^2}}$, which decays as $O(1/|\xi|)$ for large $|\xi|$ - reflecting the discontinuity at $t = 0$.
Non-example 1: $f(t) = \mathrm{sinc}(t)$. This is in $L^2$ but not $L^1$ (it decays too slowly), so the integral does not converge absolutely and the definition above does not apply. The $L^2$ theory (Section 5) handles this case.
Non-example 2: $f(t) = \cos(2\pi\xi_0 t)$. This is bounded but not in $L^1$ or $L^2$ - it does not decay. Its Fourier Transform exists only as a distribution: $\hat{f}(\xi) = \frac{1}{2}\big[\delta(\xi - \xi_0) + \delta(\xi + \xi_0)\big]$ (see Section 7).
2.4 The Inverse Fourier Transform
Definition 2.2 (Inverse Fourier Transform). For $g \in L^1(\mathbb{R})$:

$$(\mathcal{F}^{-1}g)(t) = \int_{-\infty}^{\infty} g(\xi)\, e^{2\pi i \xi t}\, d\xi.$$

Note: the inverse is the same as the forward transform except the sign in the exponent is flipped ($e^{+2\pi i \xi t}$ instead of $e^{-2\pi i \xi t}$). In the $2\pi$-convention, this symmetric form is one of its advantages.
The inversion problem: Given $\hat{f}$, can we recover $f$? Yes, under mild conditions:

$$f(t) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi,$$

but this requires knowing that $\hat{f} \in L^1(\mathbb{R})$ (which is not guaranteed by $f \in L^1$ alone - for instance, $\widehat{\mathrm{rect}} = \mathrm{sinc}$ is not in $L^1$). The full inversion theorem is treated rigorously in Section 4.
2.5 Non-Examples and the $L^2$ Extension
The classical definition fails for many important functions:
| Function | Why FT fails | Solution |
|---|---|---|
| $e^{-t^2/2}$ (broader Gaussian) | In $L^1$ - this one is fine | Not a failure |
| $\mathrm{sinc}(t)$ | Not in $L^1$ | Use Plancherel (Section 5) |
| $1$ (constant) | Not in $L^1$ or $L^2$ | Use distributions (Section 7) |
| $\cos(2\pi\xi_0 t)$ | Not in $L^1$ or $L^2$ | Use distributions (Section 7) |
| $\delta(t)$ | Not a function | Use distributions (Section 7) |
The extension proceeds via a density argument. The subspace $L^1 \cap L^2$ is dense in $L^2(\mathbb{R})$. For $f \in L^1 \cap L^2$, the classical integral defines $\hat{f}$. The Plancherel theorem (Section 5) then shows $\|\hat{f}\|_2 = \|f\|_2$, which allows extending $\mathcal{F}$ to all of $L^2(\mathbb{R})$ by continuity. The resulting transform is a unitary operator on $L^2(\mathbb{R})$.
3. Core Properties
3.1 Linearity and Conjugate Symmetry
Theorem 3.1 (Linearity). For $f, g \in L^1(\mathbb{R})$ and $a, b \in \mathbb{C}$:

$$\mathcal{F}[a f + b g] = a\hat{f} + b\hat{g}.$$
Proof: Immediate from linearity of the integral.
Theorem 3.2 (Conjugate Symmetry / Hermitian Property). For real-valued $f$:

$$\hat{f}(-\xi) = \overline{\hat{f}(\xi)}.$$

Proof: $\hat{f}(-\xi) = \int f(t)\, e^{2\pi i \xi t}\, dt = \overline{\int f(t)\, e^{-2\pi i \xi t}\, dt} = \overline{\hat{f}(\xi)}$, where the last step uses $\overline{f(t)} = f(t)$ (real-valued).
Corollaries for real $f$:
- $|\hat{f}(-\xi)| = |\hat{f}(\xi)|$: the magnitude spectrum is even
- $\arg \hat{f}(-\xi) = -\arg \hat{f}(\xi)$: the phase spectrum is odd
- If $f$ is also even, then $\hat{f}$ is real-valued and even
- If $f$ is also odd, then $\hat{f}$ is purely imaginary and odd
For AI: The Hermitian symmetry means that for real signals, half the spectrum is redundant - you only need frequencies $\xi \ge 0$. This is why the FFT of a length-$N$ real signal produces only $\lfloor N/2 \rfloor + 1$ unique complex outputs from $N$ real inputs. Real-valued weight matrices in neural networks have Hermitian-symmetric spectra, which is exploited in efficient spectral computation.
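The discrete analog is easy to see with NumPy's FFT (array sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(16)              # real signal, N = 16

X = np.fft.fft(x)
# Hermitian symmetry for real input: X[N-k] = conj(X[k]).
assert np.allclose(X[1:][::-1], np.conj(X[1:]))

# rfft keeps only the non-redundant half: N//2 + 1 = 9 complex outputs.
R = np.fft.rfft(x)
assert R.shape == (9,)
assert np.allclose(R, X[:9])
```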
3.2 Time-Shift and Frequency-Shift (Modulation)
Theorem 3.3 (Time-Shift / Translation). For $a \in \mathbb{R}$:

$$\mathcal{F}[f(t - a)](\xi) = e^{-2\pi i a \xi}\, \hat{f}(\xi).$$

Proof: Substituting $s = t - a$: $\int f(t - a)\, e^{-2\pi i \xi t}\, dt = \int f(s)\, e^{-2\pi i \xi (s + a)}\, ds = e^{-2\pi i a \xi}\, \hat{f}(\xi)$.
Interpretation: Shifting a signal in time multiplies its spectrum by a complex phase factor. The magnitude spectrum is unchanged - a time shift does not affect which frequencies are present, only their phases. The phase changes linearly with frequency: $\Delta\phi(\xi) = -2\pi a \xi$.
Theorem 3.4 (Frequency-Shift / Modulation). For $\xi_0 \in \mathbb{R}$:

$$\mathcal{F}\big[e^{2\pi i \xi_0 t}\, f(t)\big](\xi) = \hat{f}(\xi - \xi_0).$$

Proof: $\int e^{2\pi i \xi_0 t}\, f(t)\, e^{-2\pi i \xi t}\, dt = \int f(t)\, e^{-2\pi i (\xi - \xi_0) t}\, dt = \hat{f}(\xi - \xi_0)$.
Interpretation: Multiplying by a complex exponential (modulation) shifts the spectrum. This is the mathematical basis for amplitude modulation (AM) radio and for the RoPE positional encoding.
For AI - RoPE connection: In RoPE, the query at position $m$ is rotated by $e^{i m \theta}$ in the complex plane. The inner product between a query at position $m$ and a key at position $n$ picks up the factor $e^{i(m - n)\theta}$, depending only on the relative position $m - n$. This is the modulation theorem in action.
3.3 Scaling and Dilation
Theorem 3.5 (Scaling / Dilation). For $a \in \mathbb{R}$, $a \ne 0$:

$$\mathcal{F}[f(at)](\xi) = \frac{1}{|a|}\, \hat{f}\!\left(\frac{\xi}{a}\right).$$

Proof: Substituting $s = at$: $\int f(at)\, e^{-2\pi i \xi t}\, dt = \frac{1}{|a|} \int f(s)\, e^{-2\pi i (\xi/a) s}\, ds = \frac{1}{|a|}\, \hat{f}(\xi/a)$.
Interpretation: This is the time-bandwidth product in action:
- Compressing a signal in time ($|a| > 1$, so $f(at)$ is narrower): the spectrum stretches by factor $|a|$ and shrinks in amplitude by $1/|a|$
- Stretching a signal in time ($|a| < 1$): the spectrum compresses
This confirms the time-frequency duality: doubling the duration halves the bandwidth, and vice versa. This is a hard mathematical constraint, not an engineering limitation.
3.4 Time Reversal
Theorem 3.6 (Time Reversal). For $g(t) = f(-t)$:

$$\hat{g}(\xi) = \hat{f}(-\xi).$$

Proof: Substituting $s = -t$: $\int f(-t)\, e^{-2\pi i \xi t}\, dt = \int f(s)\, e^{2\pi i \xi s}\, ds = \hat{f}(-\xi)$.
Duality: There is a deeper symmetry: $\mathcal{F}[\hat{f}](t) = f(-t)$, i.e., $\mathcal{F}^2 f = f(-\cdot)$. Applying the Fourier Transform four times returns the original function: $\mathcal{F}^4 = I$. This means the FT has eigenvalues $\{1, -i, -1, i\}$ and the Hermite functions are its eigenfunctions (with the Gaussian as the eigenfunction for eigenvalue 1).
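The identities $\mathcal{F}^2 f = f(-\cdot)$ and $\mathcal{F}^4 = I$ survive discretization when the unitary (norm="ortho") DFT is used as a discrete stand-in for the FT; a small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# Unitary DFT (norm="ortho") as a discrete model of the unitary FT on L^2.
F = lambda v: np.fft.fft(v, norm="ortho")

# F^2 is time reversal: indices reverse modulo N ...
assert np.allclose(F(F(x)), x[-np.arange(64) % 64])
# ... and F^4 is the identity.
assert np.allclose(F(F(F(F(x)))), x)
```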
3.5 Differentiation and Integration
Theorem 3.7 (Differentiation in Time). If $f, f' \in L^1(\mathbb{R})$:

$$\mathcal{F}[f'](\xi) = 2\pi i \xi\, \hat{f}(\xi).$$

More generally, $\mathcal{F}[f^{(k)}](\xi) = (2\pi i \xi)^k\, \hat{f}(\xi)$.
Proof: Integration by parts: $\int f'(t)\, e^{-2\pi i \xi t}\, dt = \big[f(t)\, e^{-2\pi i \xi t}\big]_{-\infty}^{\infty} + 2\pi i \xi \int f(t)\, e^{-2\pi i \xi t}\, dt$. The boundary term vanishes since $f, f' \in L^1$ implies $f(t) \to 0$ as $|t| \to \infty$.
Theorem 3.8 (Differentiation in Frequency). If $t f(t) \in L^1(\mathbb{R})$:

$$\hat{f}\,'(\xi) = \mathcal{F}\big[-2\pi i t\, f(t)\big](\xi).$$

Equivalently: $\mathcal{F}[t f(t)](\xi) = \frac{i}{2\pi}\, \hat{f}\,'(\xi)$.
Key consequence for smoothness vs. spectral decay:
| Smoothness of $f$ | Decay of $\hat{f}$ |
|---|---|
| $f \in C^k$ and $f^{(k)} \in L^1$ | $\hat{f}(\xi) = O(\lvert\xi\rvert^{-k})$ |
| $f \in C^\infty$ (smooth, all derivatives in $L^1$) | $\hat{f}$ decays faster than any polynomial |
| $f$ has a jump discontinuity | $\hat{f}(\xi) \sim C/\lvert\xi\rvert$ (no faster) |
| $f$ is analytic (in a strip) | $\hat{f}$ decays exponentially |
For AI: The differentiation property is why Fourier methods are efficient for solving differential equations. It converts a PDE like the heat equation $\partial_t u = \partial_x^2 u$ into an ODE per frequency, $\partial_t \hat{u}(\xi, t) = -4\pi^2 \xi^2\, \hat{u}(\xi, t)$, which is solved by $\hat{u}(\xi, t) = e^{-4\pi^2 \xi^2 t}\, \hat{u}(\xi, 0)$ - a simple exponential decay. The Fourier Neural Operator (Section 8.4) exploits this by learning the spectral solution operator directly.
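On a periodic grid, the differentiation rule becomes a one-line spectral derivative; a minimal NumPy sketch (grid parameters illustrative), checked against the exact derivative of a Gaussian:

```python
import numpy as np

# Spectral differentiation: d/dt <-> multiplication by 2*pi*i*xi.
N, L = 256, 20.0
t = -L/2 + L * np.arange(N) / N          # periodic grid on [-L/2, L/2)
xi = np.fft.fftfreq(N, d=L/N)            # frequencies in cycles per unit time

f = np.exp(-np.pi * t**2)                # effectively periodic: ~0 at the ends
df_spectral = np.fft.ifft(2j * np.pi * xi * np.fft.fft(f)).real
df_exact = -2 * np.pi * t * f            # analytic derivative

assert np.max(np.abs(df_spectral - df_exact)) < 1e-8
```

Because the Gaussian is fully resolved by the grid, the spectral derivative is accurate to near machine precision - far beyond any finite-difference stencil.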
3.6 The Master Properties Table
| Property | $f(t)$ | $\hat{f}(\xi)$ |
|---|---|---|
| Linearity | $a f(t) + b g(t)$ | $a\hat{f}(\xi) + b\hat{g}(\xi)$ |
| Time shift | $f(t - a)$ | $e^{-2\pi i a \xi}\, \hat{f}(\xi)$ |
| Frequency shift | $e^{2\pi i \xi_0 t}\, f(t)$ | $\hat{f}(\xi - \xi_0)$ |
| Scaling | $f(at)$ | $\frac{1}{\lvert a\rvert}\, \hat{f}(\xi/a)$ |
| Time reversal | $f(-t)$ | $\hat{f}(-\xi)$ |
| Conjugation | $\overline{f(t)}$ | $\overline{\hat{f}(-\xi)}$ |
| Hermitian (real $f$) | $f(t) \in \mathbb{R}$ | $\hat{f}(-\xi) = \overline{\hat{f}(\xi)}$ |
| Differentiation | $f'(t)$ | $2\pi i \xi\, \hat{f}(\xi)$ |
| Freq. differentiation | $-2\pi i t\, f(t)$ | $\hat{f}\,'(\xi)$ |
| Convolution | $(f * g)(t)$ | $\hat{f}(\xi)\, \hat{g}(\xi)$ |
| Multiplication | $f(t)\, g(t)$ | $(\hat{f} * \hat{g})(\xi)$ |
| Duality | $\hat{f}(t)$ | $f(-\xi)$ |
3.7 Convolution-Multiplication Duality - Preview
The Convolution Theorem states that the Fourier Transform converts convolution into pointwise multiplication:

$$\widehat{f * g}(\xi) = \hat{f}(\xi)\, \hat{g}(\xi),$$

where $(f * g)(t) = \int_{-\infty}^{\infty} f(s)\, g(t - s)\, ds$.
This is the most important property of the Fourier Transform for applications. It means that:
- Linear filtering (convolution with a filter $h$) corresponds to pointwise multiplication of spectra
- A filter's effect on frequency $\xi$ is simply multiplication by $\hat{h}(\xi)$ (the frequency response)
- Convolution of length-$n$ signals costs $O(n^2)$ directly but $O(n \log n)$ via FFT
Preview: The full treatment of the convolution theorem - including circular convolution, cross-correlation, the Wiener-Khinchin theorem, and applications to CNNs, WaveNet, and Mamba - is in Section 20-04 Convolution Theorem. Here we state the theorem and use it; the proof and all applications belong there.
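The theorem can already be exercised here in discrete form: zero-padded FFT multiplication reproduces direct linear convolution. A minimal NumPy sketch (array sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(128)
g = rng.standard_normal(128)

direct = np.convolve(f, g)                    # O(n^2) linear convolution

n = len(f) + len(g) - 1                       # zero-pad to avoid circular wrap-around
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

assert np.allclose(direct, via_fft)           # same result, O(n log n) cost
```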
4. The Fourier Inversion Theorem
4.1 Statement and Sufficient Conditions
The fundamental question: given , can we recover ? The answer is yes under appropriate conditions, but the precise statement requires care.
Theorem 4.1 (Fourier Inversion Theorem). Suppose $f \in L^1(\mathbb{R})$ and $\hat{f} \in L^1(\mathbb{R})$. Then for almost every $t$:

$$f(t) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi.$$

Moreover, if $f$ is also continuous at $t$, equality holds exactly (not just a.e.).
The subtlety: The condition $\hat{f} \in L^1$ is not automatic. If $f \in L^1$, then $\hat{f}$ is bounded and continuous (by Riemann-Lebesgue), but not necessarily in $L^1$. For example, $f = \mathrm{rect}$ gives $\hat{f} = \mathrm{sinc}$, and $\mathrm{sinc} \notin L^1$ (it decays too slowly: its peaks fall off only like $\frac{1}{\pi|\xi|}$). The inversion of the rect function requires the $L^2$ theory.
Sufficient condition for inversion: If $f \in L^1$ and $f$ has bounded variation near $t$ (e.g., $f$ is differentiable at $t$), then the principal value integral:

$$\lim_{R \to \infty} \int_{-R}^{R} \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi$$

converges to $f(t)$ where $f$ is continuous, and to $\frac{f(t^+) + f(t^-)}{2}$ at jump discontinuities - exactly as in Dirichlet's theorem for Fourier series.
4.2 Proof via Approximate Identity
The standard proof uses the concept of an approximate identity - a family of functions that converge to the Dirac delta as a parameter goes to zero.
Definition (Approximate Identity). A family $\{K_\epsilon\}_{\epsilon > 0} \subset L^1(\mathbb{R})$ is an approximate identity if:
- $\int_{-\infty}^{\infty} K_\epsilon(t)\, dt = 1$ for all $\epsilon > 0$
- For every $\delta > 0$: $\int_{|t| > \delta} |K_\epsilon(t)|\, dt \to 0$ as $\epsilon \to 0$
The Gauss-Weierstrass kernel $K_\epsilon(t) = \frac{1}{\epsilon}\, e^{-\pi t^2/\epsilon^2}$ is an approximate identity.
Proof sketch of inversion theorem:
Define the Gauss-regularized inversion:

$$I_\epsilon(t) = \int_{-\infty}^{\infty} e^{-\pi \epsilon^2 \xi^2}\, \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi.$$

The factor $e^{-\pi \epsilon^2 \xi^2}$ provides absolute convergence (the Gaussian is in $L^1$). Using Fubini's theorem to interchange integration order:

$$I_\epsilon(t) = \int f(s) \left[ \int e^{-\pi \epsilon^2 \xi^2}\, e^{2\pi i \xi (t - s)}\, d\xi \right] ds = \int f(s)\, \frac{1}{\epsilon}\, e^{-\pi (t - s)^2/\epsilon^2}\, ds = (f * K_\epsilon)(t).$$

Since $\{K_\epsilon\}$ is an approximate identity: $(f * K_\epsilon)(t) \to f(t)$ as $\epsilon \to 0$ at every point of continuity of $f$ (and in $L^1$ norm). Since $I_\epsilon(t) \to \int \hat{f}(\xi)\, e^{2\pi i \xi t}\, d\xi$ as $\epsilon \to 0$ when $\hat{f} \in L^1$ (dominated convergence), the result follows.
Significance: This proof reveals why the Gaussian plays a central role - it is the unique function (up to scaling) that is its own Fourier Transform, making the Gauss regularization self-consistent.
4.3 The Inversion Formula in Practice
The inversion formula is used to:
- Recover a signal from its spectrum: Given $\hat{f}$, compute $f = \mathcal{F}^{-1}[\hat{f}]$
- Compute transforms of new functions from known transforms: Use the FT table plus properties
- Solve PDEs: Take FT, solve the resulting ODE, invert
Example: Compute $f$ such that $\hat{f}(\xi) = e^{-2\pi|\xi|}$ (double-sided exponential spectrum).
By the inversion formula:

$$f(t) = \int_{-\infty}^{\infty} e^{-2\pi|\xi|}\, e^{2\pi i \xi t}\, d\xi = 2\,\mathrm{Re} \int_0^{\infty} e^{-2\pi \xi (1 - it)}\, d\xi = 2\,\mathrm{Re}\, \frac{1}{2\pi(1 - it)} = \frac{1}{\pi(1 + t^2)}.$$

So $f(t) = \frac{1}{\pi(1 + t^2)}$ - confirming the Lorentzian result from Section 2.3 (Example 3).
Numerical verification: The inversion theorem can be verified numerically via the FFT: compute $\hat{f}$ numerically, then invert, and check that you recover $f$ up to discretization error (this is Exercise 3 in Section 10).
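The worked example above can also be double-checked by discretizing the inversion integral directly; a sketch assuming NumPy (grid and tolerance illustrative):

```python
import numpy as np

# Discretized inversion integral for f^(xi) = e^{-2 pi |xi|}:
# should recover the Lorentzian f(t) = 1 / (pi (1 + t^2)).
xi = np.linspace(-40, 40, 160001)
dxi = xi[1] - xi[0]
spec = np.exp(-2 * np.pi * np.abs(xi))

for t in [0.0, 0.7, 2.0]:
    f_t = ((spec * np.exp(2j * np.pi * xi * t)).sum() * dxi).real
    assert abs(f_t - 1 / (np.pi * (1 + t**2))) < 1e-5
```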
4.4 The Self-Dual Gaussian
The Gaussian $g(t) = e^{-\pi t^2}$ satisfies $\hat{g} = g$, making it a fixed point of the Fourier Transform. This property makes it indispensable in both theory and practice.
Generalized Gaussian family. For $\sigma > 0$, define $g_\sigma(t) = e^{-\pi t^2/\sigma^2}$ (Gaussian of width $\sigma$). By the scaling theorem:

$$\hat{g}_\sigma(\xi) = \sigma\, e^{-\pi \sigma^2 \xi^2}.$$

Wide Gaussian ($\sigma$ large) -> narrow Fourier Transform (bandwidth $\sim 1/\sigma$). This is the uncertainty principle made explicit: the product of time-width and frequency-width is fixed at the minimum possible value (see Section 6).
Eigenfunctions of the FT: The Fourier Transform has eigenvalues $\{1, -i, -1, i\}$. The corresponding eigenfunctions are the Hermite functions:

$$\mathcal{F}[h_n] = (-i)^n\, h_n, \qquad h_n(t) = c_n\, H_n(\sqrt{2\pi}\, t)\, e^{-\pi t^2},$$

where $H_n$ is the $n$-th Hermite polynomial. For $n = 0$: $h_0(t) = e^{-\pi t^2}$ (eigenvalue 1, the self-dual Gaussian). For $n = 1$: $h_1(t) = t\, e^{-\pi t^2}$ up to normalization (eigenvalue $-i$).
For AI: The Gaussian window is the optimal time-frequency window (by the uncertainty principle), which is why Gaussian smoothing is used in signal preprocessing and why Gaussian attention kernels appear in some efficient attention variants.
GAUSSIAN SELF-DUALITY
========================================================================
Time domain Frequency domain
------------- ----------------
g(t) = e^{-πt²}   --FT-->   ĝ(ξ) = e^{-πξ²}    <- same function!
Width σ_t = 1/(2√π)          Width σ_ξ = 1/(2√π)
Product: σ_t σ_ξ = 1/(4π) = MINIMUM (uncertainty bound)
Scaling:
f(t) = e^{-πt²/σ²}   --FT-->   σ e^{-πσ²ξ²}
       ^ wider in time                ^ narrower in frequency
========================================================================
5. Plancherel's Theorem and $L^2$ Theory
5.1 Statement: FT as a Unitary Isometry
Theorem 5.1 (Plancherel's Theorem). The Fourier Transform extends uniquely from $L^1 \cap L^2$ to a unitary operator $\mathcal{F}$ on $L^2(\mathbb{R})$:

$$\|\mathcal{F}f\|_2 = \|f\|_2 \quad \text{for all } f \in L^2(\mathbb{R}).$$

Unitarity means $\mathcal{F}^* = \mathcal{F}^{-1}$ (the adjoint equals the inverse), which in this case means $(\mathcal{F}^{-1}g)(t) = \int g(\xi)\, e^{2\pi i \xi t}\, d\xi$ - the inverse FT.
Why this is non-trivial: For $f \in L^2$ but $f \notin L^1$, the defining integral need not converge absolutely. Plancherel's theorem says we can still define $\hat{f}$ as the limit:

$$\hat{f} = \lim_{R \to \infty} \mathcal{F}\big[f \cdot \mathbf{1}_{[-R, R]}\big] \quad \text{(limit taken in } L^2\text{)}.$$

The limit exists because the truncated transforms form a Cauchy sequence in $L^2$ (by the $L^1 \cap L^2$ case and density of that subspace in $L^2$).
5.2 Parseval's Relation
Theorem 5.2 (Parseval / Plancherel Identity). For $f, g \in L^2(\mathbb{R})$:

$$\int_{-\infty}^{\infty} f(t)\, \overline{g(t)}\, dt = \int_{-\infty}^{\infty} \hat{f}(\xi)\, \overline{\hat{g}(\xi)}\, d\xi.$$

The special case $g = f$ gives the energy conservation (or Parseval's formula):

$$\int_{-\infty}^{\infty} |f(t)|^2\, dt = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\, d\xi.$$
Interpretation: The total energy of a signal is the same whether measured in the time domain or the frequency domain. No energy is created or destroyed by the Fourier Transform - it is a lossless change of representation.
Using Parseval to compute integrals: Sometimes $\int |f|^2$ is hard but $\int |\hat{f}|^2$ is easy (or vice versa). Example: compute $\int_{-\infty}^{\infty} \mathrm{sinc}^2(\xi)\, d\xi$.
We know $\mathrm{sinc} = \widehat{\mathrm{rect}}$, so by Parseval:

$$\int_{-\infty}^{\infty} \mathrm{sinc}^2(\xi)\, d\xi = \int_{-\infty}^{\infty} \mathrm{rect}^2(t)\, dt = 1.$$
For AI: Parseval's relation underpins the analysis of spectral energy distribution in neural networks. The WeightWatcher tool (Martin & Mahoney, 2021) analyzes the spectrum of weight matrices to diagnose training quality - a healthy model has a power-law spectral distribution. Parseval tells us the Frobenius norm equals the energy in the spectral domain.
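Both the discrete Parseval identity and the worked sinc example are easy to check numerically; a minimal NumPy sketch (grid sizes and tolerances illustrative):

```python
import numpy as np

# Discrete Parseval: sum |x|^2 == (1/N) * sum |fft(x)|^2.
rng = np.random.default_rng(4)
x = rng.standard_normal(256)
X = np.fft.fft(x)
assert np.isclose(np.sum(x**2), np.sum(np.abs(X)**2) / len(x))

# Continuous worked example: integral of sinc^2 is 1 (slow 1/R tail).
xi = np.linspace(-1000, 1000, 2_000_001)
val = (np.sinc(xi)**2).sum() * (xi[1] - xi[0])   # np.sinc(x) = sin(pi x)/(pi x)
assert abs(val - 1.0) < 1e-3
```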
5.3 Proof Sketch: Extension from to
The proof of Plancherel's theorem follows a classic functional analysis pattern:
Step 1: Show the Parseval identity for $f \in L^1 \cap L^2$ (the intersection is dense in both spaces).
For $f \in L^1 \cap L^2$, set $g = f * \tilde{f}$ with $\tilde{f}(t) = \overline{f(-t)}$, so that $\hat{g} = |\hat{f}|^2$ and $g(0) = \|f\|_2^2$; applying the regularized inversion formula to $g$ at $t = 0$ and using Fubini gives:

$$\|f\|_2^2 = g(0) = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\, d\xi.$$

Step 2: The isometry for $L^1 \cap L^2$ shows $\mathcal{F}$ is a bounded operator with operator norm 1.
Step 3: By density ($L^1 \cap L^2$ is dense in $L^2$) and the fact that a bounded linear isometry extends uniquely to the closure, $\mathcal{F}$ extends to all of $L^2$ preserving the isometry.
Step 4: Show $\mathcal{F}$ is surjective by showing $\mathcal{F}^{-1}$ (the inverse FT) also extends and $\mathcal{F}\mathcal{F}^{-1} = \mathcal{F}^{-1}\mathcal{F} = I$.
5.4 Energy Conservation in Practice
The Parseval identity has immediate computational consequences:
Low-pass filtering: A filter that removes frequencies $|\xi| > \Omega$ (a rectangular window in frequency) preserves the fraction $\int_{-\Omega}^{\Omega} |\hat{f}(\xi)|^2\, d\xi \big/ \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\, d\xi$ of the total energy. For a signal with most energy at low frequencies, this fraction is close to 1.
Compression by truncation: For $f \in L^2$, the best $k$-term approximation in the Fourier basis minimizes the $L^2$ error. The squared error is the integral of $|\hat{f}(\xi)|^2$ over the discarded frequencies - the energy in the discarded part of the spectrum. This is the Fourier analog of truncated SVD in matrix approximation.
For AI: The Fourier Neural Operator (Section 8.4) exploits this by keeping only the lowest $k$ Fourier modes of the input function and discarding the rest. Plancherel guarantees the error is exactly the discarded energy - a principled truncation criterion.
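The truncation-error claim is exactly checkable in the discrete setting; a minimal NumPy sketch (signal length and mode count illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(256)
X = np.fft.fft(x)

# Keep only the lowest-|frequency| modes (FNO-style spectral truncation).
k = 16
keep = np.abs(np.fft.fftfreq(256)) <= k / 256
x_trunc = np.fft.ifft(np.where(keep, X, 0)).real

# Parseval: squared l2 error == discarded spectral energy / N.
err = np.sum((x - x_trunc)**2)
discarded = np.sum(np.abs(X[~keep])**2) / 256
assert np.isclose(err, discarded)
```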
6. The Heisenberg Uncertainty Principle
6.1 Time Spread and Frequency Spread
To state the uncertainty principle precisely, we need quantitative measures of how "spread out" a signal is in time and frequency.
Definition 6.1 (Time Center and Spread). For $f \in L^2(\mathbb{R})$ with $\|f\|_2 = 1$:

$$t_0 = \int_{-\infty}^{\infty} t\, |f(t)|^2\, dt, \qquad \sigma_t^2 = \int_{-\infty}^{\infty} (t - t_0)^2\, |f(t)|^2\, dt.$$

Definition 6.2 (Frequency Center and Spread). Similarly:

$$\xi_0 = \int_{-\infty}^{\infty} \xi\, |\hat{f}(\xi)|^2\, d\xi, \qquad \sigma_\xi^2 = \int_{-\infty}^{\infty} (\xi - \xi_0)^2\, |\hat{f}(\xi)|^2\, d\xi.$$

Note: since $\|\hat{f}\|_2 = \|f\|_2 = 1$ (Plancherel), $|f(t)|^2$ and $|\hat{f}(\xi)|^2$ are probability densities. The time and frequency spreads are the standard deviations of these distributions.
6.2 Formal Statement: $\sigma_t\, \sigma_\xi \ge \frac{1}{4\pi}$
Theorem 6.1 (Heisenberg-Kennard Uncertainty Principle). For any $f \in L^2(\mathbb{R})$ with $\|f\|_2 = 1$ and $t f(t),\ \xi \hat{f}(\xi) \in L^2$:

$$\sigma_t\, \sigma_\xi \ \ge\ \frac{1}{4\pi}.$$

Equality holds if and only if $f$ is a Gaussian (up to time-shift, frequency-shift, and scaling):

$$f(t) = c\, e^{2\pi i \xi_0 t}\, e^{-\pi (t - t_0)^2/\sigma^2}$$

for some $t_0, \xi_0 \in \mathbb{R}$ and $\sigma > 0$.
The bound is fundamental. This is not a statement about measurement error or quantum physics - it is a purely mathematical theorem about functions and their Fourier Transforms. Any signal with time spread $\sigma_t$ necessarily has frequency spread at least $\frac{1}{4\pi \sigma_t}$. You cannot concentrate a signal to be both time-limited and bandwidth-limited simultaneously.
6.3 Proof via Cauchy-Schwarz
Proof. Without loss of generality, assume $t_0 = 0$ and $\xi_0 = 0$ (we can always translate in time and frequency without changing the spreads). Then $\sigma_t^2 = \int t^2\, |f(t)|^2\, dt$, and by Parseval and the differentiation property ($\mathcal{F}[f'] = 2\pi i \xi \hat{f}$):

$$\sigma_\xi^2 = \int \xi^2\, |\hat{f}(\xi)|^2\, d\xi = \frac{1}{4\pi^2} \int |f'(t)|^2\, dt.$$

So we need to show:

$$\|t f\|_2 \cdot \frac{1}{2\pi}\, \|f'\|_2 \ \ge\ \frac{1}{4\pi}, \qquad \text{i.e.,} \qquad \|t f\|_2\, \|f'\|_2 \ \ge\ \frac{1}{2}.$$

Integrate the identity $\frac{d}{dt}|f(t)|^2 = 2\,\mathrm{Re}\big(\overline{f(t)}\, f'(t)\big)$ against $t$ and integrate by parts; the boundary term vanishes because $t\, |f(t)|^2 \to 0$ as $|t| \to \infty$:

$$\int t\, \frac{d}{dt}|f(t)|^2\, dt = -\int |f(t)|^2\, dt = -1.$$

Hence $2\,\mathrm{Re} \int t\, \overline{f}\, f'\, dt = -1$, so $\big|\int t\, \overline{f}\, f'\, dt\big| \ge \frac{1}{2}$. By the Cauchy-Schwarz inequality:

$$\frac{1}{2} \ \le\ \left|\int t\, \overline{f(t)}\, f'(t)\, dt\right| \ \le\ \|t f\|_2\, \|f'\|_2.$$

Dividing by $2\pi$ gives $\sigma_t\, \sigma_\xi = \|t f\|_2 \cdot \frac{1}{2\pi}\|f'\|_2 \ \ge\ \frac{1}{4\pi}$. $\square$
6.4 The Gaussian as the Unique Extremal Function
Equality in Cauchy-Schwarz requires $f'(t) = c\, t\, f(t)$ for some constant $c$. This ODE has solution $f(t) = A\, e^{c t^2/2}$. For $f \in L^2$, we need $\mathrm{Re}(c) < 0$, giving $f(t) = A\, e^{-\pi t^2/\sigma^2}$ for some $\sigma > 0$ - a Gaussian.
Adding back the time-shift $t_0$ and frequency-shift $\xi_0$:

$$f(t) = c\, e^{2\pi i \xi_0 t}\, e^{-\pi (t - t_0)^2/\sigma^2}.$$

Check: with the normalized Gaussian $f(t) = 2^{1/4}\, e^{-\pi t^2}$, the density $|f(t)|^2 = \sqrt{2}\, e^{-2\pi t^2}$ has $\sigma_t^2 = \frac{1}{4\pi}$; by self-duality $\sigma_\xi^2 = \frac{1}{4\pi}$; so $\sigma_t\, \sigma_\xi = \frac{1}{4\pi}$ - exactly the bound.
Implication: The Gaussian is the only signal that achieves perfect time-frequency concentration in the sense of minimizing the product $\sigma_t\, \sigma_\xi$. Every other signal is "more spread out" in at least one of the two domains. This is why Gaussian windows are optimal for time-frequency analysis (Gabor atoms), and why the Gaussian noise model is natural in signal processing.
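The extremal property can be observed numerically: a discretized $\sigma_t\,\sigma_\xi$ hits $\frac{1}{4\pi}$ for the Gaussian and exceeds it for other signals (a triangle pulse is used here); a sketch assuming NumPy, with illustrative grid parameters:

```python
import numpy as np

def spread_product(f_vals, t):
    """sigma_t * sigma_xi for samples f_vals of f on the uniform grid t."""
    dt = t[1] - t[0]
    p = np.abs(f_vals)**2
    p = p / (p.sum() * dt)                        # |f|^2 as a probability density
    t0 = (t * p).sum() * dt
    var_t = (((t - t0)**2) * p).sum() * dt

    N = len(t)
    xi = np.fft.fftshift(np.fft.fftfreq(N, d=dt))  # frequency grid (cycles/unit)
    F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f_vals))) * dt
    dxi = xi[1] - xi[0]
    q = np.abs(F)**2
    q = q / (q.sum() * dxi)                       # |f^|^2 as a density
    xi0 = (xi * q).sum() * dxi
    var_xi = (((xi - xi0)**2) * q).sum() * dxi
    return np.sqrt(var_t * var_xi)

t = np.linspace(-20, 20, 4096, endpoint=False)
gauss = np.exp(-np.pi * t**2)
tri = np.maximum(0.0, 1.0 - np.abs(t))            # triangle pulse

bound = 1 / (4 * np.pi)
assert abs(spread_product(gauss, t) - bound) < 1e-4   # Gaussian attains the bound
assert spread_product(tri, t) > bound                 # every other signal exceeds it
```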
6.5 Implications for Neural Architecture Design
The uncertainty principle has direct, concrete implications for AI systems:
RoPE and context length extension: In RoPE, each dimension pair $j$ uses frequency $\theta_j = b^{-2j/d}$ (base $b = 10000$ in the original paper). The lowest frequencies encode the longest-range positional information. To extend context from 4K to 128K tokens (as in LLaMA-3 long context), the lowest frequencies must be able to distinguish positions up to 128K. But by the uncertainty principle, resolving structure over a position range $L$ requires frequency resolution $\sim 1/L$ - which requires the frequencies to be low enough. YaRN (Peng et al., 2023) and LongRoPE (Ding et al., 2024) resolve this by rescaling $\theta_j$ to use lower frequencies for long-context models.
Spectrogram and STFT: Audio models like Whisper (Radford et al., 2022) use the Short-Time Fourier Transform - the FT applied to windowed segments. The window length controls the time-frequency tradeoff. Longer windows: better frequency resolution, worse time resolution. The Gaussian window is optimal but the Hamming window is commonly used for computational reasons.
Attention window size: Multi-head self-attention with local windows (as in Longformer, BigBird) effectively applies a bandpass filter in "position space." The uncertainty principle says a window of size $W$ can resolve frequency differences only down to $\sim 1/W$ - setting a fundamental limit on the position encodings that can be usefully resolved within the window.
UNCERTAINTY PRINCIPLE - ARCHITECTURE IMPLICATIONS
========================================================================
Signal Analysis Neural Architecture Analog
----------------- -------------------------
Time duration T Context window / sequence length
Bandwidth B ~ 1/T Position frequency resolution
Δt·Δf >= 1/(4π) Short context coarse PE, long fine PE
STFT window size W: Local attention window size W:
-> freq res: ~ 1/W -> PE resolution: ~ 1/W
-> time res: ~ W -> local context: ~ W
Gaussian window: Gaussian attention kernel:
-> optimal concentration -> used in some efficient attention variants
LongRoPE: rescales theta_j to lower freqs -> finer resolution at long range
========================================================================
7. Tempered Distributions and the Dirac Delta
7.1 Why Distributions Are Necessary
The $L^1$ and $L^2$ theories of the Fourier Transform handle a large class of functions, but they exclude some of the most important objects in signal processing and physics:
- $\delta(t)$ - the Dirac delta: not a function, but models an ideal impulse
- $f(t) = 1$ - the constant function: not in $L^1$ or $L^2$, but models a DC signal
- $e^{i\omega_0 t}$ - a pure tone: not in $L^1$ or $L^2$, but is a fundamental signal
- $\sum_{n\in\mathbb{Z}} \delta(t - nT)$ - the Dirac comb: not a function, but models periodic sampling
All of these have Fourier Transforms in the distributional sense - and all appear in practical signal processing. The distribution framework, developed by Laurent Schwartz in the 1950s, extends the Fourier Transform to this broader class.
7.2 Schwartz Space and Tempered Distributions
Definition 7.1 (Schwartz Space). The Schwartz space $\mathcal{S}(\mathbb{R})$ is the space of all $C^\infty$ functions $\varphi$ such that for all integers $m, n \ge 0$: $\sup_t \lvert t^m\, \varphi^{(n)}(t)\rvert < \infty$,
i.e., $\varphi$ and all its derivatives decay faster than any polynomial. Examples: $e^{-t^2} \in \mathcal{S}$; $e^{-\lvert t\rvert} \notin \mathcal{S}$ (the derivative condition fails at 0 - not Schwartz); Gaussian bump functions.
The Schwartz space is closed under the Fourier Transform: if $\varphi \in \mathcal{S}$, then $\hat\varphi \in \mathcal{S}$. This is the key property: the FT maps $\mathcal{S}$ to itself bijectively.
Definition 7.2 (Tempered Distributions). A tempered distribution is a continuous linear functional $T: \mathcal{S}(\mathbb{R}) \to \mathbb{C}$. We write $T \in \mathcal{S}'(\mathbb{R})$ and denote the action of $T$ on $\varphi$ by $\langle T, \varphi\rangle$.
Examples of tempered distributions:
- Regular distributions: Every locally integrable function $f$ of at most polynomial growth defines a tempered distribution $\langle T_f, \varphi\rangle = \int f(t)\,\varphi(t)\,dt$.
- Dirac delta: $\langle\delta, \varphi\rangle = \varphi(0)$. The Dirac delta evaluates at 0. It is NOT a function.
- Derivative of delta: $\langle\delta', \varphi\rangle = -\varphi'(0)$.
- Dirac comb: $\langle Ш_T, \varphi\rangle = \sum_{n\in\mathbb{Z}} \varphi(nT)$.
Fourier Transform of a tempered distribution: Define $\langle \hat T, \varphi\rangle = \langle T, \hat\varphi\rangle$ for all $\varphi \in \mathcal{S}$. This is the "transpose" of the Fourier Transform.
7.3 The Dirac Delta and its Fourier Transform
The Dirac delta models a unit impulse concentrated at $t = 0$: $\langle\delta, \varphi\rangle = \varphi(0)$.
It is the limit of a sequence of ordinary functions: the normalized Gaussian $g_\epsilon(t) = \frac{1}{\epsilon\sqrt{2\pi}}\, e^{-t^2/(2\epsilon^2)}$ converges to $\delta$ in the distributional sense as $\epsilon \to 0$.
Fourier Transform of $\delta$: By the definition, $\langle\hat\delta, \varphi\rangle = \langle\delta, \hat\varphi\rangle = \hat\varphi(0) = \int_{-\infty}^{\infty} \varphi(t)\,dt$. Therefore: $\hat\delta(\omega) = 1$.
An ideal impulse at $t = 0$ has a flat (white) spectrum - it contains equal energy at all frequencies. This makes perfect physical sense: to create a perfect impulse, you need all frequencies constructively interfering at $t = 0$.
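The flat spectrum of an impulse is easy to verify with its discrete analog (a quick NumPy check; the length `N = 256` is arbitrary):

```python
import numpy as np

N = 256
impulse = np.zeros(N)
impulse[0] = 1.0                             # discrete unit impulse at n = 0
spectrum = np.fft.fft(impulse)
print(np.allclose(np.abs(spectrum), 1.0))    # flat (white) magnitude: True

shifted = np.zeros(N)
shifted[5] = 1.0                             # impulse at n = 5: same flat magnitude,
spectrum5 = np.fft.fft(shifted)              # only a linear phase e^{-2*pi*i*5k/N}
print(np.allclose(np.abs(spectrum5), 1.0))   # True
```

The shifted impulse previews the time-shift property discussed below: shifting changes only the phase, never the magnitude.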
Fourier Transform of a constant: By duality (applying FT twice): $\widehat{1}(\omega) = 2\pi\,\delta(\omega)$. A constant signal (DC) lives entirely at frequency $\omega = 0$.
The duality principle in full: The FT maps $\delta \mapsto 1$ and $1 \mapsto 2\pi\,\delta$ - a striking symmetry.
More generally: A delta at $t = t_0$ transforms to a pure phase: $\widehat{\delta(\cdot - t_0)}(\omega) = e^{-i\omega t_0}$
This is the time-shift property: $\delta$ shifted by $t_0$ gives a phase factor $e^{-i\omega t_0}$, but the magnitude is still flat: $\lvert e^{-i\omega t_0}\rvert = 1$.
7.4 FT of Constants, Sinusoids, and Periodic Signals
Using $\widehat{1} = 2\pi\,\delta$ and the frequency-shift property $\widehat{e^{i\omega_0 t} f(t)}(\omega) = \hat f(\omega - \omega_0)$:
Complex exponential: $\widehat{e^{i\omega_0 t}}(\omega) = 2\pi\,\delta(\omega - \omega_0)$
Cosine and Sine: $\widehat{\cos(\omega_0 t)}(\omega) = \pi\left[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)\right], \qquad \widehat{\sin(\omega_0 t)}(\omega) = \frac{\pi}{i}\left[\delta(\omega - \omega_0) - \delta(\omega + \omega_0)\right]$
A pure cosine has a spectrum consisting of two Dirac deltas, one at $+\omega_0$ and one at $-\omega_0$ (the negative frequency is the mirror image required by Hermitian symmetry). The amplitude of each spike is $\pi$ (half of $2\pi$) because the energy is split equally between the two.
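The two-spike spectrum of a cosine can be confirmed with an FFT (a sketch; we choose an integer number of cycles `k0` so there is no spectral leakage):

```python
import numpy as np

N = 1024
n = np.arange(N)
k0 = 50                          # integer number of cycles over the window
x = np.cos(2 * np.pi * k0 * n / N)
X = np.fft.fft(x)
# Spectrum: two spikes of height N/2, at bins k0 and N - k0 (the negative frequency)
peaks = np.flatnonzero(np.abs(X) > 1.0)
print(peaks)                     # [ 50 974]
print(np.abs(X[k0]))             # 512.0 = N/2: energy split between the two spikes
```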
General periodic signal: A $T$-periodic function with Fourier series $f(t) = \sum_{n} c_n e^{in\omega_0 t}$ (where $\omega_0 = 2\pi/T$) has Fourier Transform: $\hat f(\omega) = 2\pi \sum_{n\in\mathbb{Z}} c_n\,\delta(\omega - n\omega_0)$
The Fourier Transform of a periodic signal is a discrete sum of Dirac deltas at the harmonics - precisely the discrete spectrum of the Fourier series! This shows that Fourier series and Fourier Transform are two aspects of the same theory, unified by the distributional framework.
7.5 The Dirac Comb and Poisson Summation Formula
Definition 7.3 (Dirac Comb). The $T$-periodic Dirac comb is: $Ш_T(t) = \sum_{n\in\mathbb{Z}} \delta(t - nT)$
Theorem 7.1 (FT of the Dirac Comb). $\hat{Ш}_T(\omega) = \frac{2\pi}{T} \sum_{k\in\mathbb{Z}} \delta\!\left(\omega - \frac{2\pi k}{T}\right)$
A comb of spacing $T$ in time has a spectrum that is a comb of spacing $2\pi/T$ in frequency. This is the time-frequency duality made concrete: fine temporal sampling -> coarse frequency sampling, and vice versa.
Theorem 7.2 (Poisson Summation Formula). For $f \in \mathcal{S}(\mathbb{R})$: $\sum_{n\in\mathbb{Z}} f(n) = \sum_{k\in\mathbb{Z}} \hat f(2\pi k)$
More generally, for sampling at interval $T$: $\sum_{n\in\mathbb{Z}} f(nT) = \frac{1}{T} \sum_{k\in\mathbb{Z}} \hat f\!\left(\frac{2\pi k}{T}\right)$
Proof: The periodization $F(t) = \sum_n f(t + nT)$ is $T$-periodic, so it has a Fourier series whose coefficients are $c_k = \frac{1}{T}\hat f(2\pi k/T)$ (by computing $\frac{1}{T}\int_0^T F(t)\, e^{-2\pi ikt/T}\,dt$ and unfolding the sum over $n$). Evaluating the Fourier series at $t = 0$: $\sum_n f(nT) = F(0) = \sum_k c_k = \frac{1}{T}\sum_k \hat f(2\pi k/T)$.
The Poisson summation formula is the key to sampling theory. Shannon's sampling theorem (1949) follows directly: a signal with bandwidth $B$ (i.e., $\hat f(\omega) = 0$ for $\lvert\omega\rvert > 2\pi B$) is completely determined by its samples at rate $2B$ (the Nyquist rate). The Poisson summation formula explains both the reconstruction formula and the aliasing that occurs when undersampling.
For AI: The discrete Fourier Transform (Section 03) is the computational realization of the Poisson summation formula. The FFT computes the sampled Fourier Transform exactly, which requires the signal to be band-limited (or periodic) - any violation causes aliasing, the discrete analog of spectral leakage.
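Poisson summation can be checked numerically for a Gaussian, whose transform is known in closed form (a sketch assuming the $\omega$-convention, where $f(t) = e^{-t^2}$ has $\hat f(\omega) = \sqrt{\pi}\, e^{-\omega^2/4}$):

```python
import numpy as np

# Left side: sum_n f(n) for f(t) = exp(-t^2)
n = np.arange(-20, 21, dtype=float)
lhs = np.sum(np.exp(-n**2))

# Right side: sum_k fhat(2*pi*k) with fhat(w) = sqrt(pi) * exp(-w^2/4)
k = np.arange(-20, 21, dtype=float)
rhs = np.sum(np.sqrt(np.pi) * np.exp(-(2 * np.pi * k)**2 / 4))

print(lhs, rhs)    # both sums agree to machine precision
```

Note how lopsided the two sides are: the left sum needs several terms, while on the right the $k = 0$ term already dominates - the dual comb's wide spacing in frequency at work.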
7.6 Unifying Fourier Series and Fourier Transform
The distribution framework reveals that Fourier series and Fourier Transform are the same thing:
| Situation | Signal | Spectrum |
|---|---|---|
| General signal | Continuous, aperiodic | Continuous, aperiodic |
| Periodic signal (period $T$) | Continuous, periodic | Discrete (spacing $2\pi/T$): Fourier series |
| Bandlimited (bandwidth $B$) | Discrete (rate $2B$): sampling | Periodic (period set by the sampling rate): aliases |
| Periodic AND bandlimited | Discrete and periodic | Discrete and periodic: DFT (Section 03) |
FOURIER TRANSFORM UNIFICATION
========================================================================
Time domain Frequency domain Setting
----------- ---------------- -------
Continuous Continuous L1/L2 Fourier Transform
Continuous Discrete Fourier Series (periodic)
Discrete Continuous DTFT (sampled signal)
Discrete Discrete DFT / FFT -> see Section 20-03
All four are the same theory viewed through the distributional lens.
The Dirac comb + Poisson summation connects all four cases.
========================================================================
8. Applications in Machine Learning
8.1 FNet: Replacing Self-Attention with Fourier Transforms
Paper: Lee-Thorp, Ainslie, Eckstein, Ontanon (2022). "FNet: Mixing Tokens with Fourier Transforms." NAACL.
The idea: In a standard Transformer, the self-attention sublayer computes:
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V$
which is $O(N^2)$ in sequence length $N$. FNet replaces this with a 2D DFT:
$y = \mathrm{Re}\left[\mathcal{F}_{\mathrm{seq}}\!\left(\mathcal{F}_{\mathrm{hidden}}(x)\right)\right]$
where $\mathcal{F}_{\mathrm{seq}}$ applies the FFT along the sequence dimension and $\mathcal{F}_{\mathrm{hidden}}$ along the embedding dimension. This is $O(N\log N)$ - much faster.
Why does it work? The Fourier Transform is a global linear mixing operation: every output position depends on every input position (via the sum $X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi ink/N}$). This is similar to attention's global mixing, but unparameterized - the mixing weights are fixed Fourier basis functions rather than learned attention scores.
Results: FNet achieves 92-97% of BERT's performance on GLUE benchmarks. For tasks requiring fine-grained token interaction (e.g., extractive QA), the gap is larger; for tasks where global context suffices (e.g., sentence classification), FNet nearly matches BERT. Training speed is 7x faster on GPUs, 2x faster on TPUs.
Mathematical insight: The key observation is that the normalized DFT matrix $F$ (entries $F_{jk} = e^{-2\pi ijk/N}/\sqrt{N}$) is symmetric ($F^T = F$) and satisfies $F^*F = I$ - the DFT is unitary, just like the continuous FT by Plancherel. The real part operation ensures the output is real-valued.
Code sketch:
import numpy as np

def fnet_mixing(X):
    """
    FNet token mixing: FFT along the sequence axis, then along the
    embedding axis, keeping only the real part.
    X: (batch, seq_len, d_model) array
    Returns: real-valued mixing output of same shape
    """
    X_fft = np.fft.fft(np.fft.fft(X, axis=1), axis=2)
    # Re[...] keeps the output real-valued for downstream layers
    return np.real(X_fft)
8.2 Random Fourier Features
Paper: Rahimi & Recht (2007). "Random Features for Large-Scale Kernel Machines." NeurIPS (Best Paper Award).
The setup: Kernel methods (SVMs, Gaussian Processes) require computing $k(x_i, x_j)$ for all training pairs - an $n \times n$ Gram matrix that costs $O(n^3)$ to invert. For large $n$, this is prohibitive.
Bochner's theorem: A continuous, shift-invariant kernel $k(x, y) = k(x - y)$ is positive definite if and only if it is the Fourier Transform of a non-negative measure $\mu$: $k(x - y) = \int_{\mathbb{R}^d} e^{i\omega^T(x - y)}\, d\mu(\omega)$
(Full treatment of Bochner's theorem in Section 12-03 Kernel Methods.)
The approximation: Sample frequencies $\omega_1, \dots, \omega_D \sim p(\omega)$ (the normalized spectral density) and define the random feature map:
$z(x) = \sqrt{\tfrac{2}{D}}\left[\cos(\omega_1^T x + b_1), \dots, \cos(\omega_D^T x + b_D)\right]^T$
where $b_i \sim \mathrm{Uniform}[0, 2\pi]$ are random phase offsets.
Guarantee: By the law of large numbers: $z(x)^T z(y) \to k(x - y)$ as $D \to \infty$,
with concentration: the approximation error is $O(1/\sqrt{D})$ uniformly over compact sets, with high probability.
For popular kernels:
- RBF kernel $k(r) = e^{-\lVert r\rVert^2/(2\sigma^2)}$: $p(\omega)$ is a Gaussian $\mathcal{N}(0, \sigma^{-2} I)$
- Laplace kernel $k(r) = e^{-\lVert r\rVert_1/\sigma}$: $p(\omega)$ is a Cauchy distribution
Impact: Random Fourier Features reduce kernel methods from $O(n^3)$ training to training a linear model on $D$-dimensional features - linear in $n$ - making kernel methods scalable. The same idea appears in Performer (Choromanski et al., 2021) - an efficient attention mechanism that approximates the softmax attention kernel with random features, reducing attention to linear complexity in sequence length.
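A minimal RFF sketch for the RBF kernel with $\sigma = 1$ (dimensions, feature count, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 2000                        # input dimension, number of random features

# RBF kernel k(x, y) = exp(-||x - y||^2 / 2); its spectral density is N(0, I)
omega = rng.standard_normal((D, d))   # frequencies omega_i ~ p(omega)
b = rng.uniform(0, 2 * np.pi, size=D) # random phase offsets

def z(x):
    """Random Fourier feature map: z(x)^T z(y) approximates k(x - y)."""
    return np.sqrt(2.0 / D) * np.cos(omega @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.sum((x - y)**2) / 2)
approx = z(x) @ z(y)
print(exact, approx)                  # close; error is O(1/sqrt(D))
```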
8.3 Spectral Normalization
Paper: Miyato, Kataoka, Koyama, Yoshida (2018). "Spectral Normalization for Generative Adversarial Networks." ICLR.
The problem: Training GANs is notoriously unstable because the discriminator can overfit, causing gradient vanishing for the generator. A Lipschitz constraint on the discriminator stabilizes training.
The solution: Normalize each weight matrix by its spectral norm (the largest singular value): $\bar W = \frac{W}{\sigma(W)}$
This makes every linear layer 1-Lipschitz: $\lVert \bar W x - \bar W y\rVert \le \lVert x - y\rVert$. The composition of 1-Lipschitz layers is 1-Lipschitz - so the entire discriminator is 1-Lipschitz.
Computing $\sigma(W)$: Power iteration gives an efficient approximation. Starting from a random unit vector $u$:
$v \leftarrow \frac{W^T u}{\lVert W^T u\rVert}, \qquad u \leftarrow \frac{W v}{\lVert W v\rVert}, \qquad \sigma(W) \approx u^T W v$ - one iteration per training step suffices empirically.
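Power iteration in a few lines (a sketch; practical implementations such as the one described by Miyato et al. persist `u` across training steps and run a single iteration per step):

```python
import numpy as np

def spectral_norm(W, n_iters=200):
    """Estimate the largest singular value of W by power iteration."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return u @ W @ v          # sigma(W) ~ u^T W v

W = np.random.default_rng(1).standard_normal((64, 32))
print(spectral_norm(W), np.linalg.svd(W, compute_uv=False)[0])  # should agree
```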
Why "spectral"? The spectral norm is the square root of the largest eigenvalue of $W^T W$ - the spectrum of the operator viewed as a linear map. This connects to Fourier analysis: for a convolution layer with kernel $h$, the spectral norm is $\sup_\omega \lvert\hat h(\omega)\rvert$ - the supremum of the Fourier Transform of the filter.
For AI: Spectral normalization is now standard in GAN training. It also appears in attention normalization - normalizing the key/value projection matrices to control the Lipschitz constant of the attention map, which is critical for stable training of large models.
8.4 Fourier Neural Operator - Preview
The Fourier Neural Operator (FNO) (Li et al., 2021) learns a mapping between function spaces (e.g., PDE initial conditions -> solutions) by:
- Lifting the input to a higher-dimensional representation $v_0(x) = P(a(x))$
- Applying Fourier integral operator layers: $v_{l+1}(x) = \sigma\!\left(W v_l(x) + \mathcal{F}^{-1}\!\left[R_l \cdot \mathcal{F} v_l\right](x)\right)$
- Projecting to the output: $u(x) = Q(v_L(x))$
The key operation is $R_l$ - a learnable complex-valued tensor applied pointwise in Fourier space - a global convolution with a learned filter. Only the lowest $k_{\max}$ Fourier modes are kept (Plancherel guarantees the truncation error equals the discarded energy).
FNO achieves up to 1000x speedup over classical PDE solvers for problems like weather prediction and turbulence simulation.
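The core spectral-convolution step can be sketched in NumPy (the function name and shapes are our own illustrative choices; real FNO implementations use learned `R`, multi-dimensional FFTs, and a deep learning framework):

```python
import numpy as np

def spectral_conv1d(v, R, k_max):
    """
    FNO-style spectral convolution (a sketch, not the official implementation):
    FFT -> mix the lowest k_max modes with complex weights R -> inverse FFT.
    v: (n_points, channels) real input; R: (k_max, channels, channels) complex.
    """
    v_hat = np.fft.rfft(v, axis=0)            # (n_points//2 + 1, channels)
    out_hat = np.zeros_like(v_hat)
    for k in range(k_max):                    # retained low-frequency modes only
        out_hat[k] = R[k] @ v_hat[k]          # learned per-mode channel mixing
    return np.fft.irfft(out_hat, n=v.shape[0], axis=0)

rng = np.random.default_rng(0)
v = rng.standard_normal((64, 4))
R = rng.standard_normal((16, 4, 4)) + 1j * rng.standard_normal((16, 4, 4))
out = spectral_conv1d(v, R, k_max=16)
print(out.shape)     # (64, 4): a global (all-positions) convolution per layer
```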
Full treatment in Section 20-03 DFT and FFT - the discretized version of FNO and its implementation via FFT is covered there.
8.5 Bochner's Theorem and Shift-Invariant Kernels - Preview
Bochner's theorem (1932) characterizes all positive-definite, shift-invariant kernels as Fourier Transforms of non-negative measures: $k(x - y) = \int_{\mathbb{R}^d} e^{i\omega^T(x - y)}\, d\mu(\omega)$
This is a profound connection between Fourier analysis and kernel methods. The kernel's shape in the spatial domain determines (and is determined by) its spectral density $p(\omega)$.
Full treatment in Section 12-03 Kernel Methods - the RKHS theory, Mercer's theorem, and applications to SVMs, Gaussian Processes, and the Neural Tangent Kernel are covered there. Here we note that Bochner's theorem is the mathematical foundation for Random Fourier Features (Section 8.2).
8.6 RoPE from the Continuous FT Perspective
Rotary Position Embedding (RoPE) (Su, Lu, Pan, Meng, Luo, 2021) is now used in LLaMA-3, Mistral, Qwen, GPT-NeoX, and virtually every frontier LLM. Its mathematical foundation is the Fourier Transform on the circle group.
Setup. In attention, the score between the query at position $m$ and the key at position $n$ should depend only on the relative position $m - n$. This is a translation-invariance requirement - exactly the condition Bochner's theorem characterizes.
Construction. Pair up embedding dimensions $(x_{2j}, x_{2j+1})$ for $j = 0, \dots, d/2 - 1$. Treat each pair as a complex number $z_j = x_{2j} + i x_{2j+1}$. Apply rotation by angle $m\theta_j$ at position $m$: $f(z_j, m) = z_j\, e^{im\theta_j}$
The attention score becomes: $\langle f(q, m), f(k, n)\rangle = \mathrm{Re}\!\left[q\,\bar k\, e^{i(m - n)\theta_j}\right]$
which depends only on $m - n$ - the relative position. This is the Fourier modulation theorem: multiplying by $e^{im\theta_j}$ is a frequency shift by $\theta_j$ in the "position domain."
The frequencies $\theta_j = 10000^{-2j/d}$: The geometric spacing means each pair encodes a different "octave" of positional information. Low-index dimensions ($j$ small, $\theta_j \approx 1$) use high frequencies - sensitive to local position differences. High-index dimensions ($\theta_j \approx 1/10000$) use low frequencies - sensitive to global structure over thousands of positions. This is a multi-resolution decomposition - the Fourier Transform at multiple frequency scales.
Why the base 10000? The lowest frequency is $\theta_{d/2 - 1} \approx 1/10000$, so at the original RoPE scale each frequency completes at most one full rotation over a context on the order of 10K tokens - the maximum usefully distinguishable range. For LLaMA-3 with 128K context, the effective base must be much larger - which is why LLaMA-3.1 uses base $500{,}000$ instead of $10{,}000$.
ROPE: FOURIER TRANSFORM IN ATTENTION
========================================================================
f_q(q, m) = q e^{i m theta} <- modulation by Fourier basis e^{i m theta}
<f_q(q,m), f_k(k,n)> = Re[q k* e^{i(m-n) theta}]
^ depends only on relative position m-n
theta_j = 10000^{-2j/d}: j=0 -> ~1 (fast/local), j=d/2 -> ~1/10000 (slow/global)
Context extension:
Original RoPE: base=10000, max context ~10K tokens
LLaMA-3.1: base=500K, max context 128K tokens
LongRoPE: non-uniform rescaling up to 2M context
========================================================================
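The relative-position property can be verified directly (a minimal NumPy sketch; `rope_rotate` is our own illustrative helper, not a library function):

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Apply RoPE: treat consecutive dim pairs as complex numbers, rotate by pos*theta_j."""
    d = x.shape[-1]
    theta = base ** (-2.0 * np.arange(d // 2) / d)
    z = x[..., 0::2] + 1j * x[..., 1::2]     # pair up dimensions
    z = z * np.exp(1j * pos * theta)         # rotation = Fourier modulation
    out = np.empty_like(x)
    out[..., 0::2], out[..., 1::2] = z.real, z.imag
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)
# The score depends only on the relative position m - n:
s1 = rope_rotate(q, 100) @ rope_rotate(k, 97)   # m - n = 3
s2 = rope_rotate(q, 10) @ rope_rotate(k, 7)     # m - n = 3
print(np.isclose(s1, s2))                       # True
```

The dot product of the rotated real vectors equals $\sum_j \mathrm{Re}[q_j \bar k_j e^{i(m-n)\theta_j}]$, which is exactly the modulation identity in the box above.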
9. Common Mistakes
| # | Mistake | Why It's Wrong | Fix |
|---|---|---|---|
| 1 | Confusing the $\omega$- and $f$-conventions and getting $2\pi$ factors wrong | The $\omega$-convention has a $\frac{1}{2\pi}$ in the inverse transform; the $f$-convention does not. Mixing these adds spurious $2\pi$ factors | Always identify the convention on first use; for this curriculum use the $\omega$-convention |
| 2 | Assuming $\hat f \in L^1$ whenever $f \in L^1$ | The rect pulse is in $L^1$ but its transform (a sinc) is not. The Riemann-Lebesgue lemma only guarantees $\hat f(\omega) \to 0$, not $\hat f \in L^1$ | Check whether $f$ is smooth enough (e.g., $f \in L^1$ and Lipschitz) for $\hat f \in L^1$ to hold |
| 3 | Writing $\hat\delta = \delta$ | $\delta$ has a flat spectrum: $\hat\delta(\omega) = 1$ for all $\omega$. A "pointlike" impulse contains all frequencies equally | Compute the distributional FT: $\langle\hat\delta, \varphi\rangle = \langle\delta, \hat\varphi\rangle = \hat\varphi(0)$ |
| 4 | Thinking the uncertainty principle only limits our instruments | $\Delta t \cdot \Delta\omega \ge \frac{1}{2}$ is a theorem about functions, not about measurement devices. It holds for any $f \in L^2$, regardless of how it is measured | Accept the bound as a mathematical fact; designing around it requires fundamentally different time-frequency representations (wavelets, Section 05) |
| 5 | Applying the differentiation property to non-differentiable $f$ | $\widehat{f'}(\omega) = i\omega\hat f(\omega)$ requires a genuine (weak) derivative. For $f$ with jumps, $f'$ exists only in the distributional sense | Work in the distributional setting when $f$ has jumps; the formula still holds but $f'$ is a distribution |
| 6 | Claiming Parseval says $\int \lvert f\rvert^2\,dt = \int \lvert\hat f\rvert^2\,d\omega$ with no constant | In the $\omega$-convention, $\int \lvert f\rvert^2\,dt = \frac{1}{2\pi}\int \lvert\hat f\rvert^2\,d\omega$ - the $\frac{1}{2\pi}$ is mandatory | Track the normalization constant of your chosen convention |
| 7 | Mixing up the FT of a product vs. convolution | $\widehat{fg} = \frac{1}{2\pi}\,\hat f * \hat g$ (convolution in frequency), not $\hat f\,\hat g$. The multiplicative dual of the convolution theorem is often reversed | Memorize the duality table: convolution in time <-> product in frequency; product in time <-> convolution in frequency |
| 8 | Forgetting the $1/\lvert a\rvert$ in the scaling theorem | $\widehat{f(at)}(\omega) = \frac{1}{\lvert a\rvert}\hat f\!\left(\frac{\omega}{a}\right)$; dropping the factor violates energy conservation | Sanity-check with Parseval: compressing in time must spread and rescale the spectrum |
| 9 | Treating negative frequencies as "unphysical" | For real $f$, negative frequencies are redundant (Hermitian symmetry) but are not wrong. For complex $f$ (e.g., analytic signals), negative and positive frequencies are distinct and carry different information | Embrace negative frequencies; they simplify formulas and are essential for complex signals |
| 10 | Applying inversion pointwise instead of a.e. | Inversion holds almost everywhere (for sufficient regularity). At a jump discontinuity, the inversion integral converges to $\frac{f(t^+) + f(t^-)}{2}$ - the average of left and right limits | Account for the $\frac{1}{2}$-average behavior at discontinuities (same as Dirichlet's theorem in Section 01) |
| 11 | Using $\frac{\sin \pi x}{\pi x}$ and $\frac{\sin x}{x}$ interchangeably | There are two conventions for sinc. The normalized sinc $\mathrm{sinc}(x) = \frac{\sin \pi x}{\pi x}$ is the one for which rect and sinc form an exact transform pair; the unnormalized $\frac{\sin x}{x}$ does not | In this curriculum: $\mathrm{sinc}(x) = \frac{\sin \pi x}{\pi x}$ (normalized), consistent with the curriculum's FT convention |
| 12 | Thinking FNet replaces attention entirely | FNet is a weaker mixer than attention - it cannot learn data-dependent weights. It works well for classification but poorly for tasks requiring fine-grained token interactions (QA, coreference) | Use FNet as a fast baseline; switch to attention for tasks requiring selective, query-dependent mixing |
10. Exercises
Exercise 1 * - Computing Fourier Transforms from Definition
(a) Compute for with . Show all steps of integration. What type of function is ?
(b) Compute for a pulse of width . Express using the normalized sinc function.
(c) Verify the bound: show for your answer in (a).
(d) Confirm the Riemann-Lebesgue lemma numerically for : plot and verify it decays to 0.
Exercise 2 * - Applying Properties
(a) Given , use the time-shift property to write the Fourier Transform of .
(b) Given , find the Fourier Transform of . (Use shift + modulation.)
(c) If has , what is the FT of ? (Use both shift and scaling.)
(d) Prove: if is real and even, then is real and even.
Exercise 3 * - Parseval's Relation Numerically
(a) For , compute analytically.
(b) Compute analytically and verify .
(c) Numerically verify Parseval using the FFT: discretize on with points, apply FFT, and compare vs .
(d) Use Parseval to evaluate .
Exercise 4 ** - The Inversion Theorem
(a) For (not in after FT since ), show that the principal value integral converges to for by evaluating the integral explicitly.
(b) Show that at (the jump discontinuity), the principal value integral converges to .
(c) This mirrors Dirichlet's theorem from Section 20-01. Identify the precise analog: what plays the role of the Dirichlet kernel in the Fourier integral case?
Exercise 5 ** - Uncertainty Principle
(a) For a Gaussian normalized to unit $L^2$ norm, compute $\Delta t$ (RMS time spread) and $\Delta\omega$ (RMS frequency spread). Verify $\Delta t \cdot \Delta\omega = \frac{1}{2}$.
(b) For , compute and numerically. Is the uncertainty bound satisfied? Is it tight?
(c) A signal designer wants to transmit a pulse with duration s using bandwidth kHz. Is this possible? What does the uncertainty principle say?
(d) Show that any signal of the form $C\, e^{i\omega_0 t}\, e^{-(t - t_0)^2/(2\sigma^2)}$ achieves equality in the uncertainty principle.
Exercise 6 ** - Tempered Distributions
(a) Verify using the distributional definition .
(b) Compute (FT of the derivative of delta).
(c) Use the result from (a) to verify: .
(d) Verify the Poisson summation formula numerically for : compare with for .
Exercise 7 *** - Random Fourier Features
(a) For the RBF kernel (setting ), the spectral density is . Draw frequencies for and construct the random feature map (using cos and sin).
(b) Generate 100 random pairs with . For each pair, compute the exact kernel value and the RFF approximation . Plot the approximation vs. exact values and report RMSE.
(c) Study the approximation quality as a function of . How does RMSE scale with ? Is it consistent with the theoretical bound ?
(d) Takeaway: explain in 2 sentences why RFF matters for LLM-scale systems where kernel computation would be infeasible.
Exercise 8 *** - FNet vs Attention Experiment
(a) Implement a minimal FNet mixing layer: given , apply 2D FFT and take the real part. Implement a minimal attention mixing layer: .
(b) For a synthetic task - "copy the token from position to the output" (a task requiring precise position tracking) - construct training pairs and train both mixers with a linear head on top. Compare accuracy and training speed.
(c) For a synthetic task - "output the mean of all input tokens" (a global aggregation task) - repeat the comparison. Which mixer is better suited? Why?
(d) Takeaway: connect your results to the mathematical properties of the FT (fixed unparameterized mixing vs. learned data-dependent mixing) and explain what types of tasks each mixer is appropriate for.
11. Why This Matters for AI (2026 Perspective)
| Concept | AI Impact |
|---|---|
| Fourier Transform as unitary operator (Plancherel) | FNet (Lee-Thorp et al., 2022) uses FT as a token mixer - same information content, 7x faster than attention on GPU. The unitarity of FT ensures no information loss during mixing. |
| Convolution-multiplication duality | The theoretical basis for convolution via FFT - the reason CNNs can be trained efficiently. FlashConv (Fu et al., 2023) and Hyena (Poli et al., 2023) extend this to long-range sequence models. |
| Modulation theorem (time-shift <-> frequency-shift) | Foundation of RoPE (Su et al., 2021) - multiplying embeddings by $e^{im\theta}$ shifts the position representation. Now standard in LLaMA-3, Mistral, Qwen, GPT-NeoX. |
| Uncertainty principle | Fundamental constraint on context length extension. YaRN (Peng et al., 2023) and LongRoPE (Ding et al., 2024) must lower the frequency to extend context - the UP dictates the minimum frequency needed. |
| Bochner's theorem + FT of spectral density | Random Fourier Features (Rahimi & Recht, 2007) reduce kernel computation from cubic to linear cost in the number of samples. Performer (Choromanski et al., 2021) uses the same idea to get linear-time attention. |
| Spectral norm | Spectral normalization (Miyato et al., 2018) - dividing discriminator weights by their spectral norm enforces Lipschitz constraints. Standard in GAN training and used in diffusion model discriminators. |
| FT as basis for function space operators | Fourier Neural Operator (Li et al., 2021) - solves PDEs up to 1000x faster by learning in Fourier space. Used in climate modeling, fluid simulation, and materials science. |
| Differentiation in frequency domain ($\widehat{f'} = i\omega\hat f$) | Heat and wave equation solutions in closed form via FT. Kernel of the heat equation is a Gaussian - explains why diffusion models add Gaussian noise: it has maximal spectral support. |
| Gaussian as optimal time-frequency window | Gaussian attention kernels (Tsai et al., 2019) - replacing dot-product attention with a Gaussian kernel gives attention an explicit time-frequency interpretation. |
| Dirac comb + Poisson summation | Shannon sampling theorem - ensures that Whisper's mel-spectrogram preprocessing (16kHz sampling, 80-channel mel filterbank) captures all speech frequencies (up to 8kHz) without aliasing. |
12. Conceptual Bridge
Looking Back: From Fourier Series to Fourier Transform
The Fourier series (Section 20-01) established that periodic functions decompose into discrete harmonics. The key insight was geometric: the trigonometric functions form an orthonormal basis of $L^2$ on a period, and the Fourier coefficients are the projections of $f$ onto these basis vectors.
The Fourier Transform carries this insight to its natural conclusion. By letting the period $T \to \infty$, the discrete harmonic sum $\sum_n c_n e^{in\omega_0 t}$ becomes the continuous integral $\frac{1}{2\pi}\int \hat f(\omega)\, e^{i\omega t}\, d\omega$, and the discrete spectrum $\{c_n\}$ becomes the continuous spectrum $\hat f(\omega)$. Everything from the Fourier series has an analog:
- Orthonormality -> Plancherel's theorem (Section 5)
- Parseval's identity -> Parseval's relation (Section 5.2)
- Dirichlet's theorem on convergence -> Fourier inversion theorem (Section 4)
- Gibbs phenomenon -> spectral leakage in DFT (Section 03)
- Smoothness <-> spectral decay -> differentiation property (Section 3.5)
The distribution theory in Section 7 reveals the deepest unity: Fourier series is just the Fourier Transform restricted to periodic distributions. The Dirac comb with spacing $T$ transforms to a Dirac comb with spacing $2\pi/T$ - the very relationship between period and harmonic spacing that the Fourier series encodes.
Looking Forward: Three Branches
From the Fourier Transform, the curriculum branches in three directions:
Branch 1: Discretization -> Section 20-03 DFT and FFT. The continuous FT must be made computable. Discretizing both time and frequency leads to the DFT, and the Cooley-Tukey algorithm (1965) computes it in $O(N\log N)$ instead of $O(N^2)$. The FFT is arguably the most important algorithm in scientific computing - it is what makes spectrograms, MRI, and FNet feasible. The DFT inherits all properties of the continuous FT but adds complications: aliasing (from discretizing frequency), spectral leakage (from finite duration), and the circular convolution structure.
Branch 2: Convolution Theorem -> Section 20-04. The most important property of the FT for applications is that convolution in time equals multiplication in frequency. This converts the convolution operation into FFT-based multiplication. The convolution theorem is the mathematical foundation of CNNs, WaveNet, S4/Mamba, and Hyena - all architectures where the key operation is a learned filter applied by convolution. The full theory of LTI systems, frequency response, filter design, and the Wiener-Khinchin theorem belongs there.
Branch 3: Time-Frequency Localization -> Section 20-05 Wavelets. The FT has a fundamental limitation: $\hat f(\omega)$ tells you which frequencies are present but not when they occur. The uncertainty principle shows this is not fixable - any attempt to localize in both time and frequency is bounded by $\Delta t \cdot \Delta\omega \ge \frac{1}{2}$. Wavelets overcome this by accepting a principled tradeoff: high frequencies are analyzed at fine time resolution, low frequencies at coarse time resolution. This multi-resolution analysis (MRA) is the mathematical foundation of JPEG 2000, EEG analysis, and wavelet attention mechanisms.
POSITION IN THE FOURIER CURRICULUM
========================================================================
Section 01 FOURIER SERIES Section 02 FOURIER TRANSFORM (HERE)
--------------------- ---------------------------------
Periodic functions Aperiodic functions on R
Discrete spectrum {cn} Continuous spectrum fhat(omega)
Parseval for series Plancherel's theorem
Dirichlet convergence Inversion theorem
Gibbs phenomenon Spectral leakage (-> Section 03)
RoPE derivation Full uncertainty principle
v v
+----------------------------------------------------------------+
| Section 02 -> THREE BRANCHES |
| |
| Discretize: Convolve: Time-localize: |
| Section 03 DFT/FFT -> Section 04 Convolution -> Section 05 Wavelets |
| $O(N\log N)$ CNNs, Mamba MRA, multi-scale |
| Whisper, FNO WaveNet, Hyena Scattering networks |
+----------------------------------------------------------------+
v
Section 21 Statistical Learning Theory
(spectral learning bounds, kernel methods)
========================================================================
Key Takeaways
The Fourier Transform is characterized by three central theorems:
- Plancherel - the FT is a unitary isometry on $L^2$: it preserves energy and inner products. The Fourier Transform changes how you describe a signal, not how much information it contains.
- Uncertainty - $\Delta t \cdot \Delta\omega \ge \frac{1}{2}$: time and frequency concentration are fundamentally coupled. This is a theorem about analysis, not about physics, and it constrains every signal processing and ML system that operates in both domains simultaneously.
- Inversion - the FT is invertible: given $\hat f$, you can recover $f$ exactly. No information is lost in the Fourier domain representation - it is a complete, lossless change of basis.
Together, these theorems say: the Fourier Transform is an invertible, energy-preserving, mathematically constrained change of representation. It reveals structure (frequency content) that is invisible in the time domain, converts convolution to multiplication, and converts differentiation to algebra - making it the single most powerful analytical tool for understanding and processing signals, functions, and the weight matrices of neural networks.
<- Back to Fourier Analysis | Previous: Fourier Series <- | Next: DFT and FFT ->
Appendix A: Extended Properties and Derivations
A.1 The Duality Theorem in Full
One of the most elegant properties of the Fourier Transform is self-duality: applying the FT twice returns the time-reversed original.
Theorem A.1 (Duality). For $f \in \mathcal{S}(\mathbb{R})$: $\hat{\hat f}(t) = 2\pi\, f(-t)$
Equivalently: if $g = \hat f$, then $\hat g(\omega) = 2\pi f(-\omega)$.
Proof: By the inversion theorem, $f(t) = \frac{1}{2\pi}\int \hat f(\omega)\, e^{i\omega t}\, d\omega$, so
$\hat{\hat f}(t) = \int \hat f(\omega)\, e^{-i\omega t}\, d\omega = 2\pi \cdot \frac{1}{2\pi}\int \hat f(\omega)\, e^{i\omega(-t)}\, d\omega = 2\pi f(-t)$. (A direct computation via Fubini, justified since $f \in \mathcal{S}$, gives the same result.)
Corollary: Applying the FT four times: $\mathcal{F}^4 = (2\pi)^2\, \mathrm{Id}$, so up to normalization the Fourier Transform has order 4 as an operator.
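The discrete analog is easy to check: applying the FFT twice returns a scaled time reversal, and four times returns the original, up to a factor of $N$ per pair of transforms (a quick NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

# FFT applied twice: N * x[(-n) mod N], i.e. a scaled time reversal
xx = np.fft.fft(np.fft.fft(x))
reversed_x = np.roll(x[::-1], 1)           # x[(-n) mod N]
print(np.allclose(xx, 16 * reversed_x))    # True

# FFT applied four times: N^2 * x, the discrete version of F^4 = (2*pi)^2 * Id
xxxx = np.fft.fft(np.fft.fft(xx))
print(np.allclose(xxxx, 16**2 * x))        # True
```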
Using duality to compute new transforms:
If you know $\hat f = g$, then $\hat g(\omega) = 2\pi f(-\omega)$.
Example: We know that the FT of a rect pulse is a sinc. By duality, the FT of that sinc is $2\pi$ times the original rect (since rect is even, the reflection changes nothing).
So a sinc in time has a rectangular (bandlimited) spectrum. This is the ideal low-pass filter - a filter that passes frequencies $\lvert\omega\rvert \le \omega_c$ perfectly and rejects all others.
A.2 Analytic Signals and the Hilbert Transform
Definition A.1 (Analytic Signal). For a real signal $f(t)$, its analytic signal is: $f_a(t) = f(t) + i\,(\mathcal{H}f)(t)$
where $\mathcal{H}$ is the Hilbert Transform: $(\mathcal{H}f)(t) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty} \frac{f(\tau)}{t - \tau}\, d\tau$.
Fourier domain characterization: $\hat f_a(\omega) = 2\hat f(\omega)$ for $\omega > 0$, $\hat f(0)$ for $\omega = 0$, and $0$ for $\omega < 0$.
The analytic signal has only non-negative frequencies: it is a one-sided-spectrum signal. This is achieved by zeroing out the negative frequencies and doubling the positive ones - a frequency-domain operation.
Why this matters: The analytic signal enables the definition of instantaneous frequency - the frequency of the signal at a given instant. This is essential for frequency-modulated signals (FM radio, chirps) and is used in empirical mode decomposition for analyzing non-stationary signals like EEG and speech.
Hilbert Transform in spectral terms: $\widehat{\mathcal{H}f}(\omega) = -i\,\mathrm{sgn}(\omega)\,\hat f(\omega)$. The Hilbert Transform applies a phase shift of $-90°$ to positive frequencies and $+90°$ to negative frequencies - it is a "90-degree phase rotator."
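The frequency-domain recipe translates directly into a few lines of NumPy (essentially the construction used by `scipy.signal.hilbert`; this sketch assumes an even-length signal):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero negative frequencies, double positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0                    # DC bin unchanged
    h[1:N // 2] = 2.0             # double the positive frequencies
    h[N // 2] = 1.0               # Nyquist bin (N even) unchanged
    return np.fft.ifft(X * h)     # imaginary part = Hilbert transform of x

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.cos(2 * np.pi * 50 * t)
xa = analytic_signal(x)
# For a pure cosine, the analytic signal is exp(i*2*pi*50*t): unit envelope
print(np.allclose(np.abs(xa), 1.0))   # True
```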
A.3 The Fourier Transform on
The Fourier Transform extends naturally to $d$ dimensions. For $f \in L^1(\mathbb{R}^d)$: $\hat f(\boldsymbol\omega) = \int_{\mathbb{R}^d} f(\mathbf x)\, e^{-i\,\boldsymbol\omega\cdot\mathbf x}\, d\mathbf x$
where $\boldsymbol\omega \in \mathbb{R}^d$ is the frequency vector. All properties generalize:
- Shift: $f(\mathbf x - \mathbf a) \mapsto e^{-i\boldsymbol\omega\cdot\mathbf a}\, \hat f(\boldsymbol\omega)$
- Scaling by an invertible matrix $A$: $f(A\mathbf x) \mapsto \frac{1}{\lvert\det A\rvert}\, \hat f(A^{-T}\boldsymbol\omega)$
- Differentiation: $\partial_{x_j} f \mapsto i\omega_j\, \hat f(\boldsymbol\omega)$
- Plancherel: $\int_{\mathbb{R}^d} \lvert f\rvert^2\, d\mathbf x = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d} \lvert\hat f\rvert^2\, d\boldsymbol\omega$
For AI: The 2D Fourier Transform is used in convolutional networks for images. A 2D convolution with filter $h$ is, in Fourier space, pointwise multiplication: $\widehat{f * h} = \hat f\,\hat h$. For FNet (Section 8.1), the 2D FT is applied over both the sequence dimension and the embedding dimension.
Radial functions: For radially symmetric $f(\mathbf x) = F(\lVert\mathbf x\rVert)$, the Fourier Transform is also radial: $\hat f(\boldsymbol\omega) = G(\lVert\boldsymbol\omega\rVert)$. The RBF kernel is radially symmetric, so its spectral density is also radially symmetric - which is why sampling from an isotropic Gaussian works for RFF with the RBF kernel.
A.4 The Fourier Transform and PDEs
The differentiation property converts partial differential equations into algebraic equations in frequency space. This is the primary tool for solving linear constant-coefficient PDEs.
Example 1: Heat Equation. Solve $u_t = \alpha\, u_{xx}$, $u(x, 0) = f(x)$.
Taking the Fourier Transform in $x$: $\partial_t \hat u(\omega, t) = -\alpha\omega^2\, \hat u(\omega, t)$
This is a first-order ODE in $t$ for each fixed $\omega$: $\hat u(\omega, t) = \hat f(\omega)\, e^{-\alpha\omega^2 t}$.
Inverting: $u(x, t) = (f * G_t)(x)$
where $G_t(x) = \frac{1}{\sqrt{4\pi\alpha t}}\, e^{-x^2/(4\alpha t)}$ is the Gaussian heat kernel (a Gaussian with time-dependent width $\sqrt{2\alpha t}$).
Interpretation: Heat diffusion = convolution with a Gaussian. High frequencies decay as $e^{-\alpha\omega^2 t}$ - much faster than low frequencies. Sharp features (high frequency content) are smoothed out rapidly; broad features (low frequency) persist.
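The spectral solution translates directly to code on a periodic domain (a sketch; the square-wave initial condition and parameters are arbitrary choices):

```python
import numpy as np

# Solve u_t = alpha * u_xx on [0, 2*pi) with periodic boundaries by exact
# spectral stepping: each Fourier mode decays as exp(-alpha * w^2 * t).
N, alpha, t_final = 256, 1.0, 0.1
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u0 = np.sign(np.sin(x))                  # square wave: sharp edges = high frequencies
w = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers
u_hat = np.fft.fft(u0) * np.exp(-alpha * w**2 * t_final)
u = np.real(np.fft.ifft(u_hat))
# High-frequency content is damped: the evolved solution is much smoother than u0
print(np.max(np.abs(np.diff(u))), np.max(np.abs(np.diff(u0))))
```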
For AI - Diffusion Models: The DDPM forward process is exactly discrete heat diffusion: the signal is progressively smoothed by adding Gaussian noise (which has a flat spectrum = all frequencies). The network learns to reverse this process. The Fourier perspective explains why the early denoising steps (low noise) must recover high-frequency details while later steps (high noise) recover the coarse structure - the exact inverse of heat diffusion.
Example 2: Wave Equation. Solve $u_{tt} = c^2\, u_{xx}$, $u(x, 0) = f(x)$, $u_t(x, 0) = 0$.
FT in $x$: $\partial_t^2 \hat u(\omega, t) = -c^2\omega^2\, \hat u(\omega, t)$
Solution: $\hat u(\omega, t) = \hat f(\omega)\cos(c\omega t)$
By d'Alembert's formula (inverting): $u(x, t) = \frac{1}{2}\left[f(x - ct) + f(x + ct)\right]$
The wave equation propagates each frequency at the same speed $c$ - no dispersion. In contrast, the heat equation attenuates high frequencies faster than low frequencies - strong dissipation.
Example 3: Schrodinger Equation. The free-particle Schrodinger equation $i\,u_t = -u_{xx}$ is formally a heat equation with imaginary time. Its solution $\hat u(\omega, t) = \hat f(\omega)\, e^{-i\omega^2 t}$ shows that each frequency component oscillates (rather than decays) with frequency $\omega^2$ - quantum dispersion. The uncertainty principle $\Delta x\,\Delta p \ge \hbar/2$ (where $p = \hbar k$ by de Broglie) is the physics version of the mathematical uncertainty principle in Section 6.
A.5 Windowed Fourier Transform (STFT)
The continuous Fourier Transform gives global frequency information but no time localization: $\hat f(\omega)$ tells you whether frequency $\omega$ is present somewhere in $f$, but not when.
Definition A.2 (Short-Time Fourier Transform / Spectrogram). For a window function $g$ and signal $f$: $\mathrm{STFT}_f(\tau, \omega) = \int_{-\infty}^{\infty} f(t)\, g(t - \tau)\, e^{-i\omega t}\, dt$
This localizes the analysis around time $\tau$ using the window $g$. The spectrogram is $\lvert\mathrm{STFT}_f(\tau, \omega)\rvert^2$ - a time-frequency power map.
The uncertainty principle applies: The time resolution and frequency resolution of the STFT are determined by the window $g$:
- Narrow window -> good time resolution, poor frequency resolution
- Wide window -> poor time resolution, good frequency resolution
The Gaussian window minimizes the product $\Delta t \cdot \Delta\omega$ among all windows, making it the optimal STFT window (the resulting time-frequency representation is called the Gabor transform).
For AI - Audio Models: Whisper (Radford et al., 2022) processes audio as mel-spectrograms: it applies a 25ms STFT window (adequate time resolution for phoneme detection) with a 10ms hop, then maps the frequency axis to the mel scale (logarithmic, matching human perception), and feeds the result as a 2D "image" to its Transformer encoder. The STFT parameters are fixed - the uncertainty principle determines the time-frequency resolution tradeoff.
STFT TIME-FREQUENCY TRADEOFF (UNCERTAINTY PRINCIPLE IN PRACTICE)
========================================================================
Short window (e.g., 2ms): Long window (e.g., 50ms):
------------------------- --------------------------
Fine time resolution (~2ms) Coarse time resolution (~50ms)
Poor freq resolution (~500 Hz) Fine freq resolution (~20 Hz)
Good for: transients, clicks Good for: sustained tones, pitch
Whisper (speech): 25ms window -> resolves individual phonemes (50-100ms)
while achieving ~40Hz frequency resolution (enough for pitch/formants)
The Gaussian window achieves the minimum Δt·Δξ product - it is the
optimal window under the uncertainty principle.
========================================================================
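The tradeoff in the box above can be demonstrated with a minimal numpy sketch (illustrative parameters; a naive Hann-window STFT, not a production implementation):

```python
import numpy as np

def stft_power(signal, window_len, hop):
    """Naive STFT: slide a Hann window, FFT each frame, return |.|^2."""
    window = np.hanning(window_len)
    frames = [
        np.abs(np.fft.rfft(signal[s:s + window_len] * window)) ** 2
        for s in range(0, len(signal) - window_len + 1, hop)
    ]
    return np.array(frames)                    # shape: (frames, freq bins)

fs = 16000
t = np.arange(fs) / fs                         # 1 second of "audio"
x = np.sin(2 * np.pi * 440 * t)                # sustained 440 Hz tone
x[8000:8032] += 1.0                            # short click at t = 0.5 s

short = stft_power(x, window_len=32, hop=16)   # ~2 ms window
long_ = stft_power(x, window_len=800, hop=160) # 50 ms window

# Frequency bin width = fs / window_len: the uncertainty tradeoff in numbers
print(fs / 32)    # 500 Hz bins: tone smeared, click sharp in time
print(fs / 800)   # 20 Hz bins: tone sharp, click smeared over 50 ms
```

The short window localizes the click to within a couple of milliseconds but cannot separate 440 Hz from its neighbors; the long window does the reverse.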
A.6 Fourier Transform and Operator Theory
From the perspective of functional analysis, the Fourier Transform is a unitary operator on the Hilbert space . This abstract viewpoint connects Fourier analysis to the broader theory of self-adjoint operators.
The Fourier operator $\mathcal{F}: L^2(\mathbb{R}) \to L^2(\mathbb{R})$ is unitary: $\mathcal{F}^* \mathcal{F} = I$.
Since $\mathcal{F}^4 = I$, the eigenvalues satisfy $\lambda^4 = 1$: $\lambda \in \{1, -i, -1, i\}$.
The spectral decomposition of $L^2(\mathbb{R})$ in terms of its eigenspaces: $L^2 = H_1 \oplus H_{-i} \oplus H_{-1} \oplus H_{i}$, where each eigenspace is spanned by the Hermite functions $h_n$ with the appropriate eigenvalue, $\mathcal{F} h_n = (-i)^n h_n$.
Connection to quantum mechanics: The position operator $(Xf)(x) = x f(x)$ and momentum operator $P = \frac{1}{2\pi i}\frac{d}{dx}$ (in natural units) are both self-adjoint operators on $L^2(\mathbb{R})$, related by $\mathcal{F} P \mathcal{F}^{-1} = X$. The commutator $[X, P] = \frac{i}{2\pi} I$ implies the uncertainty principle $\Delta x\, \Delta \xi \ge \frac{1}{4\pi}$. The mathematical identity and the physics are the same theorem.
For AI: The language of operator theory is increasingly used to analyze neural networks. The weight matrix of a linear layer is a linear operator; its spectral norm is its "size" as an operator; its singular value decomposition expresses it in terms of rank-1 operators. The FT connects linear operator theory to signal processing, unifying the analysis of both classical filters and neural network layers.
Appendix B: Worked Examples and Techniques
B.1 The Fourier Transform Table - Complete Reference
The following table lists the most important Fourier Transform pairs, using the $\xi$-convention $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$. All functions are assumed to be in $L^1(\mathbb{R})$ unless otherwise noted.
| # | $f(x)$ | $\hat{f}(\xi)$ | Notes |
|---|---|---|---|
| 1 | $e^{-\pi x^2}$ | $e^{-\pi \xi^2}$ | Self-dual Gaussian |
| 2 | $e^{-a x^2}$, $a > 0$ | $\sqrt{\pi/a}\, e^{-\pi^2 \xi^2 / a}$ | Scaled Gaussian |
| 3 | $e^{-a\lvert x\rvert}$, $a > 0$ | $\dfrac{2a}{a^2 + 4\pi^2 \xi^2}$ | Double-sided exponential |
| 4 | $e^{-a x}\, u(x)$, $a > 0$ | $\dfrac{1}{a + 2\pi i \xi}$ | One-sided exponential |
| 5 | $\mathrm{rect}(x/T)$ | $T\, \mathrm{sinc}(T\xi)$ | Rectangular pulse of width $T$ |
| 6 | $2W\, \mathrm{sinc}(2Wx)$ | $\mathrm{rect}(\xi / 2W)$ | Ideal low-pass filter (by duality) |
| 7 | $\mathrm{tri}(x)$ | $\mathrm{sinc}^2(\xi)$ | Triangle function |
| 8 | $\dfrac{1}{\pi}\dfrac{a}{x^2 + a^2}$ | $e^{-2\pi a \lvert\xi\rvert}$ | Lorentzian / Cauchy |
| 9 | $\delta(x)$ | $1$ | Dirac impulse -> white spectrum |
| 10 | $1$ | $\delta(\xi)$ | Constant -> DC spike (distributional) |
| 11 | $\delta(x - x_0)$ | $e^{-2\pi i x_0 \xi}$ | Shifted impulse |
| 12 | $e^{2\pi i \xi_0 x}$ | $\delta(\xi - \xi_0)$ | Pure tone (distributional) |
| 13 | $\cos(2\pi \xi_0 x)$ | $\tfrac{1}{2}\big[\delta(\xi - \xi_0) + \delta(\xi + \xi_0)\big]$ | Cosine |
| 14 | $\sin(2\pi \xi_0 x)$ | $\tfrac{1}{2i}\big[\delta(\xi - \xi_0) - \delta(\xi + \xi_0)\big]$ | Sine |
| 15 | $\sum_n \delta(x - nT)$ | $\tfrac{1}{T}\sum_k \delta(\xi - k/T)$ | Dirac comb |
| 16 | $\mathrm{sgn}(x)$ | $\dfrac{1}{\pi i \xi}$ (p.v., distributional) | Sign function |
| 17 | $u(x)$ (unit step) | $\tfrac{1}{2}\delta(\xi) + \dfrac{1}{2\pi i \xi}$ | Heaviside step (distributional) |
| 18 | $x^n e^{-a x}\, u(x)$, $a > 0$ | $\dfrac{n!}{(a + 2\pi i \xi)^{n+1}}$ | Polynomial exponential |
B.2 Four Full Worked Examples
Worked Example 1: Triangular Pulse
The triangular function $\mathrm{tri}(x) = 1 - |x|$ for $|x| \le 1$ is zero outside $[-1, 1]$. We compute $\widehat{\mathrm{tri}}(\xi)$ directly:
$$\widehat{\mathrm{tri}}(\xi) = \int_{-1}^{0} (1 + x)\, e^{-2\pi i x \xi}\, dx + \int_{0}^{1} (1 - x)\, e^{-2\pi i x \xi}\, dx$$
For the first integral, substitute $x \to -x$:
$$\int_{-1}^{0} (1 + x)\, e^{-2\pi i x \xi}\, dx = \int_{0}^{1} (1 - x)\, e^{2\pi i x \xi}\, dx$$
Adding both integrals: $\widehat{\mathrm{tri}}(\xi) = 2\int_0^1 (1 - x)\cos(2\pi \xi x)\, dx$.
Integrating by parts:
$$2\int_0^1 (1 - x)\cos(2\pi \xi x)\, dx = 2\left[\frac{(1 - x)\sin(2\pi \xi x)}{2\pi \xi}\right]_0^1 + \frac{2}{2\pi \xi}\int_0^1 \sin(2\pi \xi x)\, dx = \frac{2\big(1 - \cos(2\pi \xi)\big)}{(2\pi \xi)^2}$$
Using $1 - \cos\theta = 2\sin^2(\theta/2)$:
$$\widehat{\mathrm{tri}}(\xi) = \frac{4\sin^2(\pi \xi)}{4\pi^2 \xi^2} = \mathrm{sinc}^2(\xi)$$
The triangular pulse's spectrum is $\mathrm{sinc}^2$. Alternatively: $\mathrm{tri} = \mathrm{rect} * \mathrm{rect}$ (convolution of two rect functions), so $\widehat{\mathrm{tri}} = \widehat{\mathrm{rect}}^{\,2} = \mathrm{sinc}^2$ by the convolution theorem. The spectrum decays faster than $\widehat{\mathrm{rect}}$ (as $1/\xi^2$ vs $1/\xi$) because $\mathrm{tri}$ is continuous (no jumps), while $\mathrm{rect}$ has jump discontinuities.
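A quick numerical check of this transform pair, using simple grid quadrature (a sketch; the grid size is chosen for convenience):

```python
import numpy as np

# Verify tri^(xi) = sinc^2(xi) in the xi-convention by direct quadrature.
x = np.linspace(-1, 1, 20001)
dx = x[1] - x[0]
tri = 1 - np.abs(x)                       # tri vanishes at the endpoints

def ft_at(xi):
    # ∫ tri(x) e^{-2πi x ξ} dx, approximated on the grid
    return np.sum(tri * np.exp(-2j * np.pi * x * xi)) * dx

for xi in [0.0, 0.3, 1.5, 2.5]:
    exact = np.sinc(xi) ** 2              # np.sinc(x) = sin(pi x)/(pi x)
    assert abs(ft_at(xi) - exact) < 1e-5
```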
Worked Example 2: Gaussian via Differential Equation
An elegant alternative proof that $\widehat{e^{-\pi x^2}} = e^{-\pi \xi^2}$ uses the fact that $f(x) = e^{-\pi x^2}$ satisfies $f'(x) = -2\pi x f(x)$.
Taking the FT of both sides and using the differentiation rules:
- LHS: $\widehat{f'}(\xi) = 2\pi i \xi\, \hat{f}(\xi)$
- RHS: for $\widehat{-2\pi x f}$, use $\widehat{x f}(\xi) = \dfrac{i}{2\pi}\dfrac{d}{d\xi}\hat{f}(\xi)$
So $\widehat{-2\pi x f}(\xi) = -2\pi \cdot \frac{i}{2\pi}\hat{f}'(\xi) = -i\, \hat{f}'(\xi)$.
This gives the ODE $2\pi i \xi\, \hat{f}(\xi) = -i\, \hat{f}'(\xi)$, i.e. $\hat{f}'(\xi) = -2\pi \xi\, \hat{f}(\xi)$, with solution $\hat{f}(\xi) = \hat{f}(0)\, e^{-\pi \xi^2}$.
To find $\hat{f}(0)$: $\hat{f}(0) = \int_{-\infty}^{\infty} e^{-\pi x^2}\, dx = 1$ (Gaussian integral). So $\hat{f}(\xi) = e^{-\pi \xi^2}$.
This proof is superior for teaching: it shows that the Gaussian's self-duality follows from the fact that $e^{-\pi x^2}$ satisfies the simplest possible first-order linear ODE, and the FT converts this ODE into the same ODE in frequency space.
Worked Example 3: FT of $e^{-a|x|}$
We split the integral at $x = 0$. For the integral over $x > 0$:
$$\int_0^{\infty} e^{-a x}\, e^{-2\pi i x \xi}\, dx = \frac{1}{a + 2\pi i \xi}$$
For the integral over $x < 0$, substitute $x \to -x$:
$$\int_{-\infty}^{0} e^{a x}\, e^{-2\pi i x \xi}\, dx = \int_0^{\infty} e^{-a x}\, e^{2\pi i x \xi}\, dx = \frac{1}{a - 2\pi i \xi}$$
Therefore:
$$\widehat{e^{-a|x|}}(\xi) = \frac{1}{a + 2\pi i \xi} + \frac{1}{a - 2\pi i \xi} = \frac{2a}{a^2 + 4\pi^2 \xi^2}$$
This is a Lorentzian (Cauchy-type) profile centered at $\xi = 0$ with half-width $a/(2\pi)$. As $a \to 0^+$, the time-domain signal approaches the constant $1$, while the right-hand side - which integrates to $1$ for every $a$ and is a rescaled version of the standard Poisson kernel $\frac{1}{\pi}\frac{\epsilon}{\xi^2 + \epsilon^2}$ - concentrates at the origin. The limit confirms $\hat{1} = \delta$ in the distributional sense.
Worked Example 4: The FT of the Heat Equation Kernel
The heat kernel at time $t$ is $K_t(x) = \frac{1}{\sqrt{4\pi t}}\, e^{-x^2/(4t)}$. We verify $\hat{K}_t(\xi) = e^{-4\pi^2 \xi^2 t}$ using the scaling theorem.
We know $\widehat{e^{-\pi x^2}} = e^{-\pi \xi^2}$. Write $g(x) = e^{-\pi x^2}$ and set $\lambda = \sqrt{4\pi t}$, so that
$$K_t(x) = \frac{1}{\lambda}\, g(x/\lambda), \qquad \text{since } g(x/\lambda) = e^{-\pi x^2/(4\pi t)} = e^{-x^2/(4t)}.$$
The scaling theorem in the $\xi$-convention gives $\widehat{g(x/\lambda)}(\xi) = \lambda\, \hat{g}(\lambda \xi)$ for $\lambda > 0$. Therefore:
$$\hat{K}_t(\xi) = \frac{1}{\lambda} \cdot \lambda\, \hat{g}(\lambda \xi) = e^{-\pi (4\pi t)\, \xi^2} = e^{-4\pi^2 \xi^2 t}$$
This confirms that in the $\xi$-convention, the heat kernel's FT is $e^{-4\pi^2 \xi^2 t}$ - exactly the solution we derived from the heat equation in Section A.4.
B.3 Decay Rates and Smoothness: The Complete Picture
The relationship between smoothness of and decay of is one of the deepest qualitative results in Fourier analysis. The complete picture:
Theorem B.1 (Smoothness Spectral Decay). Let $f \in L^1(\mathbb{R})$.
| Condition on $f$ | Decay of $\hat{f}(\xi)$ as $\lvert\xi\rvert \to \infty$ |
|---|---|
| $f$ has a jump discontinuity | $\sim C/\lvert\xi\rvert$ |
| $f$ continuous with $f'$ piecewise continuous (one derivative) | $\sim C/\lvert\xi\rvert^2$ |
| $f \in C^k$, $f^{(k)} \in L^1$ | $O(1/\lvert\xi\rvert^k)$ |
| $f \in C^\infty$ with all derivatives integrable (e.g. Schwartz) | Faster than any polynomial: $O(\lvert\xi\rvert^{-N})$ for all $N$ |
| $f$ analytic (holomorphic in a strip $\lvert\mathrm{Im}\, z\rvert < a$) | Exponential decay: $O(e^{-2\pi a'\lvert\xi\rvert})$ for every $a' < a$ |
Proof sketch: For the $k$-derivative case: $\widehat{f^{(k)}}(\xi) = (2\pi i \xi)^k \hat{f}(\xi)$, so $|\hat{f}(\xi)| \le \|f^{(k)}\|_{L^1} / (2\pi |\xi|)^k$. For the analytic case, shift the contour of integration to $\mathrm{Im}\, z = \mp a'$ (for $\xi \gtrless 0$), gaining an exponential factor $e^{-2\pi a' |\xi|}$.
Applications:
- Signal compression: Smooth signals (like audio with no sharp transients) have rapidly decaying spectra -> can be compressed by truncating high frequencies with small error.
- Neural network generalization: Smooth target functions (natural image statistics) have rapidly decaying spectra -> a network with limited bandwidth can approximate them well. This connects to the spectral bias of neural networks (Rahaman et al., 2019): networks preferentially learn low-frequency components first.
- Discretization error: The aliasing error when sampling a signal at rate $f_s$ is bounded by the spectral tail $\int_{|\xi| > f_s/2} |\hat{f}(\xi)|\, d\xi$ - smoother signals alias less.
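The smoothness-decay relationship is easy to observe numerically. This sketch (with illustrative signals of our choosing on a periodic grid) compares the high-frequency energy fraction of a step, a triangle, and a Gaussian:

```python
import numpy as np

# Smoothness vs spectral decay: fraction of spectral energy above a cutoff
# for a step (jump), a triangle (continuous, kinked), and a Gaussian (smooth).
N = 4096
x = np.linspace(-0.5, 0.5, N, endpoint=False)

step = (np.abs(x) < 0.25).astype(float)       # jump      -> |f^| ~ 1/|k|
tri = np.maximum(0.0, 1 - 4 * np.abs(x))      # one kink  -> |f^| ~ 1/k^2
gauss = np.exp(-200 * x**2)                   # smooth    -> superpolynomial decay

def hf_fraction(f, cutoff=100):
    power = np.abs(np.fft.rfft(f)) ** 2
    return power[cutoff:].sum() / power.sum()

print(hf_fraction(step), hf_fraction(tri), hf_fraction(gauss))
# step >> tri >> gauss: smoother signals carry less high-frequency energy
```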
B.4 The Fourier Transform in Different Function Spaces
A complete picture of where the FT lives:
DOMAIN OF DEFINITION - FOURIER TRANSFORM
========================================================================
L1(R)    ->  C0(R)    (bounded, continuous, vanishes at infinity)
              ||f^||_inf <= ||f||_1
L2(R)    ->  L2(R)    (Plancherel: FT is unitary)
              ||f^||_2 = ||f||_2
L1 ∩ L2  ->  L2       (both apply; easiest to work with)
S(R)     ->  S(R)     (Schwartz space: FT preserves rapidly
 (bijection)           decreasing smooth functions)
S'(R)    ->  S'(R)    (tempered distributions: largest domain;
 (bijection)           includes delta, constants, periodic signals)
Key inclusions: S ⊂ L1 ∩ L2 ⊂ L1 ⊂ S'
                S ⊂ L1 ∩ L2 ⊂ L2 ⊂ S'
For practical computation: work in S' (distributions),
which unifies all cases under one theory.
========================================================================
B.5 Numerical Computation of the Fourier Transform
In practice, the Fourier Transform is computed numerically via the DFT/FFT (Section 20-03). Here we note the key relationships.
Approximation scheme. To numerically approximate $\hat{f}(\xi)$ for $f \in L^1$:
- Truncate the signal to $[-T, T]$ for large enough $T$ (error: $\int_{|x| > T} |f(x)|\, dx$)
- Sample at spacing $\Delta x = 2T/N$ to get $N$ equally-spaced samples $f(x_n)$, $x_n = -T + n\,\Delta x$
- Apply the DFT: $F_k = \sum_{n=0}^{N-1} f(x_n)\, e^{-2\pi i k n / N}$
- Then $\Delta x\, (-1)^k F_k$ approximates $\hat{f}(\xi_k)$ at $\xi_k = k/(2T)$ (the $(-1)^k$ phase accounts for the grid origin at $-T$).
Sources of error:
- Aliasing: Undersampling causes high-frequency components to fold back - reduced by sampling faster (higher $f_s$)
- Leakage: Truncation at $\pm T$ multiplies $f$ by a rect function -> convolves the spectrum with a sinc -> spectral leakage. Reduced by windowing (multiplying by a smooth window before the DFT)
- Resolution: The frequency resolution is $\Delta \xi = 1/(2T)$ - the smallest frequency spacing distinguishable in the DFT. For finer resolution, use longer time windows.
All three error types are direct consequences of the Fourier Transform properties developed in this section. Full treatment in Section 20-03.
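The truncate-sample-DFT recipe above can be sketched as follows, using the Gaussian (whose exact transform we know) as the test signal:

```python
import numpy as np

# Approximate the continuous FT of f(x) = e^{-pi x^2} (exact FT: e^{-pi xi^2})
# by truncation + sampling + DFT, as in the scheme above.
T, N = 8.0, 1024
dx = 2 * T / N
x = -T + dx * np.arange(N)                  # grid on [-T, T)
f = np.exp(-np.pi * x**2)

F = np.fft.fft(f)                           # F_k = sum_n f(x_n) e^{-2πi kn/N}
k = np.arange(N)
xi = k / (2 * T)                            # frequency grid, resolution 1/(2T)
f_hat = dx * (-1.0) ** k * F                # (-1)^k compensates the -T origin

exact = np.exp(-np.pi * xi[:20] ** 2)
assert np.max(np.abs(f_hat[:20] - exact)) < 1e-10
```

Truncation and aliasing errors are both on the order of $e^{-\pi \cdot 64}$ here, so the only visible error is floating-point roundoff.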
Appendix C: Connections, Extensions, and Deep Dives
C.1 The Fourier Transform on Locally Compact Abelian Groups
The Fourier Transform generalizes far beyond $\mathbb{R}$. For any locally compact abelian (LCA) group $G$ with Haar measure $\mu$, there is a Fourier Transform $\hat{f}(\chi) = \int_G f(g)\, \overline{\chi(g)}\, d\mu(g)$, where $\chi$ ranges over the Pontryagin dual group $\hat{G}$ (the group of continuous homomorphisms $\chi: G \to \mathbb{T}$).
| Group $G$ | Dual $\hat{G}$ | Fourier Transform | Name |
|---|---|---|---|
| $\mathbb{R}$ | $\mathbb{R}$ | $\int_{\mathbb{R}} f(x)\, e^{-2\pi i x \xi}\, dx$ | Fourier integral |
| $\mathbb{Z}$ | $\mathbb{T}$ | $\sum_{n} f[n]\, e^{-i\omega n}$ | Discrete-time FT (DTFT) |
| $\mathbb{T}$ | $\mathbb{Z}$ | $\int_0^1 f(x)\, e^{-2\pi i n x}\, dx$ | Fourier series |
| $\mathbb{Z}/N\mathbb{Z}$ | $\mathbb{Z}/N\mathbb{Z}$ | $\sum_{n=0}^{N-1} f[n]\, e^{-2\pi i k n / N}$ | Discrete Fourier Transform (DFT) |
| $\mathbb{R}^n$ | $\mathbb{R}^n$ | $\int_{\mathbb{R}^n} f(x)\, e^{-2\pi i \langle x, \xi\rangle}\, dx$ | Multidimensional FT |
Plancherel's theorem, the convolution theorem, and the inversion theorem all hold in this general setting. The unification of Fourier series and Fourier Transform as special cases of the same construction on different groups is one of the most elegant results in abstract harmonic analysis.
For AI: The group structure is what makes RoPE work. The circle group $\mathbb{T}$ (the group of complex numbers of modulus 1 under multiplication) is a compact abelian group. Multiplying by $e^{i m \theta}$ is the group action of rotation by $m\theta$. The relative-position property of RoPE is the statement that the action is by group translations: the inner product only sees the group element corresponding to relative position, since rotations by $m\theta$ and $n\theta$ combine to a rotation by $(m - n)\theta$.
C.2 Non-Commutative Fourier Analysis
For non-abelian groups (e.g., $SO(3)$, the permutation groups $S_n$), Pontryagin duality breaks down, but a Fourier Transform still exists, with irreducible unitary representations playing the role of frequencies.
Graph Fourier Transform: For a graph with Laplacian $L = D - A$ (where $D$ is the degree matrix and $A$ the adjacency matrix), the eigenvectors of $L$ play the role of complex exponentials. The graph Fourier Transform is $\hat{x} = U^T x$, where $U$ is the matrix of eigenvectors. The eigenvalues $\lambda_i$ represent "graph frequencies" - how rapidly the corresponding eigenvector oscillates across the graph.
This is the foundation of spectral graph neural networks (Bruna et al., 2014; Defferrard et al., 2016 ChebNet; Kipf & Welling, 2017 GCN). The spectral GNN applies a learned filter in graph Fourier space: $y = U\, g_\theta(\Lambda)\, U^T x$.
Full treatment in Section 11-04 Spectral Graph Theory.
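A toy sketch of the graph Fourier transform on a path graph (dense `numpy` eigendecomposition stands in for what a real spectral GNN pipeline computes at scale):

```python
import numpy as np

# Graph Fourier Transform on a path graph: Laplacian eigenvectors play the
# role of complex exponentials; eigenvalues are the "graph frequencies".
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0      # path edges
L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian L = D - A

lam, U = np.linalg.eigh(L)               # lam ascending; U orthonormal

x = np.arange(n, dtype=float)            # a smooth ramp signal on the graph
x_hat = U.T @ x                          # graph Fourier transform
x_rec = U @ x_hat                        # inverse GFT

assert abs(lam[0]) < 1e-10               # frequency 0 = constant eigenvector
assert np.allclose(x_rec, x)             # U orthogonal: perfect inversion
# A smooth signal concentrates its energy in the low graph frequencies:
assert (x_hat[:2] ** 2).sum() > (x_hat[2:] ** 2).sum()
```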
C.3 The Fourier Transform and Learning Theory
The Fourier Transform connects deeply to several foundational results in learning theory:
Spectral Bias (Rahaman et al., 2019): Neural networks trained by gradient descent learn the Fourier components of the target function in order of increasing frequency. This is because the NTK (Neural Tangent Kernel) has eigenvalues that decay rapidly with frequency - the network converges faster on low-frequency components. Consequence: networks generalize well on smooth targets and may overfit on high-frequency features.
Frequency Principle: Let $f^*$ be the target function and $f_t$ the network at training step $t$. Then $|\hat{f}_t(\xi) - \hat{f}^*(\xi)|$ decreases at a rate governed by $\lambda(\xi)$ (the NTK eigenvalue at frequency $\xi$), and $\lambda(\xi)$ decreases with $|\xi|$. High-frequency components therefore converge last.
Implications:
- Data augmentation that adds high-frequency perturbations (e.g., random cropping, noise injection) improves generalization precisely because it forces the network to represent these frequencies.
- Adversarial examples often correspond to high-frequency perturbations of inputs that fool the network but are invisible to humans - the network hasn't fully learned the high-frequency structure.
- Neural networks with spectral inductive bias (e.g., SIREN - sinusoidal activation functions) can represent high-frequency targets more efficiently.
Rademacher Complexity and Fourier Norms: The generalization error of a hypothesis class can be bounded via the Fourier norm $\|f\|_{\mathcal{F}} = \int |\hat{f}(\xi)|\, d\xi$. Functions with small Fourier norm have low complexity and generalize well. This connects to the theory of Barron functions (Barron, 1993): functions with $C_f = \int |\xi|\, |\hat{f}(\xi)|\, d\xi < \infty$ can be approximated to $L^2$ accuracy $O(C_f / \sqrt{n})$ by a two-layer neural network with $n$ neurons.
C.4 Phase Retrieval - When Magnitude Alone Is Insufficient
The FT consists of a magnitude $|\hat{f}(\xi)|$ and a phase $\arg \hat{f}(\xi)$. Phase retrieval asks: can you recover $f$ from $|\hat{f}|$ alone (without the phase)?
In general, the answer is no - many different signals can have the same magnitude spectrum. For example, $f(x)$ and its translate $f(x - x_0)$ have the same magnitude spectrum but may look entirely different. More strikingly, phase-randomized images (keeping $|\hat{f}|$ but using random phases) look like noise - the phase carries most of the perceptual information.
Phase is more important than magnitude for perception: a classic experiment takes image $A$ and image $B$, swaps their phase spectra (keep $A$'s magnitude, use $B$'s phase), and reconstructs. The result looks like image $B$ - demonstrating that phase dominates perception.
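The swap experiment is easy to reproduce numerically; here two random smooth fields stand in for images (an illustrative sketch):

```python
import numpy as np

# Combine the magnitude spectrum of A with the phase spectrum of B.
# The reconstruction correlates with B (the phase donor), not with A.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)
B = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)

FA, FB = np.fft.fft2(A), np.fft.fft2(B)
hybrid = np.fft.ifft2(np.abs(FA) * np.exp(1j * np.angle(FB))).real

def corr(u, v):
    u, v = u.ravel() - u.mean(), v.ravel() - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(corr(hybrid, B), corr(hybrid, A))   # the phase donor wins decisively
```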
For AI: This has implications for:
- Adversarial robustness: Many adversarial attacks perturb the high-frequency phase of input images - imperceptible to humans but highly disruptive to CNNs.
- Generative models: Diffusion models and GANs must learn both the magnitude and phase of the data distribution's spectrum to generate perceptually realistic images.
- FNet limitation: FNet takes the real part of the FT output, effectively discarding half the phase information. This is one reason FNet performs worse than attention on tasks requiring precise token interactions.
C.5 Fast Algorithms Beyond FFT
The FFT computes the $N$-point DFT in $O(N \log N)$ operations, a vast improvement over the naive $O(N^2)$. Several modern algorithms push further:
Sparse FFT: If the signal has only $k$ significant frequencies, the sparse FFT (Hassanieh et al., 2012; adopted in MIT SFFT) computes the DFT in roughly $O(k \log n)$ time (up to further log factors) - sublinear in $n$. Applications: energy-efficient spectrum sensing, MRI acceleration.
Non-Uniform FFT (NUFFT): The standard FFT requires uniformly-spaced samples. The NUFFT handles non-uniform grids with $O(n \log n)$ complexity, using oversampling and interpolation. Used in MRI reconstruction (non-Cartesian k-space trajectories) and radio astronomy.
Butterfly factorization (Monarch matrices): The FFT matrix can be written as a product of $O(\log N)$ sparse matrices (butterfly factors), each applicable in $O(N)$ operations. Monarch matrices (Dao et al., 2022) generalize this structure for learned efficient linear layers - they parameterize a broad class of structured matrices with $O(N^{3/2})$ parameters that can be applied in $O(N^{3/2})$ time, interpolating between dense matrices ($O(N^2)$) and diagonal ones ($O(N)$).
The FFT algorithm and its applications - Cooley-Tukey, Bluestein, Rader - are covered in full in Section 20-03 DFT and FFT.
C.6 Summary: The Central Theorems of the Fourier Transform
This section has developed the following central results, in order of importance:
1. Definition and well-posedness (Section 2):
For $f \in L^1(\mathbb{R})$, $\hat{f}(\xi) = \int f(x)\, e^{-2\pi i x \xi}\, dx$ is well-defined, bounded, and continuous. Riemann-Lebesgue: $\hat{f}(\xi) \to 0$ as $|\xi| \to \infty$.
2. Properties (Section 3): Linearity, shift, modulation, scaling, time reversal, differentiation, convolution-multiplication duality. The Master Properties Table encodes all relationships between time and frequency operations.
3. Inversion (Section 4): Under sufficient conditions, $f(x) = \int \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi$. The Gauss-Weierstrass regularization gives a constructive proof.
4. Plancherel (Section 5): $\|f\|_2 = \|\hat{f}\|_2$. The FT is a unitary isometry on $L^2$ - energy is preserved.
5. Uncertainty Principle (Section 6): $\Delta x \cdot \Delta \xi \ge \frac{1}{4\pi}$, with equality iff $f$ is a Gaussian. Time-frequency concentration is fundamentally limited.
6. Distribution extension (Section 7): The FT extends to tempered distributions, giving $\hat{\delta} = 1$, $\hat{1} = \delta$, $\widehat{e^{2\pi i \xi_0 x}} = \delta(\xi - \xi_0)$, and the Dirac comb formula. Poisson summation unifies Fourier series and Fourier Transform.
These six results form a complete, self-consistent theory that has shaped mathematics, physics, engineering, and AI for two centuries - and shows no sign of becoming less central.
Appendix D: Machine Learning Deep Dives
D.1 FNet Architecture - Full Mathematical Analysis
We give a complete mathematical analysis of the FNet architecture (Lee-Thorp et al., 2022) using the formalism developed in this section.
Standard Transformer token mixing (attention):
Given token matrix $X \in \mathbb{R}^{n \times d}$ (sequence length $n$, embedding dimension $d$):
$$\mathrm{Attn}(X) = \mathrm{softmax}\!\left(\frac{Q K^T}{\sqrt{d}}\right) V, \qquad Q = X W_Q,\; K = X W_K,\; V = X W_V$$
This is $O(n^2)$ in memory and $O(n^2 d)$ in time (the attention matrix dominates for large $n$). The attention matrix is data-dependent: each row is a different weighted sum of the value vectors, where the weights depend on the specific input.
FNet token mixing:
$$Y = \Re\big(\mathcal{F}_{\mathrm{seq}}(\mathcal{F}_{\mathrm{hidden}}(X))\big)$$
where $\mathcal{F}_{\mathrm{seq}}$ applies the DFT along the sequence dimension (rows) and $\mathcal{F}_{\mathrm{hidden}}$ along the embedding dimension (columns).
The DFT as a matrix multiplication:
The 1D DFT can be written as $\hat{x} = F x$, where $F_{jk} = \frac{1}{\sqrt{N}}\omega^{jk}$ with $\omega = e^{-2\pi i / N}$ (unitary DFT matrix). The 2D FNet mixing is:
$$Y = \Re(F_n X F_d)$$
where $F_n \in \mathbb{C}^{n \times n}$ and $F_d \in \mathbb{C}^{d \times d}$ are DFT matrices.
Key properties:
- Unitarity: $F_n^* F_n = I$ and $F_d^* F_d = I$. The FNet mixing is an isometry (before taking the real part): $\|F_n X F_d\|_F = \|X\|_F$ by Plancherel.
- Global mixing: Each output position depends on all input values. FNet is a global mixer - exactly like attention.
- No parameters: Unlike attention, FNet has zero parameters in the mixing layer ($F_n$ and $F_d$ are fixed). All learning happens in the feed-forward network after mixing.
- Complexity: $O(n d \log(n d))$ via the 2D FFT, vs $O(n^2 d)$ for attention. For BERT-scale settings ($n = 512$, $d = 768$), the mixing layer is orders of magnitude cheaper.
Why take the real part? The DFT output is complex even for real inputs. Taking $\Re(\cdot)$ projects back to real numbers (required since the subsequent feed-forward layers expect real inputs). This discards the imaginary part - half the phase information. This is one reason FNet is weaker than attention: it cannot separately process magnitude and phase information.
Theoretical analysis: The FNet paper shows that attention with random (untrained) weights performs similarly to FNet. This suggests the attention mechanism's power comes primarily from its (trained) data-dependent mixing, not merely from being a global mixer. FNet is the "global but data-independent" baseline.
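A minimal numpy sketch of the FNet mixing step (illustrative toy sizes; uses the unnormalized DFT, matching `np.fft`, rather than the unitary matrices above):

```python
import numpy as np

# FNet-style token mixing: 2D DFT over (sequence, hidden), keep the real
# part. Zero parameters in the mixer; global; O(nd log(nd)) via FFT.
rng = np.random.default_rng(0)
n, d = 16, 8                              # toy sequence length, hidden size
X = rng.standard_normal((n, d))

def fnet_mixing(X):
    return np.fft.fft(np.fft.fft(X, axis=-1), axis=0).real

Y = fnet_mixing(X)

# Equivalent matrix form Y = Re(F_n X F_d), with unnormalized DFT matrices
def dft_matrix(m):
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    return np.exp(-2j * np.pi * j * k / m)

assert np.allclose(Y, (dft_matrix(n) @ X @ dft_matrix(d)).real)

# Before the real part, the mixing is an isometry up to the sqrt(n*d)
# normalization of the unnormalized DFT (Plancherel):
Z = np.fft.fft(np.fft.fft(X, axis=-1), axis=0)
assert np.isclose(np.linalg.norm(Z), np.sqrt(n * d) * np.linalg.norm(X))
```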
When to use FNet vs Attention:
| Task type | FNet | Attention | Reason |
|---|---|---|---|
| Sentence classification | ~92-97% of BERT | baseline (100%) | Global mixing is sufficient |
| Named entity recognition | ~90% | baseline (100%) | Moderate position sensitivity |
| Extractive QA (SQuAD) | ~80% | baseline (100%) | Requires precise token matching |
| Speed-critical inference | substantially faster (the paper reports ~70-80% faster training) | baseline | FNet wins when the quality tradeoff is acceptable |
D.2 Random Fourier Features - Implementation Details
Setup for the RBF kernel: $k(x, y) = e^{-\|x - y\|^2/(2\sigma^2)}$.
The spectral density is Gaussian: $p(\omega) = \mathcal{N}(0, \sigma^{-2} I_d)$.
(With $\sigma = 1$: $p(\omega) = (2\pi)^{-d/2}\, e^{-\|\omega\|^2/2}$, which is the standard form.)
Feature map construction (two variants):
Random Fourier Features (RFF): Sample $\omega_i \sim p$, $b_i \sim \mathrm{Unif}[0, 2\pi]$:
$$\phi_i(x) = \sqrt{2/D}\,\cos(\omega_i^T x + b_i), \qquad i = 1, \dots, D$$
Random Fourier Features (paired, no bias): Sample $\omega_i \sim p$:
$$\phi(x) = \frac{1}{\sqrt{D/2}}\big[\cos(\omega_i^T x),\; \sin(\omega_i^T x)\big]_{i=1}^{D/2}$$
Both give $\mathbb{E}[\phi(x)^T \phi(y)] = k(x, y)$. The paired version has slightly lower variance.
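A sketch of the paired estimator for the $\sigma = 1$ RBF kernel (toy dimensions chosen for illustration):

```python
import numpy as np

# Paired RFF for k(x, y) = exp(-||x - y||^2 / 2): frequencies sampled from
# the spectral density N(0, I), cos/sin features, no random bias.
rng = np.random.default_rng(0)
d, D = 5, 4000                       # input dim; D features = D/2 frequencies
W = rng.standard_normal((D // 2, d)) # omega_i ~ N(0, I_d)

def phi(x):
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(D // 2)

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx = phi(x) @ phi(y)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
assert abs(approx - exact) < 0.1     # Monte Carlo error ~ 1/sqrt(D)
```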
Concentration bound (Rahimi & Recht, 2007): for a compact set $\mathcal{M} \subset \mathbb{R}^d$,
$$\Pr\Big[\sup_{x, y \in \mathcal{M}} \big|\phi(x)^T \phi(y) - k(x, y)\big| \ge \epsilon\Big] \le O\!\left(\Big(\tfrac{\sigma_p\, \mathrm{diam}(\mathcal{M})}{\epsilon}\Big)^2 e^{-D\epsilon^2/(4(d+2))}\right)$$
where $\mathrm{diam}(\mathcal{M})$ is the diameter of $\mathcal{M}$ and $\sigma_p^2 = \mathbb{E}_p\|\omega\|^2$.
Variance reduction via Quasi-Monte Carlo: Instead of i.i.d. random $\omega_i$, use quasi-random sequences (Sobol, Halton) to improve coverage of frequency space. This gives error $O((\log D)^d / D)$ vs $O(1/\sqrt{D})$ for i.i.d. sampling - a significant improvement in practice.
Orthogonal Random Features (ORF) (Yu et al., 2016): Sample the rows of $W$ from a random orthogonal matrix (suitably rescaled); the structured SORF variant uses products of the form $HD$, where $H$ is a Hadamard matrix, the $D$ factors are diagonal with random $\pm 1$ entries, and a diagonal scaling restores the correct row norms. This gives:
- Same unbiasedness: $\mathbb{E}[\phi(x)^T \phi(y)] = k(x, y)$
- Lower variance: the same $O(1/D)$ rate as i.i.d. RFF, but with strictly better constants
- Faster computation: $O(D \log d)$ using the fast Hadamard transform
Connection to Performer: Performer attention (Choromanski et al., 2021) approximates $e^{q^T k}$ (the unnormalized attention softmax kernel) using positive orthogonal random features:
$$e^{q^T k} = \mathbb{E}_{\omega \sim \mathcal{N}(0, I)}\Big[e^{\omega^T q - \|q\|^2/2}\; e^{\omega^T k - \|k\|^2/2}\Big]$$
This reduces attention from $O(n^2 d)$ to $O(n r d)$ (with $r$ random features) - linear in sequence length $n$.
D.3 Spectral Analysis of Neural Network Weight Matrices
The spectral norm and the spectral distribution of weight matrices encode important information about the training state of a neural network.
Heavy-tailed self-regularization (Martin & Mahoney, 2021): In well-trained neural networks, the eigenvalue distribution of $W^T W$ follows a power law, $\rho(\lambda) \sim \lambda^{-\alpha}$, for some exponent $\alpha$. Under-trained or over-regularized networks have bulk (Marchenko-Pastur) distributions. This is detected by fitting a power law to the sorted eigenvalues.
WeightWatcher: The open-source tool weightwatcher analyzes the spectral distribution of weight matrices in PyTorch/TensorFlow models and reports the fitted $\alpha$ values as a proxy for model quality - without needing test data. Well-trained models tend to have $\alpha \approx 2$-$4$; poorly trained models have much larger $\alpha$ or non-power-law distributions.
Spectral norm and Lipschitz constant: For a multi-layer network $f = \sigma_L \circ W_L \circ \cdots \circ \sigma_1 \circ W_1$, the Lipschitz constant satisfies:
$$\mathrm{Lip}(f) \le \prod_{l=1}^{L} \|W_l\|_2 \cdot \mathrm{Lip}(\sigma_l)$$
For 1-Lipschitz activations (ReLU, tanh): $\mathrm{Lip}(f) \le \prod_l \|W_l\|_2$.
Spectral normalization (Section 8.3) sets each $\|W_l\|_2 = 1$, making the entire network 1-Lipschitz.
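The spectral norm itself is cheap to estimate by power iteration - the trick spectral normalization uses in practice. A sketch (dense numpy weights; real implementations carry `u` across training steps instead of iterating to convergence):

```python
import numpy as np

# Estimate sigma_max(W) by power iteration, then rescale W so that its
# spectral norm is 1 (the spectral normalization step).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))

u = rng.standard_normal(64)
for _ in range(500):                 # alternate W^T / W matvecs
    v = W.T @ u
    v /= np.linalg.norm(v)
    u = W @ v
    u /= np.linalg.norm(u)
sigma = u @ W @ v                    # estimated largest singular value

W_sn = W / sigma                     # spectrally normalized weight

assert abs(sigma - np.linalg.svd(W, compute_uv=False)[0]) < 1e-6
assert abs(np.linalg.svd(W_sn, compute_uv=False)[0] - 1.0) < 1e-6
```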
For LLMs: Analyzing the spectral properties of attention weight matrices ($W_Q$, $W_K$, $W_V$, $W_O$) is an active area of mechanistic interpretability research. Low-rank structure in these matrices (small effective rank = few large singular values) indicates specialized computation. Anthropic's interpretability work has found "singular value signatures" of specific computational patterns in attention heads.
D.4 The Fourier Perspective on Attention
Attention and the Fourier Transform are both global linear mixing operations on sequences. Understanding their relationship clarifies when each is appropriate.
Attention as a data-dependent filter: The output of attention at position $i$ can be written as:
$$\mathrm{out}_i = \sum_j \alpha_{ij}\, v_j, \qquad \alpha_{ij} = \mathrm{softmax}_j\big(q_i^T k_j / \sqrt{d}\big)$$
For fixed query $q_i$, this is a weighted average of value vectors with data-dependent weights $\alpha_{ij}$. In signal processing terms, it is a position-dependent filter: the filter coefficients change for each output position $i$.
Fourier Transform as a fixed filter bank: FNet applies the DFT, which is a fixed (non-data-dependent) filter: the output at frequency $k$ is always $\sum_n x_n\, e^{-2\pi i k n / N}$, regardless of the content of $x$. Each DFT frequency is a specific pattern of oscillation at rate $k/N$ cycles per token.
Comparison table:
| Property | Self-Attention | FNet (DFT) |
|---|---|---|
| Filter type | Data-dependent | Fixed |
| Parameters in mixer | $W_Q, W_K, W_V$ ($O(d^2)$) | 0 |
| Computational complexity | $O(n^2 d)$ | $O(n d \log(n d))$ |
| Expressive power | High (can select specific tokens) | Low (global uniform mixing) |
| Position awareness | Via PE or RoPE | Via DFT phases |
| Best for | Selective retrieval tasks | Global mixing tasks |
| Example success | Machine translation, QA | Text classification, embedding |
The analogy: Attention is like a learnable short-time Fourier Transform with adaptive windows - for each output position, it selects which "frequencies" (patterns) to attend to. The DFT in FNet is like a fixed global frequency analysis with no adaptivity. The power of attention comes from its ability to compute different effective filters for different positions and different inputs.
Appendix E: Reference Summary and Quick Checks
E.1 The Six Most Important Facts
If you retain only six things from this section, retain these:
1. Definition: $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx$ (use the $\xi$-convention; no $2\pi$ factors in the inverse).
2. Self-dual Gaussian: $\widehat{e^{-\pi x^2}} = e^{-\pi \xi^2}$. The Gaussian maps to itself. All other transforms can be derived from this one via properties.
3. Plancherel: $\|f\|_2 = \|\hat{f}\|_2$. Energy is conserved. The FT is a unitary operator on $L^2$.
4. Uncertainty principle: $\Delta x \cdot \Delta \xi \ge \frac{1}{4\pi}$, equality iff Gaussian. You cannot have both time and frequency concentration.
5. Distributions: $\hat{\delta} = 1$, $\hat{1} = \delta$, $\widehat{e^{2\pi i \xi_0 x}} = \delta(\xi - \xi_0)$. Pure tones map to spikes. Spikes have flat spectra.
6. AI: The FT is used in RoPE (modulation theorem), FNet (global mixing via DFT), RFF (Bochner + spectral sampling), spectral normalization (spectral norm of weight matrices), and FNO (learn in Fourier space).
E.2 Quick Reference: Which Theorem to Use When
| Problem | Theorem/Tool |
|---|---|
| Compute FT of $e^{-a\lvert x\rvert}$ | Direct integration, splitting at $x = 0$ |
| Compute FT of a shifted, modulated signal | Time-shift + modulation properties |
| Show $\int \lvert f\rvert^2 = \int \lvert\hat{f}\rvert^2$ | Plancherel's theorem |
| Find $\int \mathrm{sinc}^2(\xi)\, d\xi$ | Parseval applied to rect |
| Show $\hat{f}(\xi) \to 0$ as $\lvert\xi\rvert \to \infty$ | Riemann-Lebesgue lemma |
| Compute FT of $f'$ | Differentiation property: multiply by $2\pi i \xi$ |
| Compute FT of $\cos(2\pi \xi_0 x)$ | Modulation theorem or distribution: two delta spikes |
| Verify $f$ recoverable from $\hat{f}$ | Inversion theorem (check $\hat{f} \in L^1$ or use $L^2$ theory) |
| Lower bound on signal bandwidth | Uncertainty principle: $\Delta \xi \ge 1/(4\pi\, \Delta x)$ |
| FT of a periodic signal | Distribute over the Fourier series: $\sum_n c_n\, \delta(\xi - n/T)$ |
| FT of $f(a x)$ (rescaled signal) | Scaling theorem: $(1/\lvert a\rvert)\, \hat{f}(\xi/a)$ |
| Show a kernel is PD | Bochner: show the spectral density is $\ge 0$ |
E.3 Sign Convention Reference
Different sources use different sign conventions. This table gives the conversion formulas.
| Convention | Forward FT | Inverse FT | $e^{-\pi x^2}$ maps to |
|---|---|---|---|
| $\xi$ (this section) | $\int f(x)\, e^{-2\pi i x \xi}\, dx$ | $\int \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi$ | $e^{-\pi \xi^2}$ |
| $\omega$, non-symmetric | $\int f(x)\, e^{-i\omega x}\, dx$ | $\frac{1}{2\pi}\int F(\omega)\, e^{i\omega x}\, d\omega$ | $e^{-\omega^2/(4\pi)}$ |
| $\omega$, symmetric | $\frac{1}{\sqrt{2\pi}}\int f(x)\, e^{-i\omega x}\, dx$ | $\frac{1}{\sqrt{2\pi}}\int F(\omega)\, e^{i\omega x}\, d\omega$ | $\frac{1}{\sqrt{2\pi}}\, e^{-\omega^2/(4\pi)}$ |
Conversion: If $\hat{f}(\xi)$ is the transform in the $\xi$-convention, then $F(\omega) = \hat{f}(\omega/2\pi)$ for the non-symmetric $\omega$-convention.
Properties in the $\omega$-convention (non-symmetric, no $2\pi$ in the exponent):
- Differentiation: $\widehat{f'}(\omega) = i\omega\, F(\omega)$ (not $2\pi i \xi\, \hat{f}(\xi)$)
- Time-shift: $F(\omega)\, e^{-i\omega x_0}$ (not $\hat{f}(\xi)\, e^{-2\pi i x_0 \xi}$)
- Parseval: $\int |f|^2\, dx = \frac{1}{2\pi}\int |F(\omega)|^2\, d\omega$ (extra $\frac{1}{2\pi}$ factor!)
The $\xi$-convention is symmetric between forward and inverse transforms and avoids all these factors - strongly recommended for all new work.
E.4 Prerequisites Revisited: What You Now Know
This section assumed you knew Fourier series (Section 01) and functional analysis basics. Here is what each prerequisite contributed:
| Prerequisite | How it was used |
|---|---|
| Complex exponentials $e^{2\pi i \xi x}$ | The basis functions of the FT; the Euler formula is used constantly |
| $L^2$ inner product | Plancherel's theorem is the FT version of Parseval from Fourier series |
| Orthonormality of $\{e^{2\pi i n x}\}_{n \in \mathbb{Z}}$ | Generalized: the FT is the "orthonormal expansion" in the continuous limit |
| Parseval's identity (series) | Generalized to Plancherel's theorem in Section 5 |
| Gibbs phenomenon | Reinterpreted as spectral leakage - the frequency-domain manifestation of truncation |
| Dirichlet conditions | Sufficient conditions for pointwise inversion (Section 4.1) |
| Integration / IBP | Proofs of the differentiation theorem, inversion via approximate identity |
| Convergence of series | $L^2$ convergence in the Plancherel proof; distributional convergence in Section 7 |
E.5 Glossary
| Term | Definition |
|---|---|
| Fourier Transform | $\hat{f}(\xi) = \int f(x)\, e^{-2\pi i x \xi}\, dx$; maps time-domain functions to frequency-domain functions |
| Spectrum | The function $\hat{f}(\xi)$; describes which frequencies are present and at what amplitude/phase |
| Power spectrum | $\lvert\hat{f}(\xi)\rvert^2$; the energy density at frequency $\xi$ |
| Plancherel theorem | $\|f\|_2 = \|\hat{f}\|_2$; FT is a unitary isometry on $L^2$ |
| Parseval's relation | $\langle f, g\rangle = \langle \hat{f}, \hat{g}\rangle$; the FT preserves inner products |
| Uncertainty principle | $\Delta x \cdot \Delta \xi \ge 1/(4\pi)$; time and frequency spread cannot simultaneously be small |
| Riemann-Lebesgue lemma | $\hat{f}(\xi) \to 0$ as $\lvert\xi\rvert \to \infty$ for $f \in L^1$ |
| Dirac delta | Distributional unit impulse; $\int \delta(x)\, \varphi(x)\, dx = \varphi(0)$; FT is the constant 1 |
| Tempered distribution | Continuous linear functional on Schwartz space; includes all $L^p$ functions, $\delta$, and more |
| Schwartz space | Smooth functions decaying faster than any polynomial; FT maps $\mathcal{S}$ to $\mathcal{S}$ bijectively |
| Dirac comb | $\sum_n \delta(x - nT)$; its FT is another comb with spacing $1/T$ |
| Poisson summation | $\sum_n f(n) = \sum_k \hat{f}(k)$; connects Fourier series and FT |
| Modulation theorem | $\widehat{e^{2\pi i \xi_0 x} f}(\xi) = \hat{f}(\xi - \xi_0)$; multiplication by a tone = frequency shift |
| Sinc function | $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$; FT of rectangular pulse; ideal low-pass filter kernel |
| Spectral norm | $\|W\|_2 = \sigma_{\max}(W)$; largest singular value; Lipschitz constant of the linear map |
| RFF | Random Fourier Features; $k(x, y) \approx \phi(x)^T \phi(y)$ via frequencies sampled from the spectral density |
| STFT | Short-Time Fourier Transform; localizes FT analysis to a time window; gives the spectrogram |
| FNet | Transformer variant replacing attention with a 2D DFT; $Y = \Re(\mathcal{F}_{\mathrm{seq}}\, \mathcal{F}_{\mathrm{hidden}}\, X)$ mixing |
| FNO | Fourier Neural Operator; learns PDE solutions by parameterizing the spectral operator |
| RoPE | Rotary Position Embedding; encodes position as rotation in the complex plane |
Appendix F: The Fourier Transform in Practice - Case Studies
F.1 Case Study: Audio Preprocessing for Whisper
OpenAI's Whisper (Radford et al., 2022) is a speech recognition model that accepts raw audio and produces text. Its preprocessing pipeline is a direct application of STFT and spectral analysis.
Step 1: Resampling. Raw audio is resampled to 16kHz. By Shannon's sampling theorem (Poisson summation): a signal sampled at 16kHz can faithfully represent frequencies up to 8kHz - which covers all speech frequencies (human voice: 80Hz-8kHz, formants: 200Hz-4kHz).
Step 2: STFT. Apply a windowed Fourier Transform with:
- Window: Hann window (smooth tapering reduces spectral leakage)
- Window length: 25 ms × 16 kHz = 400 samples
- Hop length: 10 ms × 16 kHz = 160 samples (60% overlap)
- FFT size: 512 points (zero-padded for frequency resolution)
This gives a spectrogram with $257 = 512/2 + 1$ unique frequency bins per frame (and one frame every 10 ms).
Step 3: Mel filterbank. Map the 257 linear frequencies to 80 mel-scale frequency bins using triangular filters. The mel scale is perceptually uniform - equal mel intervals are equally distinguishable to humans. The uncertainty principle is no longer limiting here because the window (25ms) is longer than the smallest perceptible phoneme duration (~10-20ms), so time resolution is slightly sacrificed for good frequency resolution.
Step 4: Log compression. Take . The log operation compresses the dynamic range of audio energy (from ~120 dB to ~12 dB).
Result: log-mel spectrogram, treated as a 2D "image" and fed to a CNN + Transformer encoder. The entire preprocessing is fixed (not learned) - it embeds centuries of signal processing knowledge into the model architecture.
Fourier analysis used:
- Fourier Transform -> STFT window analysis
- Uncertainty principle -> window length tradeoff
- Sampling theorem -> 16kHz minimum rate
- Spectral leakage -> Hann window choice
F.2 Case Study: LLaMA-3 Context Extension via RoPE Scaling
LLaMA-3 uses RoPE with base $\theta_{\text{base}} = 500{,}000$ and supports up to 128K-token context (compared to LLaMA-2's 4K with $\theta_{\text{base}} = 10{,}000$).
The problem: Standard RoPE with base $\theta_{\text{base}}$ uses per-pair rotation angles $\theta_i = \theta_{\text{base}}^{-2i/d}$, ranging from $\theta_0 = 1$ (dimension pair 0) down to roughly $1/\theta_{\text{base}}$ (last dimension pair). At angle $\theta_i$ per token, the encoding completes one full rotation every $2\pi/\theta_i$ tokens, so the longest period is about $2\pi\, \theta_{\text{base}}$ tokens ($\approx 63{,}000$ for $\theta_{\text{base}} = 10{,}000$); beyond the positions seen in training (~4K for LLaMA-2), the slow dimensions enter phase ranges never observed, and quality degrades.
The Fourier interpretation: Each dimension pair $i$ is an oscillator at frequency $\theta_i$. For two positions $m$ and $n$ to be distinguishable, their phase difference $(m - n)\theta_i$ must differ from $0$ mod $2\pi$ for at least one $i$. Once $|m - n|$ exceeds every period, all dimensions have wrapped around - the positions become indistinguishable. This is aliasing in the position domain, caused by even the lowest frequency being too high.
The fix: Reduce the minimum frequency by increasing $\theta_{\text{base}}$, which stretches the longest period $2\pi\, \theta_{\text{base}}$.
LLaMA-3 uses $\theta_{\text{base}} = 500{,}000$, giving a longest period of roughly 3 million tokens - more than enough for 128K context.
The cost: Higher $\theta_{\text{base}}$ means the low-frequency dimensions (large $i$) oscillate very slowly, providing little positional information for short contexts. There is a fundamental tradeoff between context length and short-range positional precision - a manifestation of the uncertainty principle in the discrete position domain.
YaRN (Peng et al., 2023): Instead of uniformly rescaling all frequencies, YaRN uses a non-uniform rescaling: high-frequency dimensions (small ) are scaled minimally (they handle short-range position fine), while low-frequency dimensions (large ) are scaled aggressively. The interpolation factor is applied per-frequency, weighted by the "needed" extension. This maintains short-range precision while extending long-range capacity - a principled application of the time-frequency tradeoff.
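The base-to-period arithmetic can be sketched directly (head dimension $d = 128$ and the two bases are assumed illustrative values in the spirit of the discussion above):

```python
import numpy as np

# RoPE: dimension pair i rotates by theta_i = base^(-2i/d) radians per
# token; its period (tokens per full rotation) is 2*pi / theta_i.
d = 128
for base in (10_000, 500_000):               # LLaMA-2-like vs LLaMA-3-like
    i = np.arange(d // 2)
    theta = base ** (-2.0 * i / d)
    periods = 2 * np.pi / theta
    print(base, int(periods.max()))          # longest period grows with base

# Raising the base slows the slowest oscillator, so distant position pairs
# stop wrapping past 2*pi in every dimension (no positional aliasing).
```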
F.3 Case Study: Spectral Normalization in Stable Diffusion
Modern diffusion models (Stable Diffusion, DALL-E 3, Flux) train a discriminator or guidance network to distinguish real from generated data. Spectral normalization (Section 8.3) is used to stabilize this training.
Setup: The discriminator has convolutional layers with weights $W \in \mathbb{R}^{c_{\text{out}} \times c_{\text{in}} \times k \times k}$. For a 2D convolution layer, the relevant spectral norm is the operator norm of the induced linear map, not of the flattened kernel matrix.
For a single-channel convolution with kernel $h$, the spectral norm equals $\max_\xi |\hat{h}(\xi)|$ - the maximum of the 2D Fourier Transform of the kernel. This connects spectral normalization directly to the FT: normalizing by the spectral norm makes the filter's frequency response bounded by 1 at every frequency.
Effect on generated images: A spectrally-normalized discriminator applies Lipschitz constraints that prevent it from exploiting arbitrary high-frequency artifacts in generated images. This forces the generator to eliminate high-frequency artifacts (checkerboard patterns from transposed convolutions, aliasing from bilinear upsampling) - improving sample quality.
Alias-free GAN / StyleGAN3 (Karras et al., 2021): Takes spectral control further by enforcing alias-free processing throughout the generator using carefully designed low-pass filters at every upsampling step. The key insight from Fourier analysis: any nonlinearity applied to a bandlimited signal creates harmonics beyond the original bandwidth - these must be filtered out before downsampling to avoid aliasing. StyleGAN3's architecture enforces this principle throughout, producing significantly more temporally consistent video outputs.
Appendix G: Further Reading and References
G.1 Foundational Texts
For the classical theory:
- Stein, E.M. & Weiss, G. (1971). Introduction to Fourier Analysis on Euclidean Spaces. Princeton University Press. - The standard reference for rigorous Fourier analysis.
- Katznelson, Y. (2004). An Introduction to Harmonic Analysis, 3rd ed. Cambridge. - Concise and rigorous; excellent for the abstract perspective.
- Dym, H. & McKean, H.P. (1972). Fourier Series and Integrals. Academic Press. - Beautiful, accessible treatment with applications to probability.
- Korner, T.W. (1988). Fourier Analysis. Cambridge. - The most readable and historically motivated account.
For distributions and Schwartz space:
- Schwartz, L. (1950). Theorie des Distributions. Herman, Paris. - The original; defines tempered distributions.
- Hörmander, L. (1990). The Analysis of Linear Partial Differential Operators I. Springer. - Comprehensive; connects distributions to PDE theory.
For signal processing:
- Oppenheim, A.V. & Schafer, R.W. (2009). Discrete-Time Signal Processing, 3rd ed. Prentice Hall. - Standard DSP textbook.
- Mallat, S. (2008). A Wavelet Tour of Signal Processing, 3rd ed. Academic Press. - Bridges Fourier analysis, wavelets, and sparse signal processing.
G.2 Machine Learning Papers
FNet:
- Lee-Thorp, J., Ainslie, J., Eckstein, I. & Ontanon, S. (2022). FNet: Mixing Tokens with Fourier Transforms. NAACL-HLT. arXiv:2105.03824.
Random Fourier Features:
- Rahimi, A. & Recht, B. (2007). Random Features for Large-Scale Kernel Machines. NeurIPS. [NeurIPS 2017 Test of Time Award]
- Yu, F.X. et al. (2016). Orthogonal Random Features. NeurIPS.
- Choromanski, K. et al. (2021). Rethinking Attention with Performers. ICLR. arXiv:2009.14794.
Spectral Normalization:
- Miyato, T., Kataoka, T., Koyama, M. & Yoshida, Y. (2018). Spectral Normalization for Generative Adversarial Networks. ICLR. arXiv:1802.05957.
Fourier Neural Operator:
- Li, Z. et al. (2021). Fourier Neural Operator for Parametric Partial Differential Equations. ICLR. arXiv:2010.08895.
RoPE and Context Extension:
- Su, J. et al. (2021). RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv:2104.09864.
- Peng, B. et al. (2023). YaRN: Efficient Context Window Extension of Large Language Models. arXiv:2309.00071.
- Ding, Y. et al. (2024). LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens. arXiv:2402.13753.
Spectral Bias:
- Rahaman, N. et al. (2019). On the Spectral Bias of Neural Networks. ICML. arXiv:1806.08734.
- Tancik, M. et al. (2020). Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains. NeurIPS. arXiv:2006.10739.
Audio Models:
- Radford, A. et al. (2022). Robust Speech Recognition via Large-Scale Weak Supervision (Whisper). arXiv:2212.04356.
Alias-free GAN:
- Karras, T. et al. (2021). Alias-Free Generative Adversarial Networks. NeurIPS. arXiv:2106.12423.
G.3 Online Resources
- MIT OCW 18.103 (Fourier Analysis) - rigorous mathematical treatment with lecture notes
- Stanford EE261 (The Fourier Transform and its Applications) - engineering-oriented, excellent visualization
- 3Blue1Brown "But what is the Fourier Transform?" - geometric intuition for absolute beginners
- Seeing Theory (Brown University) - interactive visualizations of Fourier series and transforms
Appendix H: Self-Assessment Questions
These questions test conceptual understanding without computation. Answer each in 2-3 sentences.
- Explain in words why the Fourier Transform of a spike (Dirac delta) is a constant (flat spectrum). What physical interpretation does this have?
- A signal engineer claims: "If I double the duration of my pulse, I can halve its bandwidth." Is this claim correct, and what theorem guarantees it? What is the fundamental limit on simultaneous time and frequency concentration?
- What is the difference between the Parseval identity from Fourier series (Section 20-01) and Plancherel's theorem from this section? Are they the same result in different forms?
- Why does the Fourier Transform of a real-valued function satisfy $\hat{f}(-\omega) = \overline{\hat{f}(\omega)}$? What does this mean for the magnitude and phase spectra?
- Explain why FNet achieves near-BERT performance on classification tasks but not on extractive QA. What mathematical property of the DFT explains this performance gap?
- In RoPE, the frequencies $\omega_k = 10000^{-2k/d}$ are chosen geometrically. Why geometric spacing? What would happen if the frequencies were linearly spaced?
- Bochner's theorem says a shift-invariant kernel is PD iff it is the FT of a non-negative measure. What breaks down if the spectral density is not non-negative? Give an example.
- Explain the connection between the Poisson summation formula and Shannon's sampling theorem. Why does undersampling (below the Nyquist rate) cause aliasing?
- The heat equation solution is a convolution with a Gaussian that spreads over time. In the Fourier domain, how does the solution evolve? Which frequencies decay fastest and why?
- The differentiation property states $\widehat{f'}(\omega) = i\omega\,\hat{f}(\omega)$. Use this to explain why smoother functions have faster spectral decay. What happens for a function with a jump discontinuity?
- In spectral normalization, the discriminator's weights are normalized by $\sigma(W)$, the largest singular value. Why does this enforce a Lipschitz constraint? How does the Lipschitz constant relate to training stability?
- The Gaussian is the unique minimizer of the uncertainty bound. Does this mean we should always use Gaussian signals in practice? What are the tradeoffs?
End of Section 20-02 Fourier Transform.
Next: Section 20-03 DFT and FFT - discretizing the Fourier Transform for computation via the Cooley-Tukey algorithm.