Exercises Notebook
Converted from exercises.ipynb for web reading.
§03 Exercises: Discrete Fourier Transform and FFT
Ten graded exercises covering DFT computation, FFT algorithm, spectral leakage, frequency resolution, STFT, the Whisper pipeline, FNO spectral layers, and NTT.
| # | Topic | Difficulty |
|---|---|---|
| 1 | DFT by hand and matrix form | ★ |
| 2 | Cooley-Tukey FFT implementation | ★★ |
| 3 | Spectral leakage and windowing | ★ |
| 4 | Frequency resolution and zero-padding | ★★ |
| 5 | STFT and COLA verification | ★★ |
| 6 | Whisper mel spectrogram pipeline | ★★ |
| 7 | FNO spectral convolution layer | ★★★ |
| 8 | Number Theoretic Transform | ★★★ |
| 9 | Zero-padding and frequency grid density | ★ |
| 10 | DFT Parseval identity | ★ |
Code cell 2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
try:
    import seaborn as sns
    sns.set_theme(style="whitegrid", palette="colorblind")
    HAS_SNS = True
except ImportError:
    plt.style.use("seaborn-v0_8-whitegrid")
    HAS_SNS = False
mpl.rcParams.update({
"figure.figsize": (10, 6),
"figure.dpi": 120,
"font.size": 13,
"axes.titlesize": 15,
"axes.labelsize": 13,
"xtick.labelsize": 11,
"ytick.labelsize": 11,
"legend.fontsize": 11,
"legend.framealpha": 0.85,
"lines.linewidth": 2.0,
"axes.spines.top": False,
"axes.spines.right": False,
"savefig.bbox": "tight",
"savefig.dpi": 150,
})
np.random.seed(42)
print("Plot setup complete.")
Code cell 3
import numpy as np
import scipy.fft as sp_fft
import scipy.signal as sp_sig
import matplotlib.pyplot as plt
import matplotlib as mpl
try:
    import seaborn as sns
    sns.set_theme(style='whitegrid', font_scale=1.1)
except ImportError:
    pass
mpl.rcParams.update({'figure.dpi': 120, 'axes.spines.top': False,
'axes.spines.right': False})
np.random.seed(42)
COLORS = {'primary':'#0077BB','secondary':'#EE7733','tertiary':'#009988',
'error':'#CC3311','neutral':'#555555','highlight':'#EE3377'}
print("DFT exercise helpers ready.")
Exercise 1 ★ — DFT by Hand and Matrix Form
Part A. Compute the 4-point DFT of $x = [1, 0, -1, 0]$ by hand using the definition $X[k] = \sum_{n=0}^{N-1} x[n]\,\omega_N^{-nk}$ with $\omega_N = e^{2\pi i / N}$. Express each $X[k]$ in the form $a + bi$.
Part B. Construct the DFT matrix $F_4$ with entries $(F_4)_{kn} = \omega_4^{-kn}$. Verify that $F_4 F_4^{*} = 4I$.
Part C. Apply the IDFT to recover $x$ from $X$. Verify Parseval's identity: $\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2$.
Code cell 5
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 6
# Solution
# Exercise 1 solution
N = 4
x = np.array([1, 0, -1, 0], dtype=complex)
omega4 = np.exp(2j * np.pi / N)
# Part A
X_manual = np.array([sum(x[n] * omega4**(-n*k) for n in range(N))
for k in range(N)])
X_numpy = np.fft.fft(x)
print('Part A: X[k] by hand:')
for k in range(N):
    print(f' X[{k}] = {X_manual[k].real:+.4f} {X_manual[k].imag:+.4f}i')
print(f' Match numpy: {np.allclose(X_manual, X_numpy)}')
print()
# Part B
k_idx = np.arange(N)[:, None]
n_idx = np.arange(N)[None, :]
F4 = omega4**(-k_idx * n_idx)
product = F4 @ F4.conj().T
print('Part B: F4 @ F4* =', '4*I?', np.allclose(product, 4*np.eye(N)))
print(f' Max deviation from 4I: {np.max(np.abs(product - 4*np.eye(N))):.2e}')
print()
# Part C
x_recovered = np.fft.ifft(X_numpy)
parseval_lhs = np.sum(np.abs(x)**2)
parseval_rhs = np.sum(np.abs(X_numpy)**2) / N
print(f'Part C: x recovered: {np.round(x_recovered.real, 6)}')
print(f' Parseval LHS (sum|x|^2): {parseval_lhs:.6f}')
print(f' Parseval RHS (sum|X|^2/N): {parseval_rhs:.6f}')
print(f' Parseval satisfied: {np.isclose(parseval_lhs, parseval_rhs)}')
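A complement to Part B: it is the scaled matrix $F_N / \sqrt{N}$ that is exactly unitary, which is why Parseval holds with the $1/N$ factor. A minimal standalone sketch (variable names are illustrative):

```python
import numpy as np

N = 4
k = np.arange(N)[:, None]
n = np.arange(N)[None, :]
F = np.exp(-2j * np.pi * k * n / N)  # DFT matrix: F @ x equals np.fft.fft(x)
U = F / np.sqrt(N)                   # normalized DFT matrix
# Unitarity: U @ U^H = I, so |U x|^2 = |x|^2 for every x (Parseval)
print(np.allclose(U @ U.conj().T, np.eye(N)))
```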
Exercise 2 ★★ — Cooley-Tukey FFT Implementation
Implement the radix-2 Cooley-Tukey FFT recursively.
Algorithm: split $x$ into even- and odd-indexed halves and combine their $N/2$-point DFTs with twiddle factors:

$$X[k] = E[k] + \omega_N^{-k} O[k], \qquad X[k + N/2] = E[k] - \omega_N^{-k} O[k], \quad k = 0, \dots, N/2 - 1,$$

where $E[k]$ is the DFT of the even-indexed samples and $O[k]$ the DFT of the odd-indexed samples.
Tasks:
- Implement `fft_recursive(x)`, which assumes $N$ is a power of 2.
- Verify it matches `numpy.fft.fft` to within $10^{-10}$ for $N \in \{8, 64, 512\}$.
- Plot the cost (recursive-call count or runtime) vs $N$ on a log-log scale. Confirm the slope is approximately 1, i.e., $O(N \log N)$ up to the log factor.
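Before writing the full recursion, the butterfly identity above can be sanity-checked numerically; a minimal sketch of a single even/odd split at $N = 8$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)
E = np.fft.fft(x[0::2])            # N/2-point DFT of even-indexed samples
O = np.fft.fft(x[1::2])            # N/2-point DFT of odd-indexed samples
k = np.arange(N // 2)
tw = np.exp(-2j * np.pi * k / N)   # twiddle factors w_N^{-k}
X = np.concatenate([E + tw * O, E - tw * O])
print(np.allclose(X, np.fft.fft(x)))
```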
Code cell 8
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 9
# Solution
# Exercise 2 solution
def fft_recursive(x):
    N = len(x)
    if N == 1:
        return np.array(x, dtype=complex)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    k = np.arange(N // 2)
    twiddle = np.exp(-2j * np.pi * k / N)
    X = np.empty(N, dtype=complex)
    X[:N//2] = even + twiddle * odd
    X[N//2:] = even - twiddle * odd
    return X
print('Accuracy vs numpy.fft.fft:')
for N_test in [8, 64, 512]:
    x_test = np.random.randn(N_test)
    err = np.max(np.abs(fft_recursive(x_test) - np.fft.fft(x_test)))
    print(f' N={N_test:4d}: max error = {err:.2e} pass={err < 1e-10}')
# Log-log timing plot
import time
Ns = [2**k for k in range(3, 14)]
times_fft = []
for N_t in Ns:
    x_t = np.random.randn(N_t)
    t0 = time.perf_counter()
    for _ in range(5): fft_recursive(x_t)
    times_fft.append((time.perf_counter() - t0) / 5)
fig, ax = plt.subplots(figsize=(8, 5))
ax.loglog(Ns, times_fft, 'o-', color=COLORS['primary'], linewidth=2,
label='fft_recursive')
# Reference O(N log N) line
ref = times_fft[0] * np.array(Ns) * np.log2(np.array(Ns)) / (Ns[0] * np.log2(Ns[0]))
ax.loglog(Ns, ref, '--', color=COLORS['neutral'], label='O(N log N) ref')
ax.set_xlabel('N'); ax.set_ylabel('Time (s)')
ax.set_title('Cooley-Tukey FFT: measured complexity')
ax.legend()
fig.tight_layout(); plt.show()
Exercise 3 ★ — Spectral Leakage and Windowing
Setup: A signal $x[n] = \sin(2\pi f_0 n / f_s)$ with $f_0 = 57.3$ Hz, $f_s = 1000$ Hz, $N = 256$ samples.
Tasks:
- Compute the DFT using a rectangular window and plot $|X[k]|$ in dB. Identify the main lobe and the sidelobe leakage.
- Repeat with Hann and Blackman windows.
- Measure the peak sidelobe level (dB below main lobe) for each window.
- Explain the trade-off: why does a Blackman window reduce leakage more than Hann?
Code cell 11
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 12
# Solution
# Exercise 3 solution
fs_e3 = 1000.0
N_e3 = 256
f0_e3 = 57.3
t_e3 = np.arange(N_e3) / fs_e3
x_e3 = np.sin(2 * np.pi * f0_e3 * t_e3)
freqs_e3 = np.fft.rfftfreq(N_e3, 1/fs_e3)
windows_e3 = {
'Rectangular': (np.ones(N_e3), COLORS['neutral']),
'Hann': (np.hanning(N_e3), COLORS['primary']),
'Blackman': (np.blackman(N_e3),COLORS['secondary']),
}
fig, ax = plt.subplots(figsize=(12, 6))
for name, (w, col) in windows_e3.items():
    X_w = np.fft.rfft(x_e3 * w)
    X_db = 20 * np.log10(np.abs(X_w) + 1e-15)
    X_db -= X_db.max()
    ax.plot(freqs_e3, X_db, color=col, linewidth=1.8, label=name)
    # Peak sidelobe: max outside the main lobe (exclude ±4 bins around the peak)
    peak_bin = np.argmax(np.abs(X_w))
    mask_sl = np.ones(len(X_w), dtype=bool)
    mask_sl[max(0, peak_bin-4):peak_bin+5] = False
    psl_db = np.max(X_db[mask_sl])  # X_db is already peak-normalized
    print(f"{name:<12} peak sidelobe = {psl_db:.1f} dB")
ax.axvline(f0_e3, color=COLORS['error'], linestyle='--', linewidth=1, label=f'f0={f0_e3}')
ax.set_xlim(0, 200)
ax.set_ylim(-100, 5)
ax.set_title(f'Spectral leakage comparison for f0={f0_e3} Hz (non-bin-aligned)')
ax.set_xlabel('Frequency (Hz)'); ax.set_ylabel('Normalized power (dB)')
ax.legend()
fig.tight_layout(); plt.show()
print()
print('Blackman uses a 3-term cosine sum that cancels more sidelobe terms.')
print('Cost: wider main lobe (6Δf vs 4Δf for Hann).')
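The trade-off can also be read off the windows' own spectra, independent of any test signal; a sketch that zero-pads each window and scans past the first null for the peak sidelobe (exact dB values shift slightly with the symmetric window variants used here):

```python
import numpy as np

N, pad = 256, 8192
levels = {}
for name, w in [('rect', np.ones(N)), ('hann', np.hanning(N)),
                ('blackman', np.blackman(N))]:
    W_db = 20 * np.log10(np.abs(np.fft.rfft(w, n=pad)) + 1e-15)
    W_db -= W_db.max()
    # walk down the main lobe from DC until the magnitude first turns back up
    i = 1
    while i < len(W_db) - 1 and W_db[i + 1] < W_db[i]:
        i += 1
    levels[name] = W_db[i:].max()  # largest sidelobe beyond the first null
    print(f'{name:9s} peak sidelobe ~ {levels[name]:6.1f} dB')
```

The rectangular window lands near -13 dB, Hann near -31 dB, Blackman near -58 dB, matching the extra cosine terms cancelling more sidelobe energy.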
Exercise 4 ★★ — Frequency Resolution and Zero-Padding
Two closely spaced sinusoids: $f_1 = 100$ Hz, $f_2 = 105$ Hz, sampled at $f_s = 1000$ Hz.
Tasks:
- What is the minimum observation length needed to resolve the two frequencies? (Use the Rayleigh criterion: $\Delta f_{\text{res}} \approx f_s / N$, so $N_{\min} = f_s / \Delta f$.)
- Demonstrate that with $N = 100$ samples, the two peaks merge.
- Demonstrate that with $N = 250$ samples, the two peaks are resolved.
- Show that zero-padding a short signal does NOT resolve the two peaks: it merely interpolates between the existing (unresolved) DFT samples.
Code cell 14
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 15
# Solution
# Exercise 4 solution
fs_e4 = 1000.0
f1, f2 = 100.0, 105.0
delta_f = f2 - f1
N_min = int(np.ceil(fs_e4 / delta_f))
print(f'N_min = fs/Δf = {fs_e4:.0f}/{delta_f:.0f} = {N_min}')
fig, axes = plt.subplots(1, 3, figsize=(16, 5))
fig.suptitle('Frequency resolution vs zero-padding', fontsize=14)
configs = [
(100, 100, 'N=100 < N_min (unresolved)'),
(250, 250, 'N=250 > N_min (resolved)'),
(100, 1024, 'N=100 zero-padded to 1024 (still unresolved)'),
]
for ax, (N_sig, N_fft, title) in zip(axes, configs):
    t = np.arange(N_sig) / fs_e4
    x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)
    X = np.fft.rfft(x, n=N_fft)
    freqs = np.fft.rfftfreq(N_fft, 1/fs_e4)
    ax.plot(freqs, np.abs(X), color=COLORS['primary'], linewidth=1.5)
    ax.axvline(f1, color=COLORS['error'], linestyle='--', alpha=0.7, label=f'{f1} Hz')
    ax.axvline(f2, color=COLORS['secondary'], linestyle='--', alpha=0.7, label=f'{f2} Hz')
    ax.set_title(title)
    ax.set_xlabel('Frequency (Hz)'); ax.set_ylabel('|X[k]|')
    ax.set_xlim(80, 125); ax.legend(fontsize=9)
fig.tight_layout(); plt.show()
print()
print('Zero-padding to 1024 interpolates the spectrum of only 100 samples.')
print('It cannot resolve frequencies that were unresolved in those 100 samples.')
Exercise 5 ★★ — STFT and COLA Verification
Tasks:
- Implement `stft(x, N, H, window)` and `istft(S, N, H, window)` (overlap-add).
- Verify perfect reconstruction: `istft(stft(x)) == x` (up to edge effects).
- Demonstrate that with a Hann window and 75% overlap ($H = N/4$), the COLA constant is 1.5, not 1.0.
- Apply STFT to a chirp signal and plot its spectrogram.
Code cell 17
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 18
# Solution
# Exercise 5 solution
def make_window(N, name):
    if name == 'hann': return np.hanning(N)
    if name == 'hamming': return np.hamming(N)
    return np.ones(N)

def stft_e5(x, N=256, H=128, window='hann'):
    w = make_window(N, window)
    n_frames = 1 + (len(x) - N) // H
    S = np.array([np.fft.rfft(x[l*H:l*H+N] * w) for l in range(n_frames)]).T
    return S  # (N//2+1, n_frames)

def istft_e5(S, N=256, H=128, window='hann'):
    w = make_window(N, window)
    n_frames = S.shape[1]
    length = N + (n_frames - 1) * H
    x_rec = np.zeros(length)
    norm = np.zeros(length)
    for l in range(n_frames):
        frame = np.real(np.fft.irfft(S[:, l], n=N))
        x_rec[l*H:l*H+N] += frame * w
        norm[l*H:l*H+N] += w**2
    norm = np.where(norm > 1e-8, norm, 1.0)
    return x_rec / norm
# Test 1: perfect reconstruction
np.random.seed(1)
x_test = np.random.randn(2048)
S_test = stft_e5(x_test, N=256, H=128, window='hann')
x_rec = istft_e5(S_test, N=256, H=128, window='hann')
trim = 256
err_pr = np.max(np.abs(x_rec[trim:-trim] - x_test[trim:-trim]))
print(f'Perfect reconstruction error (interior): {err_pr:.2e}')
# Test 2: COLA constant for Hann at 75% overlap
# The istft normalizer sums w**2, so measure the squared-window overlap-add
N_cola, H_cola = 256, 64  # H = N/4 -> 75% overlap
w_hann = np.hanning(N_cola)
cola = np.zeros(4096)
for l in range((4096 - N_cola) // H_cola + 1):
    cola[l*H_cola:l*H_cola+N_cola] += w_hann**2
interior_cola = cola[N_cola:-N_cola]
print(f'COLA constant (Hann^2, 75% overlap): {interior_cola.mean():.4f} (expect ~1.5)')
# Test 3: chirp spectrogram
fs_ch = 8000
t_ch = np.linspace(0, 1, fs_ch, endpoint=False)
x_ch = np.sin(2*np.pi*(200*t_ch + 1500*t_ch**2))
S_ch = stft_e5(x_ch, N=256, H=64, window='hann')
fig, ax = plt.subplots(figsize=(12, 5))
t_ax = np.arange(S_ch.shape[1]) * 64 / fs_ch
f_ax = np.fft.rfftfreq(256, 1/fs_ch)
im = ax.pcolormesh(t_ax, f_ax, 20*np.log10(np.abs(S_ch)+1e-10),
cmap='viridis', shading='auto')
fig.colorbar(im, ax=ax, label='dB')
ax.set_title('STFT spectrogram: linear chirp 200->3200 Hz')
ax.set_xlabel('Time (s)'); ax.set_ylabel('Frequency (Hz)')
fig.tight_layout(); plt.show()
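SciPy ships a direct COLA check; a quick cross-check of the configuration above (note that SciPy's 'hann' is the periodic variant, which satisfies COLA exactly, and that the plain-window and squared-window sums give different constants):

```python
import numpy as np
import scipy.signal as sp_sig

# Plain-window COLA (what check_COLA tests) holds for periodic Hann at 75% overlap
print(sp_sig.check_COLA('hann', 256, 192))

w = sp_sig.get_window('hann', 256)  # periodic Hann
H = 64
print(w.sum() / H)          # plain-window overlap-add constant: 2.0
print((w ** 2).sum() / H)   # squared-window constant, as used by the istft normalizer: 1.5
```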
Exercise 6 ★★ — Whisper Mel Spectrogram Pipeline
Reproduce the OpenAI Whisper preprocessing pipeline from scratch:
| Parameter | Value |
|---|---|
| Sample rate | 16 000 Hz |
| Window | Hann, 400 samples |
| Hop size | 160 samples |
| FFT size | 512 (zero-pad 400 -> 512) |
| Mel bins | 80 |
| Frequency range | 0 – 8000 Hz |
| Compression | $\log_{10}$ power, clipped to $[\max - 8,\ \max]$ |
Tasks:
- Implement `mel_filterbank(n_mels, n_fft, fs, f_min, f_max)`.
- Implement `whisper_log_mel(x, fs)` that returns the feature matrix.
- Apply it to a synthetic vowel-like signal and verify the output shape is $(80, T)$.
- Explain why Whisper uses 80 mel bins rather than 257 linear frequency bins.
Code cell 20
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 21
# Solution
# Exercise 6 solution
def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m): return 700.0 * (10.0**(m / 2595.0) - 1.0)

def mel_filterbank(n_mels=80, n_fft=512, fs=16000, f_min=0.0, f_max=8000.0):
    n_freq = n_fft // 2 + 1
    mel_pts = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_mels + 2)
    freq_pts = mel_to_hz(mel_pts)
    bins = np.floor((n_fft + 1) * freq_pts / fs).astype(int)
    fbank = np.zeros((n_mels, n_freq))
    for m in range(1, n_mels + 1):
        lo, center, hi = bins[m-1], bins[m], bins[m+1]
        for k in range(lo, center):
            fbank[m-1, k] = (k - lo) / max(center - lo, 1)
        for k in range(center, hi):
            fbank[m-1, k] = (hi - k) / max(hi - center, 1)
    return fbank

def whisper_log_mel(x, fs=16000):
    N_win, H_hop, N_fft = 400, 160, 512
    w = np.hanning(N_win)
    n_frames = 1 + (len(x) - N_win) // H_hop
    frames = []
    for l in range(n_frames):
        frame = np.zeros(N_fft)
        frame[:N_win] = x[l*H_hop:l*H_hop+N_win] * w
        frames.append(np.fft.rfft(frame))
    S = np.array(frames).T  # (257, n_frames)
    power = np.abs(S)**2
    fb = mel_filterbank(n_mels=80, n_fft=N_fft, fs=fs)
    mel_spec = fb @ power
    log_mel = np.log10(np.maximum(mel_spec, 1e-10))
    return np.maximum(log_mel, log_mel.max() - 8.0)
# Test on synthetic vowel
fs_w = 16000
t_w = np.linspace(0, 2.0, 2*fs_w, endpoint=False)
x_vowel = sum(np.sin(2*np.pi*k*150*t_w)/k for k in range(1, 8))
x_vowel *= np.exp(-0.5*((t_w-1.0)/0.4)**2)
lm = whisper_log_mel(x_vowel, fs=fs_w)
print(f'Log-mel shape: {lm.shape} (should be (80, T))')
fig, ax = plt.subplots(figsize=(12, 5))
im = ax.imshow(lm, aspect='auto', origin='lower', cmap='viridis')
fig.colorbar(im, ax=ax, label='Log power')
ax.set_title('Whisper log-mel spectrogram of synthetic vowel')
ax.set_xlabel('Frame'); ax.set_ylabel('Mel bin')
fig.tight_layout(); plt.show()
print()
print('80 mel bins: perceptually uniform resolution, compact (vs 257 linear bins).')
print('Log scale matches human loudness perception (Weber-Fechner law).')
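The 80-bin point can be made concrete by spacing out the band centers: equal steps on the mel axis give narrow bands at low frequency and wide bands near 8 kHz. A standalone sketch using the same hz_to_mel/mel_to_hz formulas as above:

```python
import numpy as np

def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# 80 band centers, equally spaced in mel between 0 and 8000 Hz
pts = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 82)
centers = mel_to_hz(pts)[1:-1]
spacing = np.diff(centers)
print(f'narrowest band ~{spacing[0]:.0f} Hz, widest ~{spacing[-1]:.0f} Hz')
# Low frequencies get far finer resolution than high ones, which is why
# 80 perceptually spaced bins can stand in for 257 uniform linear bins
```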
Exercise 7 ★★★ — Fourier Neural Operator Spectral Layer
Implement and analyze the 1D FNO spectral convolution layer (Li et al. 2021).
Spectral conv formula:

$$(\mathcal{K} u)(x) = \mathcal{F}^{-1}\big(R \cdot \mathcal{F} u\big)(x), \quad \text{keeping only the lowest } K \text{ Fourier modes},$$

where $R \in \mathbb{C}^{d_{\text{in}} \times d_{\text{out}} \times K}$ is a learnable weight tensor.
Tasks:
- Implement `SpectralConv1d(in_ch, out_ch, K)` with a `forward(x)` method.
- Show that the output shape is `(batch, out_ch, N)` for input `(batch, in_ch, N)`.
- Implement a simple 1-layer FNO that learns the mapping $u \mapsto du/dx$ (numerical differentiation) by training on pairs $(u_i, du_i/dx)$. Hint: the Fourier derivative formula is $\widehat{u'}[k] = \frac{2\pi i k}{L}\,\hat{u}[k]$.
- Compare the FNO's learned derivative to finite differences and plot the error.
Code cell 23
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 24
# Solution
# Exercise 7 solution
class SpectralConv1d:
    def __init__(self, in_ch, out_ch, K, seed=42):
        rng = np.random.default_rng(seed)
        scale = 1.0 / (in_ch * out_ch)
        self.weights = (rng.uniform(-scale, scale, (in_ch, out_ch, K))
                        + 1j*rng.uniform(-scale, scale, (in_ch, out_ch, K)))
        self.K = K

    def forward(self, x):
        batch, in_ch, N = x.shape
        out_ch = self.weights.shape[1]
        x_ft = np.fft.rfft(x, axis=-1)
        out_ft = np.zeros((batch, out_ch, N//2+1), dtype=complex)
        for b in range(batch):
            out_ft[b, :, :self.K] = np.einsum('ix,iox->ox',
                                              x_ft[b, :, :self.K], self.weights)
        return np.fft.irfft(out_ft, n=N, axis=-1)
# Shape check
layer = SpectralConv1d(in_ch=2, out_ch=4, K=8)
x_demo = np.random.randn(3, 2, 64)
y_demo = layer.forward(x_demo)
print(f'Input: {x_demo.shape} -> Output: {y_demo.shape}')
# Learn spectral derivative: u -> du/dx
# True derivative in spectral domain: X_deriv[k] = 2pi*i*k/L * X[k]
N_d = 128
L_d = 2 * np.pi
x_grid = np.linspace(0, L_d, N_d, endpoint=False)
# Train set: random smooth functions
n_train = 200
freqs_train = np.arange(1, 6) # k=1..5
U = np.zeros((n_train, N_d))
dU = np.zeros((n_train, N_d))
rng_d = np.random.default_rng(0)
for i in range(n_train):
    amps = rng_d.standard_normal(len(freqs_train))
    phases = rng_d.uniform(0, 2*np.pi, len(freqs_train))
    u = sum(a*np.cos(k*x_grid + ph) for a, k, ph in zip(amps, freqs_train, phases))
    du = sum(-a*k*np.sin(k*x_grid + ph) for a, k, ph in zip(amps, freqs_train, phases))
    U[i] = u; dU[i] = du
# FNO forward pass: init with correct spectral derivative weights
# (instead of gradient descent, set the optimal weights directly)
K_modes = 8
spectral_layer = SpectralConv1d(in_ch=1, out_ch=1, K=K_modes)
# Optimal weights: w[0,0,k] = 2*pi*i*k / L (derivative operator in spectral domain)
k_vals = np.arange(K_modes)
spectral_layer.weights[0, 0, :] = 2j * np.pi * k_vals / L_d
u_batch = U[:5, np.newaxis, :] # (5, 1, 128)
du_pred = spectral_layer.forward(u_batch)[:, 0, :] # (5, 128)
du_true = dU[:5]
# Compare: FNO spectral derivative vs finite differences
du_fd = np.gradient(U[:5], x_grid[1]-x_grid[0], axis=1)
err_fno = np.mean(np.abs(du_pred - du_true))
err_fd = np.mean(np.abs(du_fd - du_true))
print(f'Mean abs error -- FNO spectral: {err_fno:.6f}')
print(f'Mean abs error -- Finite diff: {err_fd:.6f}')
fig, ax = plt.subplots(figsize=(10, 5))
i_ex = 0
ax.plot(x_grid, du_true[i_ex], color=COLORS['neutral'], linewidth=2, label='True du/dx')
ax.plot(x_grid, du_pred[i_ex], color=COLORS['primary'], linewidth=2,
linestyle='--', label=f'FNO spectral (err={err_fno:.4f})')
ax.plot(x_grid, du_fd[i_ex], color=COLORS['secondary'], linewidth=1.5,
linestyle=':', label=f'Finite diff (err={err_fd:.4f})')
ax.set_title('FNO spectral derivative vs finite differences')
ax.set_xlabel('$x$'); ax.set_ylabel('$du/dx$')
ax.legend()
fig.tight_layout(); plt.show()
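The reference solution plugs in the known optimal weights; as a sketch of learning them from data instead, each retained Fourier mode admits a closed-form per-mode least-squares fit on the (U, dU) pairs (a stand-in for full gradient descent; variable names are illustrative):

```python
import numpy as np

# Synthetic pairs on [0, 2*pi): u built from modes k=1..5, plus its exact derivative
N, L, n_train = 128, 2 * np.pi, 200
x = np.linspace(0, L, N, endpoint=False)
rng = np.random.default_rng(0)
ks = np.arange(1, 6)
U = np.zeros((n_train, N)); dU = np.zeros((n_train, N))
for i in range(n_train):
    a = rng.standard_normal(len(ks))
    ph = rng.uniform(0, 2 * np.pi, len(ks))
    U[i] = sum(ai * np.cos(k * x + p) for ai, k, p in zip(a, ks, ph))
    dU[i] = sum(-ai * k * np.sin(k * x + p) for ai, k, p in zip(a, ks, ph))

# Per-mode least squares: min_w sum_i |w * Uhat[i,k] - dUhat[i,k]|^2 has
# the closed form w[k] = <Uhat_k, dUhat_k> / <Uhat_k, Uhat_k>
Uf, dUf = np.fft.rfft(U, axis=1), np.fft.rfft(dU, axis=1)
K = 8
num = np.sum(np.conj(Uf[:, :K]) * dUf[:, :K], axis=0)
den = np.sum(np.abs(Uf[:, :K]) ** 2, axis=0) + 1e-12
w_fit = num / den
# The true operator is w[k] = 2*pi*i*k/L = i*k here; the fit recovers it
# for the modes present in the training data (k = 1..5)
print(np.round(w_fit.imag, 3))
```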
Exercise 8 ★★★ — Number Theoretic Transform and Polynomial Multiplication
The NTT performs exact convolution in $\mathbb{Z}_p$ without floating-point error. It is the foundation of FHE polynomial arithmetic.
Tasks:
- Implement `ntt(a, invert=False)` using the Cooley-Tukey butterfly over $\mathbb{Z}_p$ with $p = 998244353$ and primitive root $g = 3$.
- Implement `poly_mul_ntt(p1, p2)` using NTT-based convolution.
- Verify: $(1+x)^2 = 1 + 2x + x^2$, and a larger example against `numpy.polymul`.
- Benchmark NTT vs naive polynomial multiplication for $N$ up to $2^{14}$.
- Challenge: Implement a ring-LWE encryption of a single plaintext polynomial using NTT-based multiplication in $\mathbb{Z}_q[x]/(x^n + 1)$.
Code cell 26
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 27
# Solution
# Exercise 8 solution
MOD = 998244353
G_ROOT = 3
def ntt(a, invert=False):
    n = len(a); a = list(a)
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit: j ^= bit; bit >>= 1
        j ^= bit
        if i < j: a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(G_ROOT, MOD-1-(MOD-1)//length if invert else (MOD-1)//length, MOD)
        for i in range(0, n, length):
            wn = 1
            for kk in range(length // 2):
                u = a[i+kk]; v = a[i+kk+length//2] * wn % MOD
                a[i+kk] = (u+v) % MOD; a[i+kk+length//2] = (u-v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        ni = pow(n, MOD-2, MOD)
        a = [x*ni % MOD for x in a]
    return a

def poly_mul_ntt(p1, p2):
    rlen = len(p1)+len(p2)-1
    n = 1
    while n < rlen: n <<= 1
    fa = ntt(p1+[0]*(n-len(p1))); fb = ntt(p2+[0]*(n-len(p2)))
    fc = [(a*b) % MOD for a, b in zip(fa, fb)]
    return ntt(fc, invert=True)[:rlen]
# Verification
print('Verification:')
print(f' (1+x)^2 = {poly_mul_ntt([1,1],[1,1])} expect [1,2,1]')
p1 = [1,2,3,4,5]; p2 = [6,7,8,9,10]
res_ntt = poly_mul_ntt(p1, p2)
res_np = list(np.polymul(p1[::-1],p2[::-1])[::-1].astype(int))
print(f' Large poly: match = {res_ntt == res_np}')
# Benchmark
import time
Ns_b = [2**k for k in range(1, 15)]
t_ntt_l, t_naive_l = [], []
for Nb in Ns_b:
    pp1 = list(np.random.randint(0, 1000, Nb))
    pp2 = list(np.random.randint(0, 1000, Nb))
    t0 = time.perf_counter()
    poly_mul_ntt(pp1, pp2)
    t_ntt_l.append(time.perf_counter() - t0)
    if Nb <= 512:
        t0 = time.perf_counter()
        _ = [sum(pp1[j]*pp2[i-j] for j in range(max(0, i-Nb+1), min(i+1, Nb)))
             for i in range(2*Nb-1)]
        t_naive_l.append(time.perf_counter() - t0)
    else:
        t_naive_l.append(None)
fig, ax = plt.subplots(figsize=(10, 6))
ax.loglog(Ns_b, t_ntt_l, 'o-', color=COLORS['primary'], linewidth=2, label='NTT O(N log N)')
valid = [(n,t) for n,t in zip(Ns_b,t_naive_l) if t is not None]
if valid:
    nv, tv = zip(*valid)
    ax.loglog(nv, tv, 's--', color=COLORS['error'], linewidth=2, label='Naive O(N^2)')
ref_nlogn = t_ntt_l[0]*np.array(Ns_b)*np.log2(np.array(Ns_b))/(Ns_b[0]*np.log2(Ns_b[0]))
ax.loglog(Ns_b, ref_nlogn, ':', color=COLORS['neutral'], label='O(N log N) ref')
ax.set_xlabel('Polynomial degree N'); ax.set_ylabel('Time (s)')
ax.set_title('NTT vs naive polynomial multiplication')
ax.legend()
fig.tight_layout(); plt.show()
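The challenge calls for arithmetic in $\mathbb{Z}_q[x]/(x^n + 1)$; the one genuinely new ingredient over `poly_mul_ntt` is negacyclic reduction, sketched here with toy parameters (a schoolbook product keeps the snippet self-contained; $q = 3329$ and $n = 8$ are illustrative and far too small for any real security):

```python
Q, N_RING = 3329, 8  # toy modulus and ring degree (illustrative only)

def poly_mul_negacyclic(a, b):
    """Multiply in Z_Q[x]/(x^N + 1): full product, then fold using x^N = -1."""
    full = [0] * (2 * N_RING - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            full[i + j] = (full[i + j] + ai * bj) % Q
    out = list(full[:N_RING])
    for i in range(N_RING, len(full)):
        # the coefficient of x^i folds down to x^(i-N) with a sign flip
        out[i - N_RING] = (out[i - N_RING] - full[i]) % Q
    return out

# Sanity check: x^(N-1) * x = x^N = -1 in the ring
e = [0] * N_RING; e[N_RING - 1] = 1
x1 = [0] * N_RING; x1[1] = 1
print(poly_mul_negacyclic(e, x1))  # [Q-1, 0, 0, 0, 0, 0, 0, 0]
```

Swapping the schoolbook loop for an NTT over a negacyclic-friendly modulus is the standard production path; the folding step is what makes the ring multiplication "wrap with a minus sign" rather than cyclically.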
Exercise 9 ★ — Zero-Padding and Frequency Grid Density
Zero-pad a sampled sinusoid and show that padding densifies the plotted frequency grid without increasing true resolution.
Code cell 29
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 30
# Solution
print("Exercise 9: Zero Padding")
fs = 128
n = np.arange(64)
x = np.sin(2*np.pi*13.5*n/fs)
f1 = np.fft.fftfreq(len(x), d=1/fs)
f2 = np.fft.fftfreq(512, d=1/fs)
print("original bin spacing:", abs(f1[1]-f1[0]))
print("padded bin spacing:", abs(f2[1]-f2[0]))
assert abs(f2[1]-f2[0]) < abs(f1[1]-f1[0])
print("Takeaway: padding interpolates the spectrum; it does not create new measurements.")
Exercise 10 ★ — DFT Parseval Identity
Verify the finite-dimensional Parseval identity for the unnormalized NumPy FFT convention.
Code cell 32
# Your Solution
print("Write your solution here, then compare with the reference solution below.")
Code cell 33
# Solution
print("Exercise 10: DFT Parseval")
rng = np.random.default_rng(123)
x = rng.normal(size=32) + 1j * rng.normal(size=32)
X = np.fft.fft(x)
time_energy = np.sum(np.abs(x)**2)
freq_energy = np.sum(np.abs(X)**2) / len(x)
print("time energy:", round(float(time_energy), 8))
print("frequency energy:", round(float(freq_energy), 8))
assert np.allclose(time_energy, freq_energy)
print("Takeaway: the DFT is unitary after the correct normalization.")
Summary
These exercises covered the full DFT/FFT landscape:
| Exercise | Core idea |
|---|---|
| 1 | DFT as a unitary matrix; Parseval holds exactly |
| 2 | Cooley-Tukey halving reduces $O(N^2)$ to $O(N \log N)$ |
| 3 | Window choice trades main-lobe width for sidelobe attenuation |
| 4 | Resolution is locked to observation time; zero-padding only interpolates |
| 5 | COLA enables perfect reconstruction from overlapping STFT frames |
| 6 | Whisper's mel pipeline = STFT + triangular filterbank + log compression |
| 7 | FNO spectral layer learns global operators at $O(N \log N)$ cost per layer |
| 8 | NTT: exact FFT over $\mathbb{Z}_p$; underpins FHE polynomial arithmetic |
| 9 | Zero-padding densifies the frequency grid without adding resolution |
| 10 | Parseval's identity holds exactly under the unnormalized FFT convention |
Next section: §04 Convolution Theorem — circular convolution $\leftrightarrow$ pointwise multiplication and its consequences for CNNs, WaveNet, S4, and Mamba.