Generalization Bounds: Section 3 (Finite-Class Bounds) and Section 4 (Complexity-Based Bounds)
3. Finite-Class Bounds
Finite-Class Bounds develops the part of generalization bounds specified by the approved Chapter 21 table of contents. The emphasis is statistical learning theory, not generic statistics, optimization recipes, or benchmark operations.
3.1 Hoeffding inequality recall
Recalling Hoeffding's inequality is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For Hoeffding's inequality, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for Hoeffding's inequality:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for Hoeffding's inequality is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: Hoeffding's inequality will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
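As a concrete sketch of such a notebook cell (a minimal example, assuming only numpy; the risk value, sample size, and tolerance are illustrative choices, not part of any fixed notebook API), the following simulates bounded losses for a single fixed hypothesis and checks the two-sided Hoeffding bound:

```python
import numpy as np

# Hoeffding's inequality, two-sided form, losses in [0, 1], one FIXED h:
#   P( |L_S(h) - L_D(h)| >= eps ) <= 2 * exp(-2 * m * eps**2)
rng = np.random.default_rng(0)

true_risk = 0.15       # L_D(h): ground truth of this synthetic simulation
m, eps = 500, 0.05     # sample size and accuracy tolerance (illustrative)
n_trials = 20_000      # Monte Carlo repetitions over fresh samples S

# Each row is one fresh sample of m Bernoulli(true_risk) losses.
losses = rng.random((n_trials, m)) < true_risk
empirical_risks = losses.mean(axis=1)          # L_S(h), one per trial

observed = np.mean(np.abs(empirical_risks - true_risk) >= eps)
bound = 2 * np.exp(-2 * m * eps**2)
print(f"observed deviation frequency: {observed:.4f}")
print(f"Hoeffding upper bound:        {bound:.4f}")
```

The bound should dominate the observed frequency, typically by a wide margin; that looseness is the price of holding for every distribution with losses in [0, 1].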
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using Hoeffding's inequality responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
3.2 Union bound over H
The union bound over H is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For the union bound over H, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for the union bound over H:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for the union bound over H is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: the union bound over H will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
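A minimal simulation sketch (assuming only numpy; the class size, per-hypothesis risks, and tolerance are illustrative) comparing the frequency of a uniform deviation over a finite class against the union bound:

```python
import numpy as np

# Union bound over a finite class H = {h_1, ..., h_k}:
#   P( max_h |L_S(h) - L_D(h)| >= eps ) <= 2 * |H| * exp(-2 * m * eps**2)
rng = np.random.default_rng(1)

k, m, eps = 50, 500, 0.08                    # |H|, sample size, tolerance
true_risks = rng.uniform(0.1, 0.4, size=k)   # L_D(h) for each h (synthetic)
n_trials = 2_000

failures = 0
for _ in range(n_trials):
    # 0/1 losses are simulated independently across hypotheses for brevity;
    # the union bound never assumes independence across h, so the comparison
    # is still fair.
    losses = rng.random((k, m)) < true_risks[:, None]
    deviations = np.abs(losses.mean(axis=1) - true_risks)
    failures += deviations.max() >= eps

print(f"observed uniform-deviation frequency: {failures / n_trials:.4f}")
print(f"union bound 2|H|exp(-2 m eps^2):      {2 * k * np.exp(-2 * m * eps**2):.4f}")
```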
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using the union bound over H responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
3.3 Realizable bound
The realizable bound is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For the realizable bound, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for the realizable bound:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for the realizable bound is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: the realizable bound will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
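A minimal simulation sketch (assuming only numpy; the threshold class, true threshold, and trial count are illustrative) of the realizable finite-class bound, which says that with probability at least 1 - delta every h in H consistent with S satisfies L_D(h) <= (ln|H| + ln(1/delta)) / m:

```python
import numpy as np

# Realizable finite-class bound: if some h* in H has L_D(h*) = 0 and the
# learner returns any h in H with L_S(h) = 0, then with prob >= 1 - delta
#   L_D(h_S) <= eps(m, delta) = (ln|H| + ln(1/delta)) / m
rng = np.random.default_rng(2)

grid = np.linspace(0.0, 1.0, 101)   # H: 101 threshold rules "predict x >= t"
h_star = grid[30]                   # true threshold is in H, so realizable
m, delta, n_trials = 200, 0.05, 2_000
eps = (np.log(len(grid)) + np.log(1 / delta)) / m

violations = 0
for _ in range(n_trials):
    x = rng.random(m)               # x ~ Uniform[0, 1]
    y = x >= h_star                 # noiseless labels
    # The bound covers EVERY consistent hypothesis, so track the worst one.
    consistent = [t for t in grid if np.all((x >= t) == y)]
    worst_true_risk = max(abs(t - h_star) for t in consistent)  # L_D = |t - h*|
    violations += worst_true_risk > eps

print(f"bound eps(m, delta)      = {eps:.4f}")
print(f"empirical violation rate = {violations / n_trials:.4f}  (target <= {delta})")
```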
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using the realizable bound responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
3.4 Agnostic bound
The agnostic bound is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For the agnostic bound, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for the agnostic bound:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for the agnostic bound is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: the agnostic bound will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
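A minimal sketch (assuming only numpy; |H| and delta are illustrative) that evaluates the agnostic finite-class guarantee obtained from Hoeffding's inequality plus the union bound, including the excess-risk cost paid by empirical risk minimization:

```python
import numpy as np

# Agnostic finite-class bound (Hoeffding + union bound, losses in [0, 1]):
# with prob >= 1 - delta, uniformly over h in H,
#   |L_S(h) - L_D(h)| <= sqrt( ln(2|H|/delta) / (2m) )
# and therefore ERM pays at most twice the deviation in excess true risk:
#   L_D(h_ERM) <= min_h L_D(h) + 2 * sqrt( ln(2|H|/delta) / (2m) )
H_size, delta = 10_000, 0.05

for m in (100, 1_000, 10_000, 100_000):
    dev = np.sqrt(np.log(2 * H_size / delta) / (2 * m))
    print(f"m = {m:>6d}   uniform deviation <= {dev:.4f}   "
          f"ERM excess risk <= {2 * dev:.4f}")
```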
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using the agnostic bound responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
3.5 Sample-size calculator
The sample-size calculator is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For the sample-size calculator, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for the sample-size calculator:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for the sample-size calculator is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: the sample-size calculator will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
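A minimal calculator sketch (standard library only; the specific |H|, eps, and delta values are illustrative) that inverts the two finite-class bounds for m, making the 1/eps versus 1/eps^2 contrast between the realizable and agnostic settings concrete:

```python
import math

def m_realizable(H_size: int, eps: float, delta: float) -> int:
    """Smallest m with (ln|H| + ln(1/delta)) / m <= eps (realizable case)."""
    return math.ceil((math.log(H_size) + math.log(1 / delta)) / eps)

def m_agnostic(H_size: int, eps: float, delta: float) -> int:
    """Smallest m with sqrt(ln(2|H|/delta) / (2m)) <= eps (agnostic case)."""
    return math.ceil(math.log(2 * H_size / delta) / (2 * eps**2))

for H_size in (100, 10_000, 1_000_000):
    r = m_realizable(H_size, eps=0.05, delta=0.05)
    a = m_agnostic(H_size, eps=0.05, delta=0.05)
    print(f"|H| = {H_size:>9,d}   realizable m >= {r:>6,d}   agnostic m >= {a:>7,d}")
```

Note that |H| enters only through its logarithm, which is why even a million-hypothesis class stays statistically cheap; the agnostic setting is the expensive one because of the 1/eps^2 dependence.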
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using the sample-size calculator responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
4. Complexity-Based Bounds
Complexity-Based Bounds develops the part of generalization bounds specified by the approved Chapter 21 table of contents. The emphasis is statistical learning theory, not generic statistics, optimization recipes, or benchmark operations.
4.1 VC bound
The VC bound is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For the VC bound, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for the VC bound:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for the VC bound is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: the VC bound will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
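A minimal sketch (assuming only numpy) that tabulates one textbook form of the VC deviation term; the exact constants vary across sources, so treat the numbers as order-of-magnitude guidance rather than a canonical statement:

```python
import numpy as np

def vc_term(m: int, d: int, delta: float) -> float:
    # One textbook form of the uniform VC deviation (constants vary by source):
    #   L_D(h) <= L_S(h) + sqrt( (d * (ln(2m/d) + 1) + ln(4/delta)) / m )
    return np.sqrt((d * (np.log(2 * m / d) + 1) + np.log(4 / delta)) / m)

delta = 0.05
for d in (10, 100, 1_000):
    row = "   ".join(f"m=10^{p}: {vc_term(10**p, d, delta):.3f}" for p in (3, 4, 5, 6))
    print(f"d = {d:>5d}   {row}")
```

Note how the term stays vacuous (larger than 1) until m is well past d; that is the honest message of a distribution-free capacity bound.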
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using the VC bound responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
4.2 Rademacher preview
This preview of Rademacher complexity is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For Rademacher complexity, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for Rademacher complexity:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for Rademacher complexity is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: Rademacher complexity will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
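A minimal Monte Carlo sketch (assuming only numpy; the class is a synthetic table of +/-1 predictions standing in for real hypotheses) estimating empirical Rademacher complexity and comparing it with Massart's finite-class bound:

```python
import numpy as np

# Empirical Rademacher complexity of a finite class F on a fixed sample S:
#   Rhat_S(F) = E_sigma [ max_{f in F} (1/m) * sum_i sigma_i * f(x_i) ]
# with sigma_i independent uniform +/-1 signs.
rng = np.random.default_rng(3)

k, m = 50, 400
# Rows hold the values f(x_1), ..., f(x_m) of each f in F on the sample.
F_values = rng.choice([-1.0, 1.0], size=(k, m))

n_sigma = 10_000
sigmas = rng.choice([-1.0, 1.0], size=(n_sigma, m))
correlations = sigmas @ F_values.T / m      # shape (n_sigma, k)
rademacher = correlations.max(axis=1).mean()

print(f"estimated Rhat_S(F):              {rademacher:.4f}")
print(f"Massart bound sqrt(2 ln|F| / m):  {np.sqrt(2 * np.log(k) / m):.4f}")
```

The quantity is data-dependent by construction: it is computed on the fixed sample S, which is exactly what distinguishes it from the distribution-free VC view.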
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using Rademacher complexity responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
4.3 Covering number preview
This preview of covering numbers is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For covering numbers, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for covering numbers:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for covering numbers is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: covering numbers will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
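A minimal sketch (assuming only numpy; the synthetic point cloud stands in for the value vectors of a function class on a sample) that upper-bounds a covering number with a greedy cover in the empirical L2 metric:

```python
import numpy as np

# Empirical covering number N(F, eps): the fewest centers such that every
# f in F lies within eps of some center in the empirical L2 metric
#   d(f, g) = sqrt( (1/m) * sum_i (f(x_i) - g(x_i))^2 ).
# Greedy covering gives an upper bound on N(F, eps).
rng = np.random.default_rng(4)

k, m = 2_000, 20
F_values = rng.normal(size=(k, m)) / np.sqrt(m)   # synthetic stand-in class

def greedy_cover_size(points: np.ndarray, eps: float) -> int:
    uncovered = np.ones(len(points), dtype=bool)
    centers = 0
    while uncovered.any():
        c = points[np.argmax(uncovered)]           # first uncovered point
        dist = np.sqrt(((points - c) ** 2).mean(axis=1))
        uncovered &= dist > eps                    # mark the new ball covered
        centers += 1
    return centers

for eps in (0.5, 0.4, 0.3, 0.2):
    print(f"eps = {eps:.2f}  ->  greedy cover size {greedy_cover_size(F_values, eps)}")
```

The cover size grows as eps shrinks, and the logarithm of that growth (the metric entropy) is the quantity later chaining-style bounds control.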
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using covering numbers responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
4.4 Compression preview
This preview of sample compression is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For compression-based bounds, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for compression-based bounds:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for compression-based bounds is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: compression-based bounds will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
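A minimal calculator sketch (assuming only numpy; this is one common realizable-style form of the sample compression bound, and constants differ across sources): if the returned hypothesis can be reconstructed from a compression set of k training examples and is consistent with the remaining m - k points, then with probability at least 1 - delta:

```python
import numpy as np

# One common realizable-style sample compression bound:
#   L_D(h_S) <= (k * ln(m) + ln(1/delta)) / (m - k)
# where k is the size of the compression set.
def compression_bound(m: int, k: int, delta: float) -> float:
    return (k * np.log(m) + np.log(1 / delta)) / (m - k)

delta = 0.05
for k in (1, 10, 100):
    row = "   ".join(f"m=10^{p}: {compression_bound(10**p, k, delta):.5f}"
                     for p in (3, 4, 5))
    print(f"compression size k = {k:>3d}   {row}")
```

The takeaway is that the guarantee depends on how few examples determine the output, not on the parameter count of the hypothesis, which is why compression arguments remain attractive for overparameterized models.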
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using compression-based bounds responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.
4.5 Stability preview
This preview of algorithmic stability is part of the canonical scope of Generalization Bounds. The purpose is to understand when finite data can justify a claim about unseen examples, not to replace empirical evaluation or production monitoring.
In this subsection the working scope is finite-class, VC, margin, norm, stability, compression, and PAC-Bayes style bounds as tools for interpreting empirical risk. We use a distribution D, a sample S, a hypothesis class H, and a loss-derived risk. The core question is whether behavior on S can control behavior under D.
Each bound should be read operationally. For stability-based bounds, a learner is not certified by a story about model architecture. It is certified by assumptions, a class of hypotheses, a loss, a sample size, and a probability statement.
| Theory object | Meaning | AI interpretation |
|---|---|---|
| D | Unknown data distribution | User prompts, images, tokens, labels, or tasks the system will face |
| S | Finite training or evaluation sample | The observed examples available to the learner or auditor |
| H | Hypothesis class | Classifiers, probes, reward models, safety filters, or predictors |
| L_S(h) | Empirical risk | Error measured on the observed sample |
| L_D(h) | True risk | Error on the distribution that matters after deployment |
Three motivating examples for stability-based bounds:
- A binary safety classifier is evaluated on a sample of labeled prompts, but the team needs a bound on future violation-detection error.
- A linear probe is trained on hidden states, and learning theory asks how much the probe's validation behavior depends on sample size and class capacity.
- A small model is fine-tuned on limited domain data, and the practitioner wants to separate approximation error from estimation error.
Two non-examples are just as important:
- A leaderboard rank without a distributional statement is not a learnability guarantee.
- A production incident report without a hypothesis class, loss, or sampling assumption is not a statistical learning theorem.
The proof habit for stability-based bounds is to identify the random object first. Sometimes the randomness is the sample S. Sometimes it is Rademacher signs. Sometimes it is label noise. Once the random object is explicit, concentration and symmetrization tools can be used without hand-waving.
A useful ASCII picture for this subsection is:
    unknown distribution D
          |  sample S
          v
    empirical learner h_S ----> empirical risk L_S(h_S)
          |
          v
    true deployment risk L_D(h_S)
The gap between the last two quantities is the reason this chapter exists. Chapter 17 measures it empirically with benchmark protocols. Chapter 21 studies when mathematics can control it before all future examples are observed.
Implementation note for the companion notebook: stability-based bounds will be demonstrated with synthetic finite samples. The code will not depend on external datasets; it will compute bounds, simulate class behavior, or plot risk decompositions so the theorem-level object is visible.
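A minimal simulation sketch (assuming only numpy; ridge regression is used as the standard example of a uniformly stable learner, and all parameter values are illustrative): it retrains after replacing single training points and reports the worst observed change in loss at a fixed test point, which should shrink as the regularization strength grows:

```python
import numpy as np

# Replace-one stability, estimated empirically. Uniform stability beta implies
# E[ L_D(h_S) - L_S(h_S) ] <= beta, so more stable learners generalize better.
rng = np.random.default_rng(5)

def ridge_fit(X, y, lam):
    # Minimizes (1/m) * ||y - X w||^2 + lam * ||w||^2.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * len(y) * np.eye(d), X.T @ y)

m, d = 200, 5
X = rng.normal(size=(m, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=m)
x_test, y_test = rng.normal(size=d), 0.0        # fixed evaluation point

for lam in (0.01, 0.1, 1.0, 10.0):
    w = ridge_fit(X, y, lam)
    base_loss = (x_test @ w - y_test) ** 2
    max_change = 0.0
    for i in range(20):                         # replace a few points in turn
        X2, y2 = X.copy(), y.copy()
        X2[i], y2[i] = rng.normal(size=d), rng.normal()
        w2 = ridge_fit(X2, y2, lam)
        max_change = max(max_change, abs((x_test @ w2 - y_test) ** 2 - base_loss))
    print(f"lambda = {lam:>5.2f}  ->  worst observed loss change {max_change:.5f}")
```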
The modern AI caution is that very large models often violate the cleanest textbook assumptions. That does not make the mathematics useless. It means the reader should distinguish theorem-level guarantees from diagnostic metaphors and engineering heuristics.
Checklist for using stability-based bounds responsibly:
- State the sample space and label space.
- State the hypothesis or function class.
- State the loss and risk definition.
- State whether the setting is realizable or agnostic.
- Track both accuracy tolerance and confidence.
- Identify whether the bound is distribution-free or data-dependent.
- Separate the theorem from the empirical measurement.
For AI systems, this discipline prevents a common confusion: empirical success is evidence, but learnability theory explains which kinds of evidence should scale with sample size, class capacity, margins, norms, and noise.
The subsection also prepares the later material. PAC learning motivates VC dimension. VC dimension motivates generalization bounds. Bias-variance decomposition gives a different error accounting. Rademacher complexity gives a data-dependent complexity view.