Red Teaming and Safety Evaluations, Part 1: Intuition and Formal Definitions

1. Intuition

This section develops the intuition for red teaming and safety evaluations that the approved table of contents assigns to Chapter 18. The emphasis is alignment behavior, safety constraints, and feedback loops, not generic fine-tuning or production monitoring. Subsection 1.1 sets up a reading discipline, tables, and a diagram that the remaining subsections reuse.

1.1 Red teaming searches before users find failures

The claim that red teaming searches out failures before users find them sits at the core of red teaming and safety evaluations. The object of study is the safety attack and evaluation protocol, not merely a prompt trick or a moderation label: we ask how data, losses, policies, review processes, and safety constraints shape a model's conditional distribution over responses.

A compact way to read this subsection is through the local symbol v(x,y). It marks the alignment object being transformed: an instruction policy, a preference pair, a violation classifier, a guardrail action, or a feedback event. The details differ, but the discipline is the same: state the object, state the loss or decision rule, then audit the behavioral side effects.

$$\widehat{\operatorname{ASR}} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\{v(x_i, y_i) = 1\}.$$

Here x_i is an attack prompt, y_i the target model's response to it, and v(x_i, y_i) ∈ {0, 1} the violation verdict, so the estimator is simply the fraction of attacks that got through. The formula should not be treated as a slogan: it defines which tokens, responses, comparisons, or decisions receive gradient or operational weight, and a change in masking, sampling, rubric wording, or thresholding changes the effective objective even if the model architecture is unchanged.
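
As a minimal sketch (assuming the judging has already happened, so the verdicts arrive as a binary list), the estimator is a plain empirical mean:

def asr_hat(verdicts: list[int]) -> float:
    """Empirical attack success rate over n judged cases.

    verdicts[i] == 1 means v(x_i, y_i) = 1, i.e. a policy violation.
    """
    if not verdicts:
        raise ValueError("empty evaluation set")
    return sum(verdicts) / len(verdicts)

# Example: 3 violations out of 50 attack prompts -> 0.06
print(asr_hat([1] * 3 + [0] * 47))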

Alignment object | Mathematical question | Engineering question
---------------- | --------------------- | --------------------
Data | Which examples define the target behavior? | Who wrote, filtered, and approved them?
Objective | Which terms receive weight? | Are masks, margins, and thresholds logged?
Policy | Which actions are allowed or disallowed? | Can reviewers reproduce the decision?
Evaluation | Which metric detects regression? | Is the test private, stable, and sliced?
Feedback | Which new evidence changes training? | How does it enter the next dataset version?

Examples:

  • Treat the pre-release red-team search as part of the model contract and store the exact data version.
  • Record the prompt template, role format, policy version, and decoder settings.
  • Compare aligned and reference policies on both helpfulness and safety slices.
  • Use held-out examples that were not used to tune refusals or rewards.
  • Inspect failure cases before declaring the objective successful.

Non-examples:

  • Calling a model aligned because it sounds polite on a few prompts.
  • Training on refusals without measuring over-refusal on benign requests.
  • Using a reward model as ground truth without calibration or adversarial checks.
  • Shipping a guardrail threshold without measuring false positive and false negative rates.
  • Letting feedback logs change training without provenance or consent controls.

A useful implementation pattern is to separate policy, data, and measurement. The policy says what behavior is desired. The data supplies examples, comparisons, attacks, or feedback events. The measurement checks whether the updated system moved in the intended direction without unacceptable regressions.

policy text/rubric
      |
      v
training or guardrail data  ->  objective/threshold  ->  aligned system
      |                                                   |
      v                                                   v
audit metadata                                      held-out safety eval
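
One way to make the audit-metadata box concrete is a run manifest written alongside every evaluation; the field names and version tags below are illustrative, not a fixed schema:

import json

run_manifest = {
    # Everything the examples list says to record, in one versioned blob.
    "dataset_version": "safety-evals-v12",    # hypothetical tag
    "policy_version": "usage-policy-2024.3",  # hypothetical tag
    "rubric_version": "grader-rubric-7",      # hypothetical tag
    "prompt_template": "chat-v2",
    "role_format": "system+user",
    "decoder": {"temperature": 0.7, "top_p": 0.95, "max_tokens": 512},
}

with open("run_manifest.json", "w") as f:
    json.dump(run_manifest, f, indent=2)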

Worked reasoning pattern for red-team search (a code sketch follows the list):

  1. Name the target behavior in plain language.
  2. Write the mathematical variable that represents it.
  3. Specify which examples or comparisons estimate it.
  4. Choose the optimization loss or runtime decision rule.
  5. Define the regression metric that would prove the change became worse.
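
The five steps translate directly into a record you can fill in before running anything; this sketch only names the slots, and the example values are hypothetical:

from dataclasses import dataclass

@dataclass
class EvalSpec:
    """One slot per step of the worked reasoning pattern."""
    target_behavior: str     # step 1: plain-language description
    variable: str            # step 2: e.g. "v(x, y)" or "ASR-hat"
    estimation_set: str      # step 3: which examples or comparisons estimate it
    decision_rule: str       # step 4: optimization loss or runtime rule
    regression_metric: str   # step 5: what would prove the change got worse

spec = EvalSpec(
    target_behavior="refuse operational attack instructions",
    variable="ASR-hat on the jailbreak slice",
    estimation_set="held-out attack prompts, never used for tuning",
    decision_rule="guardrail threshold tau on the judge score",
    regression_metric="benign refusal rate must not increase",
)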

Three details are especially easy to miss in alignment work. First, the user intent distribution is not the same as the pretraining distribution. Second, safety labels are not ordinary class labels; they encode policy judgments that can change by context. Third, optimization pressure finds shortcuts, so every proxy must be monitored for Goodhart-style failures.

Failure pressure | Typical symptom | Mitigation
---------------- | --------------- | ----------
Proxy reward | High reward but worse human judgment | Hold-out preferences and adversarial review
Refusal shortcut | Safe but unhelpful responses | Measure benign refusal rate separately
Template overfit | Good on training chat format only | Evaluate alternate templates and languages
Policy ambiguity | Inconsistent labels | Adjudication and rubric revision
Feedback drift | New labels change old policy silently | Version policy, rubric, and dataset together

AI connection: this search-before-users-do discipline is part of the post-training stack used by modern assistant systems. It links the base language model to human intent, safety policy, and deployment constraints without pretending that a single loss can capture all values. The goal is not perfect alignment by formula; it is a repeatable loop where evidence, objectives, and safeguards improve together.

1.2 Safety failures are rare-event measurements

Safety failures are rare events, so measuring them is a rare-event estimation problem. A violation rate of a fraction of a percent can be unacceptable at deployment scale, yet a small evaluation set will often contain zero observed failures, so the ASR estimator from Section 1.1 needs a large n, attack sets deliberately enriched with hard cases, or both, and every reported rate should carry an uncertainty estimate. The reading discipline of Section 1.1 carries over unchanged: state the object, state the loss or decision rule, then audit the behavioral side effects.
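
As a minimal sketch of why n matters (assuming binary per-prompt verdicts and an independently sampled attack set, both simplifications), a Wilson score interval shows that even zero observed violations leave a nontrivial upper bound:

import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial rate; robust when k is 0 or small."""
    if n == 0:
        raise ValueError("need at least one trial")
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# Zero observed violations on 200 attack prompts still leaves an upper
# bound near 2%, which may be far too high for a safety budget.
lo, hi = wilson_interval(k=0, n=200)
print(f"ASR in [{lo:.4f}, {hi:.4f}] at 95% confidence")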


1.3 Attackers adapt to defenses

Attackers adapt to defenses, so a safety evaluation is a moving target rather than a fixed test set. A guardrail tuned against last quarter's attack prompts says little about the same intents rewritten to route around it, which is exactly why the object of study is the attack and evaluation protocol rather than any single prompt trick. Once a defense ships, the attack distribution shifts in response, so the ASR estimator from Section 1.1 must be re-run against refreshed, adaptive attack pools, not a frozen benchmark.
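
The following toy loop illustrates the adaptive dynamic; target_model, violates, and mutate are placeholder callables standing in for a real system under test, a judged v(x, y), and an attack rewriting strategy:

from typing import Callable

def adaptive_search(
    seeds: list[str],
    target_model: Callable[[str], str],    # placeholder: prompt -> response
    violates: Callable[[str, str], bool],  # placeholder: True iff v(x, y) = 1
    mutate: Callable[[str], str],          # placeholder: rewrite an attack
    rounds: int = 5,
) -> list[str]:
    """Toy hill-climbing red-team loop: keep what works, mutate it, retry."""
    pool = list(seeds)
    successes: list[str] = []
    for _ in range(rounds):
        survivors = [x for x in pool if violates(x, target_model(x))]
        successes.extend(survivors)
        # The next round searches around what already got through the
        # defense, which is why a fixed benchmark understates adaptive ASR.
        pool = [mutate(x) for x in survivors] or [mutate(x) for x in pool]
    return successes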


1.4 Red-team findings become training data

Red-team findings do not end with a report: successful attacks become refusal demonstrations, preference pairs, or classifier training examples in the next data version, which closes a feedback loop between evaluation and training. That loop makes provenance a first-class requirement; a finding that enters training without a recorded policy version, consent status, and dataset version silently changes behavior in ways no one can later reconstruct. The non-example from Section 1.1 applies verbatim: never let feedback logs change training without provenance or consent controls.
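
A minimal sketch of the promotion step, with illustrative field names (Finding, consent_ok, and the "refusal" target are assumptions, not a fixed schema):

from dataclasses import dataclass, asdict

@dataclass
class Finding:
    prompt: str
    response: str
    harm_category: str    # label under the current harm taxonomy
    policy_version: str   # policy under which this was judged a violation
    source: str           # e.g. "internal-red-team-2024-q3" (hypothetical tag)
    consent_ok: bool = False

def to_training_records(findings: list[Finding], dataset_version: str) -> list[dict]:
    """Promote red-team findings to training data only with full provenance."""
    records = []
    for f in findings:
        if not f.consent_ok:
            continue  # findings without consent never enter training
        rec = asdict(f)
        rec["dataset_version"] = dataset_version
        rec["target"] = "refusal"  # assumed downstream use: refusal demonstration
        records.append(rec)
    return records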


1.5 Safety evaluation versus general robustness

Safety evaluation is not the same as general robustness testing. Robustness asks whether quality degrades under distribution shift; safety evaluation asks whether a policy is violated, and it only makes sense read jointly with helpfulness, because a model can drive attack success to zero by refusing everything. That is why Section 1.1 insists on comparing aligned and reference policies on both helpfulness and safety slices, and on measuring the benign refusal rate separately from the attack success rate.
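
A small sketch of per-slice reporting (the "slice" and "flagged" field names are illustrative): the attack-slice rate reads as ASR and the benign-slice rate as over-refusal, and the two must be reported together:

def slice_rates(records: list[dict]) -> dict[str, float]:
    """Per-slice rate of flagged outcomes from judged eval records."""
    totals: dict[str, int] = {}
    hits: dict[str, int] = {}
    for r in records:
        s = r["slice"]
        totals[s] = totals.get(s, 0) + 1
        hits[s] = hits.get(s, 0) + int(r["flagged"])
    return {s: hits[s] / totals[s] for s in totals}

# Improving the attack slice while silently degrading the benign slice
# is the refusal-shortcut failure from the table in Section 1.1.
rates = slice_rates([
    {"slice": "attack", "flagged": 1},
    {"slice": "attack", "flagged": 0},
    {"slice": "benign", "flagged": 1},  # a benign request wrongly refused
    {"slice": "benign", "flagged": 0},
])
print(rates)  # {'attack': 0.5, 'benign': 0.5}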


2. Formal Definitions

This section gives the formal definitions behind the intuition above, as the approved table of contents assigns to Chapter 18. Each definition is short, because the reading discipline, estimator, and failure-pressure table of Section 1.1 apply to all of them; what changes is only the object being defined.

2.1 Harm taxonomy

A harm taxonomy is the structured set of harm categories that the safety policy recognizes, together with the rules that map a prompt-response pair (x, y) into zero or more of those categories. It is the coordinate system for everything that follows: the violation score v(x, y) is only defined relative to a taxonomy and a policy version, and changing either silently changes every downstream metric. Taxonomy, policy, and rubric must therefore be versioned together, exactly as the failure-pressure table in Section 1.1 warns.
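
A minimal encoding, with deliberately generic category names (a real taxonomy is fixed by the governing policy, not by code):

from enum import Enum

class HarmCategory(str, Enum):
    # Illustrative category names only; the authoritative list comes from
    # the safety policy and is versioned with it.
    VIOLENCE = "violence"
    SELF_HARM = "self_harm"
    CYBERCRIME = "cybercrime"
    PRIVACY = "privacy"
    MISINFORMATION = "misinformation"

TAXONOMY_VERSION = "2024.3"  # hypothetical tag; bump with any policy change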


2.2 Attack prompt

An attack prompt is an input x deliberately constructed to elicit a policy-violating response from the target model. The raw string is not the whole object: the same text can be harmless under one chat template and effective under another, so an attack is only reproducible together with its prompt template, role format, policy version, and decoder settings, the same metadata the examples in Section 1.1 require you to record.
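
A sketch of the attack prompt as a record rather than a string; the field names are illustrative:

from dataclasses import dataclass

@dataclass(frozen=True)
class AttackPrompt:
    """An attack prompt plus the context needed to reproduce its effect."""
    text: str            # the adversarial input x
    template: str        # chat/prompt template it was embedded in
    role_format: str     # e.g. system+user framing used
    policy_version: str  # policy under which success is judged
    decoder: tuple       # frozen decoder settings, e.g. (("temperature", 0.7),)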


2.3 Target model

The target model is the system under evaluation: a conditional distribution π(y | x) over responses, together with everything that shapes it at inference time, including the system prompt, any guardrails, and the decoder settings. Because sampling parameters change that conditional distribution, an ASR number is comparable across runs only when the full generation configuration is pinned and logged.
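
A sketch of pinning that configuration behind a single callable; generate is an assumed backend, and its keyword names are illustrative rather than a real API:

from typing import Callable

def make_target(
    generate: Callable[..., str],  # assumed backend: keyword args below are illustrative
    system_prompt: str,
    temperature: float = 0.7,
    seed: int = 0,
) -> Callable[[str], str]:
    """Freeze the generation configuration so y ~ pi(y|x) is reproducible."""
    def target(x: str) -> str:
        return generate(
            prompt=x,
            system=system_prompt,
            temperature=temperature,
            seed=seed,
        )
    return target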


2.4 Violation score

The violation score v(x, y) judges whether response y to prompt x violates the policy. In the simplest form it is binary, v(x, y) ∈ {0, 1}; in practice it usually comes from thresholding a continuous judge or classifier score s(x, y) at some τ, which makes the threshold part of the metric's definition. Shipping a threshold without measuring its false positive and false negative rates is the non-example from Section 1.1, because both error rates flow directly into the measured ASR.
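
A minimal sketch of the thresholded form and of auditing it against human-adjudicated labels:

def violation(score: float, tau: float) -> int:
    """v(x, y) = 1{s(x, y) >= tau}: the threshold is part of the metric."""
    return int(score >= tau)

def error_rates(scores: list[float], labels: list[int], tau: float) -> tuple[float, float]:
    """False positive and false negative rates of the thresholded judge
    against human-adjudicated labels (1 = true violation)."""
    fp = sum(1 for s, y in zip(scores, labels) if violation(s, tau) == 1 and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if violation(s, tau) == 0 and y == 1)
    neg = labels.count(0) or 1  # guard against degenerate label sets
    pos = labels.count(1) or 1
    return fp / neg, fn / pos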


2.5 Attack success rate

The attack success rate is the fraction of attack prompts whose responses are judged violations: the estimator ASR-hat defined in Section 1.1 is exactly the empirical mean of the indicator 1{v(x_i, y_i) = 1} over the attack set. The number is meaningful only relative to a fixed attack set, target model configuration, violation score, and policy version; a comparison between an aligned policy and a reference policy should hold all four fixed and vary only the policy under test.
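
A paired-comparison sketch under those conditions; aligned, reference, and judge are placeholder callables:

from typing import Callable

def asr(verdicts: list[int]) -> float:
    """Empirical attack success rate: mean of binary violation verdicts."""
    return sum(verdicts) / len(verdicts)

def compare_policies(
    attacks: list[str],
    aligned: Callable[[str], str],    # configuration pinned as in 2.3
    reference: Callable[[str], str],  # same attack set, judge, policy version
    judge: Callable[[str, str], int], # the thresholded v(x, y) from 2.4
) -> tuple[float, float]:
    """Paired ASR comparison on one attack set; everything else held fixed."""
    a = [judge(x, aligned(x)) for x in attacks]
    r = [judge(x, reference(x)) for x in attacks]
    return asr(a), asr(r)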

