Reinforcement Learning, Part 3: Returns, Policies, and Value Functions, and Part 4: Bellman Equations
3. Returns, Policies, and Value Functions
Returns, policies, and value functions are part of the core mathematical path from Markov chains to modern AI agents. The emphasis is on the object definitions and update equations a learner must be able to inspect in code.
3.1 Discounted return
Purpose. Discounted return focuses on why G_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + … is central. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The discounted return is G_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + …, with discount factor γ ∈ [0, 1) weighting later rewards less. Value functions summarize its expectation; advantage functions compare an action with the policy's average behavior at the same state.
Worked reading.
With γ = 0.99, a unit reward 100 steps in the future contributes about 0.37 to today's return; with γ = 0.9 it contributes almost nothing. The discount therefore sets an effective planning horizon of roughly 1/(1 − γ) steps.
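The recursion G_t = r_{t+1} + γ G_{t+1} makes the return easy to compute with a single backward sweep. A minimal sketch (the reward list and γ are illustrative):

```python
# Discounted return via the backward recursion G_t = r_{t+1} + gamma * G_{t+1}.
def discounted_returns(rewards, gamma):
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g   # fold the next return into this step
        returns[t] = g
    return returns

rewards = [1.0, 0.0, 0.0, 1.0]       # an illustrative four-step episode
print(discounted_returns(rewards, 0.9))
```

Note that the backward sweep is O(T), whereas computing each G_t from scratch as a forward sum would be O(T²).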
| Object | Mathematical role | ML interpretation |
|---|---|---|
| state space | S, the set of situations | task context, simulator state, dialogue state |
| action space | A, the set of available decisions | moves, controls, generated tokens, tool choices |
| transition kernel | P(s′ \| s, a), next-state distribution | environment dynamics or next-context distribution |
| reward function | r(s, a), scalar feedback | scalar training signal, preference score, task score |
| policy | π(a \| s), action distribution | behavior rule or neural action distribution |
| value functions | V^π(s), Q^π(s, a) | estimates of future performance |
Examples:
- the discounted sum of per-step scores over a game episode.
- a discounted cost-to-go in a control problem.
- a sequence-level reward attributed to a full LLM response.
Non-examples:
- instant reward only.
- a metric computed on states never reached by the policy.
Derivation habit.
- State whether the policy is fixed, greedy-improved, or being optimized.
- Write the return or Bellman target before writing code.
- Identify whether the target is sampled, model-based, bootstrapped, or exact.
- Check whether the data are on-policy or off-policy.
- Mask terminal states so that value is not propagated beyond the episode.
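The last habit, masking terminal states, can be sketched as a one-step TD target in which a `done` flag zeroes out the bootstrap term (the function name and arguments here are illustrative):

```python
# One-step TD target: r + gamma * V(s') on non-terminal steps, r alone at episode end.
def td_target(reward, next_value, done, gamma):
    # (1 - done) masks the bootstrap so value never leaks past the terminal state
    return reward + gamma * (1.0 - done) * next_value

print(td_target(1.0, 5.0, done=0.0, gamma=0.9))  # bootstraps from next_value
print(td_target(1.0, 5.0, done=1.0, gamma=0.9))  # terminal: reward only
```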
Implementation lens.
In a small tabular MDP, every object can be printed: transition matrices, reward vectors, value arrays, and greedy actions. In a deep RL system, the same objects exist but are represented by neural networks, replay buffers, sampled rollouts, and approximate losses. The names should not change just because the implementation becomes larger.
For LLM work, the state is usually a prompt plus generated prefix, the action is the next token or structured action, and the policy is an autoregressive distribution. The reward may arrive only after a full response, which means the return is sequence-level even though actions are token-level.
Useful checks:
- Does the transition or sample contain the next state?
- Is the update using V(s), Q(s, a), an advantage, or a reward-only signal?
- Is the current policy also the data-collection policy?
- Does discounting express time preference or just numerical convenience?
- Is the reward model being optimized beyond its trustworthy region?
A common failure mode is to copy an update rule while silently changing the sampling assumption. For example, SARSA and Q-learning can look nearly identical in code, but one updates from the action actually taken and the other updates from a greedy next action. That distinction changes both the mathematics and the behavior.
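The SARSA vs. Q-learning distinction can be made concrete in a few lines. A sketch with an illustrative two-action Q-table and an ε-greedy behavior policy (all numbers are made up):

```python
import random

# SARSA bootstraps from the action actually taken (sampled from the policy);
# Q-learning bootstraps from the greedy next action.
Q = {(0, 0): 1.0, (0, 1): 3.0}    # illustrative Q-table for one state
gamma, eps = 0.9, 0.5
rng = random.Random(0)

def eps_greedy(s, rng):
    if rng.random() < eps:
        return rng.choice([0, 1])                    # explore
    return max([0, 1], key=lambda a: Q[(s, a)])      # exploit

r, s_next = 1.0, 0
a_next = eps_greedy(s_next, rng)                     # the action actually taken
sarsa_target = r + gamma * Q[(s_next, a_next)]                     # on-policy
qlearn_target = r + gamma * max(Q[(s_next, 0)], Q[(s_next, 1)])    # greedy
print(sarsa_target, qlearn_target)
```

Whenever exploration picks the non-greedy action, the two targets differ, and so do the learned values.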
The notebook version of this idea keeps the environment small enough to inspect by hand. If a concept is unclear, compute it first in the chain MDP and only then move to neural approximators.
3.2 Deterministic and stochastic policies
Purpose. Deterministic and stochastic policies focus on how the policy π(a | s) controls the data distribution. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
A deterministic policy maps each state to a single action, a = μ(s); a stochastic policy assigns a probability distribution over actions, a ~ π(· | s). Both are part of the bridge from sequential probability models to practical RL algorithms.
Worked reading.
The key habit is to name the state, action, reward, transition, policy, and value object before writing an update.
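The deterministic/stochastic split shows up directly in how actions are produced. A minimal sketch (the state threshold and probabilities are illustrative):

```python
import random

# A deterministic policy returns one action per state; a stochastic policy
# samples from a per-state distribution, so repeated calls can differ.
def deterministic_policy(state):
    return 0 if state < 3 else 1            # illustrative rule a = mu(s)

def stochastic_policy(state, rng):
    probs = [0.8, 0.2] if state < 3 else [0.2, 0.8]   # pi(a | s)
    return 0 if rng.random() < probs[0] else 1

rng = random.Random(0)
print(deterministic_policy(1))                          # always the same action
print([stochastic_policy(1, rng) for _ in range(5)])    # varies sample to sample
```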
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
3.3 State-value and action-value functions
Purpose. State-value and action-value functions focus on what V(s) and Q(s, a) estimate. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The state value V^π(s) is the expected return from state s when following π; the action value Q^π(s, a) is the expected return after taking action a in s and following π thereafter. Function approximation replaces tables of these values with parameterized models so the agent can generalize across large state spaces.
Worked reading.
Deep RL is powerful because neural networks share statistical strength, but unstable because approximate bootstrapping can amplify errors.
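One identity ties the two objects together: V^π(s) = Σ_a π(a | s) Q^π(s, a). A minimal numeric check (the π and Q values are illustrative):

```python
# V(s) is the policy-weighted average of Q(s, a) over actions.
pi = [0.7, 0.3]     # pi(a | s) for one fixed state s (illustrative)
q = [2.0, -1.0]     # Q(s, a) for the same state (illustrative)
v = sum(p * qa for p, qa in zip(pi, q))
print(v)            # approximately 1.1
```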
Examples:
- DQN target networks.
- experience replay.
- critic networks.
Non-examples:
- exact dynamic programming in a tiny known MDP.
- memorizing every state-action value in a table.
3.4 Advantage functions
Purpose. Advantage functions focus on why A(s, a) = Q(s, a) − V(s) reduces variance. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The advantage A^π(s, a) = Q^π(s, a) − V^π(s) measures how much better action a is than the policy's average behavior at s. Because V^π(s) acts as a baseline, subtracting it centers the learning signal without changing which actions are preferred.
Worked reading.
Since V^π(s) = Σ_a π(a | s) Q^π(s, a), the advantage has zero mean under the policy: Σ_a π(a | s) A^π(s, a) = 0. Using advantages in a policy gradient therefore leaves the expected update unchanged while reducing its variance.
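The zero-mean property of advantages can be verified numerically. A minimal sketch (the π and Q values are illustrative):

```python
# A(s, a) = Q(s, a) - V(s), with V(s) = sum_a pi(a|s) Q(s, a).
# Its policy-weighted mean is zero by construction.
pi = [0.5, 0.25, 0.25]           # pi(a | s) for one state (illustrative)
q = [1.0, 4.0, -2.0]             # Q(s, a) (illustrative)
v = sum(p * qa for p, qa in zip(pi, q))
adv = [qa - v for qa in q]
mean_adv = sum(p * a for p, a in zip(pi, adv))
print(adv, mean_adv)             # mean advantage is ~0
```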
Examples:
- DQN target networks.
- experience replay.
- critic networks.
Non-examples:
- exact dynamic programming in a tiny known MDP.
- memorizing every state-action value in a table.
3.5 Occupancy measures
Purpose. Occupancy measures focus on how policies induce weighted state-action distributions. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The occupancy measure d^π(s, a) is the discounted, normalized frequency with which policy π visits the state-action pair (s, a). Expectations in policy-gradient objectives are taken under this distribution.
Worked reading.
Occupancy measures explain why RL gradients weight states by how often the current policy visits them.
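The discounted occupancy d(s) = (1 − γ) Σ_t γ^t Pr(s_t = s) can be computed by propagating the state distribution forward. A sketch for a small chain with the policy folded into an illustrative transition matrix:

```python
# Discounted state occupancy for a fixed policy, approximated by truncating
# the infinite sum d(s) = (1 - gamma) * sum_t gamma^t * Pr(s_t = s).
P = [[0.9, 0.1, 0.0],    # P[s][s'] under the policy (illustrative kernel)
     [0.0, 0.9, 0.1],
     [0.1, 0.0, 0.9]]
gamma, start = 0.95, [1.0, 0.0, 0.0]

d = [0.0, 0.0, 0.0]
dist, weight = start[:], 1.0 - gamma
for _ in range(2000):                    # truncate the infinite horizon
    d = [d[s] + weight * dist[s] for s in range(3)]
    dist = [sum(dist[s] * P[s][s2] for s in range(3)) for s2 in range(3)]
    weight *= gamma
print(d, sum(d))                         # d is a distribution over states
```

States the policy rarely reaches get little weight, which is exactly why gradients computed under d^π mostly ignore them.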
Examples:
- the discounted state-visitation distribution under the current policy.
- the effective data distribution of on-policy rollouts.
- a replay buffer viewed as a mixture of past policies' visitation distributions.
Non-examples:
- instant reward only.
- a metric computed on states never reached by the policy.
4. Bellman Equations
Bellman equations are part of the core mathematical path from Markov chains to modern AI agents. The emphasis is on the object definitions and update equations a learner must be able to inspect in code.
4.1 Bellman expectation equation
Purpose. Bellman expectation equation focuses on recursive consistency for a fixed policy. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
Bellman equations express a global return as immediate reward plus the discounted value of the next state: V^π(s) = Σ_a π(a | s) Σ_{s′} P(s′ | s, a) [r(s, a) + γ V^π(s′)]. They are recursive consistency equations.
Worked reading.
The Bellman backup replaces a value estimate by a reward-plus-next-value target under either a fixed policy or an optimal action.
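Repeating the fixed-policy backup until it stops changing is iterative policy evaluation. A sketch on a tiny two-state, two-action MDP (all transition probabilities, rewards, and the policy are illustrative):

```python
# Iterative policy evaluation:
#   V(s) <- sum_a pi(a|s) * sum_s' P(s'|s,a) * (r(s,a) + gamma * V(s'))
P = {  # P[(s, a)] = list of next-state probabilities (illustrative)
    (0, 0): [0.8, 0.2], (0, 1): [0.1, 0.9],
    (1, 0): [0.5, 0.5], (1, 1): [0.0, 1.0],
}
R = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0}
pi = {0: [1.0, 0.0], 1: [0.0, 1.0]}      # deterministic policy, for simplicity
gamma = 0.9

V = [0.0, 0.0]
for _ in range(500):                      # enough sweeps to converge here
    V = [sum(pi[s][a] * (R[(s, a)]
                         + gamma * sum(P[(s, a)][s2] * V[s2] for s2 in range(2)))
             for a in range(2))
         for s in range(2)]
print(V)
```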
Examples:
- policy evaluation.
- value iteration.
- TD target construction.
Non-examples:
- a one-step supervised label.
- a loss that ignores future value.
4.2 Bellman optimality equation
Purpose. Bellman optimality equation focuses on recursive consistency for the best policy. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The optimality version replaces the policy average with a maximum over actions: V*(s) = max_a Σ_{s′} P(s′ | s, a) [r(s, a) + γ V*(s′)], and Q*(s, a) = Σ_{s′} P(s′ | s, a) [r(s, a) + γ max_{a′} Q*(s′, a′)].
Worked reading.
The Bellman backup replaces a value estimate by a reward-plus-next-value target under either a fixed policy or an optimal action.
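Repeating the optimal-action backup is value iteration. A sketch on the same shape of illustrative two-state MDP, reading off the greedy policy at the end:

```python
# Value iteration: V(s) <- max_a E[r(s,a) + gamma * V(s')].
P = {(0, 0): [0.8, 0.2], (0, 1): [0.1, 0.9],     # illustrative MDP
     (1, 0): [0.5, 0.5], (1, 1): [0.0, 1.0]}
R = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0}
gamma = 0.9

V = [0.0, 0.0]
for _ in range(500):
    V = [max(R[(s, a)] + gamma * sum(P[(s, a)][s2] * V[s2] for s2 in range(2))
             for a in range(2))
         for s in range(2)]

# Greedy policy with respect to the converged values.
greedy = [max(range(2),
              key=lambda a: R[(s, a)]
                            + gamma * sum(P[(s, a)][s2] * V[s2] for s2 in range(2)))
          for s in range(2)]
print(V, greedy)
```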
Examples:
- policy evaluation.
- value iteration.
- TD target construction.
Non-examples:
- a one-step supervised label.
- a loss that ignores future value.
4.3 Contraction mapping intuition
Purpose. Contraction mapping intuition focuses on why dynamic programming converges when γ < 1. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The Bellman backup T is a γ-contraction in the max norm: for any two value functions U and W, ‖TU − TW‖_∞ ≤ γ ‖U − W‖_∞. Repeated backups therefore converge to a unique fixed point whenever γ < 1.
Worked reading.
The key habit is to name the state, action, reward, transition, policy, and value object before writing an update.
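The contraction property can be watched numerically: apply the same backup to two different value functions and track their max-norm gap. A sketch with an illustrative fixed-policy chain:

```python
# One fixed-policy Bellman backup shrinks the max-norm gap between any two
# value functions by at least a factor of gamma.
P = [[0.9, 0.1], [0.2, 0.8]]     # fixed-policy transition matrix (illustrative)
r = [1.0, 0.0]
gamma = 0.9

def backup(V):
    return [r[s] + gamma * sum(P[s][s2] * V[s2] for s2 in range(2))
            for s in range(2)]

U, W = [5.0, -3.0], [0.0, 0.0]   # two arbitrary starting value functions
gap = max(abs(u - w) for u, w in zip(U, W))
for _ in range(5):
    U, W = backup(U), backup(W)
    new_gap = max(abs(u - w) for u, w in zip(U, W))
    print(new_gap, "<=", gamma * gap)   # gap contracts every sweep
    gap = new_gap
```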
Examples:
- small tabular MDPs.
- neural value functions.
- preference-optimized language policies.
Non-examples:
- static regression.
- uncontrolled simulation traces.
4.4 Matrix form for finite MDPs
Purpose. Matrix form for finite MDPs focuses on how policy evaluation becomes a linear system. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
For a finite MDP and a fixed policy π, policy evaluation is the linear system v = r_π + γ P_π v, whose solution is v = (I − γ P_π)⁻¹ r_π; the inverse exists because γ < 1 keeps the system nonsingular.
Worked reading.
The Markov property says the current state contains the predictive information needed for the next transition and reward.
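The linear system v = r_π + γ P_π v can be solved in closed form. A sketch for an illustrative two-state chain, using the explicit 2×2 inverse to stay self-contained:

```python
# Policy evaluation as a linear system: v = (I - gamma * P_pi)^(-1) r_pi.
P = [[0.9, 0.1], [0.2, 0.8]]     # fixed-policy transition matrix (illustrative)
r = [1.0, 0.0]
gamma = 0.9

# Build A = I - gamma * P and invert the 2x2 system by hand.
a, b = 1 - gamma * P[0][0], -gamma * P[0][1]
c, d = -gamma * P[1][0],    1 - gamma * P[1][1]
det = a * d - b * c
v = [( d * r[0] - b * r[1]) / det,
     (-c * r[0] + a * r[1]) / det]
print(v)

# Cross-check: the solution satisfies the Bellman equation v = r + gamma * P v.
check = [r[s] + gamma * sum(P[s][s2] * v[s2] for s2 in range(2))
         for s in range(2)]
print(check)
```

For larger state spaces the same computation is one call to a linear solver, which is why policy evaluation in a known finite MDP is "just" linear algebra.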
Examples:
- finite gridworld.
- inventory control.
- dialogue state tracking.
Non-examples:
- raw observations that omit hidden state.
- a static labeled dataset with no actions.
4.5 Bellman residuals
Purpose. Bellman residuals focus on how to diagnose approximate value functions. This is not optional vocabulary: it determines which variables are random, which distribution creates the data, and which recursive target is valid.
Operational definition.
The Bellman residual is the gap between a value estimate and its own backup target, δ(s) = V(s) − E[r + γ V(s′)]. The exact value function has zero residual, so nonzero residuals diagnose approximation error.
Worked reading.
The Bellman backup replaces a value estimate by a reward-plus-next-value target under either a fixed policy or an optimal action.
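Residuals are cheap to compute when the model is known: apply one backup and subtract. A sketch on an illustrative chain, comparing the exact value function with a perturbed estimate:

```python
# Bellman residual: delta(s) = V(s) - (r + gamma * P V)(s).
P = [[0.9, 0.1], [0.2, 0.8]]     # fixed-policy transition matrix (illustrative)
r = [1.0, 0.0]
gamma = 0.9

def residual(V):
    target = [r[s] + gamma * sum(P[s][s2] * V[s2] for s2 in range(2))
              for s in range(2)]
    return [V[s] - target[s] for s in range(2)]

V_exact = [0.28 / 0.037, 0.18 / 0.037]   # solves v = r + gamma * P v for this chain
V_rough = [V_exact[0] + 0.5, V_exact[1]] # a deliberately perturbed estimate
print(residual(V_exact))   # ~[0, 0]: self-consistent
print(residual(V_rough))   # nonzero residuals flag the approximation error
```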
Examples:
- policy evaluation.
- value iteration.
- TD target construction.
Non-examples:
- a one-step supervised label.
- a loss that ignores future value.