Welcome to the start of a new series on Reinforcement Learning! Our journey begins with a fundamental question: if we knew all the rules of a game, how could we determine the best possible strategy? This post dives into the mathematical framework for answering that, exploring how to find optimal solutions in known environments using the cornerstone algorithms of Policy and Value Iteration.
Table of contents
- 1. The Framework: Markov Decision Processes (MDPs)
- 2. The Heart of RL: The Bellman Equations
- 3. Algorithm 1: Policy Iteration
- 4. Algorithm 2: Value Iteration
- 5. Why Does Value Iteration Converge? The Contraction Mapping Proof
- 6. Practical Considerations
- 7. How Good is a Greedy Policy? Performance Bounds
- 8. A Deeper Look: Horizons and Stationarity
- What’s Next?
1. The Framework: Markov Decision Processes (MDPs)
Before we can solve anything, we need a formal language to describe the problem. In RL, this language is the Markov Decision Process (MDP). An MDP is a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision-maker.
An MDP is defined by a tuple $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$:
- $\mathcal{S}$: A finite set of states. This is a complete description of the world at a given time.
- $\mathcal{A}$: A finite set of actions available to the agent.
- $P(s' \mid s, a)$: The transition probability function. This is the probability of transitioning to state $s'$ after taking action $a$ in state $s$. This is the “model” of the environment.
- $R(s, a, s')$: The reward function. This is the immediate reward received after transitioning from state $s$ to state $s'$ as a result of action $a$.
- $\gamma$: The discount factor ($0 \le \gamma \le 1$). This value determines the importance of future rewards. A value of 0 means the agent is myopic and only cares about immediate rewards, while a value closer to 1 means the agent is farsighted.
The agent’s goal is to learn a policy, denoted by $\pi$, which is a strategy that specifies what action to take in each state. Our ultimate objective is to find the optimal policy, $\pi^*$, that maximizes the expected cumulative discounted reward, $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$.
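To make the tuple concrete, here is a minimal sketch of how a small MDP might be encoded in plain Python. The two-state “battery” world, its names, and its numbers are all illustrative assumptions, not part of the formal definition above.

```python
# A hypothetical 2-state, 2-action MDP encoded as plain Python data.
# States: 0 = "high charge", 1 = "low charge"; actions: 0 = "wait", 1 = "search".
S = [0, 1]
A = [0, 1]
GAMMA = 0.9  # discount factor

# P[s][a] is a list of (probability, next_state, reward) triples,
# bundling the transition function P(s'|s,a) and the reward R(s,a,s').
P = {
    0: {0: [(1.0, 0, 0.0)],                    # waiting keeps the battery high
        1: [(0.7, 0, 1.0), (0.3, 1, 1.0)]},    # searching earns reward, may drain it
    1: {0: [(1.0, 0, 0.0)],                    # waiting recharges
        1: [(0.6, 1, 1.0), (0.4, 0, -2.0)]},   # searching risks a rescue penalty
}

# Sanity check: outgoing probabilities from every (s, a) pair sum to 1.
for s in S:
    for a in A:
        assert abs(sum(p for p, _, _ in P[s][a]) - 1.0) < 1e-12
```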
2. The Heart of RL: The Bellman Equations
To find the optimal policy, we first need a way to measure how good a policy is. We do this with value functions.
- State-Value Function $V^\pi(s)$: The expected return when starting in state $s$ and following policy $\pi$ thereafter.
- Action-Value Function $Q^\pi(s, a)$: The expected return when starting in state $s$, taking action $a$, and then following policy $\pi$ thereafter.
These two functions are connected by the Bellman Expectation Equation, which provides a recursive definition of value:

$$V^\pi(s) = \sum_a \pi(a \mid s) \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V^\pi(s') \big]$$
This equation says that the value of a state under a policy is the sum of the expected immediate rewards and the expected discounted future rewards.
While this tells us the value of a given policy, it doesn’t tell us how to find the best policy. For that, we need the Bellman Optimality Equation:

$$V^*(s) = \max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V^*(s') \big]$$
The only difference is the max operator. Instead of averaging over the policy’s actions, it chooses the best action at each step. If we can solve this equation, we can find the optimal policy. The algorithms below are methods for solving it.
3. Algorithm 1: Policy Iteration
Policy Iteration is a “two-step dance” that reliably finds the optimal policy. It iterates between evaluating a policy and then improving it.
Step 1: Policy Evaluation
First, we take a policy $\pi$ and compute its value function $V^\pi$. We start with an arbitrary value function $V_0$ and iteratively apply the Bellman Expectation Equation as an update rule:

$$V_{k+1}(s) = \sum_a \pi(a \mid s) \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V_k(s') \big]$$
We repeat this for all states until the value function converges (i.e., $\max_s |V_{k+1}(s) - V_k(s)| < \epsilon$ for some small tolerance $\epsilon$).
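The evaluation loop described above can be sketched in a few lines of Python. The encoding here (`P[s][a]` as a list of `(probability, next_state, reward)` triples) and every name are assumptions for illustration, not a fixed API.

```python
def policy_evaluation(P, S, pi, gamma=0.9, theta=1e-10):
    """Iterate the Bellman expectation update until the largest change
    across states drops below the tolerance theta.

    pi[s] maps each action to its probability under the policy;
    P[s][a] is a list of (probability, next_state, reward) triples.
    """
    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            v_new = sum(
                pa * p * (r + gamma * V[s2])
                for a, pa in pi[s].items()
                for p, s2, r in P[s][a]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

# One-state sanity check: a self-loop paying reward 1 every step under
# gamma = 0.9 should evaluate to 1 / (1 - 0.9) = 10.
P_loop = {0: {0: [(1.0, 0, 1.0)]}}
V = policy_evaluation(P_loop, [0], {0: {0: 1.0}})
print(round(V[0], 6))  # 10.0
```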
Step 2: Policy Improvement
Now that we have the value function for our policy, we can improve the policy. For each state, we find the action that leads to the highest expected return, according to our just-calculated value function. This gives us a new, greedy policy $\pi'$:

$$\pi'(s) = \arg\max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V^\pi(s') \big]$$
The Policy Improvement Theorem guarantees that this new policy $\pi'$ is at least as good as, if not better than, the original policy $\pi$.
We repeat these two steps—evaluation and improvement—until the policy stabilizes. At that point, the policy is optimal.
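Putting the two steps together, the full loop can be sketched as below. The MDP encoding (`P[s][a]` as `(probability, next_state, reward)` triples) and every name are hypothetical; this is an illustration, not a reference implementation.

```python
def policy_iteration(P, S, A, gamma=0.9, theta=1e-10):
    """Alternate iterative policy evaluation and greedy improvement
    until the (deterministic) policy stops changing."""
    def q(s, a, V):
        # One-step lookahead: expected reward plus discounted next value.
        return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

    pi = {s: A[0] for s in S}  # arbitrary initial deterministic policy
    while True:
        # Step 1: policy evaluation (iterated to tolerance theta)
        V = {s: 0.0 for s in S}
        while True:
            delta = 0.0
            for s in S:
                v_new = q(s, pi[s], V)
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < theta:
                break
        # Step 2: greedy policy improvement
        stable = True
        for s in S:
            best = max(A, key=lambda a: q(s, a, V))
            if q(s, best, V) > q(s, pi[s], V) + 1e-12:  # strict improvement only
                pi[s], stable = best, False
        if stable:
            return pi, V

# One state, two actions: action 1 pays 1 per step, action 0 pays nothing.
P_toy = {0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 0, 1.0)]}}
pi, V = policy_iteration(P_toy, [0], [0, 1])
print(pi[0])  # 1  (the rewarding action)
```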
4. Algorithm 2: Value Iteration
Value Iteration is a more direct approach that combines the two steps of Policy Iteration into one. It directly iterates on the Bellman Optimality Equation:

$$V_{k+1}(s) = \max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V_k(s') \big]$$
We start with an arbitrary value function and repeatedly apply this update to all states. This process is guaranteed to converge to the optimal value function $V^*$ because the Bellman operator is a contraction mapping, which mathematically ensures it has a unique fixed point that the iterations will converge to.

Once we have $V^*$, we can extract the optimal policy by acting greedily with respect to it:

$$\pi^*(s) = \arg\max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V^*(s') \big]$$
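A minimal sketch of Value Iteration followed by greedy extraction; the MDP encoding (`P[s][a]` as `(probability, next_state, reward)` triples) and all names are illustrative assumptions.

```python
def value_iteration(P, S, A, gamma=0.9, theta=1e-10):
    """Iterate the Bellman optimality update, then extract the greedy policy."""
    def q(s, a, V):
        return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            v_new = max(q(s, a, V) for a in A)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            break
    # Greedy policy extraction from the converged value function.
    pi = {s: max(A, key=lambda a: q(s, a, V)) for s in S}
    return V, pi

# Same shape of toy problem: the rewarding self-loop should win.
P_toy = {0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 0, 1.0)]}}
V, pi = value_iteration(P_toy, [0], [0, 1])
print(pi[0])  # 1
```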
5. Why Does Value Iteration Converge? The Contraction Mapping Proof
We mentioned that Value Iteration is guaranteed to converge because the Bellman operator is a contraction mapping. This is a fundamental concept from mathematics that, when applied here, provides the theoretical bedrock for our algorithm. Let’s briefly walk through the proof.
A mapping (or operator) $T$ is a contraction if, for any two vectors $V_1$ and $V_2$, the following inequality holds:

$$\|T V_1 - T V_2\|_\infty \le \gamma \|V_1 - V_2\|_\infty$$

where $0 \le \gamma < 1$. In plain English, applying the operator to any two value functions always brings them closer together (scaled by $\gamma$). The Banach Fixed-Point Theorem states that if an operator is a contraction on a complete metric space, it has a unique fixed point, and iterating the operator from any starting point will converge to that fixed point.
Proof for the Bellman Optimality Operator ($T^*$)
Let’s prove this for the Bellman optimality operator $T^*$ used in Value Iteration.
Consider two arbitrary value functions, $V_1$ and $V_2$. Let’s look at the difference for a single state $s$, using the fact that $|\max_a f(a) - \max_a g(a)| \le \max_a |f(a) - g(a)|$:

$$\begin{aligned} |(T^* V_1)(s) - (T^* V_2)(s)| &= \left| \max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V_1(s') \big] - \max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V_2(s') \big] \right| \\ &\le \max_a \left| \sum_{s'} P(s' \mid s, a) \, \gamma \big( V_1(s') - V_2(s') \big) \right| \\ &\le \gamma \max_a \sum_{s'} P(s' \mid s, a) \, |V_1(s') - V_2(s')| \\ &\le \gamma \|V_1 - V_2\|_\infty \end{aligned}$$

This holds for any state $s$, so it must also hold for the maximum over all states:

$$\|T^* V_1 - T^* V_2\|_\infty \le \gamma \|V_1 - V_2\|_\infty$$
This completes the proof that the Bellman optimality operator $T^*$ is a contraction. A nearly identical, but slightly simpler, proof (since it doesn’t involve the max operator) shows that the policy-specific Bellman operator $T^\pi$ is also a contraction. This is why both Value Iteration and Policy Evaluation are guaranteed to converge.
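We can also check the contraction property numerically: apply the Bellman optimality operator to many random pairs of value functions and verify that the sup-norm distance shrinks by at least a factor of $\gamma$. The tiny 2-state MDP below is a made-up example, not from the post.

```python
import random

def bellman_opt(V, P, S, A, gamma):
    """One application of the Bellman optimality operator T*."""
    return {
        s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in A)
        for s in S
    }

# A hypothetical 2-state, 2-action MDP, used only to exercise the bound.
S, A, gamma = [0, 1], [0, 1], 0.9
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.5, 0, 1.0), (0.5, 1, 0.0)]},
    1: {0: [(1.0, 1, 0.5)], 1: [(1.0, 0, -1.0)]},
}

random.seed(0)
for _ in range(1000):
    V1 = {s: random.uniform(-10, 10) for s in S}
    V2 = {s: random.uniform(-10, 10) for s in S}
    before = max(abs(V1[s] - V2[s]) for s in S)
    T1 = bellman_opt(V1, P, S, A, gamma)
    T2 = bellman_opt(V2, P, S, A, gamma)
    after = max(abs(T1[s] - T2[s]) for s in S)
    assert after <= gamma * before + 1e-9  # ||T*V1 - T*V2|| <= gamma * ||V1 - V2||
print("contraction held on 1000 random pairs")
```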
6. Practical Considerations
Applying these algorithms brings up important practical issues:
- **Reward Hacking**: The agent will always find the optimal way to maximize the reward you give it. If that reward function doesn’t perfectly capture your true goal, the agent’s “optimal” behavior might be surprising and undesirable. Designing good reward functions is a critical and challenging part of RL.
- **Horizons and Discounting**: The discount factor $\gamma$ defines the agent’s “effective horizon.” A smaller $\gamma$ leads to a more short-sighted agent, while a $\gamma$ closer to 1 makes it more farsighted. The choice of $\gamma$ can dramatically change the optimal policy, especially in problems with delayed rewards.
7. How Good is a Greedy Policy? Performance Bounds
A fascinating theoretical question is: if we stop Value Iteration early, we get an imperfect value function $V$. If we then extract the greedy policy $\pi_V$ from this imperfect $V$, how bad can that policy be compared to the true optimal policy $\pi^*$?
This is where the Bellman Residual (or Bellman error) comes in. It measures how much a value function $V$ fails to satisfy the Bellman Optimality Equation:

$$\epsilon = \|T^* V - V\|_\infty = \max_s \left| \max_a \sum_{s'} P(s' \mid s, a) \big[ R(s, a, s') + \gamma V(s') \big] - V(s) \right|$$

If $V = V^*$, the error is zero. For any other $V$, it’s positive. It turns out we can use this error to create a powerful performance guarantee.
The Performance Bound Theorem
The theorem states that for a greedy policy $\pi_V$ extracted from an arbitrary value function $V$:

$$\|V^* - V^{\pi_V}\|_\infty \le \frac{2 \|T^* V - V\|_\infty}{1 - \gamma} = \frac{2\epsilon}{1 - \gamma}$$

This result tells us that the performance of our greedy policy ($V^{\pi_V}$) is guaranteed to be no worse than a certain amount below the true optimal performance ($V^*$). This performance gap is directly proportional to the Bellman error of the value function we started with.
A Quick But Important Clarification: The Cast of Characters
A common point of confusion in these proofs is keeping track of the different value functions and policies. Let’s be explicit:
- $V$: An arbitrary, imperfect value function. Think of this as the output of running Value Iteration for a few steps.
- $\pi_V$: The policy we get by being greedy once with respect to $V$. After this single extraction, $\pi_V$ is fixed.
- $V^{\pi_V}$: The true value of following the fixed policy $\pi_V$ forever. We find this by solving the Bellman Expectation Equation for $\pi_V$ (i.e., finding the fixed point of the operator $T^{\pi_V}$). The iterative process to find it is Policy Evaluation; the policy does not change during this process.
- $V^*$: The true optimal value function for the MDP.
- $\pi^*$: The true optimal policy, which is greedy with respect to $V^*$.

The core of the proof relies on the fact that $\pi_V$ is greedy with respect to $V$, but its actual long-term value is $V^{\pi_V}$. The goal is to bound the difference between $V^*$ and $V^{\pi_V}$.
The Proof
Let’s walk through the proof, as it’s a great example of how these theoretical concepts connect. Our goal is to bound the “value loss,” $\|V^* - V^{\pi_V}\|_\infty$.
- **Start with the value loss and apply the triangle inequality:**

  $$\|V^* - V^{\pi_V}\|_\infty \le \|V^* - V\|_\infty + \|V - V^{\pi_V}\|_\infty$$
- **Bound each term separately.** To do this, we need to prove a general and very useful lemma that bounds the distance between an arbitrary value function $V$ and a policy’s true value function $V^\pi$. The lemma states:

  $$\|V^\pi - V\|_\infty \le \frac{\|T^\pi V - V\|_\infty}{1 - \gamma}$$
  Let’s prove this, as it’s a classic RL proof technique. Using the fact that $V^\pi$ is the fixed point of $T^\pi$, the triangle inequality, and the contraction property of $T^\pi$:

  $$\begin{aligned} \|V^\pi - V\|_\infty &= \|T^\pi V^\pi - V\|_\infty \\ &\le \|T^\pi V^\pi - T^\pi V\|_\infty + \|T^\pi V - V\|_\infty \\ &\le \gamma \|V^\pi - V\|_\infty + \|T^\pi V - V\|_\infty \end{aligned}$$

  Rearranging the final inequality gives the result:

  $$\|V^\pi - V\|_\infty \le \frac{\|T^\pi V - V\|_\infty}{1 - \gamma}$$

  An identical proof (just replacing $T^\pi$ with $T^*$) shows the same relationship for the optimal value function:

  $$\|V^* - V\|_\infty \le \frac{\|T^* V - V\|_\infty}{1 - \gamma}$$
- **Substitute the bounds back in.** Applying the lemma to both terms of the triangle inequality:

  $$\|V^* - V^{\pi_V}\|_\infty \le \frac{\|T^* V - V\|_\infty}{1 - \gamma} + \frac{\|T^{\pi_V} V - V\|_\infty}{1 - \gamma}$$
- **Use the greedy policy definition.** Our policy $\pi_V$ was extracted greedily from $V$. This means that for any state, applying the policy-specific operator $T^{\pi_V}$ to $V$ is the same as applying the full optimality operator $T^*$. In other words, $T^{\pi_V} V = T^* V$. We can now substitute this into our inequality:

  $$\|V^* - V^{\pi_V}\|_\infty \le \frac{2 \|T^* V - V\|_\infty}{1 - \gamma}$$
- **Final Result.** Letting $\epsilon = \|T^* V - V\|_\infty$, we have shown that for any state $s$:

  $$V^{\pi_V}(s) \ge V^*(s) - \frac{2\epsilon}{1 - \gamma}$$

  Rearranging this gives the final performance bound:

  $$\|V^* - V^{\pi_V}\|_\infty \le \frac{2\epsilon}{1 - \gamma}$$
This provides a crucial stopping condition for our algorithms and a certificate of quality for our final policy.
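The bound can be checked numerically: stop Value Iteration early, measure the Bellman residual, extract the greedy policy, evaluate it, and compare against $V^*$. The small MDP and every name below are illustrative assumptions.

```python
def check_performance_bound():
    """Stop Value Iteration early, extract the greedy policy, and verify
    ||V* - V^pi|| <= 2 * ||T*V - V|| / (1 - gamma) in the sup norm."""
    gamma = 0.9
    S, A = [0, 1], [0, 1]
    # A made-up 2-state, 2-action MDP: P[s][a] lists (prob, next_state, reward).
    P = {
        0: {0: [(1.0, 0, 0.0)], 1: [(0.5, 0, 1.0), (0.5, 1, 0.0)]},
        1: {0: [(1.0, 1, 0.5)], 1: [(1.0, 0, -1.0)]},
    }

    def q(s, a, V):
        return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

    def T_opt(V):
        return {s: max(q(s, a, V) for a in A) for s in S}

    # V*: iterate the optimality update to (near) convergence.
    V_star = {s: 0.0 for s in S}
    for _ in range(2000):
        V_star = T_opt(V_star)

    # V: stop early after only 3 sweeps, measure its Bellman residual,
    # then extract the greedy policy from this imperfect V.
    V = {s: 0.0 for s in S}
    for _ in range(3):
        V = T_opt(V)
    eps = max(abs(T_opt(V)[s] - V[s]) for s in S)
    pi = {s: max(A, key=lambda a: q(s, a, V)) for s in S}

    # V^pi: evaluate the now-fixed greedy policy by iteration.
    V_pi = {s: 0.0 for s in S}
    for _ in range(2000):
        V_pi = {s: q(s, pi[s], V_pi) for s in S}

    gap = max(abs(V_star[s] - V_pi[s]) for s in S)
    assert gap <= 2 * eps / (1 - gamma) + 1e-9
    return eps, gap

eps, gap = check_performance_bound()
print(f"residual: {eps:.4f}, gap: {gap:.4f}, bound: {2 * eps / (1 - 0.9):.4f}")
```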
Numerical Intuition: The Cost of Myopia
The term $\frac{1}{1 - \gamma}$ is a crucial part of these bounds. It represents how much the single-step Bellman error can be magnified into the final performance gap.

Consider a discount factor $\gamma = 0.9$. This represents an agent that is fairly farsighted. The error magnification term becomes $\frac{1}{1 - 0.9} = 10$. Let’s see how this plays out.
- **Value Function Error ($10\epsilon$)**: One of our key lemmas showed that the error of our value function $V$ compared to the optimal $V^*$ is bounded by:

  $$\|V^* - V\|_\infty \le \frac{\epsilon}{1 - \gamma}$$

  Plugging in $\gamma = 0.9$, we get $10\epsilon$. This means the total error in our value function estimate can be up to 10 times the single-step Bellman error we can measure.

- **Policy Performance Loss ($20\epsilon$)**: The final performance bound theorem states that the performance loss of the greedy policy is bounded by:

  $$\|V^* - V^{\pi_V}\|_\infty \le \frac{2\epsilon}{1 - \gamma}$$

  Plugging in $\gamma = 0.9$ here gives us $20\epsilon$. The final performance loss of our greedy policy is bounded by 20 times the Bellman error.
If we have a more myopic agent, say with $\gamma = 0.5$, the magnification is only $\frac{2}{1 - 0.5} = 4$. A smaller $\gamma$ means future errors are discounted more heavily, so the single-step error has less impact on the total value. Conversely, as $\gamma$ approaches 1, the magnification factor approaches infinity, meaning that even tiny single-step errors can accumulate into a massive performance loss over an extremely long horizon. This gives us a concrete way to understand the trade-off between horizon and the precision required in our value function approximation.
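The magnification factors are simple arithmetic; here is a tiny sketch (the function name is my own) for completeness:

```python
def magnification(gamma):
    """Error magnification factors implied by the two bounds:
    ||V* - V||    <= eps / (1 - gamma)      (value function error)
    ||V* - V^pi|| <= 2 * eps / (1 - gamma)  (policy performance loss)
    """
    value_error = 1.0 / (1.0 - gamma)
    policy_loss = 2.0 / (1.0 - gamma)
    return value_error, policy_loss

v, p = magnification(0.9)
print(round(v), round(p))  # 10 20  -- the farsighted agent
v, p = magnification(0.5)
print(round(v), round(p))  # 2 4  -- a more myopic agent
```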
8. A Deeper Look: Horizons and Stationarity
A crucial distinction in MDPs is whether the problem has a finite or infinite horizon. This choice fundamentally changes the nature of the optimal policy.
- **Finite Horizon ($H$)**: The agent has a fixed number of timesteps, $H$, to complete its task. The optimal policy in this setting is non-stationary. This means the best action for a state $s$ can change depending on how much time is left.

- **Infinite Horizon ($\gamma$)**: The episode continues until a terminal state is reached, and future rewards are discounted by $\gamma$. The optimal policy here is stationary. The best action for a state $s$ is the same regardless of whether it’s the 1st timestep or the 1000th.
Can we find a $\gamma$ that makes the infinite-horizon policy match a finite-horizon one?

- **Sometimes, yes.** For the simple case of $H = 1$, the agent is completely myopic. This is perfectly replicated by setting $\gamma = 0$, which also makes the agent completely myopic.
- **But not always.** In general, you cannot find a single $\gamma$ that will replicate the time-dependent, non-stationary logic of an arbitrary finite-horizon policy with horizon $H$. The two frameworks solve fundamentally different problems.
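Backward induction makes the non-stationarity concrete. In this made-up “cash vs. invest” MDP (all names and numbers are illustrative assumptions), the greedy choice with one step remaining differs from the choice with three steps remaining:

```python
def backward_induction(P, S, A, H):
    """Finite-horizon DP: V[t][s] and pi[t][s] with t timesteps remaining
    (no discounting; the horizon itself limits the return)."""
    V = {0: {s: 0.0 for s in S}}
    pi = {}
    for t in range(1, H + 1):
        V[t], pi[t] = {}, {}
        for s in S:
            qs = {a: sum(p * (r + V[t - 1][s2]) for p, s2, r in P[s][a])
                  for a in A}
            pi[t][s] = max(qs, key=qs.get)
            V[t][s] = qs[pi[t][s]]
    return V, pi

# "cash" pays 1 now and stays put; "invest" pays 0 now but reaches a
# state that pays 10 per step afterwards.
S, A = ["s0", "s1"], ["cash", "invest"]
P = {
    "s0": {"cash": [(1.0, "s0", 1.0)], "invest": [(1.0, "s1", 0.0)]},
    "s1": {"cash": [(1.0, "s1", 10.0)], "invest": [(1.0, "s1", 10.0)]},
}
_, pi = backward_induction(P, S, A, H=3)
print(pi[1]["s0"], pi[3]["s0"])  # cash invest -- the best action depends on time left
```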
What’s Next?
Both Policy and Value Iteration are powerful algorithms, but they share a critical requirement: we must have a perfect model of the environment (the transition probabilities $P(s' \mid s, a)$ and reward function $R(s, a, s')$). In many real-world scenarios, we don’t have this.
In the next post, we’ll explore model-free reinforcement learning, where we learn optimal policies directly from experience, without ever needing to know the underlying dynamics of the world. This will lead us to foundational algorithms like Monte Carlo methods and Temporal Difference learning.