Policy gradient methods are a class of reinforcement learning algorithms.
Policy gradient methods are a sub-class of policy optimization methods. Unlike value-based methods, which learn a value function to derive a policy, policy optimization methods directly learn a policy function $\pi$ that selects actions without consulting a value function. For policy gradient methods to apply, the policy function $\pi_\theta$ is parameterized by a differentiable parameter $\theta$.
In policy-based RL, the actor is a parameterized policy function $\pi_\theta$, where $\theta$ are the parameters of the actor. The actor takes as argument the state of the environment $s$ and produces a probability distribution $\pi_\theta(\cdot \mid s)$ over actions.
If the action space is discrete, then $\sum_a \pi_\theta(a \mid s) = 1$. If the action space is continuous, then $\int_a \pi_\theta(a \mid s)\,\mathrm{d}a = 1$.
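As a concrete illustration, here is a minimal sketch of such a parameterized policy for a discrete action space, assuming PyTorch; the network architecture, the state dimension of 4, and the two actions are arbitrary choices for the example.

```python
# A minimal sketch of a parameterized policy pi_theta for a discrete action
# space, assuming PyTorch. The network maps a state s to logits, and a softmax
# turns the logits into a probability distribution that sums to 1 over actions.
import torch
import torch.nn as nn

class DiscretePolicy(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        logits = self.net(state)                                # unnormalized scores
        return torch.distributions.Categorical(logits=logits)  # pi_theta(. | s)

policy = DiscretePolicy(state_dim=4, n_actions=2)
dist = policy(torch.zeros(4))        # distribution over actions for one state
assert torch.isclose(dist.probs.sum(), torch.tensor(1.0))
```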
The goal of policy optimization is to find some $\theta$ that maximizes the expected episodic reward $J(\theta)$: $$J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t \in 0:T} \gamma^t R_t \,\Big|\, S_0 = s_0\right]$$ where $\gamma$ is the discount factor, $R_t$ is the reward at step $t$, $s_0$ is the starting state, and $T$ is the time horizon (which can be infinite).
The policy gradient is defined as $\nabla_\theta J(\theta)$. Different policy gradient methods stochastically estimate the policy gradient in different ways. The goal of any policy gradient method is to iteratively maximize $J(\theta)$ by gradient ascent. Since the key part of any policy gradient method is the stochastic estimation of the policy gradient, they are also studied under the title of "Monte Carlo gradient estimation".[1]
The REINFORCE algorithm was the first policy gradient method.[2] It is based on the identity for the policy gradient $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t \in 0:T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t)\; \sum_{t \in 0:T} \gamma^t R_t \,\Big|\, S_0 = s_0\right]$$ which can be improved via the "causality trick"[3] $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{t \in 0:T} \nabla_\theta \ln \pi_\theta(A_t \mid S_t) \sum_{\tau \in t:T} \gamma^\tau R_\tau \,\Big|\, S_0 = s_0\right]$$
Lemma: The expectation of the score function is zero, conditional on any present or past state. That is, for any $0 \le i \le j \le T$ and any state $s_i$, we have $$\mathbb{E}_{\pi_\theta}[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_i = s_i] = 0.$$
Further, if $\Psi_i$ is a random variable that is independent of $A_i, S_{i+1}, A_{i+1}, \dots$, then $$\mathbb{E}_{\pi_\theta}[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \cdot \Psi_i \mid S_i = s_i] = 0.$$
Use the log-derivative trick $\nabla_\theta \ln \pi_\theta = \nabla_\theta \pi_\theta / \pi_\theta$.
$$\begin{aligned}\mathbb{E}_{\pi_\theta}[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_i = s_i] &= \sum_s \Pr(S_j = s \mid S_i = s_i) \sum_a \pi_\theta(a \mid s)\, \nabla_\theta \ln \pi_\theta(a \mid s) \\ &= \sum_s \Pr(S_j = s \mid S_i = s_i) \sum_a \pi_\theta(a \mid s)\, \frac{\nabla_\theta \pi_\theta(a \mid s)}{\pi_\theta(a \mid s)} \\ &= \sum_s \Pr(S_j = s \mid S_i = s_i) \sum_a \nabla_\theta \pi_\theta(a \mid s) \\ &= \sum_s \Pr(S_j = s \mid S_i = s_i)\, \nabla_\theta \sum_a \pi_\theta(a \mid s)\end{aligned}$$ Since the policy $\pi_\theta(a \mid s)$ is a probability distribution over actions for a given state, $\sum_a \pi_\theta(a \mid s) = 1$. Therefore, $$\begin{aligned}\mathbb{E}_{\pi_\theta}[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_i = s_i] &= \sum_s \Pr(S_j = s \mid S_i = s_i)\, \nabla_\theta (1) \\ &= \sum_s \Pr(S_j = s \mid S_i = s_i) \cdot 0 \\ &= 0\end{aligned}$$
By the tower law and the previous lemma.
$$\begin{aligned}\mathbb{E}_{\pi_\theta}\left[\Psi_i\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_i = s_i\right] &= \mathbb{E}_{\pi_\theta}\left[\mathbb{E}_{\pi_\theta}[\Psi_i\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_j] \,\Big|\, S_i = s_i\right] \\ &= \mathbb{E}_{\pi_\theta}\left[\Psi_i\, \mathbb{E}_{\pi_\theta}[\nabla_\theta \ln \pi_\theta(A_j \mid S_j) \mid S_j] \,\Big|\, S_i = s_i\right] \\ &= \mathbb{E}_{\pi_\theta}\left[\Psi_i \cdot 0 \,\Big|\, S_i = s_i\right] \\ &= 0\end{aligned}$$
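The lemma can also be checked numerically. The following sketch, assuming PyTorch for automatic differentiation, computes the exact expected score $\sum_a \pi_\theta(a)\,\nabla_\theta \ln \pi_\theta(a)$ for a small softmax policy over three actions (a stateless toy case) and confirms it vanishes up to floating-point error.

```python
# A quick numerical check of the lemma E[grad log pi(a)] = 0 for a small
# softmax policy over three actions, assuming PyTorch for autograd. The
# expected score is computed exactly by summing over all actions.
import torch

theta = torch.randn(3, requires_grad=True)       # logits serve as the parameters
probs = torch.softmax(theta, dim=0)              # pi_theta(a)

expected_score = torch.zeros(3)
for a in range(3):
    log_prob = torch.log_softmax(theta, dim=0)[a]
    (grad,) = torch.autograd.grad(log_prob, theta)
    expected_score += probs[a].detach() * grad   # sum_a pi(a) * grad log pi(a)

print(expected_score)   # approximately zero, up to floating-point error
```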
Applying the log-derivative trick to the whole trajectory,
$$\begin{aligned}\nabla_\theta J(\theta) &= \nabla_\theta\, \mathbb{E}_{\pi_\theta}\left[\sum_{i \in 0:T} \gamma^i R_i \,\Big|\, S_0 = s_0\right] \\ &= \mathbb{E}_{\pi_\theta}\left[\left(\sum_{i \in 0:T} \gamma^i R_i\right) \nabla_\theta \ln \pi_\theta(A_0, A_1, \dots, A_T \mid S_0, S_1, \dots, S_T) \,\Big|\, S_0 = s_0\right] \\ &= \mathbb{E}_{\pi_\theta}\left[\left(\sum_{i \in 0:T} \gamma^i R_i\right) \sum_{j \in 0:T} \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_0 = s_0\right] \\ &= \mathbb{E}_{\pi_\theta}\left[\sum_{i,j \in 0:T} \gamma^i R_i\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_0 = s_0\right]\end{aligned}$$ which is the first equation.
By the lemma, $\mathbb{E}_{\pi_\theta}\left[\gamma^i R_i\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\big|\, S_0 = s_0\right] = 0$ for any $0 \le i < j \le T$. Plugging this into the previous formula zeroes out a whole triangle of terms, giving $$\begin{aligned}\nabla_\theta J(\theta) &= \mathbb{E}_{\pi_\theta}\left[\sum_{0 \le j \le i \le T} \gamma^i R_i\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \,\Big|\, S_0 = s_0\right] \\ &= \mathbb{E}_{\pi_\theta}\left[\sum_{j \in 0:T} \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \sum_{i \in j:T} \gamma^i R_i \,\Big|\, S_0 = s_0\right]\end{aligned}$$ which is the second equation.
Thus, we have an unbiased estimator of the policy gradient: $$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{n=1}^{N} \left[\sum_{t \in 0:T} \nabla_\theta \ln \pi_\theta(A_{t,n} \mid S_{t,n}) \sum_{\tau \in t:T} \gamma^\tau R_{\tau,n}\right]$$ where the index $n$ ranges over $N$ rollout trajectories generated by the policy $\pi_\theta$.
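The inner sum $\sum_{\tau \in t:T} \gamma^\tau R_{\tau,n}$ is a discounted reward-to-go, which can be computed in a single backward pass over a rollout. A small helper, sketched here in Python with NumPy (the function name is illustrative):

```python
# A small helper that computes the discounted reward-to-go
# g[t] = sum_{tau >= t} gamma**tau * rewards[tau] appearing in the
# causality-trick estimator, for one rollout given as a list of rewards.
import numpy as np

def discounted_rewards_to_go(rewards, gamma):
    """Return an array g with g[t] = sum_{tau >= t} gamma**tau * rewards[tau]."""
    g = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += gamma**t * rewards[t]
        g[t] = running
    return g

print(discounted_rewards_to_go([1.0, 1.0, 1.0], gamma=0.9))
# [1 + 0.9 + 0.81, 0.9 + 0.81, 0.81] = [2.71, 1.71, 0.81]
```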
The score function $\nabla_\theta \ln \pi_\theta(A_t \mid S_t)$ can be interpreted as the direction in parameter space that increases the probability of taking action $A_t$ in state $S_t$. The policy gradient is then a weighted average of all possible directions that increase the probability of taking some action in some state, weighted by the reward signal: if taking a certain action in a certain state is associated with high reward, that direction is strongly reinforced, and conversely.
The REINFORCE algorithm is a loop. At each update step $t$:
1. Roll out $N$ trajectories in the environment using the current policy $\pi_{\theta_t}$.
2. Compute the policy gradient estimate $g_t$ using the estimator above.
3. Update the parameters by gradient ascent: $\theta_{t+1} = \theta_t + \alpha_t g_t$.
Here, $\alpha_t$ is the learning rate at update step $t$.
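The loop can be sketched as follows, assuming PyTorch, a Gymnasium-style environment object `env`, and a policy module like the one above; the function name `reinforce_step` and the values of `N`, `T`, and `gamma` are illustrative choices, not part of the algorithm's specification.

```python
# A minimal sketch of one REINFORCE update, assuming PyTorch and a
# Gymnasium-style environment. The surrogate loss is built so that gradient
# descent on it performs gradient ascent on the estimated policy gradient.
import torch

def reinforce_step(policy, optimizer, env, N=16, T=200, gamma=0.99):
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(N):                                   # N rollout trajectories
        state, _ = env.reset()
        log_probs, rewards = [], []
        for t in range(T):
            dist = policy(torch.as_tensor(state, dtype=torch.float32))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))      # ln pi_theta(A_t | S_t)
            state, reward, terminated, truncated, _ = env.step(action.item())
            rewards.append(reward)
            if terminated or truncated:
                break
        # reward-to-go weights: sum_{tau >= t} gamma^tau * R_tau
        g, running = [0.0] * len(rewards), 0.0
        for t in reversed(range(len(rewards))):
            running += gamma**t * rewards[t]
            g[t] = running
        for lp, w in zip(log_probs, g):
            loss = loss - lp * w                         # descent on -J is ascent on J
    (loss / N).backward()
    optimizer.step()                                     # theta <- theta + alpha_t * g_t
```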
REINFORCE is an on-policy algorithm, meaning that the trajectories used for the update must be sampled from the current policy $\pi_\theta$. This can lead to high variance in the updates, as the returns $R(\tau)$ can vary significantly between trajectories. Many variants of REINFORCE have been introduced under the heading of variance reduction.
A common way to reduce variance is the REINFORCE with baseline algorithm, based on the following identity: $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{j \in 0:T} \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \left(\sum_{i \in j:T} \gamma^i R_i - b(S_j)\right) \Big|\, S_0 = s_0\right]$$ for any function $b : \text{States} \to \mathbb{R}$. This can be proven by applying the previous lemma.
The algorithm uses the modified gradient estimator $$g_t \leftarrow \frac{1}{N} \sum_{k=1}^{N} \left[\sum_{j \in 0:T} \nabla_{\theta_t} \ln \pi_{\theta_t}(A_{j,k} \mid S_{j,k}) \left(\sum_{i \in j:T} \gamma^i R_{i,k} - b_t(S_{j,k})\right)\right]$$ and the original REINFORCE algorithm is the special case where $b_t = 0$.
If $b_t$ is chosen well, so that $b_t(S_j) \approx \mathbb{E}_{\pi_{\theta_t}}\!\left[\sum_{i \in j:T} \gamma^i R_i \,\big|\, S_j\right] = \gamma^j V^{\pi_{\theta_t}}(S_j)$, this can significantly decrease the variance of the gradient estimate. That is, the baseline should be as close as possible to the (discounted) value function $\gamma^j V^{\pi_{\theta_t}}(S_j)$, approaching the ideal of: $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{j \in 0:T} \nabla_\theta \ln \pi_\theta(A_j \mid S_j)\left(\sum_{i \in j:T} \gamma^i R_i - \gamma^j V^{\pi_\theta}(S_j)\right) \Big|\, S_0 = s_0\right]$$ Note that as the policy $\pi_{\theta_t}$ updates, the value function $V^{\pi_{\theta_t}}$ updates as well, so the baseline must be updated too. One common approach is to train a separate function that estimates the value function and use it as the baseline. This is one of the actor-critic methods, where the policy function is the actor and the value function is the critic.
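A minimal sketch of the baseline subtraction, assuming PyTorch and a hypothetical value network `value_fn` serving as the critic: the weights $\sum_{i \ge j}\gamma^i R_i - \gamma^j V(S_j)$ replace the plain rewards-to-go in the REINFORCE estimator, while a squared-error loss regresses the critic toward the observed discounted returns.

```python
# A sketch of baseline subtraction for REINFORCE with baseline, assuming
# PyTorch. `value_fn` is a hypothetical critic network mapping states to
# scalar value estimates V(S_j); `rewards_to_go` holds sum_{i>=j} gamma^i R_i.
import torch

def baseline_weights(states, rewards_to_go, value_fn, gamma):
    """Weights (sum_{i>=j} gamma^i R_i - gamma^j V(S_j)) plus a critic loss."""
    values = value_fn(states).squeeze(-1)                  # V(S_j) for each step j
    discounts = gamma ** torch.arange(len(values), dtype=torch.float32)
    weights = rewards_to_go - discounts * values.detach()  # baseline subtraction
    critic_loss = ((discounts * values - rewards_to_go) ** 2).mean()
    return weights, critic_loss
```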
The Q-function $Q^\pi$ can also be used as the critic, since $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{0 \le j \le T} \gamma^j\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \cdot Q^{\pi_\theta}(S_j, A_j) \,\Big|\, S_0 = s_0\right]$$ by a similar argument using the tower law.
Subtracting the value function as a baseline, we find that the advantage function $A^\pi(S, A) = Q^\pi(S, A) - V^\pi(S)$ can be used as the critic as well: $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{0 \le j \le T} \gamma^j\, \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \cdot A^{\pi_\theta}(S_j, A_j) \,\Big|\, S_0 = s_0\right]$$ In summary, there are many unbiased estimators of $\nabla_\theta J(\theta)$, all of the form $$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\sum_{0 \le j \le T} \nabla_\theta \ln \pi_\theta(A_j \mid S_j) \cdot \Psi_j \,\Big|\, S_0 = s_0\right]$$ where $\Psi_j$ is any linear sum of the following terms:
Some more possible choices of $\Psi_j$ are as follows, with very similar proofs.
The natural policy gradient method is a variant of the policy gradient method, proposed by Sham Kakade in 2001.[5] Unlike standard policy gradient methods, whose updates depend on the chosen parameterization $\theta$ (making them coordinate-dependent), the natural policy gradient aims to provide a coordinate-free update, which is geometrically "natural".
The standard policy gradient update $\theta_{t+1} = \theta_t + \alpha \nabla_\theta J(\theta_t)$ solves a constrained optimization problem: $$\begin{cases}\max_{\theta_{t+1}} J(\theta_t) + (\theta_{t+1} - \theta_t)^T \nabla_\theta J(\theta_t) \\ \|\theta_{t+1} - \theta_t\| \le \alpha \cdot \|\nabla_\theta J(\theta_t)\|\end{cases}$$ While the objective (linearized improvement) is geometrically meaningful, the Euclidean constraint $\|\theta_{t+1} - \theta_t\|$ introduces coordinate dependence. To address this, the natural policy gradient replaces the Euclidean constraint with a Kullback–Leibler divergence (KL) constraint: $$\begin{cases}\max_{\theta_{t+1}} J(\theta_t) + (\theta_{t+1} - \theta_t)^T \nabla_\theta J(\theta_t) \\ \bar D_{KL}(\pi_{\theta_{t+1}} \| \pi_{\theta_t}) \le \epsilon\end{cases}$$ where the KL divergence between two policies is averaged over the state distribution under policy $\pi_{\theta_t}$. That is, $$\bar D_{KL}(\pi_{\theta_{t+1}} \| \pi_{\theta_t}) := \mathbb{E}_{s \sim \pi_{\theta_t}}\left[D_{KL}\big(\pi_{\theta_{t+1}}(\cdot \mid s)\,\|\,\pi_{\theta_t}(\cdot \mid s)\big)\right]$$ This ensures updates are invariant to invertible affine parameter transformations.
For small $\epsilon$, the KL divergence is approximated by the Fisher information metric: $$\bar D_{KL}(\pi_{\theta_{t+1}} \| \pi_{\theta_t}) \approx \frac{1}{2} (\theta_{t+1} - \theta_t)^T F(\theta_t)\, (\theta_{t+1} - \theta_t)$$ where $F(\theta)$ is the Fisher information matrix of the policy, defined as: $$F(\theta) = \mathbb{E}_{s,a \sim \pi_\theta}\left[\nabla_\theta \ln \pi_\theta(a \mid s)\, \big(\nabla_\theta \ln \pi_\theta(a \mid s)\big)^T\right]$$ This turns the update into a quadratic programming problem, yielding the natural policy gradient update: $$\theta_{t+1} = \theta_t + \alpha\, F(\theta_t)^{-1} \nabla_\theta J(\theta_t)$$ The step size $\alpha$ is typically adjusted to maintain the KL constraint, with $$\alpha \approx \sqrt{\frac{2\epsilon}{(\nabla_\theta J(\theta_t))^T F(\theta_t)^{-1} \nabla_\theta J(\theta_t)}}.$$
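A sketch of one natural policy gradient step, assuming PyTorch. It estimates the Fisher matrix as the empirical average of score outer products over sampled state-action pairs and then solves $F^{-1}g$ directly; forming $F$ explicitly like this is only feasible for small parameter vectors, which motivates the approximations discussed next. The argument `policy_grad` is assumed to be a flattened estimate of $\nabla_\theta J(\theta_t)$, and `damping` is a small regularizer added for numerical stability.

```python
# A sketch of one natural policy gradient step, assuming PyTorch. The Fisher
# matrix is estimated as the empirical outer product of score vectors over
# sampled (state, action) pairs, and the vanilla gradient is preconditioned
# by F^{-1}. Forming F explicitly only works for small parameter counts.
import torch

def natural_gradient_step(policy, states, actions, policy_grad,
                          epsilon=0.01, damping=1e-3):
    params = [p for p in policy.parameters() if p.requires_grad]
    scores = []
    for s, a in zip(states, actions):
        log_prob = policy(s).log_prob(a)
        grads = torch.autograd.grad(log_prob, params)
        scores.append(torch.cat([g.reshape(-1) for g in grads]))   # grad ln pi(a|s)
    S = torch.stack(scores)                                        # (N, dim)
    F = S.T @ S / S.shape[0] + damping * torch.eye(S.shape[1])     # empirical Fisher
    nat_grad = torch.linalg.solve(F, policy_grad)                  # F^{-1} g
    alpha = torch.sqrt(2 * epsilon / (policy_grad @ nat_grad).clamp_min(1e-8))
    with torch.no_grad():                                          # apply the update
        offset = 0
        for p in params:
            n = p.numel()
            p.add_(alpha * nat_grad[offset:offset + n].view_as(p))
            offset += n
```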
Inverting $F(\theta)$ is computationally intensive, especially for high-dimensional parameters (e.g., neural networks). Practical implementations often use approximations.
Trust Region Policy Optimization (TRPO), developed by Schulman et al. in 2015, is a policy gradient method that extends the natural policy gradient approach by enforcing a trust region constraint on policy updates.[6]
Natural gradient descent would be exactly right if the objective were truly a quadratic function, but this is only an approximation. TRPO's KL constraint and line search attempt to restrict the solution to a "trust region" within which this approximation does not break down, which makes TRPO more robust in practice.
Like the natural policy gradient, TRPO iteratively updates the policy parameters $\theta$ by solving a constrained optimization problem specified coordinate-free: $$\begin{cases}\max_\theta L(\theta, \theta_t) \\ \bar D_{KL}(\pi_\theta \| \pi_{\theta_t}) \le \epsilon\end{cases}$$ where $L(\theta, \theta_t) = \mathbb{E}_{s,a \sim \pi_{\theta_t}}\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_t}(a \mid s)}\, A^{\pi_{\theta_t}}(s, a)\right]$ is the surrogate advantage and $\epsilon$ is the trust-region size.
Note that, in general, other surrogate advantages are possible: $$L(\theta, \theta_t) = \mathbb{E}_{s,a \sim \pi_{\theta_t}}\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_t}(a \mid s)}\, \Psi^{\pi_{\theta_t}}(s, a)\right]$$ where $\Psi$ is any linear sum of the previously mentioned type. Indeed, OpenAI recommends using the generalized advantage estimate instead of the plain advantage $A^{\pi_\theta}$.
The surrogate advantage $L(\theta, \theta_t)$ is designed to align with the policy gradient $\nabla_\theta J(\theta)$. Specifically, when $\theta = \theta_t$, the gradient $\nabla_\theta L(\theta, \theta_t)$ equals the policy gradient derived from the advantage function: $$\nabla_\theta J(\theta) = \mathbb{E}_{(s,a) \sim \pi_\theta}\left[\nabla_\theta \ln \pi_\theta(a \mid s) \cdot A^{\pi_\theta}(s, a)\right] = \nabla_\theta L(\theta, \theta_t)$$ However, when $\theta \ne \theta_t$, this is not necessarily true; thus $L$ is only a "surrogate" of the real objective.
As with the natural policy gradient, for small policy updates TRPO approximates the surrogate advantage and the KL divergence using Taylor expansions around $\theta_t$: $$\begin{aligned}L(\theta, \theta_t) &\approx g^T (\theta - \theta_t), \\ \bar D_{\text{KL}}(\pi_\theta \| \pi_{\theta_t}) &\approx \frac{1}{2} (\theta - \theta_t)^T H (\theta - \theta_t),\end{aligned}$$ where $g = \nabla_\theta L(\theta, \theta_t)\big|_{\theta = \theta_t}$ is the gradient of the surrogate advantage at $\theta_t$, and $H = \nabla_\theta^2\, \bar D_{\text{KL}}(\pi_\theta \| \pi_{\theta_t})\big|_{\theta = \theta_t}$ is the Hessian of the averaged KL divergence, which coincides with the Fisher information matrix $F(\theta_t)$.
This reduces the problem to a quadratic optimization, yielding the natural policy gradient update: $$\theta_{t+1} = \theta_t + \sqrt{\frac{2\epsilon}{g^T F^{-1} g}}\, F^{-1} g.$$ So far, this is essentially the same as the natural gradient method. However, TRPO improves upon it with two modifications, sketched below: it computes $F^{-1}g$ approximately with the conjugate gradient method, using only Fisher-vector products rather than an explicit matrix inverse, and it performs a backtracking line search on the step size, shrinking the proposed update until the KL constraint is satisfied and the surrogate advantage actually improves.
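A compressed sketch of these two ingredients, assuming the caller supplies three hypothetical helpers: `fvp(v)` returning the Fisher-vector product $Fv$, and `surrogate(theta)` and `mean_kl(theta)` evaluating $L(\theta, \theta_t)$ and $\bar D_{KL}$ on the sampled batch for a flattened parameter vector.

```python
# A compressed sketch of TRPO's two modifications, assuming PyTorch tensors
# for flattened parameters and hypothetical helpers fvp, surrogate, mean_kl.
import torch

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve F x = g using only Fisher-vector products."""
    x = torch.zeros_like(g)
    r, p = g.clone(), g.clone()
    rs_old = r @ r
    for _ in range(iters):
        Ap = fvp(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x                          # approximately F^{-1} g, no explicit inverse

def line_search(theta_t, full_step, surrogate, mean_kl, epsilon, max_backtracks=10):
    """Shrink the step until the KL constraint holds and the surrogate improves."""
    for k in range(max_backtracks):
        theta = theta_t + 0.5**k * full_step              # halve the step each try
        if mean_kl(theta) <= epsilon and surrogate(theta) > surrogate(theta_t):
            return theta                                  # accept first feasible step
    return theta_t                                        # otherwise reject the update
```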
A further improvement is proximal policy optimization (PPO), which avoids even computing $F(\theta)$ and $F(\theta)^{-1}$ via a first-order approximation using clipped probability ratios.[7]
Specifically, instead of maximizing the surrogate advantage $$\max_\theta L(\theta, \theta_t) = \mathbb{E}_{s,a \sim \pi_{\theta_t}}\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_t}(a \mid s)}\, A^{\pi_{\theta_t}}(s, a)\right]$$ under a KL divergence constraint, PPO inserts the constraint directly into the surrogate advantage: $$\max_\theta\, \mathbb{E}_{s,a \sim \pi_{\theta_t}}\left[\begin{cases}\min\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_t}(a|s)}, 1 + \epsilon\right) A^{\pi_{\theta_t}}(s, a) & \text{if } A^{\pi_{\theta_t}}(s, a) > 0 \\ \max\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_t}(a|s)}, 1 - \epsilon\right) A^{\pi_{\theta_t}}(s, a) & \text{if } A^{\pi_{\theta_t}}(s, a) < 0\end{cases}\right]$$ and maximizes this clipped surrogate by stochastic gradient ascent, as usual.
In words, gradient ascent on the new surrogate objective means that, at a state-action pair $(s, a)$ with positive advantage $A^{\pi_{\theta_t}}(s, a) > 0$, the gradient should push $\theta$ in the direction that increases the probability of performing action $a$ in state $s$. However, as soon as $\theta$ has changed so much that $\pi_\theta(a \mid s) \ge (1 + \epsilon)\, \pi_{\theta_t}(a \mid s)$, the gradient stops pointing in that direction. The case $A^{\pi_{\theta_t}}(s, a) < 0$ is symmetric. Thus, PPO avoids pushing the parameter update too hard, and avoids changing the policy too much.
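The two cases above are equivalent to the usual one-line clipped objective $\min\big(r\,A,\ \mathrm{clip}(r, 1-\epsilon, 1+\epsilon)\,A\big)$ with $r = \pi_\theta(a|s)/\pi_{\theta_t}(a|s)$, which is how it is typically implemented. A sketch in PyTorch, with illustrative argument names:

```python
# A sketch of the clipped surrogate objective in PyTorch. The case analysis
# in the text is equivalent to min(r * A, clip(r, 1-eps, 1+eps) * A),
# where r is the probability ratio pi_theta / pi_theta_t.
import torch

def ppo_clip_objective(log_prob_new, log_prob_old, advantage, eps=0.2):
    ratio = torch.exp(log_prob_new - log_prob_old.detach())     # pi_theta / pi_theta_t
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return torch.minimum(unclipped, clipped).mean()             # maximize this
```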
To be more precise, updating $\theta_t$ to $\theta_{t+1}$ requires multiple update steps on the same batch of data. The algorithm initializes $\theta = \theta_t$, then repeatedly applies a gradient optimizer (such as Adam) to update $\theta$ until the surrogate advantage has stabilized. It then sets $\theta_{t+1} = \theta$, collects a new batch, and repeats.
During this inner loop, the first update to $\theta$ does not hit the $1 - \epsilon,\, 1 + \epsilon$ bounds, but as $\theta$ moves further and further away from $\theta_t$, it eventually starts hitting them. For each sample that hits a bound, the corresponding gradient becomes zero, and thus PPO avoids updating $\theta$ too far away from $\theta_t$.
This is important because the surrogate loss assumes that the state-action pair $(s, a)$ is sampled from what the agent would see if it ran the policy $\pi_{\theta_t}$, whereas the policy gradient should be on-policy. So, as $\theta$ changes, the surrogate loss becomes more and more off-policy. This is why keeping $\theta$ proximal to $\theta_t$ is necessary.
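A sketch of this inner loop in PyTorch, assuming a batch of states, actions, log-probabilities under $\pi_{\theta_t}$, and advantage estimates collected before the loop; the epoch count and clip range are illustrative hyperparameters.

```python
# A sketch of the PPO inner loop: several optimizer steps reuse the same batch
# collected under pi_theta_t, and clipping keeps theta near theta_t.
import torch

def ppo_update(policy, optimizer, batch, epochs=4, eps=0.2):
    states, actions, old_log_probs, advantages = batch     # collected under pi_theta_t
    for _ in range(epochs):                                # repeated steps on one batch
        dist = policy(states)
        new_log_probs = dist.log_prob(actions)
        ratio = torch.exp(new_log_probs - old_log_probs)   # pi_theta / pi_theta_t
        clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
        loss = -torch.minimum(ratio * advantages, clipped * advantages).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                   # e.g. an Adam step
    # after the loop, the current parameters play the role of theta_{t+1}
```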
If there is a reference policy $\pi_{\text{ref}}$ that the trained policy should not diverge too far from, an additional KL divergence penalty can be added: $$-\beta\, \mathbb{E}_{s,a \sim \pi_{\theta_t}}\left[\log\left(\frac{\pi_\theta(a \mid s)}{\pi_{\text{ref}}(a \mid s)}\right)\right]$$ where $\beta$ adjusts the strength of the penalty. This has been used in training reasoning language models with reinforcement learning from human feedback.[8] The KL divergence penalty term can be estimated with lower variance using the equivalent form (see f-divergence for details):[9] $$-\beta\, \mathbb{E}_{s,a \sim \pi_{\theta_t}}\left[\log\left(\frac{\pi_\theta(a \mid s)}{\pi_{\text{ref}}(a \mid s)}\right) + \frac{\pi_{\text{ref}}(a \mid s)}{\pi_\theta(a \mid s)} - 1\right]$$
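A sketch of the per-sample penalty, assuming log-probabilities of the sampled actions under the current and reference policies are available; the function name and flag are illustrative.

```python
# A sketch of the KL penalty toward a reference policy, given per-sample
# log-probabilities under pi_theta and pi_ref for the same sampled (s, a).
# The low-variance form is log(pi_theta/pi_ref) + pi_ref/pi_theta - 1.
import torch

def kl_penalty(log_prob_theta, log_prob_ref, beta, low_variance=True):
    log_ratio = log_prob_theta - log_prob_ref                 # log(pi_theta / pi_ref)
    if low_variance:
        estimate = log_ratio + torch.exp(-log_ratio) - 1.0    # always >= 0
    else:
        estimate = log_ratio                                  # naive estimator
    return beta * estimate.mean()                             # subtract from the objective
```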
Group Relative Policy Optimization (GRPO) is a minor variant of PPO that omits the value function estimator $V$. Instead, for each state $s$, it samples multiple actions $a_1, \dots, a_G$ from the policy $\pi_{\theta_t}$, then calculates the group-relative advantage[9] $$A^{\pi_{\theta_t}}(s, a_j) = \frac{r(s, a_j) - \mu}{\sigma}$$ where $\mu, \sigma$ are the mean and standard deviation of $r(s, a_1), \dots, r(s, a_G)$. That is, it is the standard score of the rewards.
Then, it maximizes the PPO objective, averaged over all actions: $$\max_\theta \frac{1}{G} \sum_{i=1}^{G} \mathbb{E}_{(s, a_1, \dots, a_G) \sim \pi_{\theta_t}}\left[\begin{cases}\min\left(\frac{\pi_\theta(a_i|s)}{\pi_{\theta_t}(a_i|s)}, 1 + \epsilon\right) A^{\pi_{\theta_t}}(s, a_i) & \text{if } A^{\pi_{\theta_t}}(s, a_i) > 0 \\ \max\left(\frac{\pi_\theta(a_i|s)}{\pi_{\theta_t}(a_i|s)}, 1 - \epsilon\right) A^{\pi_{\theta_t}}(s, a_i) & \text{if } A^{\pi_{\theta_t}}(s, a_i) < 0\end{cases}\right]$$ Intuitively, each policy update step in GRPO makes the policy more likely to respond to each state with an action that performed relatively better than the other actions tried at that state, and less likely to respond with one that performed relatively worse.
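A sketch of the group-relative advantage computation, assuming PyTorch and a group of scalar rewards for $G$ actions sampled at the same state; these advantages are then plugged into the same clipped objective as in PPO.

```python
# A sketch of the group-relative advantage used by GRPO: the advantage of each
# sampled action is the standard score of its reward within the group.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    mu = rewards.mean()
    sigma = rewards.std()
    return (rewards - mu) / (sigma + eps)       # standard score within the group

rewards = torch.tensor([1.0, 0.0, 0.0, 1.0, 1.0])   # e.g. pass/fail grading of G = 5 samples
print(group_relative_advantages(rewards))
# positive for above-average actions, negative for below-average ones
```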
As before, the KL penalty term can be applied to encourage the trained policy to stay close to a reference policy. GRPO was first proposed in the context of training reasoning language models by researchers at DeepSeek.[9]