RedTachyon commited on
Commit
3774332
1 Parent(s): 85a8d7f

Upload folder using huggingface_hub

loaWwnhYaS/loaWwnhYaS.md ADDED
1
+ # Variance Reduced Smoothed Functional Reinforce Policy Gradient Algorithms
2
+
3
+ Anonymous authors Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
+ We revisit the REINFORCE policy gradient algorithm from the literature. This algorithm typically works with reward (or cost) returns obtained over episodes or trajectories. We propose a major enhancement to the basic algorithm where we estimate the policy gradient using a smoothed functional (random perturbation) gradient estimator requiring one function measurement over a perturbed parameter. Subsequently, we also propose a two-simulation counterpart of the algorithm that has lower estimator bias. Like REINFORCE, our algorithms are trajectory-based Monte-Carlo schemes and usually suffer from high variance. To handle this issue, we propose two independent enhancements to the basic scheme: (i) use the sign of the increment instead of the original (full) increment that results in smoother albeit possibly slower convergence and (ii) use clipped costs or rewards as proposed in the Proximal Policy Optimization (PPO) based scheme. We analyze the asymptotic convergence of the algorithm in the one-simulation case as well as the case where signed updates are used and briefly discuss the changes in analysis when two-simulation estimators are used.
8
+
9
+ Finally, we show the results of several experiments on various Grid-World settings wherein we compare the performance of the various algorithms with REINFORCE as well as PPO
10
+ and observe that both our one and two simulation SF algorithms show better performance than these algorithms. Further, the versions of these algorithms with clipped gradients and signed updates show good performance with lower variance.
11
+
12
+ Key Words: REINFORCE policy gradient algorithm, smoothed functional gradient estimation, one and two simulation algorithms, stochastic gradient search, stochastic shortest path Markov decision processes, signed updates, objective function clipping.
13
+
14
+ ## 1 Introduction
15
+
16
Policy gradient methods, see Sutton & Barto (2018); Sutton et al. (1999), are a popular class of approaches in reinforcement learning (Bertsekas (2019); Meyn (2022)). These approaches normally use a randomized policy that is parameterized, and one updates the policy parameter along the gradient search direction. The policy gradient theorem, cf. Sutton et al. (1999); Marbach & Tsitsiklis (2001); Cao (2007), which is a fundamental result in this area, relies on an interchange of the gradient and expectation operators: the policy gradient turns out to be the expectation of the gradient of noisy performance functions, much like the earlier perturbation analysis based sensitivity approaches for simulation optimization, see Ho & Cao (1991); Chong & Ramadge (1993).
18
+
19
+ The REINFORCE algorithm, cf. Williams (1992); Sutton & Barto (2018) is a noisy gradient scheme for which the expectation of the gradient is the policy gradient, i.e., the gradient of the expected objective w.r.t. the policy parameters. The updates of the policy parameter are however obtained once after the full return on an episode has been found. Actor-critic algorithms, see Sutton & Barto (2018); Konda & Borkar
20
+ (1999); Konda & Tsitsiklis (2003); Bhatnagar et al. (2009; 2007), have been presented in the literature as alternatives to the REINFORCE algorithm as they perform incremental parameter updates at every instant but do so using two-timescale stochastic approximation algorithms.
21
+
22
+ In this paper, we revisit the REINFORCE algorithm and present new algorithms for the case of episodic tasks, also referred to as the stochastic shortest path setting. Our algorithms perform parameter updates upon termination of episodes, that is when the goal or terminal states are reached. As with REINFORCE,
23
+ parameter updates are performed only at instants of visit to a prescribed recurrent state, see Cao (2007);
24
+ Marbach & Tsitsiklis (2001). Our first algorithm is based on a single function measurement or simulation at a perturbed parameter value where the perturbations are obtained using independent Gaussian random variates. The problem, however, is that it suffers from a large bias in the gradient estimator. We show analytically the reason for the large bias here. Subsequently, we present the two-function measurement variant of this scheme which we show has lower bias. Our algorithms rely on a diminishing sensitivity parameter sequence {δn} that appears in the denominator of an increment term in our algorithms. This can result in high variance in the iterates at least in the initial iterates. To tackle this problem, we introduce the signed analogs of these algorithms where we only consider the sign of the increment terms (the ones that multiply the learning rates in the updates). We analyze fully the asymptotic convergence of these schemes including the ones with signed updates. Subsequently, for our experiments, we also incorporate variants that use gradient clipping as with the proximal policy optimization (PPO), see Schulman et al. (2017a).
25
+
26
+ A similar scheme as our first (single-measurement) algorithm is briefly presented in Bhatnagar (2023) that however does not present any analysis of convergence or experiments. Our paper, on the other hand, not only provides a detailed analysis and experiments with the one-measurement scheme, but also analyzes several other related algorithms both for their convergence as well as empirical performance.
27
+
28
+ The REINFORCE algorithm relies on an interchange between the expectation and gradient operators. In other words, it is assumed that the gradient of the expectation of the noisy performance objective equals the expectation of the gradient of the same. While such an interchange is easily justifiable in the case of finite state systems, it is not easy to justify this interchange when the number of states is infinite resulting in an infinite summation or an integral when computing the expectation. The approaches we present bypass this problem altogether by directly estimating the gradient of the expectation of the noisy objective.
29
+
30
Gradient estimation in our algorithm is performed using the smoothed functional (SF) technique for gradient estimation (Rubinstein, 1981; Bhatnagar & Borkar, 2003; Bhatnagar, 2007; Bhatnagar et al., 2013). The basic problem in this setting is the following: given an objective function $J:R^{d}\to R$ such that $J(\theta)=E_{\xi}[h(\theta,\xi)]$, where θ ∈ Rd is the parameter to be tuned and ξ is the noise element, the goal is to find $\theta^{*}\in R^{d}$ such that $J(\theta^{*})=\min_{\theta\in R^{d}}J(\theta)$. Since the objective function J(·) can be highly nonlinear, one often settles for a lesser goal, namely finding a local instead of a global minimum. In this setting, the Kiefer-Wolfowitz (Kiefer & Wolfowitz, 1952) finite difference estimates of the gradient of J require 2d function measurements. This approach does not perform well in practice when d is large.
35
+
36
Random search methods such as simultaneous perturbation stochastic approximation (SPSA) (Spall, 1992; 1997; Bhatnagar, 2005), smoothed functional (SF) (Katkovnik & Kulchitsky, 1972; Bhatnagar & Borkar, 2003; Bhatnagar, 2007) or random directions stochastic approximation (RDSA) (Kushner & Clark, 1978; Prashanth et al., 2017) typically require far fewer simulations. For instance, the gradient based algorithms in these approaches require only one or two simulations regardless of the parameter dimension d. A textbook treatment of random search approaches (including both gradient and Newton algorithms) for stochastic optimization is available in Bhatnagar et al. (2013).
37
+
38
+ Before we proceed further, we present the basic Markov decision process (MDP) setting and recall the REINFORCE algorithm that we consider for the episodic setting. We remark here that there are not many analyses of REINFORCE type algorithms in the literature in the episodic or stochastic shortest path setting.
39
+
40
+ ## 2 The Basic Mdp Setting
41
+
42
By a Markov decision process (MDP), we mean a controlled stochastic process {Xn} whose evolution is governed by an associated control-valued sequence {Zn}. It is assumed that Xn, n ≥ 0 takes values in a set S called the state-space. Let A(s) be the set of feasible actions in state s ∈ S and $A\stackrel{\triangle}{=}\cup_{s\in S}A(s)$ denote the set of all actions. When the state is, say, s and a feasible action a is chosen, the next state seen is s′ with probability $p(s'|s,a)\stackrel{\triangle}{=}P(X_{n+1}=s'\mid X_{n}=s,Z_{n}=a)$, ∀n. Such a process satisfies the controlled Markov property, i.e., $P(X_{n+1}=s'\mid X_{n},Z_{n},\ldots,X_{0},Z_{0})=p(s'\mid X_{n},Z_{n})$ a.s., ∀n ≥ 0.
50
+
51
By an admissible policy or simply a policy, we mean a sequence of functions π = {µ0, µ1, µ2, . . .}, with µk : S → A, k ≥ 0, such that µk(s) ∈ A(s), ∀s ∈ S. When following policy π, a decision maker selects action µk(s) at instant k, when the state is s. A stationary policy π is one for which $\mu_{k}=\mu_{l}\stackrel{\triangle}{=}\mu$ (a time-invariant function), ∀k, l = 0, 1, . . .. Associated with any transition to a state s′ from a state s under action a is a 'single-stage' cost g(s, a, s′), where g : S × A × S → R is called the cost function. The goal of the decision maker is to select actions ak, k ≥ 0 in response to the system states sk, k ≥ 0, observed one at a time, so as to minimize a long-term cost objective. We assume here that the number of states and actions is finite.
55
+
56
+ ## 2.1 The Episodic Or Stochastic Shortest Path Setting
57
+
58
We consider here the episodic or stochastic shortest path problem where decision making terminates once a goal or terminal state is reached. We let 1, . . . , p denote the set of non-terminal or regular states and t be the terminal state. Thus, S = {1, 2, . . . , p, t} denotes the state space for this problem (Bertsekas, 2019).
59
+
60
+ Our basic setting here is similar to Chapter 3 of Bertsekas (2012) (see also Bertsekas (2019)), where it is assumed that under any policy there is a positive probability of hitting the goal state t in at most p steps starting from any initial (non-terminal) state, that would in turn signify that the problem would terminate in a finite though random amount of time.
61
+
62
+ Under a given policy π, define
63
+
64
+ $$V_{\pi}(s)=E_{\pi}\left[\sum_{k=0}^{T}g(X_{k},\mu_{k}(X_{k}),X_{k+1})\mid X_{0}=s\right],\tag{1}$$
65
where T > 0 is a finite random time at which the process enters the terminal state t. Here Eπ[·] indicates that all actions are chosen according to policy π depending on the system state at any instant. We assume that no action is feasible in the state t, so the process terminates once it reaches t. Let Π denote the set of all admissible policies. The goal here is to find the optimal value function V∗(i), i ∈ S, where

$$V^{*}(i)=\operatorname*{min}_{\pi\in\Pi}V_{\pi}(i)=V_{\pi^{*}}(i),\ i\in S,\tag{2}$$

with π∗ being the optimal policy. A related goal then would be to search for the optimal policy π∗. It turns out that in these problems, there exist stationary policies that are optimal, and so it is sufficient to restrict the search to the class of stationary policies.
76
+
77
+ A stationary policy π is called a proper policy (cf. pp.174 of Bertsekas (2012)) if
78
+
79
+ $${\hat{p}}_{\pi}\stackrel{\triangle}{=}\operatorname*{max}_{s=1,\ldots,p}P(X_{p}\neq t\mid X_{0}=s,\pi)<1.$$
80
+
81
+ In other words, regardless of the initial state i, there is a positive probability of termination after at most p stages when using a proper policy π and moreover P(T < ∞) = 1 under such a policy. An admissible policy
82
+ (and so also a stationary policy) can be randomized as well. A randomized admissible policy or simply a randomized policy is the sequence ψ = {ϕ0, ϕ1*, . . .*} with each ϕi: S → P(A). In other words, given a state s, a randomized policy would provide a distribution ϕi(s) = (ϕi(s, a), a ∈ A(s)) for the action to be chosen in the ith stage. A stationary randomized policy is one for which ϕj = ϕk
83
+ △= ϕ, ∀*j, k* = 0, 1*, . . .*. Here and in the rest of the paper, we shall assume that the policies are stationary randomized and are parameterized via a certain parameter θ ∈ C ⊂ Rd, a compact and convex set. We make the following assumption:
84
+ Assumption 1 All stationary randomized policies ϕθ parameterized by θ ∈ C *are proper.*
85
+ In practice, one might be able to relax this assumption (as with the model-based analysis of Bertsekas (2012))
86
+ by (a) assuming that for policies that are not proper, Vπ(i) = ∞ for at least one non-terminal state i and
87
+ (b) there exists a proper policy. The optimal value function satisfies the following Bellman equation: For s = 1*, . . . , p*,
88
+
89
+ $$V^{*}(s)=\min_{a\in A(s)}\left(\bar{g}(s,a)+\sum_{j=1}^{p}p(j\mid s,a)V^{*}(j)\right),\tag{3}$$
90
where $\bar{g}(s,a)=\sum_{j=1}^{p}p(j|s,a)g(s,a,j)+p(t|s,a)g(s,a,t)$ is the expected single-stage cost in a non-terminal state s when a feasible action a is chosen. It can be shown, see Bertsekas (2012), that an optimal stationary proper policy exists.
92
+
93
+ ## 2.2 The Policy Gradient Theorem
94
+
95
+ Policy gradient methods perform a gradient search within the prescribed class of parameterized policies.
96
+
97
Let ϕθ(s, a) denote the probability of selecting action a ∈ A(s) when the state is s ∈ S and the policy parameter is θ ∈ C. We assume that ϕθ(s, a) is continuously differentiable in θ. A common example here is the class of parameterized Boltzmann or softmax policies. Let $\phi_{\theta}(s)\stackrel{\triangle}{=}(\phi_{\theta}(s,a),a\in A(s))$, s ∈ S, and $\phi_{\theta}\stackrel{\triangle}{=}(\phi_{\theta}(s),s\in S)$.
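As an illustration only (this sketch is ours and not part of the paper; the feature map `feat` and the action list are assumptions), a Boltzmann/softmax parameterization can be coded as follows:

```python
import numpy as np

def softmax_policy(theta, feat, state, actions):
    """A Boltzmann (softmax) policy phi_theta(s, .) over the feasible actions A(s).

    theta   : (d,) policy parameter
    feat    : assumed callable (state, action) -> (d,) feature vector
    actions : list of feasible actions in `state`
    """
    prefs = np.array([theta @ feat(state, a) for a in actions])
    prefs -= prefs.max()              # subtract the max for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()        # phi_theta(s, a), a in A(s)
```

Such a policy is continuously differentiable in θ, as required above.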
100
+
101
We assume that trajectories of states and actions are available either as real data or from a simulation. Let $G_{k}=\sum_{j=k}^{T-1}g_{j}$ denote the sum of costs until termination (i.e., until the goal state is reached) on a trajectory starting from instant k. Note that if all actions are chosen according to a policy ϕ, then the value and Q-value functions (under ϕ) would be Vϕ(s) = Eϕ[Gk | Xk = s] and Qϕ(s, a) = Eϕ[Gk | Xk = s, Zk = a], respectively. In what follows, for ease of notation, we let Vθ ≡ Vϕθ and Qθ ≡ Qϕθ, respectively.
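These returns can be computed from a finished episode's cost sequence with a simple backward pass; a small sketch of ours (not the paper's code):

```python
def returns_to_go(costs):
    """Compute G_k = sum_{j=k}^{T-1} g_j for each k from the episode costs [g_0, ..., g_{T-1}]."""
    G, out = 0.0, []
    for g in reversed(costs):
        G += g
        out.append(G)
    return out[::-1]

# Example: returns_to_go([1.0, 2.0, 3.0]) == [6.0, 5.0, 3.0]
```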
106
+
107
+ The policy gradient theorem for episodic problems has the following form, cf. Chapter 13 of Sutton & Barto (2018):
108
+
109
$$\nabla V_{\theta}(s_{0})=\sum_{s\in S}\mu(s)\sum_{a\in A(s)}\nabla_{\theta}\pi(s,a)Q_{\theta}(s,a),\tag{4}$$

where $\mu(s),s\in S$, is defined as $\mu(s)={\frac{\eta(s)}{\sum_{s^{\prime}\in S}\eta(s^{\prime})}}$, with $\eta(s)=\sum_{k=0}^{\infty}p^{k}(s|s_{0},\phi_{\theta}),\;s\in S$, and $p^{k}(s|s_{0},\phi_{\theta})$ being the k-step transition probability of going to state s from s0 under the policy ϕθ. Proving the policy gradient theorem when the state-action spaces are finite is relatively straightforward (Sutton et al. (1999); Sutton & Barto (2018)). However, one would require strong regularity assumptions on the system dynamics and performance function as with infinitesimal perturbation analysis (IPA) or likelihood ratio approaches (Chong & Ramadge (1994); Ho & Cao (1991)) if the state-action spaces are either countably infinite or continuously-valued sets.
115
+
116
+ The REINFORCE algorithm (Sutton & Barto (2018); Williams (1992)) makes use of the policy gradient theorem as the latter is based on an interchange between the gradient and expectation operators (since the value function is an expectation over noisy cost returns). In what follows, we present an alternative algorithm based on REINFORCE that incorporates a one-measurement SF gradient estimator. Our algorithm does not incorporate the policy gradient theorem and thus does not require an interchange between the aforementioned operators. Our basic algorithm incorporates a zero-order gradient approximation using the smoothed functional method and like the REINFORCE algorithm, requires one sample trajectory. However, since our algorithm caters to episodic tasks, it performs updates whenever a certain prescribed recurrent state is visited, see Cao (2007); Marbach & Tsitsiklis (2001). We refer to our one-simulation algorithm as the One-SF-REINFORCE (SFR-1) algorithm.
117
+
118
+ ## 3 The One-Simulation Sf Reinforce (Sfr-1) Algorithm
119
+
120
We assume that data on the mth trajectory is represented in the form of the tuples $(s_{k}^{m},a_{k}^{m},g_{k}^{m},s_{k+1}^{m})$, k = 0, 1, . . . , Tm, with Tm being the termination instant on the mth trajectory, m ≥ 1. Here $s_{j}^{m}$ is the state at instant j in the mth trajectory, while $a_{k}^{m}$ and $g_{k}^{m}$ are the action chosen and the cost incurred, respectively, at instant k in the mth trajectory. Let Γ : Rd → C denote a projection operator that projects any $x=(x_{1},\ldots,x_{d})^{T}\in R^{d}$ to its nearest point in C. For ease of exposition, we assume that C is a d-dimensional rectangle of the form $C=\prod_{i=1}^{d}[a_{i,\min},a_{i,\max}]$, where −∞ < ai,min < ai,max < ∞, ∀i = 1, . . . , d. Then Γ(x) = (Γ1(x1), . . . , Γd(xd))T with Γi : R → [ai,min, ai,max] such that Γi(xi) = min(ai,max, max(ai,min, xi)), i = 1, . . . , d. Also, let C(C) denote the space of all continuous functions from C to Rd.
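A minimal sketch of this componentwise projection (ours; `a_min` and `a_max` are the assumed arrays of bounds) is simply a clip:

```python
import numpy as np

def project(x, a_min, a_max):
    """Gamma: componentwise projection of x onto the rectangle C = prod_i [a_min[i], a_max[i]]."""
    return np.clip(x, a_min, a_max)
```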
130
+
131
+ In what follows, we present a procedure that incrementally updates the parameter θ. Let θ(n) denote the parameter value obtained after the nth update of this procedure which depends on the nth episode and which is run using the policy parameter Γ(θ(n)+δn∆(n)), for n ≥ 0, where θ(n) = (θ1(n), . . . , θd(n))T ∈ Rd, δn > 0 ∀n with δn → 0 as n → ∞ and ∆(n) = (∆1(n), . . . , ∆d(n))T, n ≥ 0, where ∆i(n), i = 1*, . . . , d, n* ≥ 0 are independent random variables distributed according to the N(0, 1) distribution.
132
+
133
Algorithm (5) below is used to update the parameter θ ∈ C ⊂ Rd. Let χn denote the nth state-action trajectory, $\chi^{n}=\{s_{0}^{n},a_{0}^{n},s_{1}^{n},a_{1}^{n},\ldots,s_{T-1}^{n},a_{T-1}^{n},s_{T}^{n}\}$, n ≥ 0, where the actions $a_{0}^{n},\ldots,a_{T-1}^{n}$ in χn are obtained using the policy parameter θ(n) + δn∆(n). The instant T denotes the termination instant in the trajectory χn and corresponds to the instant when the terminal or goal state t is reached. Note that the various actions in the trajectory χn are chosen according to the policy ϕ(θ(n)+δn∆(n)). The initial state is assumed to be sampled from a given initial distribution ν = (ν(i), i ∈ S) over states. Let $G^{n}=\sum_{k=0}^{T-1}g_{k}^{n}$ denote the sum of costs until termination on the trajectory χn with $g_{k}^{n}\equiv g(s_{k}^{n},a_{k}^{n},s_{k+1}^{n})$. The update rule that we consider here is the following: For n ≥ 0, i = 1, . . . , d,
147
+
148
+ $$\theta_{i}(n+1)=\Gamma_{i}\left(\theta_{i}(n)-a(n)\left(\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\right)\right).\tag{5}$$
149
+
150
Assumption 2 The step-size sequence {a(n)} satisfies a(n) > 0, ∀n. Further,
151
+
152
+ $$\sum_{n}a(n)=\infty,\;\;\sum_{n}\left({\frac{a(n)}{\delta_{n}}}\right)^{2}<\infty.$$
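For instance (this particular schedule is an illustration of ours, not the one used in the experiments), the choices a(n) = 1/(n + 1) and δn = (n + 1)−1/4 satisfy Assumption 2, since

$$\sum_{n}a(n)=\sum_{n}\frac{1}{n+1}=\infty,\qquad\sum_{n}\left(\frac{a(n)}{\delta_{n}}\right)^{2}=\sum_{n}\frac{1}{(n+1)^{3/2}}<\infty.$$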
153
+
154
+ After the (n − 1)st episode, θ(n) is computed using (5). The perturbed parameter θ(n) + δn∆(n) is then obtained after sampling ∆(n) from the multivariate Gaussian distribution as explained previously and thereafter a new trajectory governed by this perturbed parameter is generated with the initial state in each episode sampled according to a given distribution ν.
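To make the mechanics of (5) concrete, the following sketch (ours; the episode generator `run_episode` and the bound arrays are assumptions, not part of the paper) performs one SFR-1 update, i.e., one perturbed episode followed by a projected step:

```python
import numpy as np

def sfr1_update(theta, n, run_episode, a, delta, a_min, a_max, rng):
    """One iteration of the one-simulation SF-REINFORCE recursion (5).

    run_episode(param) is assumed to simulate a full episode under the policy
    phi_param (initial state drawn from nu) and return the total cost G.
    a(n) and delta(n) are the step-size and perturbation-size schedules.
    """
    Delta = rng.standard_normal(theta.shape[0])    # Delta(n) ~ N(0, I)
    G = run_episode(theta + delta(n) * Delta)      # single perturbed simulation
    increment = Delta * G / delta(n)               # Delta_i(n) * G^n / delta_n
    return np.clip(theta - a(n) * increment, a_min, a_max)   # projection Gamma onto C
```

The signed and clipped variants of Section 4 and Section 6.2 would simply transform `increment` before the step.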
155
+
156
+ ## 4 Variants For Improved Performance
157
+
158
We present here two variants of this algorithm that result in improved performance. The first variant reduces the bias in the estimator by using two simulations instead of one, while the second variant uses the sign of the increments instead of the increments themselves, which helps reduce the estimator variance. When applied to the two-simulation SF estimator, the second variant helps reduce both the bias and the variance.
159
+
160
+ ## 4.1 Two-Sided Sf Reinforce Algorithm
161
+
162
The idea here is to use two system simulations instead of one in order to reduce the estimator bias. As with the one-simulation SF algorithm, we assume that we have access to trajectories of data that are used for performing the parameter updates. Let χn+ and χn− denote two state-action trajectories or episodes generated after the nth update of the parameter. These correspond to $\chi^{n+}=\{s_{0}^{n+},a_{0}^{n+},s_{1}^{n+},a_{1}^{n+},\ldots,s_{T-1}^{n+},a_{T-1}^{n+},s_{T}^{n+}\}$, n ≥ 0, where the actions $a_{0}^{n+},\ldots,a_{T-1}^{n+}$ in χn+ are obtained using the policy parameter θ(n) + δn∆(n). Likewise, the actions $a_{0}^{n-},\ldots,a_{T-1}^{n-}$ in χn− are obtained using the policy parameter θ(n) − δn∆(n). As before, a new random vector ∆(n) is generated after θ(n) is obtained using the algorithm, but the same ∆(n) is used in both the policy parameters used to generate the two trajectories.
176
+
177
The initial state in both these episodes is independently sampled from the same initial distribution $\nu=(\nu(i),i\in S)$ over states. Let $G^{n+}=\sum_{k=0}^{T-1}g_{k}^{n+}$ denote the return or the sum of costs until termination on the trajectory $\chi^{n+}$, with $g_{k}^{n+}=g(s_{k}^{n+},a_{k}^{n+},s_{k+1}^{n+})$. Similarly, we let $G^{n-}=\sum_{k=0}^{T-1}g_{k}^{n-}$ denote the return or the sum of costs until termination on the trajectory $\chi^{n-}$, with $g_{k}^{n-}=g(s_{k}^{n-},a_{k}^{n-},s_{k+1}^{n-})$. The update rule that we consider here is the following: For n ≥ 0, i = 1, . . . , d,

$$\theta_{i}(n+1)=\Gamma_{i}\left(\theta_{i}(n)-a(n)\left(\Delta_{i}(n)\frac{(G^{n+}-G^{n-})}{2\delta_{n}}\right)\right).\tag{6}$$
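A corresponding sketch of the two-simulation increment in (6) (again ours, under the same assumed `run_episode` interface) differs only in evaluating two episodes with the same ∆(n):

```python
import numpy as np

def sfr2_increment(theta, n, run_episode, delta, rng):
    """Two-simulation SF estimate Delta_i(n) (G^{n+} - G^{n-}) / (2 delta_n) used in (6)."""
    Delta = rng.standard_normal(theta.shape[0])      # the same Delta(n) is used for both runs
    G_plus = run_episode(theta + delta(n) * Delta)   # trajectory chi^{n+}
    G_minus = run_episode(theta - delta(n) * Delta)  # trajectory chi^{n-}
    return Delta * (G_plus - G_minus) / (2.0 * delta(n))
```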
179
+
180
+ ## 4.2 Sf Reinforce With Signed Updates
181
+
182
+ As expected and (also) reported in the literature (Sutton & Barto (2018)), the REINFORCE algorithm typically suffers from high iterate-variance. We observe this problem even when SF-REINFORCE is used.
183
+
184
+ To counter the problem of high iterate-variance, we use the sign function sgn(·) in the updates defined as follows: sgn(x) = +1 if x > 0 and sgn(x) = −1 otherwise. The update rules for the one and two simulation SF with signed updates are as follows:
185
+
186
+ ## 4.2.1 One-Sf With Signed Updates
187
+
188
+ The update rule is exactly the same as (5) except that only the sign of the increment is used in the update:
189
+ ∀i = 1*, . . . , d*,
190
+ $$\theta_{i}(n+1)=\Gamma_{i}\left(\theta_{i}(n)-a(n)sgn\left(\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\right)\right).\tag{7}$$
191
+
192
+ ## 4.2.2 Two-Sf With Signed Updates
193
+
194
+ As with the One-SF case, the update rule here is the same as (6) except that the update rule involves the sign of the update increment. Thus, we have ∀i = 1*, . . . , d*,
195
+
196
+ $$\theta_{i}(n+1)=\Gamma_{i}\left(\theta_{i}(n)-a(n)sgn\left(\Delta_{i}(n)\frac{\left(G^{n+}-G^{n-}\right)}{2\delta_{n}}\right)\right).\tag{8}$$
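In code, the signed variants (7)-(8) simply pass the chosen increment through the sign function before the projected step; a short sketch of ours:

```python
import numpy as np

def signed_step(theta, increment, a_n, a_min, a_max):
    """Signed update: move each component by a_n opposite to the sign of the increment."""
    # Note: np.sign(0) = 0, whereas the paper's sgn(0) = -1; the two differ only on a null event.
    return np.clip(theta - a_n * np.sign(increment), a_min, a_max)
```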
197
+
198
+ ## 5 Convergence Analysis
199
+
200
+ We present here the main convergence results for the algorithms: one-simulation SF, two-simulation SF, and two-simulation signed SF, respectively. The proofs of all these results are provided in Appendix A.
201
+
202
+ ## 5.1 Convergence Of One-Simulation Sf
203
+
204
We begin by rewriting the recursion (5) as follows:

$$\theta_{i}(n+1)=\Gamma_{i}\left(\theta_{i}(n)-a(n)\left(E\left[\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\mid\mathcal{F}_{n}\right]+M_{n+1}^{i}\right)\right),\tag{9}$$

where $M_{n+1}^{i}=\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}-E\left[\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\mid\mathcal{F}_{n}\right],n\geq0$. Here, $\mathcal{F}_{n}\stackrel{\triangle}{=}\sigma(\theta(m),m\leq n,\Delta(m),\chi^{m},m<n),n\geq1$, is a sequence of increasing sigma fields with $\mathcal{F}_{0}=\sigma(\theta(0))$. Let $M_{n+1}\stackrel{\triangle}{=}(M_{n+1}^{1},\ldots,M_{n+1}^{d})^{T},n\geq0$.
209
+ Lemma 1 (Mn, Fn), n ≥ 0 *is a martingale difference sequence.*
210
+ Proposition 1 *We have*
211
+
212
+ $$E\left[\Delta_{i}(n){\frac{G^{n}}{\delta_{n}}}\mid{\mathcal{F}}_{n}\right]=\sum_{s\in S}\nu(s)\nabla_{i}V_{\theta(n)}(s)+o(\delta_{n})\,\,\,a.s.$$
213
+
214
+ In the light of Proposition 1, we can rewrite (5) as follows:
215
+
216
$$\theta(n+1)=\Gamma\left(\theta(n)-a(n)\left(\sum_{s}\nu(s)\nabla V_{\theta(n)}(s)+M_{n+1}+\beta(n)\right)\right),\tag{10}$$

where $\beta_{i}(n)=E\left[\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\mid\mathcal{F}_{n}\right]-\sum_{s}\nu(s)\nabla_{i}V_{\theta(n)}(s)$ and $\beta(n)=(\beta_{1}(n),\ldots,\beta_{d}(n))^{T}$. From Proposition 1, it follows that β(n) = o(δn).
224
+
225
+ Lemma 2 The function ∇Vθ(s) is Lipschitz continuous in θ. Further, ∃ a constant K1 > 0 *such that*
226
+ ∥ ∇Vθ(s) ∥≤ K1(1+ ∥ θ ∥).
227
+
228
Lemma 3 The sequence (Mn, Fn), n ≥ 0 satisfies $E[\|M_{n+1}\|^{2}\mid\mathcal{F}_{n}]\leq\frac{\hat{L}}{\delta_{n}^{2}}$, for some constant $\hat{L}>0$.
233
+ Define now a sequence $ Z_n,n\geq0$ according to $ Z_n=\sum_{m=0}^{n-1}a(m)M_{m+1}$, $ n\geq1$, with $ Z_0=0$.
234
+ Lemma 4 (Zn, Fn), n ≥ 0 *is an almost surely convergent martingale sequence.*
235
+
236
Consider now the following ODE:

$$\dot{\theta}(t)=\bar{\Gamma}\left(-\sum_{s}\nu(s)\nabla V_{\theta}(s)\right),\tag{11}$$

where $\bar{\Gamma}:C(C)\to C(R^{d})$ is defined according to

$$\bar{\Gamma}(v(x))=\operatorname*{lim}_{\eta\to0}\left({\frac{\Gamma(x+\eta v(x))-x}{\eta}}\right).\tag{12}$$
243
+
244
Let $H\stackrel{\triangle}{=}\{\theta\mid\bar{\Gamma}(-\sum_{s}\nu(s)\nabla V_{\theta}(s))=0\}$ denote the set of all equilibria of (11). By Lemma 11.1 of Borkar
246
+ (2022), the only possible ω-limit sets that can occur as invariant sets for the ODE (11) are subsets of H. Let H¯ ⊂ H be the set of all internally chain recurrent points of the ODE (11). Our main result below is based on Theorem 5.3.1 of Kushner & Clark (1978) for projected stochastic approximation algorithms. We state this theorem in Appendix A along with the assumptions needed there that we verify for our analysis.
247
+
248
+ Theorem 1 The iterates θ(n), n ≥ 0 *governed by (5) converge almost surely to* H¯ .
249
+
250
+ ## 5.2 Convergence Of Two-Simulation Sf
251
+
252
+ The analysis proceeds in a similar manner here as for the one-simulation SF. Let
253
+
254
+ $$H_{i}(\theta(n),\Delta(n))=\Delta_{i}(n)\left[\frac{V_{\theta(n)+\delta(n)\Delta(n)}-V_{\theta(n)-\delta(n)\Delta(n))}}{2\delta(n)}\right].$$
255
+
256
+ Proposition 2
257
+
258
+ $$E\left[\Delta_{i}(n)\left(\frac{G^{n+}-G^{n-}}{2\delta_{n}}\right)\mid\mathcal{F}_{n}\right]=\sum_{s}\nu(s)E[H_{i}(\theta(n),\Delta(n))|\mathcal{F}_{n}]=\sum_{s\in S}\nu(s)\nabla_{i}V_{\theta(n)}(s)+o(\delta_{n})\,\,\,a.s.$$
259
+
260
The main result on convergence of the stochastic recursions is the following:
261
+ Theorem 2 The iterates θ(n), n ≥ 0 *governed by (6) converge almost surely to* H¯ .
262
+
263
+ ## 5.3 Convergence Of Two-Simulation Signed Sf Reinforce
264
+
265
+ We present here the convergence analysis of the two-simulation signed SF REINFORCE algorithm. The analysis of the one-simulation counterpart is similar and hence is not provided.
266
+
267
Define now $e_{i}(n)=H_{i}(\theta(n),\Delta(n))-\nabla_{i}V_{\theta(n)}$, and let $F_{i}(e|\theta)=P(e_{i}(n)\leq e|\theta(n)=\theta)$ be the conditional distribution of ei(n) given θ(n) = θ. We make the following assumptions:

(A1) P(ei(n) ≤ e|θ(m), m ≤ n) = Fi(e|θ(n)), independent of n.

(A2) The maps (e, θ) ↦ Fi(e|θ) and θ ↦ ∇iVθ are Lipschitz continuous.
272
+
273
+ (A3) For all θ and i = 1*, . . . , N*, Fi(0|θ) = 1/2.
274
+
275
$${\mathrm{(A4)}}\ a(n)>0,\,\forall n,\,\sum_{n}a(n)=\infty,\ \sum_{n}\left({\frac{a(n)}{\delta(n)}}\right)^{2}<\infty.$$
277
+
278
+ Consider the following ODE associated with the above recursion:
279
+
280
$$\dot{\theta}_{i}(t)=\bar{\Gamma}_{i}(-(1-2F_{i}(-\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)|\theta))),\ t\geq0,\ i=1,\ldots,N.\tag{13}$$
281
+
282
For $x=(x_{1},\ldots,x_{d})^{T}$, let $\bar{\Gamma}(x)=(\bar{\Gamma}_{1}(x_{1}),\ldots,\bar{\Gamma}_{d}(x_{d}))^{T}$. Also, let $F(-\sum_{s}\nu(s)\nabla V_{\theta}(s)|\theta)\stackrel{\triangle}{=}(F_{1}(-\sum_{s}\nu(s)\nabla_{1}V_{\theta}(s)|\theta),\ldots,F_{d}(-\sum_{s}\nu(s)\nabla_{d}V_{\theta}(s)|\theta))$ and let $K=\{\theta\mid\bar{\Gamma}(-(1-2F(-\sum_{s}\nu(s)\nabla V_{\theta}(s)|\theta)))=0\}$ denote the set of equilibria of (13). Further, let $\bar{K}\subset K\subset\{\theta\mid\bar{\Gamma}(\langle(1-2F(-\sum_{s}\nu(s)\nabla V_{\theta}(s)|\theta)),\sum_{s}\nu(s)\nabla V_{\theta}(s)\rangle)=0\}$ denote the largest invariant set contained in K.
286
+
287
+ Theorem 3 (Convergence of Signed SFR-2) {θ(n)} governed as per (8) converges as n → ∞ almost surely to K¯ .
288
+
289
Remark 1 Suppose θ ∈ K is such that θ is in the interior of the constraint set. Then, from Assumptions (A2)-(A3) and Theorem 3, $\sum_{s}\nu(s)\nabla V_{\theta}(s)=0$. For θ on the boundary of the constraint set, either $\sum_{s}\nu(s)\nabla V_{\theta}(s)=0$ or $\sum_{s}\nu(s)\nabla V_{\theta}(s)\neq0$, but in the latter case $\bar{\Gamma}(\sum_{s}\nu(s)\nabla V_{\theta}(s))=0$. The latter are spurious fixed points that occur at the boundary of the constraint set, see Kushner & Yin (1997).
290
+
291
+ ## 6 Numerical Results
292
+
293
+ We investigate the performance of our proposed SF-REINFORCE algorithms (both one and two sided variants: SFR-1 and SFR-2) on a stochastic grid world setting. The specifics of the environment setup, policy estimator and the algorithm parameters are discussed in Appendix B.1, B.2 and B.3, respectively.
294
+
295
+ Initially, we experiment with various grid sizes and compare our SFR-1 and SFR-2 with REINFORCE and PPO. To ensure the results are reproducible, we run every setting 10 times with different seeds, and we plot the mean as the dark line, and shade using the standard deviation around it. Although our algorithms show good performance, they also display large variance. We refer the reader to Appendix B.4 for these experiments. We further try out various schedules for perturbations and learning rates, as well as gradient clipping techniques. In the process of trying out methods to improve on the variance of the objective, we illustrate the tradeoff between the performance of the iterates and their variance.
296
+
297
+ For brevity, we show the best performance on our largest (most difficult) grid in Figure 1 and also compare how fast these algorithms achieve a performance threshold, by counting the number of updates in Table 2.
298
+
299
+ Table 1 summarizes the best performance of variants of SFR-1 and SFR-2 on 50 × 50 grid. Notably, SFR-2
300
+
301
+ ![8_image_0.png](8_image_0.png)
302
+
303
+ Figure 1: Best runs across all algorithms on 50 × 50 grid
304
+
305
+ | algo | vanilla | const_delta | signed | clip_by_value | clip_by_norm |
306
+ |--------|----------------|----------------|----------------|-----------------|----------------|
307
+ | SFR-1 | 376.29 ± 401.4 | 677.55 ± 233.1 | 424.93 ± 254.5 | 483.68 ± 215.3 | 395.38 ± 279.6 |
308
+ | SFR-2 | 562.89 ± 387.7 | 665.5 ± 229.1 | 823.68 ± 3.1 | 824.96 ± 5.6 | 830.4 ± 3.7 |
309
+
310
+ Table 1: Best Performance of variants, on large grid, L = 50
311
+ with gradient clipping by norm and SFR-1 with constant perturbation size emerge as the leaders in their respective best runs. Both algorithms require substantially fewer updates compared to PPO. While neither SFR-1 nor PPO reach thresholds of 700 and 800 (resp.), SFR-2 achieves a target of 800 within just 11,000 updates. This demonstrates the superior efficacy of SFR-2, particularly when incorporating techniques like gradient clipping.
312
+
313
+ ## 6.1 Perturbation Sizes
314
+
315
+ Since our algorithm uses stochastic perturbations to estimate the gradient of the objective function, we expect the dynamics of our policy weights to be intimately related to the perturbation size. To investigate this, we experiment with both decaying and constant perturbation schedules δ(n).
316
+
317
+ ## 6.1.1 Decaying Perturbations
318
+
319
+ | Algorithm / Threshold | 200 | 400 | 600 | 700 | 800 |
320
+ |-------------------------|-------|-------|-------|--------|-------|
321
+ | SFR-2 | 1000 | 1000 | 2000 | 3000 | 11000 |
322
+ | SFR-1 | 9000 | 19000 | 63000 | - | - |
323
+ | PPO | 51000 | 68000 | 99000 | 134000 | - |
324
+
325
Since we are using $\delta(n)=\delta_{0}\left(\frac{1}{50000+n}\right)^{d}$, we set δ0 = 1, α0 = 2 × 10−6 and vary d ∈ {0.15, 0.25, 0.35, 0.45}.
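The schedule can be written directly as the following helper (ours, mirroring the stated formula; d = 0 recovers the constant schedules of Section 6.1.2):

```python
def delta_schedule(n, delta0=1.0, d=0.15, offset=50000):
    """delta(n) = delta0 * (1 / (offset + n)) ** d."""
    return delta0 * (1.0 / (offset + n)) ** d
```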
328
+
329
These runs are made on the medium grid size L = 10. Note that for a fixed δ0, lower values of d correspond to higher values of δ(n). For the larger perturbations (d ∈ {0.101, 0.15, 0.25}) there is lower variance, both across runs (in terms of the converged value) and within a run (compare the shaky blue lines in Figures 2c and 2d with the steady lines in Figures 2a and 2b). From Figure 2 we can also conclude that SFR-1 is more sensitive to the decay parameter, since it affects the variance of the iterates within the same run as well.
330
+
331
+ Table 2: Number of updates from different algorithms required to cross reward threshold on 50 × 50 grid
332
+
333
+ ![9_image_0.png](9_image_0.png)
334
+
335
Figure 2: Iterate performance for different decay exponents d for perturbations δ(n) on the 10×10 grid (|S| = 100): (a) d = 0.101, (b) d = 0.15, (c) d = 0.35, (d) d = 0.45. Other parameters are set to δ0 = 1, α0 = 2 × 10−6.
336
+
337
+ ![9_image_1.png](9_image_1.png)
338
+
339
+ (a) | S |= 2500, δ(n) = 0.175 (b) | S |= 2500, δ(n) = 0.5
340
+
341
+ ![9_image_3.png](9_image_3.png)
342
+
343
+ Figure 4: Runs showing constant schemes for δ(n), for the large grid size L = 50, capturing the tradeoff between performance and variance.
344
+
345
+ | Algorithm | d = 0.101 | d = 0.15 | d = 0.25 | d = 0.35 | d = 0.45 |
346
+ |-------------|-------------|-------------|--------------|--------------|--------------|
347
+ | SFR-1 | 68.72 ± 1.5 | 74.07 ± 0.7 | 59.45 ± 35.0 | 62.60 ± 28.4 | 31.33 ± 34.0 |
348
+ | SFR-2 | 68.10 ± 1.3 | 74.18 ± 0.6 | 68.49 ± 26.3 | 60.40 ± 35.4 | 60.46 ± 35.4 |
349
+
350
+ Table 3: Performance for different decay exponents d for perturbations δ(n) on 10×10 grid. Other parameters are set to δ0 = 1, α0 = 2 × 10−6.
351
+
352
+ Figure 3: Runs showing constant schemes for δ(n), for grid size L = 10.
353
+
354
+ ![9_image_2.png](9_image_2.png)
355
+
356
357
+
358
+ | Algorithm / Decay | 0.175 | 0.500 |
359
+ |---------------------|-------------|-------------|
360
+ | SFR-1 | 72.66 ± 3.6 | 52.60 ± 2.8 |
361
+ | SFR-2 | 73.29 ± 1.9 | 53.10 ± 3.2 |
362
+
363
+ | Algorithm | δ(n) = 0.175 | δ(n) = 0.5 | δ(n) = 0.7 |
364
+ |-------------|----------------|---------------|---------------|
365
+ | SFR-1 | 677.55 ± 233.1 | 446.65 ± 26.8 | 254.74 ± 35.6 |
366
+ | SFR-2 | 665.50 ± 229.1 | 437.26 ± 21.7 | 247.09 ± 33.0 |
367
+
368
Table 4: Performance for different constant values of δ(n) on the 10×10 grid over 10 seeded runs

Table 5: Performance for different constant values of δ(n) on the 50×50 grid over 10 seeded runs
369
+
370
+ ## 6.1.2 Constant Perturbations
371
+
372
We also ran experiments with constant values δ(n) = 0.175 and δ(n) = 0.5 (these correspond to d = 0 and δ0 ∈ {0.175, 0.5}). We run on both the medium and large grids (L ∈ {10, 50}). For both grids, SFR-1 and SFR-2 show similar results, see Figures 3 and 4. The larger value of δ(n) yields stable iterates that, however, do not reach as high a reward as those obtained with the smaller value of δ(n). Tables 4 and 5 capture the trade-off between the mean and variance of the converged values in this setting. Since δ(n) = 0.175 gives the best performance on the large grid size, we experiment with signed and clipped gradients to improve the consistency of the total reward across runs.
373
+
374
+ ## 6.2 On Transforming Updates
375
+
376
To control the dynamics of the weights, we employ variants of the proposed scheme geared towards transforming the gradients so as to mitigate the exploding or vanishing gradients problem. The effect of signed as well as clipped gradients is investigated.
377
+
378
+ ## 6.2.1 Signed Updates
379
+
380
+ We experiment with the signed update scheme described and analyzed previously. For this, we maintain a constant value for δ(n) = 0.175, while varying the step-sizes (α(n)) as shown in Figure 5 and Table 6. Since each component is changed by a magnitude of α(n), the step size controls how far the iterates can go in the span of fixed iterations, from a given random initialization. As expected, for very small step-sizes, the performance is not good, but improves with increasing step-sizes. We note that for large step-sizes, the two-sided variant does extremely well and shows good reward performance with low standard deviation as well. In the case of the one-sided algorithm, large step-sizes also improve the discounted reward performance but at the same time increase variance.
381
+
382
+ | Algorithm | α0 = 2 × 10−6 | α0 = 2 × 10−4 | α0 = 2 × 10−3 | α0 = 2 × 10−2 |
383
+ |-------------|-----------------|-----------------|-----------------|-----------------|
384
+ | SFR-1 | -26.03 ± 8.2 | 141.32 ± 144.6 | 392.66 ± 234.3 | 424.93 ± 254.5 |
385
+ | SFR-2 | -22.87 ± 6.4 | 456.54 ± 161.1 | 761.28 ± 6.7 | 823.68 ± 3.1 |
386
+
387
+ ## 6.2.2 Gradient Clipping
388
+
389
+ To maintain stability of the iterates, we need to ensure that gradients do not explode. One way is to clip each component of the gradient by value, whereas the other method involves normalizing the gradient such that its norm does not exceed a previously agreed value. Figure 6 shows the best runs in both cases. Results of more detailed experiments on different gridsizes are provided in Appendix B.4.
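Minimal sketches of the two options (ours; the threshold values are assumptions) are:

```python
import numpy as np

def clip_by_value(increment, clip_val=1.0):
    """Clip each component of the increment to the range [-clip_val, clip_val]."""
    return np.clip(increment, -clip_val, clip_val)

def clip_by_norm(increment, max_norm=1.0):
    """Rescale the increment so that its Euclidean norm does not exceed max_norm."""
    norm = np.linalg.norm(increment)
    return increment if norm <= max_norm else increment * (max_norm / norm)
```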
390
+
391
+ Table 6: Performance of signed updates on large grids by varying step-sizes
392
+
393
+ ![11_image_0.png](11_image_0.png)
394
+
395
+ Figure 5: Runs for signed updates with different values of step-sizes, α(n)
396
+
397
+ ![11_image_1.png](11_image_1.png)
398
+
399
+ Figure 6: Runs for gradients clipped by value and norm
400
+
401
+ ## 7 Conclusions
402
+
403
+ We presented model-free smoothed functional algorithms as suitable Monte-Carlo based alternatives to the popular REINFORCE algorithm for the setting of episodic tasks. We first presented the one-simulation SF
404
+ REINFORCE algorithm followed by its two-simulation counterpart. We also presented the signed variant of these algorithms. Subsequently, we provided a complete convergence analysis of the one-simulation SF
405
+ REINFORCE algorithm, and described the changes in the analysis needed for the two-simulation variant.
406
+
407
Next, we also provided the convergence analysis of the two-simulation signed variant, the analysis of the one-simulation version being analogous.
408
+
409
+ We showed detailed empirical results of the various algorithms on different grid world settings and under various choices of the setting parameters. We also compared the performance of our algorithms with the REINFORCE algorithm as well as PPO, and observed that both one-sided and two-sided SF REINFORCE
410
+ algorithms show better performance than these other algorithms. We also found that signed updates and gradient clipping are effective procedures that help significantly alleviate the problems of high variance in Monte-Carlo algorithms such as REINFORCE. As future work, it would be of interest to theoretically study the convergence rate results of the algorithms presented here.
411
+
412
+ ## References
413
+
414
+ D. P. Bertsekas. *Dynamic Programming and Optimal Control, Vol.II*. Athena Scientific, 2012. Dimitri Bertsekas. *Reinforcement learning and optimal control*, volume 1. Athena Scientific, 2019. S. Bhatnagar. Adaptive multivariate three-timescale stochastic approximation algorithms for simulation based optimization. *ACM Transactions on Modeling and Computer Simulation*, 15(1):74–107, 2005.
415
+
416
+ S. Bhatnagar. Adaptive Newton-based smoothed functional algorithms for simulation optimization. ACM
417
+ Transactions on Modeling and Computer Simulation, 18(1):2:1–2:35, 2007.
418
+
419
+ S. Bhatnagar. The reinforce policy gradient algorithm revisited. In 2023 Ninth Indian Control Conference
420
+ (ICC), pp. 177–177. IEEE, 2023.
421
+
422
+ S. Bhatnagar and V.S. Borkar. Multiscale chaotic SPSA and smoothed functional algorithms for simulation optimization. *Simulation*, 79(10):568–580, 2003.
423
+
424
+ S. Bhatnagar, R.S. Sutton, M. Ghavamzadeh, and M. Lee. Incremental natural actor-critic algorithms.
425
+
426
+ Advances in neural information processing systems, 20, 2007.
427
+
428
+ S. Bhatnagar, R.S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor–critic algorithms. *Automatica*, 45
429
+ (11):2471–2482, 2009.
430
+
431
+ S Bhatnagar, H. L. Prasad, and L. A. Prashanth. *Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods (Lecture Notes in Control and Information Sciences)*, volume 434. Springer, 2013.
432
+
433
+ V. S. Borkar. *Probability Theory: An Advanced Course*. Springer, New York, 1995.
434
+
435
+ V. S. Borkar. *Stochastic Approximation: A Dynamical Systems Viewpoint, 2'nd Edition*. Cambridge University Press, 2022.
436
+
437
+ X.-R. Cao. *Stochastic Learning and Optimization: A Sensitivity-Based Approach*. Springer, 2007. E. K. P. Chong and P. J. Ramadge. Optimization of queues using an infinitesimal perturbation analysis-based stochastic algorithm with general update times. *SIAM J. Cont. and Optim.*, 31(3):698–732, 1993.
438
+
439
+ Edwin KP Chong and Peter J Ramadge. Stochastic optimization of regenerative systems using infinitesimal perturbation analysis. *IEEE transactions on automatic Control*, 39(7):1400–1410, 1994.
440
+
441
+ T. Furmston, G. Lever, and D. Barber. Approximate Newton methods for approximate policy search in Markov decision processes. *Journal of Machine Learning Research*, 17:1–51, 2016.
442
+
443
+ Y. C. Ho and X. R. Cao. *Perturbation Analysis of Discrete Event Dynamical Systems*. Kluwer, Boston, 1991.
444
+
445
+ V. Ya Katkovnik and Yu Kulchitsky. Convergence of a class of random search algorithms. *Automation* Remote Control, 8:1321–1326, 1972.
446
+
447
+ J. Kiefer and J. Wolfowitz. Stochastic estimation of the maximum of a regression function. *Ann. Math.*
448
+ Statist., 23:462–466, 1952.
449
+
450
V.R. Konda and V.S. Borkar. Actor-critic type learning algorithms for Markov decision processes. *SIAM*
451
+ Journal on control and Optimization, 38(1):94–123, 1999.
452
+
453
V.R. Konda and J.N. Tsitsiklis. On actor-critic algorithms. *SIAM Journal on Control and Optimization*, 42
454
+ (4):1143–1166, 2003.
455
+
456
+ H. J. Kushner and D. S. Clark. *Stochastic Approximation Methods for Constrained and Unconstrained* Systems. Springer Verlag, 1978.
457
+
458
+ H. J. Kushner and G. G. Yin. *Stochastic Approximation Algorithms and Applications*. Springer Verlag, New York, 1997.
459
+
460
+ P. Marbach and J.N. Tsitsiklis. Simulation-based optimization of Markov reward processes. *IEEE Transactions on Automatic Control*, 46(2):191–209, 2001.
461
+
462
+ Sean Meyn. *Control systems and reinforcement learning*. Cambridge University Press, 2022.
463
+
464
+ L A Prashanth, S. Bhatnagar, M.C. Fu, and S.I. Marcus. Adaptive system optimization using random directions stochastic approximation. *IEEE Transactions on Automatic Control*, 62(5):2223–2238, 2017.
465
+
466
+ Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann.
467
+
468
+ Stable-baselines3: Reliable reinforcement learning implementations. *Journal of Machine Learning Research*, 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html.
469
+
470
+ R. Y. Rubinstein. *Simulation and the Monte Carlo Method*. Wiley, New York, 1981.
471
+
472
+ J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms.
473
+
474
+ arXiv preprint arXiv:1707.06347, 2017a.
475
+
476
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017b.
477
+
478
+ J.C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation.
479
+
480
+ IEEE Transactions on Automatic Control, 37(3):332–341, 1992.
481
+
482
+ J.C. Spall. A one-measurement form of simultaneous perturbation stochastic approximation. *Automatica*,
483
+ 33(1):109–112, 1997.
484
+
485
+ R. S. Sutton and A. W. Barto. *Reinforcement Learning, 2'nd Edition*. MIT Press, 2018.
486
+
487
+ R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In *NIPS*, volume 99, pp. 1057–1063, 1999.
488
+
489
+ R.J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
490
+
491
+ Reinforcement learning, pp. 5–32, 1992.
492
+
493
+ ## A Details Of The Convergence Analysis
494
+
495
+ We present here the details of the convergence analysis and give the proofs of the various results. We begin first with the results for the One-Simulation SF algorithm. We will subsequently sketch the analysis of the two-simulation SF algorithm. Finally, we shall discuss the convergence analysis of the algorithms with signed updates.
496
+
497
+ ## A.1 Convergence Of One-Simulation Sf
498
+
499
+ Proof of Lemma 1: Notice that
500
+
501
+ $$M_{n}^{i}=\Delta_{i}(n-1)\frac{G^{n-1}}{\delta_{n-1}}-E\left[\Delta_{i}(n-1)\frac{G^{n-1}}{\delta_{n-1}}\mid\mathcal{F}_{n-1}\right].$$
502
+
503
+ The first term on the RHS above is clearly measurable Fn while the second term is measurable Fn−1 and
504
+ hence measurable Fn as well. Further, from Assumption 1, each Mn is integrable. Finally, it is easy to verify
505
+ that
506
+ $$E[M_{n+1}^{i}\mid{\mathcal{F}}_{n}]=0,\ \forall i.$$
507
+ The claim follows. Proof of Proposition 1: Note that
508
+
509
+ $$E\left[\Delta_{i}(n){\frac{G^{n}}{\delta_{n}}}\mid{\mathcal{F}}_{n}\right]=E\left[E\left[\Delta_{i}(n){\frac{G^{n}}{\delta_{n}}}\mid{\mathcal{G}}_{n}\right]\mid{\mathcal{F}}_{n}\right],$$
510
+
511
where $\mathcal{G}_{n}\stackrel{\triangle}{=}\sigma(\theta(m),\Delta(m),m\leq n,\chi^{m},m<n),n\geq1$, is a sequence of increasing sigma fields with $\mathcal{G}_{0}=\sigma(\theta(0),\Delta(0))$. It is clear that $\mathcal{F}_{n}\subset\mathcal{G}_{n}$, ∀n ≥ 0. Now,
514
+
515
+ $$E\left[\Delta_{i}(n){\frac{G^{n}}{\delta_{n}}}\mid{\mathcal{G}}_{n}\right]={\frac{\Delta_{i}(n)}{\delta_{n}}}E[G^{n}\mid{\mathcal{G}}_{n}].$$
516
+
517
+ Let s n 0 = s denote the initial state in the trajectory χ n. Recall that the initial state s is chosen randomly from the distribution ν. Thus,
518
+
519
+ $$E[G^{n}\mid{\mathcal{G}}_{n}]=\sum_{s}\nu(s)E[G^{n}\mid s_{0}^{n}=s,\phi_{\theta(n)+\delta_{n}\Delta(n)}]$$
520
+ $$=\sum_{s}\nu(s)V_{\theta(n)+\delta_{n}\Delta(n)}(s).$$
521
+
522
+ Thus, with probability one,
523
+
524
+ $$E\left[\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\mid{\mathcal G}_{n}\right]=\sum_{s}\nu(s)\left(\Delta_{i}(n)\frac{V_{\theta(n)+\delta_{n}\Delta(n)}(s)}{\delta_{n}}\right).$$
525
+
526
+ Hence, it follows almost surely that
527
+
528
+ $$E\left[\Delta_{i}(n)\frac{G^{n}}{\delta_{n}}\mid{\mathcal{F}}_{n}\right]=\sum_{s}\nu(s)E\left[\Delta_{i}(n)\frac{V_{\theta(n)+\delta_{n}\Delta(n)}(s)}{\delta_{n}}\mid{\mathcal{F}}_{n}\right].$$
529
+
530
+ Using a Taylor's expansion of Vθ(n)+δn∆(n)(s) around θ(n) gives us
531
+
532
$$V_{\theta(n)+\delta_{n}\Delta(n)}(s_{n})=V_{\theta(n)}(s_{n})+\delta_{n}\Delta(n)^{T}\nabla V_{\theta(n)}(s_{n})+\frac{\delta_{n}^{2}}{2}\Delta(n)^{T}\nabla^{2}V_{\theta(n)}(s_{n})\Delta(n)+o(\delta_{n}^{2}).$$

Now recall that $\Delta(n)=(\Delta_{i}(n),i=1,\ldots,d)^{T}$. Thus,
538
+
539
$$\Delta(n)\frac{V_{\theta(n)+\delta_{n}\Delta(n)}(s_{n})}{\delta_{n}}=\frac{1}{\delta_{n}}\Delta(n)V_{\theta(n)}(s_{n})+\Delta(n)\Delta(n)^{T}\nabla V_{\theta(n)}(s_{n})+\frac{\delta_{n}}{2}\Delta(n)\Delta(n)^{T}\nabla^{2}V_{\theta(n)}(s_{n})\Delta(n)+o(\delta_{n}).$$
541
+
542
Now observe from the properties of ∆i(n), ∀i, n, that

(i) E[∆(n)] = 0 (the zero vector), ∀n, since ∆i(n) ∼ N(0, 1), ∀i, n.

(ii) $E[\Delta(n)\Delta(n)^{T}]=I$ (the identity matrix), ∀n.

(iii) $E\left[\sum_{i,j,k=1}^{d}\Delta_{i}(n)\Delta_{j}(n)\Delta_{k}(n)\right]=0$.

Property (iii) follows from the facts that (a) $E[\Delta_{i}(n)\Delta_{j}(n)\Delta_{k}(n)]=0$, ∀i ̸= j ̸= k, (b) $E[\Delta_{i}(n)\Delta_{j}^{2}(n)]=0$, ∀i ̸= j (this pertains to the case where i ̸= j but j = k above) and (c) $E[\Delta_{i}^{3}(n)]=0$ (for the case when i = j = k above). These properties follow from the independence of the random variables ∆i(n), i = 1, . . . , d and n ≥ 0, as well as the fact that they are all distributed N(0, 1). The claim now follows from (i)-(iii) above.
553
+
554
+ Recall that from Proposition 1, it follows that β(n) = o(δn).
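The effect of properties (i)-(iii) can also be checked numerically. The following self-contained sketch (ours; it uses a smooth quadratic test function rather than an MDP value function) averages the one-measurement estimate ∆ f(θ + δ∆)/δ over many Gaussian perturbations and compares it with the true gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta, N = 5, 0.5, 200000
A = rng.standard_normal((d, d)); A = A @ A.T            # fixed symmetric matrix
theta = rng.standard_normal(d)
f = lambda th: 0.5 * np.sum(th * (th @ A), axis=-1)     # quadratic test objective

Delta = rng.standard_normal((N, d))                      # N i.i.d. N(0, I) perturbations
est = np.mean(Delta * (f(theta + delta * Delta) / delta)[:, None], axis=0)

print(est)          # close to the true gradient below (up to Monte Carlo noise)
print(A @ theta)
```

For a quadratic objective the third-order Gaussian moments vanish exactly, so the residual difference here is pure Monte Carlo noise rather than the o(δn) bias discussed above.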
555
+
556
+ Proof of Lemma 2: It can be seen from (4) that Vθ(s) is continuously differentiable in θ. It can also be shown as in Theorem 3 of Furmston et al. (2016) that ∇2Vθ(s) exists and is continuous. Since θ takes values in C, a compact set, it follows that ∇2Vθ(s) is bounded and thus ∇Vθ(s) is Lipschitz continuous.
557
+
558
Finally, let $L_{1}^{s}>0$ denote the Lipschitz constant for the function ∇Vθ(s). Then, for a given θ0 ∈ C,
560
+
561
+ $$\parallel\nabla V_{\theta}(s)\parallel-\parallel\nabla V_{\theta_{0}}(s)\parallel\leq\parallel\nabla V_{\theta}(s)-\nabla V_{\theta_{0}}(s)\parallel$$
562
+ $$\leq L_{1}^{s}\parallel\theta-\theta_{0}\parallel$$
563
+ $$\leq L_{1}^{s}\parallel\theta\parallel+L_{1}^{s}\parallel\theta_{0}\parallel.$$
564
+
565
Thus, $\|\nabla V_{\theta}(s)\|\leq\|\nabla V_{\theta_{0}}(s)\|+L_{1}^{s}\|\theta_{0}\|+L_{1}^{s}\|\theta\|$. Let $K_{s}\stackrel{\triangle}{=}\|\nabla V_{\theta_{0}}(s)\|+L_{1}^{s}\|\theta_{0}\|$ and $K_{1}\stackrel{\triangle}{=}\max(K_{s},L_{1}^{s},s\in S)$. Thus, $\|\nabla V_{\theta}(s)\|\leq K_{1}(1+\|\theta\|)$. Note here that since |S| < ∞, K1 < ∞ as well. The claim follows.

Proof of Lemma 3: Note that
575
+
576
+ $$\|M_{n+1}\|^{2}=\sum_{i=1}^{d}(M_{n+1}^{i})^{2}$$
577
+ $$\begin{array}{c}{{i=1}}\\ {{=\sum_{i=1}^{d}\left(\Delta_{i}^{2}(n)\frac{(G^{n})^{2}}{\delta_{n}^{2}}+\frac{1}{\delta_{n}^{2}}E\left[\Delta_{i}(n)G^{n}\mid\mathcal{F}_{n}\right]^{2}\right.}}\\ {{\left.-2\Delta_{i}(n)\frac{G^{n}}{\delta_{n}^{2}}E\left[\Delta_{i}(n)G^{n}\mid\mathcal{F}_{n}\right]\right).}}\end{array}$$
578
+
579
+ Thus,
580
+
581
+ $$E[\|M_{n+1}\|^{2}\mid{\mathcal{F}}_{n}]=\frac{1}{\delta_{n}^{2}}\sum_{i=1}^{d}\Big(E[\Delta_{i}^{2}(n)(G^{n})^{2}\mid{\mathcal{F}}_{n}]\Big)$$ $$-E^{2}[\Delta_{i}(n)G^{n}\mid{\mathcal{F}}_{n}]\Big).$$
582
+ $$(14)$$
583
+
584
+ The claim now follows from Assumption 1 and the fact that all single-stage costs are bounded (cf. pp.174, Chapter 3 of Bertsekas (2012)).
585
+
586
+ Proof of Lemma 4: It is easy to see that Zn is Fn-measurable ∀n. Further, it is integrable for each n and moreover E[Zn+1 | Fn] = Zn almost surely since (Mn+1, Fn), n ≥ 0 is a martingale difference sequence by Lemma 1. It is also square integrable from Lemma 3. The quadratic variation process of this martingale will be convergent almost surely if
587
+
588
+ $$\sum_{n=0}^{\infty}E[\|Z_{n+1}-Z_{n}\|^{2}\mid{\cal F}_{n}]<\infty\ {\rm a.s.}\tag{10}$$
589
+
590
+ Thus,
591
+
592
+ Note that
593
+ $$E[\left\|Z_{n+1}-Z_{n}\right\|^{2}\mid{\mathcal{F}}_{n}]=a(n)^{2}E[\left\|M_{n+1}\right\|^{2}\mid{\mathcal{F}}_{n}].$$
594
+ $$\sum_{n=0}^{\infty}E[\left\|Z_{n+1}-Z_{n}\right\|^{2}\mid{\mathcal{F}}_{n}]=\sum_{n=0}^{\infty}a(n)^{2}E[\left\|M_{n+1}\right\|^{2}\mid{\mathcal{F}}_{n}]$$ $$\leq\hat{L}\sum_{n=0}^{\infty}\left(\frac{a(n)}{\delta_{n}}\right)^{2},$$
595
+ by Lemma 3. (14) now follows as a consequence of Assumption 2. Now (Zn, Fn), n ≥ 0 can be seen to be convergent from the martingale convergence theorem for square integrable martingales Borkar (1995). Our main result below is based on Theorem 5.3.1 of Kushner & Clark (1978) for projected stochastic approximation algorithms. Before we proceed further, we recall that result below.
596
+
597
+ Let C ⊂ Rd be a compact and convex set as before and Γ : Rd → C denote the projection operator that projects any x = (x1, . . . , xd)
598
+ T ∈ Rdto its nearest point in C.
599
+
600
Consider now the following d-dimensional stochastic recursion

$$X_{n+1}=\Gamma(X_{n}+a(n)(h(X_{n})+\xi_{n}+\beta_{n})),\tag{15}$$

under the assumptions listed below. Also, consider the following ODE associated with (15):

$${\dot{X}}(t)={\bar{\Gamma}}(h(X(t))).\tag{16}$$
610
Let C(C) denote the space of all continuous functions from C to Rd. The operator $\bar{\Gamma}:C(C)\to C(R^{d})$ is
611
+ defined according to
612
+ $$\bar{\Gamma}(v(x))=\lim_{\eta\to0}\left(\frac{\Gamma(x+\eta v(x))-x}{\eta}\right),\tag{17}$$
613
+
614
+ for any continuous v : C → Rd. The limit in (17) exists and is unique since C is a convex set. In case this limit is not unique, one may consider the set of all limit points of (17). Note also that from its definition, Γ( ¯ v(x)) = v(x) if x ∈ C
615
+ o(the interior of C). This is because for such an x, one can find η > 0 sufficiently small so that x + ηv(x) ∈ C
616
+ o as well and hence Γ(x + ηv(x)) = x + ηv(x). On the other hand, if x ∈ ∂C
617
+ (the boundary of C) is such that x + ηv(x) ̸∈ C, for any small η > 0, then Γ( ¯ v(x)) is the projection of v(x)
618
+ to the tangent space of ∂C at x.
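+ To make the projection Γ and the operator Γ̄ concrete, the following is a minimal numerical sketch for the special case where C is a box; the box bounds, the test vectors and the small η below are illustrative assumptions, and the code evaluates the difference quotient in (17) for a small η rather than the exact limit.
+
+ ```python
+ import numpy as np
+
+ def project_box(x, low, high):
+     # Gamma: Euclidean projection of x onto the box C = [low, high]^d.
+     return np.clip(x, low, high)
+
+ def gamma_bar(x, v, low, high, eta=1e-8):
+     # Approximate (17): (Gamma(x + eta * v) - x) / eta for a small eta.
+     return (project_box(x + eta * v, low, high) - x) / eta
+
+ v = np.array([1.0, -2.0])
+ # Interior point of C = [0, 1]^2: Gamma_bar(v(x)) equals v(x) itself.
+ print(gamma_bar(np.array([0.5, 0.5]), v, low=0.0, high=1.0))  # ~ [ 1., -2.]
+ # Boundary point with an outward-pointing component: that component is removed,
+ # i.e. v is projected onto the tangent space of the boundary at x.
+ print(gamma_bar(np.array([1.0, 0.5]), v, low=0.0, high=1.0))  # ~ [ 0., -2.]
+ ```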
619
+
620
+ Consider now the assumptions listed below.
621
+
622
+ (B1) The function $h:\mathcal{R}^d\to\mathcal{R}^d$ is continuous.
+
+ (B2) The step-sizes $a(n),n\geq0$ satisfy
+
+ $$a(n)>0\ \forall n,\ \sum_{n}a(n)=\infty,\ a(n)\to0{\mathrm{~as~}}n\to\infty.$$
625
+ (B3) The sequence βn, n ≥ 0 is a bounded random sequence with βn → 0 almost surely as n → ∞.
626
+
627
+ (B4) There exists T > 0 such that ∀ϵ > 0,
628
+
629
+ $$\operatorname*{lim}_{n\to\infty}P\left(\operatorname*{sup}_{j\geq n}\operatorname*{max}_{t\leq T}\left|\sum_{i=m(j T)}^{m(j T+t)-1}a(i)\xi_{i}\right|\geq\epsilon\right)=0.$$
630
+
631
+ (B5) The ODE (16) has a compact subset K of Rd as its set of asymptotically stable equilibrium points.
632
+
633
+ Let t(n), n ≥ 0 be a sequence of nonnegative real numbers defined according to t(0) = 0 and, for n ≥ 1, $t(n)=\sum_{j=0}^{n-1}a(j)$. By Assumption (B2), t(n) → ∞ as n → ∞. Let m(t) = max{n | t(n) ≤ t}. Thus, m(t) → ∞ as t → ∞. Assumptions (B1)-(B3) correspond to A5.1.3-A5.1.5 of Kushner & Clark (1978) while (B4)-(B5) correspond to A5.3.1-A5.3.2 there.
637
+
638
+ Theorem 5.3.1 of Kushner & Clark (1978) (pp. 191-196) essentially says the following:
+
+ Theorem 4 (Kushner and Clark) *Under Assumptions (B1)–(B5), almost surely,* Xn → K as n → ∞.
641
+
642
+ Finally, we come to the proof of our main result.
643
+
644
+ Proof of Theorem 1: In light of the foregoing, we rewrite (5) according to
+
+ $$\theta_{i}(n+1)=\Gamma_{i}\Big(\theta_{i}(n)-a(n)\sum_{s}\nu(s)\nabla_{i}V_{\theta(n)}(s)-a(n)\beta_{i}(n)+M_{n+1}^{i}\Big),\tag{18}$$
+
+ where βi(n) is as in (10). We shall proceed by verifying Assumptions (B1)-(B5) and subsequently appeal to Theorem 5.3.1 of Kushner & Clark (1978) (i.e., Theorem 4 above) to claim convergence of the scheme.
650
+
651
+ Note that Lemma 2 ensures Lipschitz continuity of ∇Vθ(s), implying (B1). Next, from Assumption 2, since δn → 0, it follows that a(n) → 0 as n → ∞. Thus, Assumption (B2) holds as well. Now from Lemma 2, it follows that $\sum_{s}\nu(s)\nabla V_{\theta}(s)$ is uniformly bounded since θ ∈ C, a compact set. Assumption (B3) is now verified from Proposition 1. Since C is a convex and compact set, Assumption (B4) holds trivially. Finally, Assumption (B5) is also easy to see as a consequence of Lemma 4. Now note that for the ODE (11), $F(\theta)=\sum_{s}\nu(s)V_{\theta}(s)$ serves as an associated Lyapunov function and in fact
+
+ $$\nabla F(\theta)^{T}\bar{\Gamma}\Big(-\sum_{s}\nu(s)\nabla V_{\theta}(s)\Big)=\Big(\sum_{s}\nu(s)\nabla_{\theta}V_{\theta}(s)\Big)^{T}\bar{\Gamma}\Big(-\sum_{s}\nu(s)\nabla V_{\theta}(s)\Big)\leq0.$$
+
+ For θ ∈ C° (the interior of C), it is easy to see that $\bar{\Gamma}(-\sum_{s}\nu(s)\nabla V_{\theta}(s))=-\sum_{s}\nu(s)\nabla V_{\theta}(s)$, and
+
+ $$\nabla F(\theta)^{T}\bar{\Gamma}\Big(-\sum_{s}\nu(s)\nabla V_{\theta}(s)\Big)\;\begin{cases}<0&{\mathrm{if~}}\theta\in H^{c}\cap C,\\ =0&{\mathrm{otherwise.}}\end{cases}$$
+
+ For θ ∈ ∂C (the boundary of C), there can additionally be spurious attractors, see Kushner & Yin (1997), that are also contained in H. The claim now follows from Theorem 5.3.1 of Kushner & Clark (1978).
663
+
664
+ ## A.2 Convergence Of Two-Simulation SF
665
+
666
+ The analysis proceeds in a similar manner as for the one-simulation SF except with $\frac{G^{n+}-G^{n-}}{2\delta_{n}}$ in place of $\frac{G^{n}}{\delta_{n}}$.
669
+
670
+ Proof of Proposition 2:
671
+ A similar calculation as with the proof of Proposition 1 would show that
672
+
673
+ $$E\left[\Delta_{i}(n)\left(\frac{G^{n+}-G^{n-}}{2\delta_{n}}\right)\mid\mathcal{F}_{n}\right]=\sum_{s}\nu(s)E\left[\Delta_{i}(n)\frac{V_{\theta(n)+\delta_{n}\Delta(n)}(s)-V_{\theta(n)-\delta_{n}\Delta(n)}(s)}{2\delta_{n}}\mid\mathcal{F}_{n}\right].$$
674
+
675
+ Using Taylor's expansions of Vθ(n)+δn∆(n)(s) and Vθ(n)−δn∆(n)(s) around θ(n) gives us
676
+
677
+ $$\Delta(n)\left(\frac{V_{\theta(n)+\delta_{n}\Delta(n)}(s_{n})-V_{\theta(n)-\delta_{n}\Delta(n)}(s_{n})}{2\delta_{n}}\right)=\Delta(n)\Delta(n)^{T}\nabla V_{\theta(n)}(s_{n})+o(\delta_{n}).$$
678
+
679
+ The zero order and second order terms directly cancel above instead of being zero-mean, thereby resulting in lower gradient estimator bias. The rest follows from properties (i)-(iii) mentioned previously for the one-simulation gradient SF. In particular, $E[\Delta(n)\Delta(n)^{T}]=I$.
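+ For concreteness, the two-simulation estimate analyzed above can be sketched as follows; the Rademacher (±1) choice for ∆(n), which satisfies E[∆(n)∆(n)^T] = I, and the `rollout_return` function, a stand-in for one simulated episode returning a noisy estimate of Vθ, are illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ def two_sided_sf_gradient(theta, delta_n, rollout_return, rng):
+     # Perturbation Delta(n) with i.i.d. +/-1 components, so E[Delta Delta^T] = I.
+     Delta = rng.choice([-1.0, 1.0], size=theta.shape[0])
+     # One simulation at theta + delta*Delta and one at theta - delta*Delta.
+     g_plus = rollout_return(theta + delta_n * Delta)
+     g_minus = rollout_return(theta - delta_n * Delta)
+     # Component-wise estimate Delta_i (G+ - G-) / (2 delta); its conditional
+     # expectation equals the true gradient up to an o(delta) bias term.
+     return Delta * (g_plus - g_minus) / (2.0 * delta_n)
+ ```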
681
+
682
+ Proof of Theorem 2:
683
+ In the light of Proposition 2, the proof here follows in a similar manner as for the one-simulation SF.
684
+
685
+ ## A.3 Convergence Of Two-Simulation Signed SF REINFORCE
686
+
687
+ Recall that we have
688
+
689
+ $$H_{i}(\theta(n),\Delta(n))=\Delta_{i}(n)\left[\frac{V_{\theta(n)+\delta(n)\Delta(n)}-V_{\theta(n)-\delta(n)\Delta(n)}}{2\delta(n)}\right].$$
690
+
691
+ As explained previously, E[Hi(θ(n), ∆(n)) | Fn] = ∇iVθ(n) + o(δ(n)).
692
+
693
+ Also, recall the 'error' in the ith component is given by
694
+
695
+ $$e_{i}(n)=H_{i}(\theta(n),\Delta(n))-\nabla_{i}V_{\theta(n)}=\sum_{j\neq i}\Delta_{i}(n)\Delta_{j}(n)\nabla_{j}V_{\theta(n)}+o(\delta(n)).$$
696
+
697
+ Proof of Theorem 3:
698
+
699
+ We rewrite the algorithm as follows:
700
+
701
+ $$\theta_{i}(n+1)=\Gamma_{i}(\theta_{i}(n)-a(n)\,\mathrm{sgn}(H_{i}(\theta(n),\Delta(n))))$$
702
+ $$=\Gamma_{i}(\theta_{i}(n)-a(n)(I(H_{i}(\theta(n),\Delta(n))>0)-I(H_{i}(\theta(n),\Delta(n))\leq0))),$$
703
+
704
+ where I(·) is the indicator function. Thus, we have
705
+
706
+ $$\theta_{i}(n+1)=\Gamma_{i}(\theta_{i}(n)-a(n)(1-2I(H_{i}(\theta(n),\Delta(n))\leq0)))$$ $$=\Gamma_{i}(\theta_{i}(n)-a(n)(1-2P(H_{i}(\theta(n),\Delta(n))\leq0|{\cal F}_{n})+M_{i}(n+1))),$$
707
+ where
708
+ $$M_{i}(n+1)=2P(H_{i}(\theta(n),\Delta(n))\leq0|\mathcal{F}_{n})-2I(H_{i}(\theta(n),\Delta(n))\leq0),$$ $$=2P(e_{i}(n)\leq-\sum_{s}\nu(s)\nabla_{i}V_{\theta(n)}(s)|\mathcal{F}_{n})-2I(e_{i}(n)\leq-\sum_{s}\nu(s)\nabla_{i}V_{\theta(n)}(s))$$ $$=2P(e_{i}(n)\leq-\sum_{s}\nu(s)\nabla_{i}V_{\theta(n)}(s)|\theta(n))-2I(e_{i}(n)\leq-\sum_{s}\nu(s)\nabla_{i}V_{\theta(n)}(s)),$$
709
710
+ by (A1). It is easy to see that (Mi(n), Fn), n ≥ 0 is a martingale difference sequence. Since supn |Mi(n)| ≤ 1, and under (A4), it follows from an application of the martingale convergence theorem that $\sum_{m=0}^{n-1}a(m)M_{i}(m+1)$, n ≥ 1 is an almost surely convergent martingale.
711
+
712
+ It is easy to verify that $W(\theta)=\sum_{s}\nu(s)V_{\theta}(s)$ itself is a Lyapunov function for the ODE (13) since
713
+
714
+ $$\frac{dW(\theta)}{dt}=-\bar{\Gamma}\Big(\Big\langle\Big(1-2F\Big(-\sum_{s}\nu(s)\nabla V_{\theta}(s)\mid\theta\Big)\Big),\sum_{s}\nu(s)\nabla V_{\theta}(s)\Big\rangle\Big)$$
+ $$=-\sum_{i=1}^{N}\bar{\Gamma}_{i}\Big(\Big(1-2F_{i}\Big(-\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)\mid\theta\Big)\Big)\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)\Big).$$
+
+ From (A3), if $\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)>0$, then $(1-2F_{i}(-\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)\mid\theta))\geq0$ and $\frac{dW(\theta)}{dt}\leq0$. Similarly, if $\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)<0$, then $(1-2F_{i}(-\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)\mid\theta))\leq0$ and $\frac{dW(\theta)}{dt}\leq0$. Further, when $\sum_{s}\nu(s)\nabla_{i}V_{\theta}(s)=0$, $\frac{dW(\theta)}{dt}=0$. From LaSalle's invariance theorem in conjunction with Theorem 2 of Chapter 2 of Borkar (2022), it follows that θ(n), n ≥ 0 converges almost surely to the largest invariant set $\bar{K}\subset K\subset\{\theta\mid\bar{\Gamma}(\langle(1-2F(-\sum_{s}\nu(s)\nabla V_{\theta}(s)\mid\theta)),\sum_{s}\nu(s)\nabla V_{\theta}(s)\rangle)=0\}$. The claim follows.
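+ A minimal sketch of the resulting signed update is given below; the box-shaped constraint set C (handled with np.clip), the Rademacher perturbations and the hypothetical `rollout_return` simulator are illustrative assumptions rather than the exact experimental setup.
+
+ ```python
+ import numpy as np
+
+ def signed_sf_step(theta, a_n, delta_n, rollout_return, rng, low=-10.0, high=10.0):
+     # H_i(theta, Delta) = Delta_i (V(theta + delta*Delta) - V(theta - delta*Delta)) / (2 delta).
+     Delta = rng.choice([-1.0, 1.0], size=theta.shape[0])
+     H = Delta * (rollout_return(theta + delta_n * Delta)
+                  - rollout_return(theta - delta_n * Delta)) / (2.0 * delta_n)
+     # theta_i(n+1) = Gamma_i(theta_i(n) - a(n) * sgn(H_i)); np.clip plays the
+     # role of the projection Gamma for a box-shaped C.
+     return np.clip(theta - a_n * np.sign(H), low, high)
+ ```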
720
+
721
+ ## B Numerical Results
+
+ ## B.1 The Stochastic Grid-World Environment
722
+
723
+ We conduct experiments on a 2-dimensional grid-world environment with configurable size denoted by L. The agent starts at the top-left corner of the grid-world and aims to reach the terminal state located at the bottom-right corner. The reward function, based on the Manhattan distance, penalizes the agent for not reaching the goal, with the penalty gradually decreasing as the agent approaches the terminal state, as visualized in Figure 7. The penalty is specified by a negative single-stage reward whose magnitude is larger for states farther from the goal than for nearby states. This is designed to guide the agent to move in the desired direction towards the goal state.
726
+
727
+ The agent is allowed to move in all four directions in the grid. However, we introduce an uncertainty in the environment: when the agent executes any of these actions, there is a probability 1 − p that it will move in the intended direction, and with probability p/2 each, it will move in one of the two perpendicular directions.
728
+
729
+ ![19_image_0.png](19_image_0.png)
730
+
731
+ Figure 7: Stochastic grid world problem
732
+
733
+ | Grid size | Max episode length |
734
+ |-------------|----------------------|
735
+ | 4 × 4 | 50 |
736
+ | 8 × 8 | 80 |
737
+ | 10 × 10 | 100 |
738
+ | 20 × 20 | 150 |
739
+ | 50 × 50 | 200 |
740
+
741
+ Table 7: Maximum steps allowed in an episode for a given grid-size (L)
742
+ This probabilistic element makes the objective function stochastic. We set p = 0.1 in all our experiments. The episode ends after the agent reaches the terminal state. If the agent is unable to reach the goal state, to ensure finite total rewards, we terminate the episode after a specified number of steps. Table 7 shows these step limits for the available grid-sizes. Note, however, that only the goal state has a positive reward while all the other states carry a negative (though varying) reward. Thus, an episode that was terminated in this way necessarily carries a negative return.
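+ A minimal sketch of the transition and reward logic described above is given below; the exact penalty scale, the goal reward of 1, and keeping the agent in place when it would step off the grid are illustrative assumptions, while the slip probability p and the Manhattan-distance shaping follow the description.
+
+ ```python
+ import numpy as np
+
+ # Actions: up, down, left, right as (row, col) displacements.
+ ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
+ PERPENDICULAR = {0: (2, 3), 1: (2, 3), 2: (0, 1), 3: (0, 1)}
+
+ def step(state, action, L, p=0.1, rng=None):
+     rng = np.random.default_rng() if rng is None else rng
+     # With probability 1 - p move as intended; with probability p/2 each,
+     # slip to one of the two perpendicular directions.
+     u = rng.random()
+     if u < 1.0 - p:
+         a = action
+     else:
+         a = PERPENDICULAR[action][0] if u < 1.0 - p / 2 else PERPENDICULAR[action][1]
+     r, c = state
+     dr, dc = ACTIONS[a]
+     # Moves that would leave the grid keep the agent in place (an assumption).
+     next_state = (min(max(r + dr, 0), L - 1), min(max(c + dc, 0), L - 1))
+     goal = (L - 1, L - 1)
+     done = next_state == goal
+     # Negative single-stage reward that grows with the Manhattan distance to
+     # the goal (illustrative scale); only the goal carries a positive reward.
+     dist = abs(goal[0] - next_state[0]) + abs(goal[1] - next_state[1])
+     reward = 1.0 if done else -dist / (2.0 * (L - 1))
+     return next_state, reward, done
+ ```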
743
+
744
+ ## B.2 Policy Function
745
+
746
+ The input to the policy consists of polynomial and modulo features of the agent's position in the grid. All algorithms utilize the same policy function, which computes logits for the available actions using a linear function. The logits are then passed through the Softmax or Boltzmann function to obtain a probability distribution over actions. Figure 8 illustrates this.
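+ A minimal sketch of such a policy follows; since the exact feature set and parameter count are only summarized here, the particular polynomial and modulo features below are an assumption for illustration.
+
+ ```python
+ import numpy as np
+
+ def features(state, L):
+     # Illustrative polynomial and modulo features of the (row, col) position.
+     r, c = state[0] / (L - 1), state[1] / (L - 1)
+     return np.array([1.0, r, c, r * c, r ** 2, c ** 2,
+                      state[0] % 2, state[1] % 2], dtype=float)
+
+ def boltzmann_policy(theta, state, L, n_actions=4):
+     # Linear logits per action followed by a softmax (Boltzmann) distribution;
+     # theta holds n_actions * len(features) parameters.
+     phi = features(state, L)
+     logits = theta.reshape(n_actions, phi.shape[0]) @ phi
+     logits -= logits.max()  # numerical stability
+     probs = np.exp(logits)
+     return probs / probs.sum()
+ ```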
747
+
748
+ ## B.3 Algorithm Parameters
749
+
750
+ For the two SF-REINFORCE algorithms we chose the sensitivity and step-size parameters as $\delta(n)=\delta_{0}\left(\frac{1}{50000+n}\right)^{d}$ and $\alpha(n)=\frac{\alpha_{0}\times50000}{50000+n}$, respectively, where n is the iteration or episode number. Note that the iteration and episode numbers are the same in our setup. We set α(0) = α0 = 2 × 10−6. To prove convergence theoretically, we require d < 0.5 so that the requirement $\sum_{n}\left(\frac{a(n)}{\delta(n)}\right)^{2}<\infty$ is satisfied; hence, we restrict these quantities accordingly. We experiment with both (a) decay schemes, by varying d and setting δ(0) = δ0 = 1, and (b) a constant scheme, with d = 0 and varying δ0. To reduce the variance, we measure Gn as the average over 10 trials. The two-sided version uses 5 trials for each side so as to match the net computational effort of the one-sided scheme, thereby allowing for a fair comparison.
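+ The schedules above translate directly into code; a minimal sketch, with the constants taken from the text (for the decaying scheme, d must satisfy d < 0.5):
+
+ ```python
+ def delta_schedule(n, delta0=1.0, d=0.25):
+     # delta(n) = delta0 * (1 / (50000 + n))**d; d = 0 gives the constant scheme.
+     return delta0 * (1.0 / (50000 + n)) ** d
+
+ def alpha_schedule(n, alpha0=2e-6):
+     # alpha(n) = alpha0 * 50000 / (50000 + n).
+     return alpha0 * 50000.0 / (50000 + n)
+
+ print(delta_schedule(0), alpha_schedule(0))            # ~0.067, 2e-06
+ print(delta_schedule(100000), alpha_schedule(100000))  # both shrink as n grows
+ ```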
758
+
759
+ ![20_image_0.png](20_image_0.png)
760
+
761
+ ![20_image_1.png](20_image_1.png)
762
+
763
+ (a) Features from state (b) Policy architecture
764
+ Figure 8: Boltzmann Policy used across all algorithms
765
+
766
+ | Algorithm / Gridsize | 4x4 | 8x8 | 10x10 | 20x20 | 50x50 |
767
+ |------------------------|-------------|--------------|--------------|----------------|----------------|
768
+ | REINFORCE | 13.90 ± 0.1 | 50.74 ± 2.3 | 76.90 ± 4.9 | 144.89 ± 113.8 | -18.79 ± 8.4 |
769
+ | PPO | 12.72 ± 0.2 | 49.72 ± 0.2 | 75.38 ± 0.3 | 235.09 ± 1.9 | 682.97 ± 12.9 |
770
+ | SFR-1 | 13.23 ± 0.1 | 45.40 ± 17.7 | 59.80 ± 35.0 | 220.24 ± 78.8 | 376.29 ± 401.4 |
771
+ | SFR-2 | 13.26 ± 0.1 | 45.52 ± 17.8 | 60.39 ± 35.0 | 170.68 ± 127.4 | 562.89 ± 387.7 |
772
+
773
+ Table 8: Total reward (mean ± standard error) of algorithms on different grid sizes. Parameters used: δ0 = 1, d = 0.25 and α0 = 2 × 10−6.
+
+ We also include the Proximal Policy Optimization (PPO) algorithm from Schulman et al. (2017b); Raffin et al. (2021) for our numerical comparison. We use the same policy architecture along with a linear layer for the value function. For REINFORCE and PPO, the policy is updated using the policy gradient via an Adam optimizer with its learning rate set to 0.0003.
776
+
777
+ ## B.4 Gridsizes
778
+
779
+ Figure 9 illustrates the dynamics of the REINFORCE, SF-REINFORCE and PPO algorithms on various gridsizes. For this, we set the parameters δ0 = 1, d = 0.25 and α0 = 2 × 10−6, and vary the gridsize over L = 4, 8, 10, 20 and 50. For the largest grid-size, we end up having |S| = 2500 states. We limit L ≤ 50 since this is already quite large for a Boltzmann policy with 30 trainable parameters. Table 8 shows the convergence values.
780
+
781
+ From Table 8, it can be seen that REINFORCE does only slightly better than our algorithms on the smaller gridsizes L ∈ {4, 8, 10} and performs decently for L = 20. For smaller gridsizes, since REINFORCE uses the analytical gradient, there is no perturbation-induced noise in its gradient estimate, unlike SF-REINFORCE. As a result, as seen in Figures 9a, 9c and 9d, it is able to quickly converge to the optimal linear policy.
782
+
783
+ PPO regularizes the objective function by clipping it. Such methods are reliable for policies with deeper neural networks. Yet, we note that PPO is able to work with a linear policy and produces consistent performance across seeds. Compared to REINFORCE, our methods are more robust since they show reasonable performance on all grid sizes, especially for larger L ∈ {20, 50}. The two-sided SF REINFORCE (SFR-2) attains the best performance for L = 50. As expected, the one-sided SF REINFORCE (SFR-1) shows similar or higher variance across seeds than SFR-2.
784
+
785
+ ![21_image_0.png](21_image_0.png)
786
+
787
+ Figure 9: Plots showing performance of iterates of algorithms on various gridsizes. Parameters used: δ0 = 1, d = 0.25 and α0 = 2 × 10−6
788
+ Although our algorithms converge to competitive average returns for bigger grid sizes, the variance is still quite large. This motivates us to experiment with various perturbation schedules and gradient clipping, and to study their effects not only on the performance (mean ± standard error) but also on the dynamics of the iterates in Sections 6.1 and 6.2.
loaWwnhYaS/loaWwnhYaS_meta.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "languages": null,
+   "filetype": "pdf",
+   "toc": [],
+   "pages": 23,
+   "ocr_stats": {
+     "ocr_pages": 1,
+     "ocr_failed": 0,
+     "ocr_success": 1,
+     "ocr_engine": "surya"
+   },
+   "block_stats": {
+     "header_footer": 23,
+     "code": 0,
+     "table": 8,
+     "equations": {
+       "successful_ocr": 82,
+       "unsuccessful_ocr": 1,
+       "equations": 83
+     }
+   },
+   "postprocess_stats": {
+     "edit": {}
+   }
+ }