RedTachyon committed on
Commit
ee55c06
1 Parent(s): 4a8ca35

Upload folder using huggingface_hub

nXfptr5iGa/11_image_0.png ADDED

Git LFS Details

  • SHA256: 2235e7552f9b1d5453b3cf22f7ece524de01602d86cf106943710e2447c15d1e
  • Pointer size: 130 Bytes
  • Size of remote file: 37.2 kB
nXfptr5iGa/3_image_0.png ADDED

Git LFS Details

  • SHA256: cda52a85fc22810e7fefe1adb1373a4b8a7e66f0b1bef7d0570fc10af89be27b
  • Pointer size: 130 Bytes
  • Size of remote file: 17.9 kB
nXfptr5iGa/5_image_0.png ADDED

Git LFS Details

  • SHA256: 70c7bc82a7f56367ecebc1332f5c873e7084861f9fe8bd739781c1028d7e2a4e
  • Pointer size: 130 Bytes
  • Size of remote file: 16.8 kB
nXfptr5iGa/7_image_0.png ADDED

Git LFS Details

  • SHA256: 29872f55e163bb3e7d27be178a27d467646fa2bf165951c0d9855fec72ca87e9
  • Pointer size: 131 Bytes
  • Size of remote file: 172 kB
nXfptr5iGa/9_image_0.png ADDED

Git LFS Details

  • SHA256: 40899b044891a2fd21d8e903a5ccbca8fac1a23bb58a4c8960277575708bbc05
  • Pointer size: 130 Bytes
  • Size of remote file: 54.6 kB
nXfptr5iGa/nXfptr5iGa.md ADDED
@@ -0,0 +1,394 @@
1
+ # Critic Identifiability In Offline Reinforcement Learning With A Deterministic Exploration Policy
2
+
3
Anonymous authors

Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
Offline Reinforcement Learning (RL) promises to enable the adoption of RL in settings where logged interaction data is abundant but running live experiments is costly or impossible. The setting where data was gathered with a stochastic exploration policy has been studied extensively; however, in practice, log data is often generated by a deterministic policy. In this work, we examine this deterministic offline RL setting from both a theoretical and a practical perspective. We describe the critic identifiability problem from a theoretical standpoint, arguing that algorithms designed for stochastic exploration are ostensibly unsuited for the deterministic version of the problem. We elucidate the problem further using a set of experiments on contextual bandits as well as continuous control problems. We conclude that, quite surprisingly, the tools for stochastic offline RL, notably the TD3+BC algorithm, are applicable after all.
8
+
9
+ ## 1 Introduction
10
+
11
+ In many practical settings where we want to apply Reinforcement Learning, running live experiments is costly. For example, in recommender systems, running an experiment with a new policy of unknown quality might lead to a poor user experience and risks losing revenue. Even more starkly, in healthcare, ethical considerations may completely preclude executing policies with unknown performance. Offline Reinforcement Learning promises to address this problem by enabling us to learn only from logged data that we already have, without having to query additional interactions from the environment. Offline RL algorithms (Fujimoto et al., 2019; Fujimoto & Gu, 2021; Kostrikov et al., 2021; Kumar et al.,
12
+ 2020; Fu et al., 2022) often work by learning a critic and an actor network, with the critic attempting to estimate the quality of the actor's policy and the actor attempting to improve the policy using values learned by the critic. Under regularity conditions, this process is known to lead to policies with improved returns
13
+ (Sutton et al., 1999; Silver et al., 2014). Crucially, in the variants of these methods most commonly used in practice, the critic depends on both a state and an action. When exploration data comes from a stochastic policy, there is ample data to train the critic since the sampled actions span the whole action space.
14
+
15
However, when the exploration policy is deterministic, we by definition only have one action per state from which to learn the critic. This means that we have to depend on non-trivial generalization properties of the critic network to obtain useful estimates of the policy gradient. In this paper, we conduct an extensive study of this problem, with the aim of producing an offline Reinforcement Learning agent with good performance on such deterministic data.

Contributions We make progress on deterministic offline RL in three main ways. First, we describe the critic identifiability problem from a theoretical standpoint. Second, using a set of continuous control benchmarks, we show empirically how the problem can be addressed by weight initialization and the dynamics of neural network optimization alone. Lastly, we recommend TD3+BC-Phased, a variation of the TD3+BC algorithm (Fujimoto & Gu, 2021), for performing offline RL from data generated by deterministic policies. To aid reproducibility, all our MuJoCo experiments conform to the recommendations of Agarwal et al. (2021).
16
+
17
+ ## 2 Background
18
+
19
Markov Decision Process We formulate the RL problem using a Markov Decision Process (MDP). An MDP is a tuple (S, A, r, P, γ, I) where S is the state space, A is the action space, r is a reward function, P is the MDP's transition dynamics probability distribution, γ is a discount factor parameter and I is the distribution over initial states (Puterman, 2014). A policy π is a mapping from states to distributions over actions, with $\int_{a \in \mathcal{A}} \pi(a|s)\,da = 1$ for all $s \in \mathcal{S}$. For deterministic policies, we abuse notation by treating π as a function S → A.
20
+
21
Dataset We study the problem of offline RL (Levine et al., 2020) for continuous state and action spaces. In offline RL, a fixed dataset of data previously gathered in N episodes of length T, $D_N = \{(s^i_t, a^i_t, r^i_{t+1}, s^i_{t+1}) : t = 1, \dots, T,\ i = 1, \dots, N\}$, is used for learning. No new data is gathered at the time of learning1. This is in contrast to online RL, in which the agent interacts with an environment, gathers data, and learns from the data while it is being gathered.

Value Functions and the Return We define the return to be the sum of the future discounted rewards.
25
+
26
+ The value of a state s under a policy π is defined as the expected return, given that the agent starts in state s and follows π thereafter:
27
+
28
$$V^{\pi}(s)\stackrel{\mathrm{def}}{=}\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T}\gamma^{t}r(S_{t},A_{t})\,\Big|\,S_{0}=s\right].$$
29
+ Similarly, the state-action value function is defined as expected return if an agent starts from state s, takes action a, and follows policy π thereafter:
30
+
31
+ $$Q^{\pi}(s,a)\stackrel{\mathrm{def}}{=}\mathbb{E}_{\pi}\left[\sum_{t=0}^{T}\gamma^{t}r(S_{t},A_{t})|S_{0}=s,A_{0}=a\right].$$
32
+
33
We skip the subscript denoting the policy when it is clear from context which policy is meant.
34
+
35
+ Actor-Critic Algorithms Our focus is on the policy gradient family of algorithms (Sutton et al., 1999),
36
in which the policy is parameterized. We write πθ(s) to denote a deterministic policy parameterized by a neural network whose parameters are denoted by θ. The neural network that outputs the action (or, for stochastic policies, action probabilities) is often referred to as the *actor* network. In policy gradient algorithms, the goal is to learn the policy parameters by doing gradient ascent on the following objective function:
37
+
38
$$J(\boldsymbol{\theta})\ \stackrel{\mathrm{def}}{=}\ \mathbb{E}_{s_{0}\sim\mathcal{I}}\left[V^{\pi_{\boldsymbol{\theta}}}(s_{0})\right],$$

where I is the distribution from which the initial state is sampled. The gradient of this objective is then used to improve a performance measure. Silver et al. (2014) show in their *deterministic policy gradient theorem* that:

$$\nabla_{\theta}J(\theta)=\mathbb{E}_{s\sim\mu_{\pi_{\theta}}}\left[\nabla_{a}Q^{\pi_{\theta}}(s,a)\big\vert_{a=\pi_{\theta}(s)}\nabla_{\theta}\pi_{\theta}(s)\right],\tag{1}$$

where $\mu_{\pi_{\theta}}$ is the discounted occupancy measure induced by the deterministic policy $\pi_{\theta}(s)$, defined as $\mu_{\pi}(s^{\prime}) = \int_{\mathcal{S}} \sum_{t=1}^{T} \gamma^{t-1}\, \mathcal{I}(s)\, p(s \to s^{\prime}, t, \pi)\, ds$, where $p(s \to s^{\prime}, t, \pi)$ is the measure associated with being in state $s^{\prime}$ after $t$ transitions starting in state $s$ and following policy $\pi$.
57
+
58
In the actor-critic family of algorithms, an estimate of the state-action value function is also learned, which directs the updates of the policy parameters. The state-action value function is typically parameterized using a second neural network, called the *critic* network, $\hat{Q}_{\phi}$, whose parameters are denoted by $\phi$. Algorithms like DDPG (Lillicrap et al., 2015) and TD3 (Fujimoto et al., 2018) work by making two approximations in (1). First, they replace $Q^{\pi_{\theta}}$ by $\hat{Q}_{\phi}$. Second, they replace $\mu_{\pi_{\theta}}$ with $p_{\pi_{\theta}}$, the long-term distribution of being in a state, defined as $p_{\pi}(s^{\prime}) = \int_{\mathcal{S}} \sum_{t=1}^{T} \mathcal{I}(s)\, p(s \to s^{\prime}, t, \pi)\, ds$, which does not factor in the discount. This leads to the approximate update for the policy gradient

$$\nabla_{\theta}J(\theta)\approx\mathbb{E}_{s\sim p_{\pi_{\theta}}}\left[\nabla_{a}\hat{Q}_{\phi}(s,a)\big|_{a=\pi_{\theta}(s)}\nabla_{\theta}\pi_{\theta}(s)\right].\tag{2}$$

This is often written in an equivalent form as $\mathbb{E}_{s\sim p_{\pi_{\theta}}}\left[\nabla_{\theta}\hat{Q}_{\phi}(s, \pi_{\theta}(s))\right]$, which is equal to the right-hand side of (2). In batch RL algorithms, the state distribution is (heuristically) replaced by the dataset D, giving the update $\mathbb{E}_{s\sim D}\left[\nabla_{\theta}\hat{Q}_{\phi}(s, \pi_{\theta}(s))\right]$.

1Our notation assumes episode length T to be the same for all episodes for ease of exposition. The extension to episodes of varying lengths is straightforward.
74
+
75
+ TD3+BC Actor Update The TD3+BC algorithm (Fujimoto & Gu, 2021) adds a behavior cloning term
76
+ to the TD3 update:
77
$$\max_{\boldsymbol{\theta}}\ \mathbb{E}_{(s,a)\sim D}\left[\lambda\hat{Q}_{\boldsymbol{\phi}}(s,\pi_{\boldsymbol{\theta}}(s))-\left\|\pi_{\boldsymbol{\theta}}(s)-a\right\|_{2}^{2}\right],\tag{3}$$

where the action a could be multi-dimensional and λ is defined below. Equation 3 can be justified as follows. We want to use policy gradients as per equation 2. However, to minimize learning bias in the deep RL setting, it is important that the policy we learn stays close to the data, so that we can be sure about its performance.

Ideally, we want to encode this by adding a hard constraint saying that a divergence between the learned policy and the data-generating policy is smaller than ϵ (Schulman et al., 2015; Laroche et al., 2019; Wu et al., 2019; Jaques et al., 2019; Kumar et al., 2019). For stochastic policies, we could use the KL divergence between Gaussian distributions with the same spherical covariance, and for deterministic policies, we could use the squared L2-Wasserstein divergence. In either case, we just get the mean squared error back (up to a multiplicative constant), which brings us back to (3). Instead of using the theory of optimization to find the Lagrange multiplier λ, Fujimoto & Gu (2021) heuristically propose to set λ using a schedule that has been shown to work remarkably well in practice:

$$\lambda\leftarrow\frac{\alpha}{\frac{1}{|\mathcal{B}|}\sum_{(s_{i},a_{i})\in\mathcal{B}}|\hat{Q}_{\phi}(s_{i},a_{i})|},\tag{4}$$

where B is the mini-batch being used at the current time step, |B| is the number of transitions in the mini-batch, and α is a tuneable hyperparameter. In addition, in order to improve the stability of the algorithm, a target actor $\pi^{\mathrm{target}}$ is also learned in parallel. This target policy is defined by its weights $\theta^{\mathrm{target}}$, updated as $\theta^{\mathrm{target}} \leftarrow \tau\theta + (1-\tau)\theta^{\mathrm{target}}$.
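To make the update above concrete, the following is a minimal PyTorch sketch of the TD3+BC actor objective (equation 3) with the λ schedule of equation 4. The `actor` and `critic` modules and the value α = 2.5 are illustrative assumptions (the latter follows the published TD3+BC default), not necessarily the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

def td3_bc_actor_loss(actor: nn.Module, critic: nn.Module,
                      states: torch.Tensor, actions: torch.Tensor,
                      alpha: float = 2.5) -> torch.Tensor:
    """Negated TD3+BC actor objective (equation 3), suitable for gradient descent."""
    pi = actor(states)                                  # pi_theta(s)
    q = critic(states, pi).squeeze(-1)                  # Q_hat(s, pi_theta(s))
    # lambda (equation 4): computed from critic values of the *dataset* actions
    # and detached, so it acts as a per-batch constant rather than a learned term.
    lam = alpha / critic(states, actions).abs().mean().detach()
    bc_penalty = ((pi - actions) ** 2).sum(dim=-1)      # ||pi_theta(s) - a||_2^2
    return -(lam * q - bc_penalty).mean()
```

Note that the behavior-cloning penalty needs no density model of the data-generating policy, which is part of what makes the same objective usable when that policy is deterministic.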
86
+
87
TD3+BC Critic Update Before closing the Background section, we provide the critic update rules used by the TD3 and TD3+BC algorithms. Both algorithms use an update inspired by double Q-learning (Hasselt, 2010) to update their critic. This reduces the overestimation bias incurred by the original Q-learning algorithm. In addition, for stability, the two critic functions have two sets of parameters each: $\phi_i$, which is being learned directly, and $\phi_i^{\mathrm{target}}$, which lags the learned parameters and is updated as $\phi_i^{\mathrm{target}} \leftarrow \tau\phi_i + (1-\tau)\phi_i^{\mathrm{target}}$. Two sets of parameters, $\phi_1$ and $\phi_2$, are learned; both functions are trained to match

$$y(r,s^{\prime},a^{\prime})\gets r+\gamma\operatorname*{min}_{i=1,2}\hat{Q}_{\phi_{i}^{\mathrm{target}}}(s^{\prime},a^{\prime}),$$
90
+
91
+ where both parameter sets are learned using regression, by minimizing the following objective function:
92
+
93
+ $$L(\phi_{i},D)\stackrel{{\rm def}}{{=}}\mathbb{E}_{(s,a,r,s^{\prime})\sim D,a^{\prime}\sim\pi^{\rm target}(s^{\prime})}\left[\left(\hat{Q}_{\phi_{i}}(s,a)-y(r,s^{\prime},a^{\prime})\right)^{2}\right].\tag{5}$$
94
+
95
+ Finally, Fujimoto & Gu (2021) always use Qˆϕ1 in the policy learning step. Learning Qˆϕ2 has no use beyond helping to reduce the overestimation bias.
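As a companion sketch (assumptions: a single transition batch, no terminal-state handling, and TD3's target-policy smoothing noise omitted), the clipped double-Q target and the regression loss of equation 5 can be written as:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def critic_target(reward, next_state, target_actor, target_q1, target_q2, gamma=0.99):
    """y(r, s', a') = r + gamma * min_i Q_target_i(s', a'), with a' = pi_target(s')."""
    next_action = target_actor(next_state)
    q_next = torch.min(target_q1(next_state, next_action),
                       target_q2(next_state, next_action))
    return reward + gamma * q_next.squeeze(-1)

def critic_loss(q1, q2, state, action, y):
    """Both critics regress onto the shared target y (equation 5)."""
    return (F.mse_loss(q1(state, action).squeeze(-1), y) +
            F.mse_loss(q2(state, action).squeeze(-1), y))
```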
96
+
97
+ ## 3 Related Work
98
+
99
As far as we are aware, ours is the first work that studies batch Reinforcement Learning in a setting where the exploration policy is deterministic. The closest related work is the literature on batch RL algorithms. The most significant batch algorithm from the perspective of our work is TD3+BC (Fujimoto & Gu, 2021), which was already described in the Background section. For a more complete recent overview of batch RL with stochastic policies, we refer the reader to the work by Fu et al. (2022).
100
+
101
+ ![3_image_0.png](3_image_0.png)
102
+
103
+ Figure 1: Available state-action space data on a contextual bandit problem with 25, 50, 100 interactions for deterministic exploration (top row) and stochastic exploration (bottom row).
104
+ Batch RL with Finite Datasets Fujimoto et al. (2019) extensively study the setting where the dataset given to a batch RL agent is finite, analyzing the difference between an approximation to the optimal policy computed on the finite dataset and the true optimal policy. On the surface it seems that this work should be similar to our setting of deterministic exploration since, for continuous state-action spaces, a stochastic policy will see only one action in a given state. However, in reality, our setting is very different. A
105
major distinguishing feature is that, for a deterministic exploration policy, we may fail to explore the whole state-action space even if we visit every state and are provided with an infinite number of trajectories.
106
+
107
Therefore, the tools used to study finite datasets generated by stochastic policies will not be enough to capture all phenomena characteristic of deterministic policies. To explain this point further, in Figure 1 we show the difference in the available data on a contextual bandit problem between the settings of a stochastic and a deterministic exploration policy.
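A small NumPy sketch of the contrast in Figure 1 follows; the logging policy and reward below are illustrative assumptions, chosen only to show that deterministic logging confines the data to a curve while stochastic logging spreads it over the action space.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_bandit_data(n, deterministic, noise_scale=0.3):
    """Return n (state, action, reward) triples from a 1-D contextual bandit."""
    states = rng.uniform(-1.0, 1.0, size=n)
    mean_actions = np.sin(np.pi * states)        # assumed deterministic logging policy
    if deterministic:
        actions = mean_actions                   # one action per state: data lies on a curve
    else:
        actions = np.clip(mean_actions + noise_scale * rng.normal(size=n), -1.0, 1.0)
    rewards = -(states ** 2 + actions ** 2)      # assumed reward, for illustration only
    return np.stack([states, actions, rewards], axis=1)

deterministic_log = log_bandit_data(100, deterministic=True)
stochastic_log = log_bandit_data(100, deterministic=False)
```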
108
+
109
+ Concentrability and Theoretical Batch RL Work Munos & Szepesvári (2008) use a concentrability coefficient to study Fitted Value Iteration. Chen & Jiang (2019); Xie & Jiang (2020); Rashidinejad et al.
110
+
111
+ (2021) later use the same idea to facilitate the analysis of batch RL in the context of a finite dataset.
112
+
113
+ Crucially, in all of these works, the concentrability coefficient divides by the probability of exploring a given state-action pair, thus implicitly assuming that the policy that generated the data is stochastic. Similarly, Yin et al. (2021) use the minimum probability of seeing a state-action pair in exploration to study off-policy evaluation, while Yin & Wang (2021) also use it to study model-based batch RL, again constraining both approaches to stochastic exploration policies. On the other hand, Xie et al. (2021) and Cheng et al. (2022)
114
use a characterization of distribution shift which is aware of the function class used to approximate the value functions, allowing for deterministic exploration policies. While this work is highly relevant, it pursues somewhat different goals from ours. Xie et al. (2021) provide a purely theoretical analysis, without testing the ideas in practice. On the other hand, Cheng et al. (2022) are interested in deriving a novel batch RL algorithm that leverages a notion of pessimism and can be theoretically argued to work under a broad spectrum of circumstances. Our goals are different; we ask two main questions: (1) empirically, can we use off-the-shelf tools that work well with stochastic exploration policies to solve problems where the data is generated by deterministic policies, and (2) theoretically, under what assumptions can we learn an accurate model of the Q function given a deterministic exploration policy?
115
+
116
+ Phased and One-Step Algorithms The idea of separating out critic and actor learning into separate phases, deployed until approximate convergence, as opposed to learning the actor and the critic simultaneously, has been studied by Peng et al. (2019). The concept of doing one loop of policy iteration rather than many in the context of batch Reinforcement Learning has been studied by Brandfonbrener et al. (2021).
117
+
118
They identify that the reason why such 'single iteration' algorithms can perform better than their 'multi-iteration' equivalents is that they do not require off-policy evaluation, which is extremely unstable and prone to inaccuracy in the deep RL setting. Our TD3+BC-Phased algorithm is a special case of the algorithm used by Brandfonbrener et al. (2021) and we do not claim algorithmic novelty, only that the lessons from algorithms deployed on data generated by stochastic exploration policies translate to deterministic exploration as well (we believe our paper to be the first one that experimentally studies deep batch RL in the context of deterministic exploration).

Additional Regularizers A powerful idea in the batch RL literature is to add additional regularizers to an online RL algorithm. For example, CQL (Kumar et al., 2020) adds an additional term to the critic loss, so that the critic learns an (approximate) lower bound on the true Q function, which both prevents overestimation and constrains the policy to stay near the training data. An extension to this idea is presented by COMBO (Yu et al., 2021), which uses a model to construct a more accurate lower bound. While our analysis in section 7 is also based on additional regularization, we address a different problem. Indeed, none of these prior works addresses the question of what happens if the exploration policy is deterministic, which is the main focus of our paper.

Critic Generalization Implicit Q-Learning (Kostrikov et al., 2021), like our algorithm, heavily relies on generalization properties of the function approximator used for the critic. It also separates out the stages of learning the critic and actor. However, Implicit Q-learning has been designed with stochastic exploration policies in mind, and, unlike our work, does not consider the problem that the critic might not be identifiable under deterministic exploration.
119
+
120
Imitation Learning Imitation Learning algorithms ignore the reward signal completely, learning a policy by performing supervised learning on the demonstration data. In sequential decision-making problems, due to error compounding, imitation learners suffer from poor performance (Ross & Bagnell, 2010). While Behavioral Cloning can work well for policies generated by deterministic experts, it is constrained by the availability of high-quality data. Recently, a new baseline has been proposed (Chen et al., 2021) that does use a reward signal: Percent Behavioral Cloning performs supervised learning on a subset of the data with sufficiently high returns. However, unlike the batch RL setting which we study, this technique is still fundamentally limited by the requirement that the behavior policy solves the same task that the agent is learning.
121
+
122
+ ## 4 Critic Identifiability
123
+
124
+ In this section, we will describe the problem of critic identifiability for MDPs. We analyze model-free offline RL algorithms which learn a critic. In continuous control problems, the critic usually attempts to approximate the Q-function of a policy, which, crucially, depends both on a state and an action. In offline RL, data typically comes from a stochastic policy with support on the whole action space. In this case, given adequate state coverage and a smoothness assumption on the Q-function, the critic is identifiable in the sense that, as the amount of training data approaches infinity, the learned critic becomes close to the true Q-function. We can formalize this notion using the following definition, where we denote the set of datasets with D and the function class used to learn the critic with F.
125
+
126
Definition 1 (Identifiability). Consider a problem class (i.e. a set of MDPs) P. The problem class is identifiable in region R ⊂ S × A with exploration policy class Π and learning algorithm Alg : D → F if for every possible MDP M ∈ P and for every ϵ > 0 and δ ∈ (0, 1] there exists an exploration policy $\pi_{E}^{\epsilon,\delta} \in \Pi$ and a dataset size $n^{\epsilon}$ so that for all $N \geq n^{\epsilon}$, with probability at least 1 − δ, we have

$$\sup_{(s,a)\in\mathcal{R}}\left|Q_{M}^{\pi_{E}^{\epsilon,\delta}}(s,a)-\hat{Q}_{N}^{\epsilon,\delta}(s,a)\right|\leq\epsilon.$$

Here, $\hat{Q}_{N}^{\epsilon,\delta} = \mathrm{Alg}\!\left(D_{N}^{\pi_{E}^{\epsilon,\delta}}\right)$ denotes a critic trained using algorithm Alg on a dataset $D_{N}^{\pi_{E}^{\epsilon,\delta}}$ of N episodes gathered with policy $\pi_{E}^{\epsilon,\delta}$, and $Q_{M}^{\pi_{E}^{\epsilon,\delta}}$ denotes the ground truth Q-value of the policy $\pi_{E}^{\epsilon,\delta}$ in the MDP M.
138
+
139
+ ![5_image_0.png](5_image_0.png)
140
+
141
142
+
143
+ Figure 2: An illustration of the critic identifiability problem.
144
+ In other words, informally, a problem class is identifiable with a given exploration policy class and a learning algorithm if we can accurately learn the Q-function of an exploration policy in the class using the algorithm2.
145
+
146
+ For ease of exposition, we will now focus on contextual bandits, where the ground truth Q function and the reward function are the same and the episode length is one. First, consider the case where the exploration policy is stochastic (with support on the whole action space), where we make a Lipschitz smoothness assumption on the critic (we denote the set of Lipschitz functions that fulfill certain regularity conditions with H
147
+ - see Appendix B for the exact definition), and use a learning algorithm that returns any Lipschitz-smooth function minimizing MSE error on the training set
148
+
149
$$\mathrm{Alg}(D_{N})=\operatorname*{arg\,min}_{\hat{Q}\in\mathcal{H}}\frac{1}{N}\sum_{(s,a,r)\in D_{N}}(\hat{Q}(s,a)-r)^{2}.\tag{6}$$

Moreover, we assume that the problem is realizable, which implies that the term being minimized is always zero. In this case, identifiability on S × A follows directly from standard Rademacher bounds (von Luxburg & Bousquet, 2004), in the sense that we have $\lim_{N\to\infty}\sup_{(s,a)\in\mathcal{S}\times\mathcal{A}}|Q(s,a)-\hat{Q}_{N}(s,a)|=0$ with probability one for every such non-degenerate exploration policy. While other regularity assumptions may lead to faster convergence of this limit, in this work we limit ourselves to the Lipschitz assumption because it is relatively straightforward to ensure in the context of deep learning.
153
+
154
However, identifiability is no longer a given when the exploration policy is deterministic. To see why, recall that the critic typically has an architecture similar to the one illustrated in Figure 2a. For deterministic policies, the action can be completely determined from the state. As shown in Figure 2b, this means that a critic trained using an MSE loss can simply learn a copy of the policy and use it wherever the Q function depends on the action. Crucially, if this happens, the action input to the network is completely ignored. This means that policy gradients obtained using that critic are useless, since the gradient of the critic with respect to the action input is zero. More broadly, since the MSE loss function is computed on the training set, where we only have one action per state, the learned critic can, without further assumptions, take arbitrary values for actions other than the one taken by the exploration policy.
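The pathology of Figure 2b can be reproduced directly. In the toy sketch below, the reward and the exploration policy are assumptions chosen for illustration; the critic internally reproduces the policy, fits the logged data perfectly, and yet provides an identically zero gradient with respect to the action.

```python
import math
import torch

def true_q(s, a):                       # assumed ground-truth Q of the bandit
    return -(s ** 2 + a ** 2)

def behavior_policy(s):                 # assumed deterministic exploration policy
    return torch.sin(math.pi * s)

def pathological_critic(s, a):
    # Reproduces the behavior policy and substitutes it for the action input;
    # the 0 * a term only keeps the graph connected so the (zero) gradient is visible.
    return true_q(s, behavior_policy(s)) + 0.0 * a

s = torch.linspace(-1.0, 1.0, 101)
a_logged = behavior_policy(s)

# Zero MSE on the logged pairs: on the manifold a = pi_E(s) both functions agree.
print(((pathological_critic(s, a_logged) - true_q(s, a_logged)) ** 2).mean())

# ...but the policy gradient through this critic vanishes everywhere.
a = a_logged.clone().requires_grad_(True)
pathological_critic(s, a).sum().backward()
print(a.grad.abs().max())               # exactly zero: useless for actor updates
```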
155
+
156
+ The central finding of this work is that, quite surprisingly, the inductive bias arising from using critics represented by ReLU networks and learned using the Adam algorithm (Kingma & Ba, 2014)
157
+ is enough to provide adequate generalization and useful policy gradients. In other words, despite the fact that the MSE loss does not distinguish a critic network that provides useful policy gradients from one that exhibits the pathology shown in Figure 2b, the optimizer always converges to a network with useful policy gradients. We examine this phenomenon empirically in the next section.
158
+
159
+ 2We will explain in Section 6 that for a nontrivial class of problems, it is sufficient in practice to learn the Q function of the exploration policy and we do not have to learn the Q function of an arbitrary policy.
160
+
161
## 5 Experiments On A Contextual Bandit

## 5.1 Generalization And Critic Identifiability
162
+
163
In order to provide a fuller picture of the critic identifiability problem, we studied it empirically for contextual bandits. Figure 3a shows three instances of a contextual bandit, with different reward functions (which are also the ground truth Q functions). Specifically, we studied the quadratic, ring and 'four pits' reward functions. The quadratic function is the most representative, since all smooth functions are approximately locally quadratic, while the other two, more complex, examples can be used to study the limits of generalization of non-local features. We generated a training set using a policy tracing a re-scaled sine wave, visible in the figure as a black curve, and obtained the corresponding actions by evaluating it on an evenly distributed grid of 100 states.
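A condensed sketch of the data-generation step and the critic fit described next is given below. The exact re-scaling of the sine wave, the quadratic reward, the network width and the number of Adam steps are assumptions standing in for the details in Appendix A.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 100 evenly spaced states; the deterministic policy traces a (re-scaled) sine wave.
s = torch.linspace(-1.0, 1.0, 100).unsqueeze(1)
a = torch.sin(3.0 * s)                               # assumed re-scaling of the sine wave
r = -(s ** 2 + a ** 2)                               # assumed quadratic ground-truth Q

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                       nn.Linear(64, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for _ in range(2000):
    loss = ((critic(torch.cat([s, a], dim=1)) - r) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Probe the critic off the data manifold, at actions the policy never took.
with torch.no_grad():
    probe = critic(torch.cat([s, torch.zeros_like(a)], dim=1))
    print(probe[:3].squeeze(-1), (-(s ** 2))[:3].squeeze(-1))  # learned vs true Q(s, 0)
```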
164
+
165
We then used this training set to learn the critics shown in Figure 3b, using a ReLU network and the Adam optimizer (see appendix A for the learning rate and other technical details). Surprisingly, for the quadratic function, despite the training set being confined to the policy, the generalization ability of the ReLU network combined with the optimizer turned out to be outstanding, recovering the shape of the function even at points far away from the policy that generated the data. We repeated this experiment multiple times and the results were always qualitatively the same. Each time, we obtained good generalization, which implies good-quality policy gradients in the neighborhood of the exploration policy. On the other hand, the conclusion from the plots for the ring reward and the 'four pits' reward is that we only get a reasonable approximation to the true Q in regions of the state-action space close to the policy that generated the data, and varying results elsewhere. This is to be expected: while the inductive bias inherent in ReLU networks trained with Adam can be expected to encode a notion of continuity, it cannot make up features of the reward landscape completely absent from the training set. This can be witnessed in the part of Figure 3b showing the 'four pits' reward function, where the critic was only able to learn about two of the four pits, since the other two were not represented in the training data.

While Figure 3 provides a qualitative measure of generalization, we also provide a quantitative one in Table 1. It can be seen that for all reward functions, we are able to generalize very well when evaluated on state-action pairs coming from the exploration policy and acceptably well for state-action pairs that come from near the policy. On the other hand, generalization to regions of state-action space very far from the exploration policy is only sometimes possible. We provide details of the experiment used to generate Table 1 in Appendix A.

Overall, since we found the high quality of the generalization perplexing (especially for the quadratic critic),
167
+ we wanted to further investigate whether the critic identifiability problem can become an issue in practice.
168
+
169
+ To shed light on this, we ran another experiment. We learned another critic function, keeping the same network architecture and optimizer settings, but forcing the weights multiplying the action to be zero at every optimization step. Figure 3c shows the result. By design, the obtained critic function does not depend on the action and produces completely useless policy gradients. However, it achieves the same loss of 0.001 on the training set as the network shown in Figure 3b. This confirms in practice the problem identified in the previous section: it is possible to minimize the MSE loss well while still having a critic useless for policy updates.
170
+
171
+ Given the current limited understanding of the generalization of neural networks, it is hard to speculate why exactly the combined effect of the network architecture, weight initialization and the optimizer always end up preferring weights giving rise to the good behavior in 3b above the pathological one shown in 3c.
172
+
173
However, if we treat the number of optimization steps required to learn the critic as a measure of the simplicity of the learned function, it turns out that the good critics are indeed simpler, requiring on the order of 30%-50% of the optimization steps to train.
174
+
175
+ ## 5.2 The Critic Identifiability Problem Occurs In Practice
176
+
177
Above, we could see that the critic identifiability problem can arise in the sense that a good learned critic and the pathological one have the same MSE error on the training set.
178
+
179
+ ![7_image_0.png](7_image_0.png)
180
+
181
+ | Reward Type | Matching πE | Near πE | Far From πE |
182
+ |---------------|-----------------|-----------------|-----------------|
183
+ | quadratic | 0.0010 ± 0.0000 | 0.0015 ± 0.0000 | 0.0081 ± 0.0004 |
184
+ | ring | 0.0008 ± 0.0000 | 0.0012 ± 0.0000 | 0.0062 ± 0.0002 |
185
+ | four pits | 0.0009 ± 0.0000 | 0.0010 ± 0.0000 | 0.1564 ± 0.0067 |
186
+
187
+ Table 1: Mean squared generalization error of critic networks trained using data coming from a deterministic policy πE on various reward signals.
188
However, in order to obtain the pathological critic, we had to artificially constrain the critic network not to depend on the action, which can be argued to be unnatural. In this section, we show that, with linear function approximation, the critic identifiability problem can happen organically.
189
+
190
Indeed, consider the class of contextual bandits with states in $\mathbb{R}^{10}$ and actions in $\mathbb{R}^{9}$, where the dataset is generated by the deterministic policy

$$\pi_{\mathrm{E}}([s_{1},s_{2},\ldots,s_{10}]^{\top})=[s_{1},s_{2},\ldots,s_{9}]^{\top}$$

and the ground truth Q (the reward) is defined as

$$Q([s_{1},s_{2},\ldots,s_{10}]^{\top},[a_{1},a_{2},\ldots,a_{9}]^{\top})=\sum_{i=1}^{9}a_{i}+s_{10}.\tag{7}$$
199
+
200
Intuitively, this setting encapsulates a problem commonly found when learning a critic function in a high-dimensional state-action space. The dataset lies on a low-dimensional manifold and there are many candidate critics in the (in this case, linear) function class that model the training data with zero error. However, most of them will generalize very poorly outside of this training dataset. Indeed, we experimentally verified that, for this example, a linear critic trained with gradient descent on data generated by πE not only computed a completely wrong approximation to the ground truth Q, but also gave policy gradients in which at least one coordinate points in the direction opposite to the correct one (details of the experiment are in Appendix A).
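The failure is easy to reproduce. In the sketch below (an illustrative reconstruction, not the exact experiment of Appendix A), plain gradient descent from a random initialization fits the manifold data exactly, yet the learned action gradient differs substantially from the true all-ones gradient of equation 7, and individual coordinates can even come out negative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Contextual bandit of equation 7: s in R^10, a in R^9, logged with a = s[:9].
n = 5000
S = rng.normal(size=(n, 10))
A = S[:, :9]                                   # deterministic exploration policy
R = A.sum(axis=1) + S[:, 9]

X = np.concatenate([S, A], axis=1)             # linear critic features [s; a]
w = rng.normal(size=19)                        # random initialization

# Gradient descent on the MSE only corrects the component of w lying in the row
# space of X; because the action columns duplicate state columns, the null-space
# part of the initialization survives and corrupts the action weights.
for _ in range(2000):
    w -= 0.05 * (2.0 / n) * X.T @ (X @ w - R)

print("training MSE: ", np.mean((X @ w - R) ** 2))   # essentially zero
print("learned dQ/da:", np.round(w[10:], 2))         # should be all ones, is not
print("true    dQ/da:", np.ones(9))
```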
201
+
202
+ ## 6 Td3+Bc-Phased Algorithm
203
+
204
+ While the discussion in the previous section suggests that we could use unmodified TD3+BC, it turns out that the setting of deterministic exploration policy allows us to make one further simplification to the algorithm.
205
+
206
+ Similarly to most other actor-critic methods, TD3+BC learns the critic and the actor simultaneously. Specifically, the critic learns the Q-function of the actor policy and the actor learns a policy that maximizes the current critic estimate. Together, this process represents an incremental version of policy iteration. However, if the exploration policy is deterministic, it is reasonable to instead confine ourselves to a single step of policy iteration, where the critic learns the value of the exploration policy and the actor improves on just this value. This simplification is possible because, for deterministic policies, we by definition do not have the data about values of actions other than the one chosen by the exploration policy, making the evaluation of other policies tricky. On the other hand, of course one could also argue that we could instead rely on critic generalization to facilitate multiple steps of policy improvement. Ultimately, given the current understanding of actor-critic algorithms, the question of whether it makes sense to perform more than one step of policy iteration can only be answered empirically. We did just that, by comparing two algorithms: regular TD3+BC and our modification, which we call TD3+BC-Phased.
207
+
208
+ TD3+BC-Phased is described in Algorithm 1 and proceeds in three stages. First, the exploration policy is distilled from data using behavioral cloning. Second, a critic is learned to evaluate this policy.
209
+
210
$$L(\phi_{i},D)\stackrel{\mathrm{def}}{=}\mathbb{E}_{(s,a,r,s^{\prime})\sim D}\left[\left(\hat{Q}_{\phi_{i}}(s,a)-y(r,s^{\prime},\hat{\pi}_{E}(s^{\prime}))\right)^{2}\right].\tag{8}$$

This update is different from equation 5 in that the policy whose value the critic is computing is an approximation πˆE to the policy that generated the data, as opposed to an approximation to the optimal policy the algorithm is learning about. Finally, an actor network is trained to maximize the value of the critic. This staged algorithm is simpler than vanilla TD3+BC because the critic does not depend on the actor. While this algorithm is a special case of the algorithm framework introduced by Brandfonbrener et al. (2021), in that we only do one iteration of policy improvement, our experiment is different in that we consider data generated by a deterministic exploration policy, while Brandfonbrener et al. (2021) use stochastic exploration policies.
213
+
214
+ In Figure 4, we report the performance of both variants of the algorithm on a variant of the D4RL benchmark where the demonstration data was generated by deterministic policies (see Appendix A for details), to match the setting in our paper. It can be seen that the phased version of the algorithm has better performance, on both expert and medium datasets. We therefore chose this version as a basis for our later experiments with regularization, described in Section 8.
215
+
216
217
+
218
Algorithm 1 The TD3+BC-Phased Algorithm
1. Learn $\hat{\pi}_E$ by minimizing $\sum_{(s,a)\in D_N} \|a - \pi_{\theta}(s)\|$ with respect to θ
2. Learn $\hat{Q}$ by minimizing equation 8
3. Learn π by maximizing the objective in equation 3
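A high-level sketch of the three stages is shown below. The `dataset.sample()` interface, the single (rather than double) critic, the omitted target-network updates and all hyperparameter values are simplifying assumptions; the point is only the ordering of the phases and the fact that the critic evaluates the cloned policy πˆE rather than the actor being learned.

```python
import torch
import torch.nn.functional as F

def train_td3_bc_phased(actor, behavior, critic, critic_target, dataset,
                        gamma=0.99, alpha=2.5,
                        bc_steps=10_000, critic_steps=100_000, actor_steps=100_000):
    bc_opt = torch.optim.Adam(behavior.parameters(), lr=3e-4)
    q_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)
    pi_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

    # Stage 1: behavioral cloning of the deterministic exploration policy.
    for _ in range(bc_steps):
        s, a, r, s2 = dataset.sample()
        bc_loss = F.mse_loss(behavior(s), a)
        bc_opt.zero_grad(); bc_loss.backward(); bc_opt.step()

    # Stage 2: policy evaluation of the cloned policy (equation 8); the critic
    # never depends on the actor that will be trained in stage 3.
    for _ in range(critic_steps):
        s, a, r, s2 = dataset.sample()
        with torch.no_grad():
            y = r + gamma * critic_target(s2, behavior(s2)).squeeze(-1)
        q_loss = F.mse_loss(critic(s, a).squeeze(-1), y)
        q_opt.zero_grad(); q_loss.backward(); q_opt.step()
        # (soft update of critic_target omitted for brevity)

    # Stage 3: one step of policy improvement with the TD3+BC objective (equation 3).
    for _ in range(actor_steps):
        s, a, r, s2 = dataset.sample()
        pi = actor(s)
        lam = alpha / critic(s, a).abs().mean().detach()
        pi_loss = -(lam * critic(s, pi).squeeze(-1)).mean() + F.mse_loss(pi, a)
        pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
```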
220
+
221
+ ![9_image_0.png](9_image_0.png)
222
+
223
Figure 4: Comparison of vanilla TD3+BC (in blue) and the phased version (in orange). Performance profiles and aggregate metrics (mean, interquartile mean, median) were computed using 5 seeds in each of 3 environments (MuJoCo Walker, Hopper and HalfCheetah). 95% confidence intervals were obtained via stratified bootstrap with 50k replications.
224
+
225
+ ## 7 Ensuring Critic Identifiability With Lipschitz Regularization
226
+
227
In order to mitigate the critic identifiability problem, we propose to constrain the critic Qˆ to be Lipschitz-continuous with constant L. We propose to use Lipschitz continuity as a regularity condition that allows us to compensate for the fact that a deterministic exploration policy will only gather data for one action in any given state. To motivate Lipschitz regularity further, in Appendix C we provide an example of a contextual bandit problem where the Lipschitz assumption proves crucial for identifying the optimal policy. Moreover, in this section we show theoretically that, for contextual bandit problems and under reasonable technical assumptions, the Lipschitz assumption allows us to obtain useful policy gradients in the region near the manifold covered by the deterministic exploration policy, addressing the problem of critic non-identifiability. We begin by introducing the definition of the data manifold.
228
+
229
+ $${\mathcal{M}}\ {\stackrel{\mathrm{def}}{=}}\ \{(s,\pi_{\mathrm{E}}(s)):s\in{\mathcal{S}}\}$$
230
+
231
+ Here, πE is the deterministic exploration policy. We first introduce a Lemma showing that, as the amount of training data approaches infinity, the critic function Qˆ restricted to M approaches the true Q-function.
232
+
233
+ In the Lemma below, we assume that the critic belongs to the function class H, which contains Lipschitz functions that satisfy technical regularity conditions (see appendix B for a definition). In this section, we adopt the notation that the critic QˆN was trained on a dataset of size N.
234
+
235
Lemma 1. Assume that both the true Q-function and the critic $\hat{Q}_N$ are in function class H and that the MSE critic training error is zero, in the sense that $\sum_{i=1}^{N}\left(Q(s_i,\pi_E(s_i)) - \hat{Q}_N(s_i,\pi_E(s_i))\right)^2 = 0$ for all states $s_i$ in the dataset. Assume that the distribution of training data satisfies p(s) > 0 for all states s ∈ S. Then, with probability one, for all (s, a) ∈ M, we have
237
+
238
+ $$\operatorname*{lim}_{N\rightarrow\infty}|Q(s,a)-\hat{Q}_{N}(s,a)|=0.$$
239
+
240
+ The proof of the Lemma uses standard Rademacher tools for Lipschitz functions (von Luxburg & Bousquet, 2004; Mohri et al., 2018) and is given in appendix B.
241
+
242
+ While the result above certifies the quality of function fit on the manifold itself, in order to reason about the quality of the policy gradient, we need to have a result that can be extended to nearby points. We first define the neighborhood.
243
+
244
$$\mathcal{M}_{\eta}\stackrel{\mathrm{def}}{=}\left\{(s,a):\exists(s^{\prime},a^{\prime})\in\mathcal{M}.\;\;\left\|\begin{bmatrix}s^{\prime}\\ a^{\prime}\end{bmatrix}-\begin{bmatrix}s\\ a\end{bmatrix}\right\|\leq\eta\right\}.$$
245
+
246
Here, we use the notation $\begin{bmatrix}s\\ a\end{bmatrix}$ to denote the concatenation of the vectors s and a. We now introduce a Lemma which addresses the quality of fit in the neighborhood.
247
+
248
Lemma 2. Under the assumptions of Lemma 1, for state-action pairs (s, a) ∈ Mη, with probability one we have
249
+
250
+ $$\operatorname*{lim}_{N\to\infty}|Q(s,a)-\hat{Q}_{N}(s,a)|\leq2\eta L.$$
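Informally, the factor of 2ηL can be seen from a triangle-inequality argument (a sketch of the idea, not the appendix proof): for (s, a) ∈ Mη pick (s′, a′) ∈ M within distance η; then Lipschitz continuity of both Q and QˆN, together with Lemma 1 applied to the middle term, gives

$$|Q(s,a)-\hat{Q}_{N}(s,a)|\leq\underbrace{|Q(s,a)-Q(s^{\prime},a^{\prime})|}_{\leq L\eta}+\underbrace{|Q(s^{\prime},a^{\prime})-\hat{Q}_{N}(s^{\prime},a^{\prime})|}_{\to 0\ \text{by Lemma 1}}+\underbrace{|\hat{Q}_{N}(s^{\prime},a^{\prime})-\hat{Q}_{N}(s,a)|}_{\leq L\eta}.$$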
251
+
252
+ The proof of the Lemma can be found in appendix B. Lemma 2 quantifies the quality of fit near the data manifold. In order to have a fit guarantee that covers the entire state-action space, we now introduce a coverage assumption on the exploration policy.
253
+
254
Definition 2. A policy πE(s) achieves η-coverage if Mη ⊇ S × A.
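As a quick numerical illustration of Definition 2 (using a sinusoidal exploration policy of the kind discussed in the example below; the grid resolutions are arbitrary assumptions), one can estimate the smallest η for which Mη covers S × A:

```python
import numpy as np
from scipy.spatial import cKDTree

def coverage_radius(policy, n_grid=100, n_curve=5000):
    """Approximate sup over S x A = [0,1] x [-1,1] of the distance to the
    exploration manifold M = {(s, policy(s))}; eta-coverage needs eta >= this."""
    s_curve = np.linspace(0.0, 1.0, n_curve)
    manifold = np.stack([s_curve, policy(s_curve)], axis=1)
    s_grid, a_grid = np.meshgrid(np.linspace(0.0, 1.0, n_grid),
                                 np.linspace(-1.0, 1.0, n_grid))
    queries = np.stack([s_grid.ravel(), a_grid.ravel()], axis=1)
    distances, _ = cKDTree(manifold).query(queries)
    return distances.max()

p = 0.1
print(coverage_radius(lambda s: np.sin(2 * np.pi * s / p)))  # comfortably below eta = p
```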
255
+
256
For example, consider the case where S = [0, 1] and A = [−1, 1], and consider the policy $\pi_E(s) = \sin\frac{2\pi s}{p}$. If we set η = p, we indeed have that πE achieves η-coverage. Moreover, we can construct policies with η-coverage for arbitrarily small η by setting the period of the exploration policy to be equally small. We now introduce the following corollary, which follows immediately from Lemma 2 and the definition of η-coverage.
258
+
259
Corollary 1. For an exploration policy which achieves η-coverage, we have

$$\lim_{N\to\infty}|Q(s,a)-\hat{Q}_{N}(s,a)|\leq2\eta L$$

for every state-action pair in S × A.
266
+
267
The Corollary implies that, for exploration policies achieving η-coverage with arbitrarily small η, and using the algorithm defined in equation 6, a contextual bandit problem class with a Lipschitz ground-truth reward is indeed identifiable on S × A as per Definition 1, because we can choose an exploration policy that makes η arbitrarily small. This requirement is natural in the sense that, lacking stochastic actions, we need another mechanism to ensure that the dataset tells us enough about the true Q function. In practical settings, we are unlikely to have an exploration policy which achieves η-coverage, since it implies covering the whole state-action space. However, the results in this section are straightforward to extend to a setting where the coverage is restricted to a region R ⊂ S × A.
268
+
269
Finally, we proceed to quantify the error made in estimating the gradients. Unfortunately, Lipschitz continuity is not enough to ensure that closeness of functions implies closeness of gradients. To have that property, we need additional assumptions. Specifically, we choose to assume that both Q and $\hat{Q}_N$ are band-limited functions (see appendix B for an exact definition). We stress that this assumption holds for many physical systems, since the presence of very high frequencies in Q indicates a lack of stability.

Proposition 1. Assume that the exploration policy achieves η-coverage, that both the true Q-function and the critic $\hat{Q}_N$ are in function class H, and that the MSE critic training error is zero. Assume that the distribution of training data satisfies p(s) > 0 for all states s ∈ S and that Q and $\hat{Q}_N$ can be extended to W-bandlimited functions. For state-action pairs (s, a) ∈ S × A, we have
270
+
271
$$\lim_{N\to\infty}\|\nabla_{a}Q(s,a)-\nabla_{a}\hat{Q}_{N}(s,a)\|_{1}\leq8\pi W\eta L d_{\mathcal{A}},$$

where we denoted the dimensionality of the action space with $d_{\mathcal{A}}$.

![11_image_0.png](11_image_0.png)

Figure 5: Comparison of TD3+BC-Phased with and without gradient penalty (GP). Performance profiles were computed using 5 seeds in each of 3 environments (MuJoCo Walker, Hopper and HalfCheetah).
277
+
278
+ The proof is found in appendix B. Proposition 1 is important because it quantifies the amount of error in our estimates of policy gradient, which is what the TD3 algorithm is based on.
279
+
280
+ In the next section, we proceed to examine the effects of imposing such Lipschitz regularization in practice.
281
+
282
+ ## 8 Assessing The Practical Effectiveness Of Lipschitz Regularization
283
+
284
In the previous section, we addressed the problem of critic identifiability theoretically, identifying assumptions under which we can guarantee the recovery of accurate policy gradients even if the exploration policy is deterministic. In practice, the crucial assumption enabling us to claim generalization over the action space was Lipschitz continuity. In this section, we attempt to draw empirical insights from this theoretical argument.
285
+
286
+ Specifically, we investigate the effect of Lipschitz regularization on the performance of the phased version of the TD3+BC algorithm.
287
+
288
In order to make our critic smoother, we add a gradient penalty term to the critic loss. This is inspired by the literature on the Wasserstein GAN (Gulrajani et al., 2017). In practice, we add the term $\beta\left(\frac{1}{d_{\mathcal{S}}}\|\nabla_{s}\hat{Q}(s,a)\|^{2}+\frac{1}{d_{\mathcal{A}}}\|\nabla_{a}\hat{Q}(s,a)\|^{2}\right)$ to the critic loss, where β is a small constant tuned as a hyperparameter. This serves to prevent the critic from having steep gradients. While a gradient penalty term does not strictly guarantee that the critic is a Lipschitz function, it is currently a state-of-the-art technique for achieving Lipschitz regularization.
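In code, the penalty can be computed with a double backward pass; a minimal PyTorch sketch is given below (the batch averaging and the default β are assumptions, and `critic` is any (state, action) → value module):

```python
import torch

def gradient_penalty(critic, states, actions, beta=1e-3):
    """beta * ( ||dQ/ds||^2 / d_S + ||dQ/da||^2 / d_A ), averaged over the batch."""
    states = states.clone().requires_grad_(True)
    actions = actions.clone().requires_grad_(True)
    q = critic(states, actions).sum()
    grad_s, grad_a = torch.autograd.grad(q, (states, actions), create_graph=True)
    pen_s = grad_s.pow(2).sum(dim=-1).mean() / states.shape[-1]
    pen_a = grad_a.pow(2).sum(dim=-1).mean() / actions.shape[-1]
    return beta * (pen_s + pen_a)

# Usage: critic_loss = td_error_loss + gradient_penalty(critic, s_batch, a_batch)
```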
292
+
293
+ We performed an experiment to assess how adding such regularization influences the performance of the batch RL algorithm in the same setting we used in section 6 (a deterministic variant of the D4RL benchmark).
294
+
295
Results are shown in Figure 5. The experiment shows that adding Lipschitz regularization does not affect performance in a statistically significant way on the medium datasets, while causing a slight performance degradation on the expert datasets. The overarching conclusion is that batch RL practitioners now have a choice. Using TD3+BC-Phased achieves good empirical performance, but is potentially susceptible to the critic identifiability issue. On the other hand, using the version with Lipschitz regularization means we will not be susceptible3 to critic identifiability issues, but involves paying a price in terms of performance.
296
+
297
+ ## 9 Conclusion
298
+
299
We have identified the critic identifiability problem, which arises when batch RL technology meant for data coming from stochastic policies is used with exploration policies that are deterministic. We also propose a solution based on Lipschitz regularization, which works for practical MuJoCo control problems and addresses the critic identifiability problem while causing only a small loss of performance relative to applying the vanilla TD3+BC-Phased algorithm.

3In the idealized setting described in Section 7.
303
+
304
+ ## Broader Impact Statement
305
+
306
The social risks of deploying batch RL with deterministic exploration are similar to those of batch RL in general. Because our work is generic and is tested on industry-standard benchmarks, it does not carry a significant risk of immediate harm. While RL can certainly be used for nefarious purposes, we believe those risks to be outweighed by the possibilities of positive impact that arise from making existing control systems more efficient by using offline data. Our work does carry a small additional risk of over-reliance on the theoretical results in settings where our assumptions are not met. We tried to mitigate these risks by spelling out the assumptions explicitly.
308
+
309
+ ## References
310
+
311
+ Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34:29304–29320, 2021.
312
+
313
+ David Brandfonbrener, Will Whitney, Rajesh Ranganath, and Joan Bruna. Offline rl without off-policy evaluation. *Advances in neural information processing systems*, 34:4933–4946, 2021.
314
+
315
+ Jinglin Chen and Nan Jiang. Information-theoretic considerations in batch reinforcement learning. In International Conference on Machine Learning, pp. 1042–1051. PMLR, 2019.
316
+
317
+ Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Misha Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling.
318
+
319
+ Advances in neural information processing systems, 34:15084–15097, 2021.
320
+
321
+ Ching-An Cheng, Tengyang Xie, Nan Jiang, and Alekh Agarwal. Adversarially trained actor critic for offline reinforcement learning. In *International Conference on Machine Learning*, pp. 3852–3878. PMLR, 2022.
322
+
323
+ Yuwei Fu, Di Wu, and Benoit Boulet. A closer look at offline RL agents. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022.
324
+
325
+ URL https://openreview.net/forum?id=mn1MWh0iDCA.
326
+
327
+ Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. *Advances* in neural information processing systems, 34:20132–20145, 2021.
328
+
329
+ Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *International conference on machine learning*, pp. 1587–1596. PMLR, 2018.
330
+
331
+ Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration.
332
+
333
+ In *International conference on machine learning*, pp. 2052–2062. PMLR, 2019.
334
+
335
+ Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. *Advances in neural information processing systems*, 30, 2017.
336
+
337
Hado Hasselt. Double q-learning. *Advances in neural information processing systems*, 23, 2010.

Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. *arXiv preprint arXiv:1907.00456*, 2019.
338
+
339
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014.
340
+
341
+ Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning.
342
+
343
+ arXiv preprint arXiv:2110.06169, 2021.
344
+
345
+ Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. *Advances in Neural Information Processing Systems*, 32, 2019.
346
+
347
+ Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:1179–1191, 2020.
348
+
349
Amos Lapidoth. *A foundation in digital communication*. Cambridge University Press, 2017.

Romain Laroche, Paul Trichelair, and Remi Tachet Des Combes. Safe policy improvement with baseline bootstrapping. In *International conference on machine learning*, pp. 3652–3661. PMLR, 2019.
350
+
351
+ Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020.
352
+
353
+ Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *arXiv preprint* arXiv:1509.02971, 2015.
354
+
355
+ Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. *Foundations of machine learning*. MIT press, 2018.
356
+
357
+ Rémi Munos and Csaba Szepesvári. Finite-time bounds for fitted value iteration. Journal of Machine Learning Research, 9(5), 2008.
358
+
359
+ Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. *arXiv preprint arXiv:1910.00177*, 2019.
360
+
361
+ Martin L Puterman. *Markov decision processes: discrete stochastic dynamic programming*. John Wiley &
362
+ Sons, 2014.
363
+
364
+ Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, and Stuart Russell. Bridging offline reinforcement learning and imitation learning: A tale of pessimism. *Advances in Neural Information Processing Systems*, 34:11702–11716, 2021.
365
+
366
+ Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 661–668. JMLR Workshop and Conference Proceedings, 2010.
367
+
368
+ John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International conference on machine learning*, pp. 1889–1897. PMLR, 2015.
369
+
370
+ David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In *International conference on machine learning*, pp. 387–395. PMLR,
371
+ 2014.
372
+
373
+ Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. *Advances in neural information processing systems*, 12, 1999.
374
+
375
+ Ulrike von Luxburg and Olivier Bousquet. Distance-based classification with lipschitz functions. J. Mach.
376
+
377
+ Learn. Res., 5(Jun):669–695, 2004.
378
+
379
+ Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. *arXiv* preprint arXiv:1911.11361, 2019.
380
+
381
+ Tengyang Xie and Nan Jiang. Q* approximation schemes for batch reinforcement learning: A theoretical comparison. In *Conference on Uncertainty in Artificial Intelligence*, pp. 550–559. PMLR, 2020.
382
+
383
+ Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. *Advances in neural information processing systems*, 34:6683–6694, 2021.
384
+
385
+ Ming Yin and Yu-Xiang Wang. Optimal uniform ope and model-based offline reinforcement learning in timehomogeneous, reward-free and task-agnostic settings. *Advances in neural information processing systems*,
386
+ 34:12890–12903, 2021.
387
+
388
+ Ming Yin, Yu Bai, and Yu-Xiang Wang. Near-optimal provable uniform convergence in offline policy evaluation for reinforcement learning. In *International Conference on Artificial Intelligence and Statistics*, pp.
389
+
390
+ 1567–1575. PMLR, 2021.
391
+
392
+ Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. Combo:
393
+ Conservative offline model-based policy optimization. *Advances in neural information processing systems*,
394
+ 34:28954–28967, 2021.
nXfptr5iGa/nXfptr5iGa_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 15,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 15,
14
+ "code": 0,
15
+ "table": 1,
16
+ "equations": {
17
+ "successful_ocr": 27,
18
+ "unsuccessful_ocr": 2,
19
+ "equations": 29
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }