RedTachyon committed on
Commit
700ec52
1 Parent(s): d266303

Upload folder using huggingface_hub

XRpt5JYF8m/16_image_0.png ADDED

Git LFS Details

  • SHA256: beed5031eaa7186ba3a649681e779920a2519248b688f268dfed620d7e3dd93f
  • Pointer size: 130 Bytes
  • Size of remote file: 34.3 kB
XRpt5JYF8m/16_image_1.png ADDED

Git LFS Details

  • SHA256: 3e817e2de81029281b96cf6fbb81c706ba9bb200a442dbe8efac919e8f93a7d1
  • Pointer size: 130 Bytes
  • Size of remote file: 37 kB
XRpt5JYF8m/18_image_0.png ADDED

Git LFS Details

  • SHA256: 363d6f9f8fa12c3c4c1828ad8df0278bdc276a41abe7f1b0d1081725dce8c703
  • Pointer size: 131 Bytes
  • Size of remote file: 136 kB
XRpt5JYF8m/19_image_0.png ADDED

Git LFS Details

  • SHA256: 720ed49dce487ccaed866c373a84d3bce2afa01196a733c6bb1b9e9e5ae4af07
  • Pointer size: 131 Bytes
  • Size of remote file: 103 kB
XRpt5JYF8m/20_image_0.png ADDED

Git LFS Details

  • SHA256: 5b1d27167e92a0babd624490639ce0be623dfba6802bc7cccbda80535663d545
  • Pointer size: 131 Bytes
  • Size of remote file: 128 kB
XRpt5JYF8m/21_image_0.png ADDED

Git LFS Details

  • SHA256: e82f4d9d795d785c65126afa7c057106a21a8a8843ea513ffc5ba66fa211e795
  • Pointer size: 131 Bytes
  • Size of remote file: 118 kB
XRpt5JYF8m/22_image_0.png ADDED

Git LFS Details

  • SHA256: 6a7d0dc5c3883416068a124aefb3b01ec6491a583e699628a1a995bae2a6160d
  • Pointer size: 130 Bytes
  • Size of remote file: 83.5 kB
XRpt5JYF8m/23_image_0.png ADDED

Git LFS Details

  • SHA256: d9cb7e37a34708f79723d2b1ec47eb6d4b4844b89c014d6da5c15db42243c5fb
  • Pointer size: 130 Bytes
  • Size of remote file: 66.5 kB
XRpt5JYF8m/25_image_0.png ADDED

Git LFS Details

  • SHA256: 989da39c56646520d22824dbc952f1be2f632ce4c394d59c51c54293f27b2129
  • Pointer size: 130 Bytes
  • Size of remote file: 35.6 kB
XRpt5JYF8m/7_image_0.png ADDED

Git LFS Details

  • SHA256: 0b643b7cc56de42158a5591f6da990e610e649e337d61c0cb9de9485cb567bef
  • Pointer size: 130 Bytes
  • Size of remote file: 50.7 kB
XRpt5JYF8m/7_image_1.png ADDED

Git LFS Details

  • SHA256: 3d2b79bd7ea511dfbd6c7e22e810e6d322f944c939c4bd99ec398778c94f0a4d
  • Pointer size: 130 Bytes
  • Size of remote file: 45.6 kB
XRpt5JYF8m/8_image_0.png ADDED

Git LFS Details

  • SHA256: f78bb394b96ad5b209241b0cd17c54b393cd3c65cb955f2c90b27dcbbf38a7ad
  • Pointer size: 130 Bytes
  • Size of remote file: 54 kB
XRpt5JYF8m/XRpt5JYF8m.md ADDED
@@ -0,0 +1,648 @@
1
+ # Some Supervision Required: Incorporating Oracle Policies In Reinforcement Learning Via Epistemic Uncertainty Metrics
2
+
3
Anonymous authors

Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
+ An inherent problem of reinforcement learning is performing exploration of an environment through random actions, of which a large portion can be unproductive. Instead, exploration can be improved by initializing the learning policy with an existing (previously learned or hard-coded) oracle policy, offline data, or demonstrations. In the case of using an oracle policy, it can be unclear how best to incorporate the oracle policy's experience into the learning policy in a way that maximizes learning sample efficiency. In this paper, we propose a method termed *Critic Confidence Guided Exploration* (CCGE) for incorporating such an oracle policy into standard actor-critic reinforcement learning algorithms. More specifically, CCGE takes in the oracle policy's actions as suggestions and incorporates this information into the learning scheme when uncertainty is high, while ignoring it when the uncertainty is low. CCGE is agnostic to methods of estimating uncertainty, and we show that it is equally effective with two different techniques. Empirically, we evaluate the effect of CCGE on various benchmark reinforcement learning tasks, and show that this idea can lead to improved sample efficiency and final performance. Furthermore, when evaluated on sparse reward environments, CCGE is able to perform competitively against adjacent algorithms that also leverage an oracle policy. Our experiments show that it is possible to utilize uncertainty as a heuristic to guide exploration using an oracle in reinforcement learning.
8
+
9
+ We expect that this will inspire more research in this direction, where various heuristics are used to determine the direction of guidance provided to learning.
10
+
11
+ ## 1 Introduction
12
+
13
Reinforcement Learning (RL) seeks to learn a policy that maximizes the expected discounted future rewards for Markov Decision Processes (MDPs) (Sutton & Barto, 2018). Unlike supervised learning, which learns a function to map data to labels, RL involves an agent interacting with an environment to learn how to make decisions that optimize a reward signal. While RL has shown great promise in a wide range of applications, it can be challenging to explore complex environments to find optimal policies. Most popular RL methods employ stochastic actions to explore the environment, which can be time-consuming and can lead to suboptimal solutions if the agent gets stuck in local minima, which can happen due to the curse of dimensionality or in complex environments where the reward signal is sparse. One technique to circumvent this issue is to incorporate prior knowledge into the learning process via an oracle policy - a policy that takes better-than-random actions when given a state observation.
14
+
15
+ Such an oracle policy can be obtained from demonstration data through behavioral cloning (Bain & Sammut, 1995), pretrained using a different RL algorithm, or simply hard-coded. That said, it may not be clear how best to incorporate this oracle policy into the learning policy - and more crucially, when to wean the learning policy off the oracle policy. In this paper, we propose a new approach for RL called Critic Confidence Guided Exploration (CCGE) that seeks to address these challenges by using uncertainty to decide when to use the oracle policy to guide exploration versus aiming to simply optimize a reward signal. To our knowledge, this idea of using a heuristic to determine when to utilize an oracle in learning is a first of its kind.
16
+
17
The proposed CCGE algorithm builds upon the popular actor-critic framework for RL, where an actor learns a policy that interacts with the environment while the critic aims to learn the value function (Grondman et al., 2012). Many of the most widely used methods in deep RL use the actor-critic framework. For instance, they comprise four out of twelve algorithms in OpenAI Baselines (Dhariwal et al., 2017) and four out of seven algorithms in Stable-Baselines3 (Raffin et al., 2019). CCGE works by using the critic's prediction error or variance as a proxy for uncertainty. When uncertainty is high, the actor learns from the oracle policy to guide exploration, and when uncertainty is low, the actor aims to maximize the learned value function. Our intuition is that it is more beneficial to allow the learning policy to decide when to follow the oracle policy, as opposed to competing approaches that dictate when this switching happens. We compare the proposed CCGE algorithm with existing exploration strategies on several benchmark RL tasks for robotics, and our experiments demonstrate that it can lead to better performance and faster convergence in some challenging environments. Specifically, for dense reward environments, we show that CCGE can lead to policies that escape local minima faster through exploration via an oracle policy. We further show that this approach also works for sparse reward environments, being competitive with other similar approaches. Our results show that CCGE has the potential to enable more efficient and effective RL algorithms.
20
+
21
## 2 Related Work

## 2.1 Imitation Learning / Offline Reinforcement Learning Initialized Policies
22
+
23
One method to improve sample efficiency of RL is to initialize training with policies obtained from imitation learning (Silver et al., 2016; Kim et al., 2022; Chang et al., 2021; Kidambi et al., 2021). The goal here is to train a policy via supervised learning on a dataset of states and actions obtained from an optimal oracle policy. This policy can then be used in an environment during finetuning to produce a more optimal policy (Gupta et al., 2019; Lavington et al., 2022). This method can work reasonably well for policy gradient methods, but can yield bad results when applied to actor-critic methods due to a poorly conditioned critic (Zhang & Ma, 2018; Nair et al., 2020).
26
+
27
Offline RL, in a similar vein to imitation learning, also involves learning from a fixed dataset of pre-collected experience generated by an oracle policy (Fu et al., 2020; Fujimoto et al., 2019). However, offline RL assumes that each transition in the dataset is labelled with a reward, and aims to learn a policy which always improves upon the oracle policy in state distributions well covered by the dataset. In a similar manner to initializing a policy with imitation learning, policies can also be initialized using offline RL (Nair et al., 2020; 2018; Hester et al., 2018; Kumar et al., 2020; Sonabend et al., 2020).
28
+
29
+ A key aspect of these techniques is that the distribution of experience that comes from the oracle policy is fixed beforehand. This can be detrimental when the learning policy requires more information from the oracle policy in regions that are ill represented in the dataset. In contrast, CCGE allows the learning policy to query the oracle policy whenever it is uncertain, allowing for information from the oracle policy to be gathered dynamically.
30
+
31
+ ## 2.2 Rolling In Oracle Policies
32
+
33
Given an oracle policy, it is possible to involve it at every stage when attempting to learn an optimal policy, possibly resulting in better critics for downstream finetuning. One method is to carry out environment rollouts by taking composite actions that are a weighted sum of the oracle's actions and the learning policy's actions (Rosenstein et al., 2004). As training progresses, the weighting is annealed to favour the learning policy's actions over the oracle's. This is vaguely similar to Kullback-Leibler (KL) regularized RL, which integrates an oracle policy by incorporating the oracle's actions into the actor's action distribution using an additional KL loss between the two distributions. This has been shown to work with actor-critic methods on a range of RL benchmarks (Nair et al., 2020; Peng et al., 2019; Siegel et al., 2019; Wu et al., 2019).
34
+
35
+ However, this can cause the actor to perform poorly when trying to assign probability mass to deterministic actions (Rudner et al., 2021). Other methods include Jump Start Reinforcement Learning (JSRL), which uses an oracle policy to step through the environment for a fixed number of steps before the learning policy interacts with the environment (Uchendu et al., 2022). The number of steps is gradually reduced as training progresses, forming a curriculum that can be gradually learned.
36
+
37
These ideas have an explicitly defined *switching point* - a point where the learning policy changes from learning from the oracle policy to learning on its own, or vice versa. Such switching can be counterproductive when the oracle policy is more experienced in regions of the environment that the learning policy cannot reach on its own - regions where switching never happens. In contrast, CCGE allows the switching to happen whenever the learning policy is uncertain, which can happen at any point during training.
38
+
39
+ ## 2.3 Other Methods For Improving Sample Efficiency
40
+
41
Curriculum learning aims to form a curriculum of different tasks that progressively become more difficult (Bengio et al., 2009), whilst model-based RL aims to gain more detailed world knowledge of the environment more efficiently to reduce the number of steps that need to be taken in it (Hafner et al., 2019; Chen et al., 2022; Hafner et al., 2020; Schrittwieser et al., 2020). Adjacent methods include curiosity-driven RL or intrinsic motivation, whereby the agent is driven by an additional intrinsic reward signal (Schmidhuber, 1991; Pathak et al., 2017). This intrinsic signal can be based on learning progress, exploration, or novel experiences, leading to the motivated exploration of new states.
45
+
46
+ ## 2.4 Epistemic Uncertainty
47
+
48
+ When optimizing a model, knowing how much further performance could be improved given more training would be obviously helpful. In statistical learning theory, this general quantity is referred to as *epistemic* uncertainty (Matthies, 2007) or model uncertainty, of which there are multiple formal quantifications. This quantity is different from *aleatoric uncertainty* or uncertainty in the data, which generally refers to the variability in the desired target prediction conditioned on a data point (Hüllermeier & Waegeman, 2021).
49
+
50
A naive approach to estimating epistemic uncertainty in machine learning models is to use the variance of model predictions as a proxy metric (Lakshminarayanan et al., 2017; Gal & Ghahramani, 2016). Epistemic uncertainty estimation is widely used in Bayesian RL, which employs Monte Carlo Dropout or model ensembles (Ghosh et al., 2021; Osband et al., 2018; Osband, 2016; Lütjens et al., 2019). Other examples use Gaussian processes or deep kernel learning methods to explicitly store aleatoric uncertainty estimates within the replay buffer (Kuss & Rasmussen, 2003; Engel et al., 2005; Xuan et al., 2018), or distributional models aiming to learn a distribution of returns (Bellemare et al., 2017; Dabney et al., 2018). Such methods allow a direct capture of aleatoric uncertainty, allowing epistemic uncertainty to be estimated as a function of model variance.
52
+
53
+ Evidential regression (Amini et al., 2020) aims to estimate epistemic uncertainty by proposing evidential priors over the data likelihood function. This works very well for data with stationary distributions, notably in supervised learning (Charpentier et al., 2020; 2021; Malinin & Gales, 2018). However, this technique does not extend to non-stationary distributions, a staple in RL.
54
+
55
More recently, Jain et al. (2021) modelled aleatoric and epistemic uncertainties by relating them to the expected prediction errors of the model. This relation is intuitive as it directly predicts how much improvement can be gained given more data and learning capacity. They further demonstrated how to use this prediction in a limited case of curiosity-driven RL, in a similar vein to Moerland et al. (2017); Nikolov et al. (2018).
56
+
57
## 3 Preliminaries

## 3.1 Notation
58
+
59
This work uses the standard MDP definition (Sutton & Barto, 2018) defined by the tuple $\{\mathcal{S}, \mathcal{A}, \rho, r, \gamma\}$, where $\mathcal{S}$ and $\mathcal{A}$ represent the state and action spaces, $\rho(\mathbf{s}_{t+1}|\mathbf{s}_t, \mathbf{a}_t)$ represents the state transition dynamics, $r_t = r(\mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+1})$ represents the reward function, and $\gamma \in (0, 1)$ represents the discount factor. An agent interacts with the MDP according to the policy $\pi(\mathbf{a}_t|\mathbf{s}_t)$, and during training, transition tuples $\{\mathbf{s}_t, \mathbf{a}_t, r_t, \mathbf{s}_{t+1}\}$ are stored in a replay buffer $\mathcal{D}$. The goal of RL is to find the optimal policy $\pi^*$ that maximizes the cumulative discounted rewards $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r_t]$ for any state $\mathbf{s}_0 \in \mathcal{S}$, where $\mathbf{a}_t \sim \pi(\mathbf{s}_t)$ and $\mathbf{s}_{t+1} \sim \rho(\mathbf{s}_t, \mathbf{a}_t)$.
63
+
64
+ ## 3.2 Deep Q Network
65
+
66
One of the major approaches to finding an optimal policy is via Q-learning, which aims to learn a state-action value function $Q^{\pi}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, defined as the expected cumulative discounted rewards from taking action $\mathbf{a}_t$ in state $\mathbf{s}_t$. The Deep Q Network (DQN) (Mnih et al., 2015) was one such approach, in which transition tuples are sampled from $\mathcal{D}$ to learn the function $Q^{\pi}_{\phi}$ parameterized by $\phi$. Here, the Bellman error is minimized via the loss function:
67
+
68
$$\mathcal{L}_{Q}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbb{E}\big[l\big(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\big)\big]\tag{1}$$

where $l(\cdot)$ is usually the squared error loss and the expectation is taken over $\{\mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+1}, r_t\} \sim \mathcal{D}$, $\mathbf{a}_{t+1} \sim \pi(\cdot|\mathbf{s}_{t+1})$. In DQN, the optimal policy is defined as $\pi(\mathbf{a}_t|\mathbf{s}_t) = \delta(\mathbf{a} - \arg\max_{\mathbf{a}_t} Q^{\pi}_{\phi}(\mathbf{s}_t, \mathbf{a}_t))$, where $\delta(\cdot)$ is the Dirac function. The loss function in equation 1 will be used in derivations later in this work.
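As a concrete illustration, below is a minimal PyTorch-style sketch of the Bellman error in equation 1. The network sizes, the `gamma` value, and the random batch are placeholder assumptions, and the sketch feeds a continuous action directly into the network rather than enumerating discrete actions as DQN itself does.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical critic: maps a (state, action) pair to a scalar Q-value.
critic = nn.Sequential(nn.Linear(4 + 2, 64), nn.ReLU(), nn.Linear(64, 1))
gamma = 0.99

def bellman_loss(s, a, r, s_next, a_next):
    """Squared Bellman error of equation 1 over a batch of transitions."""
    q = critic(torch.cat([s, a], dim=-1)).squeeze(-1)
    with torch.no_grad():  # the bootstrap target is not differentiated through
        q_next = critic(torch.cat([s_next, a_next], dim=-1)).squeeze(-1)
        target = r + gamma * q_next
    return F.mse_loss(q, target)

# Toy batch of 8 transitions with 4-dim states and 2-dim actions.
s, a = torch.randn(8, 4), torch.randn(8, 2)
r, s_next, a_next = torch.randn(8), torch.randn(8, 4), torch.randn(8, 2)
print(bellman_loss(s, a, r, s_next, a_next))
```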
75
+
76
## 3.3 Soft Actor Critic (SAC)
77
+
78
The Soft Actor Critic (SAC) algorithm is an entropy regularized actor-critic algorithm introduced by Haarnoja et al. (2018a). This work specifically refers to SAC-v2 (Haarnoja et al., 2018b), a slightly improved variant with twin delayed Q networks and automatic entropy tuning. SAC has a pair of models parameterized by $\theta$ and $\phi$ that represent the actor $\pi_{\theta}$ and the critic $Q^{\pi}_{\phi}$. In a similar vein to DQN, the critic aims to learn the state-action value function via Q-learning. However, here, the critic is modified to include entropy regularization to promote exploration and stability for continuous action spaces. The actor is then optimized by minimizing the following loss function via the critic:
80
+
81
$$\mathcal{L}_{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})=-\mathbb{E}_{\mathbf{s}_{t}\sim\mathcal{D}}\left[Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})\right]\tag{2}$$
85
+ Note that we have dropped the entropy term for brevity as it is not relevant to our derivations, but it can be viewed in the formulation by Haarnoja et al. (2018b).
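A minimal sketch of the actor objective in equation 2 follows, with the entropy term omitted as in the text. The deterministic tanh policy head and the layer sizes are simplifying assumptions purely to keep the sketch short; SAC proper uses a squashed Gaussian policy.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))

def actor_loss(states):
    """L_pi = -E_s[ Q(s, pi(s)) ]: maximize the critic's value of the actor's actions."""
    actions = actor(states)
    q = critic(torch.cat([states, actions], dim=-1))
    return -q.mean()

print(actor_loss(torch.randn(8, state_dim)))
```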
86
+
87
+ ## 3.4 Formalism Of Epistemic Uncertainty
88
+
89
We present the formalism for epistemic uncertainty introduced by Jain et al. (2021) and the related definitions in the notation that will be used in this paper. Consider a learned function $f$ that tries to minimize the expected value of the loss $l(f(x) - y)$ where $y \sim P(\mathcal{Y}|x)$.
90
+
91
Definition 1 The *total uncertainty* $\mathcal{U}_f(x)$ of a function $f$ at an input $x$ is defined as the expected loss $l(f(x) - y)$ under the conditional distribution $y \sim P(\mathcal{Y}|x)$.

$$\mathcal{U}_{f}(x)=\mathbb{E}_{y\sim P(\mathcal{Y}|x)}[l(f(x)-y)]\tag{3}$$
95
+
96
+ This expected loss stems from the random nature of the data P(Y|x) (aleatoric uncertainty) as well as prediction errors by the function due to insufficient knowledge (epistemic uncertainty).
97
+
98
Definition 2 A *Bayes optimal predictor* $f^*$ is defined as the predictor $f$ of sufficient capacity that minimizes $\mathcal{U}_f$ at every point $x$.

$$f^{*}=\arg\min_{f}\mathcal{U}_{f}(x)\tag{4}$$
104
+ Definition 3 The *aleatoric uncertainty* A(Y|x) of some data y ∼ P(Y|x) is defined as the irreducible uncertainty of a predictor. It can also be viewed as the total uncertainty of a Bayes optimal predictor.
105
+
106
$$\mathcal{A}(\mathcal{Y}|x)=\mathcal{U}_{f^{*}}(x)\tag{5}$$
108
Note that the aleatoric uncertainty is defined over the conditional data distribution, and is not conditioned on the estimator. By definition, $\mathcal{A}(\mathcal{Y}|x) \leq \mathcal{U}_f(x)$ for all $f$ and all $x$.

When $P(\mathcal{Y}|x)$ is Gaussian and $l(\cdot)$ is defined as the squared error loss, an optimal predictor predicts the mean of the conditional distribution, $f^*(x) = \mathbb{E}_{y \sim P(\mathcal{Y}|x)}[y]$. As a result, the total uncertainty of an optimal predictor is equivalent to the variance of the conditional distribution. Thus, by extension, the aleatoric uncertainty of any predictor on Gaussian data trained using the squared error loss is equal to the variance of the data itself.
115
$$\begin{aligned}\mathcal{A}(\mathcal{Y}|x)&=\mathcal{U}_{f^{*}}(x)\\&=\mathbb{E}_{y\sim P(\mathcal{Y}|x)}[l(y-\mathbb{E}_{y\sim P(\mathcal{Y}|x)}[y])]\\&=\mathbb{E}_{y\sim P(\mathcal{Y}|x)}[l(y)]-l(\mathbb{E}_{y\sim P(\mathcal{Y}|x)}[y])\\&=\sigma_{|x}^{2}(\mathcal{Y})\end{aligned}\tag{6}$$

Note that we have chosen to write $\sigma^{2}(\mathcal{Y}|x)$ as $\sigma^{2}_{|x}(\mathcal{Y})$ for brevity.
123
+
124
+ Definition 5 The *epistemic uncertainty* Ef (x) is defined as the difference between total uncertainty and aleatoric uncertainty. This quantity will asymptotically approach zero as the amount of data goes to infinity for a predictor f with sufficient capacity.
125
+
126
$$\mathcal{E}_{f}(x)=\mathcal{U}_{f}(x)-\mathcal{A}(\mathcal{Y}|x)=\mathcal{U}_{f}(x)-\mathcal{U}_{f^{*}}(x)\tag{7}$$
128
+ The right hand side of equation 7 provides an intuitive view of epistemic uncertainty - it is the difference in performance between the current model and an optimal one.
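To make the decomposition concrete, the numeric sketch below estimates total, aleatoric, and epistemic uncertainty for a toy regressor on Gaussian data under the squared error loss. The data-generating process and the imperfect predictor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conditional data: y | x ~ N(2x, 0.5^2), evaluated at a single input x.
x, sigma = 1.5, 0.5
y = rng.normal(2.0 * x, sigma, size=100_000)

f_hat = 2.4 * x    # an imperfect learned predictor f(x)
f_star = 2.0 * x   # the Bayes optimal predictor predicts the conditional mean

total = np.mean((f_hat - y) ** 2)       # U_f(x): expected squared loss (eq. 3)
aleatoric = np.mean((f_star - y) ** 2)  # A(Y|x) = U_{f*}(x) ~= sigma^2 (eq. 5, 6)
epistemic = total - aleatoric           # E_f(x) = U_f(x) - A(Y|x) (eq. 7)

print(round(total, 3), round(aleatoric, 3), round(epistemic, 3))
# epistemic ~= (f_hat - f_star)^2 = 0.36, and shrinks to zero as f_hat improves.
```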
129
+
130
+ ## 4 Critic Confidence Guided Exploration
131
+
132
In this section, we propose a technique to improve the sample efficiency of RL agents by choosing to mimic an oracle policy when uncertain, and performing self-exploration when the oracle policy's actions have been explored. In contrast to current techniques where the agent has no control over when to mimic the oracle policy (e.g. annealed weighting in Rosenstein et al. (2004), random oracle policy walks in Uchendu et al. (2022)), our technique leverages the uncertainty of the agent to control when to follow the oracle policy. We term this algorithm CCGE, in reference to the learning policy mimicking the oracle policy for exploration when the confidence in the Q-value estimate is low.
135
+
136
+ ## 4.1 Incorporating An Oracle Policy
137
+
138
Our method of improving sample efficiency with an oracle policy is loosely inspired by the Upper Confidence Bound (UCB) bandit algorithm. Assume that we have a critic $Q^{\pi}_{\phi}$ and a means of estimating its epistemic uncertainty $\mathcal{E}_{\phi}$. For any state and action $\{\mathbf{s}_t, \mathbf{a}_t\}$, we can assign an upper bound to the true $Q^{\pi}$ value:

$$Q_{\mathrm{UB}}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})=Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})+\mathcal{E}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})\tag{8}$$

In a state $\mathbf{s}_t$, the potential improvement of following the oracle policy's suggested action $\bar{\mathbf{a}}_t$ can then be canonically written as:

$$\Delta(\mathbf{s}_{t},\mathbf{a}_{t},\bar{\mathbf{a}}_{t})=Q_{\mathrm{UB}}^{\pi}(\mathbf{s}_{t},\bar{\mathbf{a}}_{t})-Q_{\mathrm{UB}}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})\tag{9}$$

The term $Q_{\mathrm{UB}}^{\pi}(\mathbf{s}_t, \bar{\mathbf{a}}_t)$ refers to the upper-bound Q-value when taking the action $\bar{\mathbf{a}}_t$ and then acting according to the learning policy $\pi_{\theta}$. Now, the decision to learn from the oracle policy versus the critic can be made by defining the actor's loss function as either a supervisory signal $\mathcal{L}_{\text{sup}}$ or a reinforcement signal $\mathcal{L}_{\pi}$ (from equation 2):

$$\mathcal{L}_{\text{CCGE}}(\mathbf{s}_{t},\mathbf{a}_{t},\bar{\mathbf{a}}_{t})=\begin{cases}\mathcal{L}_{\text{sup}}(\mathbf{s}_{t},\mathbf{a}_{t},\bar{\mathbf{a}}_{t}),&\text{if }k\geq\lambda\\ \mathcal{L}_{\pi}(\mathbf{s}_{t},\mathbf{a}_{t}),&\text{otherwise}\end{cases}\tag{10}$$

where $k$ is computed with:

$$k=\frac{\Delta(\mathbf{s}_{t},\mathbf{a}_{t},\bar{\mathbf{a}}_{t})}{Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})}\tag{11}$$

and $\lambda$ is a constant which we term the *confidence scale*. Put simply, the choice between imitating the oracle policy versus performing reinforcement learning is made according to the normalized potential improvement of doing the former versus the latter. In general, CCGE is flexible to choices of $\lambda$, and we show preliminary results of different values of $\lambda$ in Appendix B. During training policy rollout, the learning policy may also choose to take the action from the oracle policy:

$$\mathbf{a}_{t}\leftarrow\begin{cases}\bar{\mathbf{a}}_{t},&\text{if }k\geq\lambda\\ \mathbf{a}_{t},&\text{otherwise}\end{cases}\tag{12}$$

This allows the learning policy to quickly see the effect of the oracle policy's suggestions, which is helpful when the learning policy is completely uncertain about its environment at the beginning of training. We do not do this step at inference or policy evaluation.
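The decision rule of equations 8-12 can be summarized in a few lines. The sketch below assumes generic `q_learner`, `q_oracle`, `eps_learner`, and `eps_oracle` scalars have already been computed for one state; it is not tied to any particular uncertainty estimator and the example numbers are made up.

```python
def ccge_decision(q_learner, q_oracle, eps_learner, eps_oracle, lam=1.0):
    """Return True when CCGE should imitate/follow the oracle for this state.

    q_*   : critic estimates Q(s, a) and Q(s, a_bar)        (eq. 8)
    eps_* : epistemic uncertainty estimates for those pairs  (eq. 8)
    lam   : the confidence scale lambda                      (eq. 10, 12)
    """
    q_ub_learner = q_learner + eps_learner   # upper bound for the policy's action
    q_ub_oracle = q_oracle + eps_oracle      # upper bound for the oracle's action
    delta = q_ub_oracle - q_ub_learner       # potential improvement (eq. 9)
    k = delta / q_learner                    # normalized improvement (eq. 11)
    return k >= lam                          # supervise / override when k >= lambda

# Example: the oracle's action looks much better under high uncertainty.
print(ccge_decision(q_learner=10.0, q_oracle=11.0, eps_learner=2.0, eps_oracle=14.0))
```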
170
+
171
+ ## 4.2 Supervision Signal Definition
172
+
173
+ There are various ways to define the supervision loss Lsup. For continuous action spaces, we can simply use the squared error loss:
174
+
175
$$\mathcal{L}_{\sup}(\mathbf{s}_{t},\mathbf{a}_{t},\bar{\mathbf{a}}_{t})=\mathbb{E}_{\mathbf{a}_{t}\sim\pi_{\theta}(\cdot|\mathbf{s}_{t})}\left[||\mathbf{a}_{t}-\bar{\mathbf{a}}_{t}||_{2}^{2}\right]\tag{13}$$
176
+ That said, any other loss function in similar contexts can be used, such as minimizing the negative log likelihood − log πθ(a¯t), L1 loss function |at −a¯t|, or similar. Discrete action space models can instead utilize the cross entropy loss −a¯t log(πθ(at)).
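The supervision signal choices described above amount to one-liners; the sketch below shows the squared error and negative log-likelihood variants for a continuous policy, and cross-entropy for a discrete one. The shapes, the unit-variance Gaussian head, and the logits are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal

a, a_bar = torch.randn(8, 2, requires_grad=True), torch.randn(8, 2)

# Squared error between sampled and oracle actions (equation 13).
l_mse = ((a - a_bar) ** 2).sum(dim=-1).mean()

# Negative log-likelihood of the oracle action under a Gaussian policy head.
mean, std = a, torch.ones_like(a)
l_nll = -Normal(mean, std).log_prob(a_bar).sum(dim=-1).mean()

# Discrete case: cross-entropy against the oracle's action index.
logits, a_bar_idx = torch.randn(8, 4), torch.randint(0, 4, (8,))
l_ce = F.cross_entropy(logits, a_bar_idx)

print(l_mse.item(), l_nll.item(), l_ce.item())
```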
177
+
178
+ ## 4.3 Epistemic Uncertainty Metrics For A Critic
179
+
180
We present two metrics for quantifying the epistemic uncertainty of a critic - one based on Q-network ensembles in a similar vein to past work (Osband, 2016; Clements et al., 2019; Festor et al., 2021), which we call Implicit Epistemic Uncertainty, and one based on the DEUP technique (Jain et al., 2021), which we call Explicit Epistemic Uncertainty. We do not argue which metric provides a more holistic estimate of epistemic uncertainty; interested readers are instead directed to work by Charpentier et al. (2022) for a more detailed study. Instead, CCGE simply assumes a means of evaluating the epistemic uncertainty of the Q-value estimate given a state-action pair, and these are two such examples.
181
+
182
+ ## 4.3.1 Implicit Epistemic Uncertainty
183
+
184
We adopt the simplest form of estimating epistemic uncertainty in this regime using an ensemble of Q-networks. In SAC, an ensemble of $n(=2)$ Q-networks is used to tame overestimation bias (Hasselt, 2010; Haarnoja et al., 2018b). For every state-action pair, a set of Q-value estimates $\{Q^{\pi}_{\phi_1}, Q^{\pi}_{\phi_2}, \ldots, Q^{\pi}_{\phi_n}\}$ is obtained. We utilize the variance of these Q-values as a proxy metric for epistemic uncertainty, $\mathcal{E}_{\phi} = \sigma^2(\{Q^{\pi}_{\phi_1}, Q^{\pi}_{\phi_2}, \ldots, Q^{\pi}_{\phi_n}\})$, and set $Q^{\pi}_{\phi} = \min(Q^{\pi}_{\phi_1}, Q^{\pi}_{\phi_2}, \ldots, Q^{\pi}_{\phi_n})$ as is done in the original SAC implementation to obtain $k$.
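A minimal sketch of the implicit estimate follows: variance across an ensemble of Q-values as the uncertainty proxy and the minimum as the Q-value, mirroring the SAC convention described above. The ensemble size, layer sizes, and inputs are placeholder assumptions.

```python
import torch
import torch.nn as nn

n, state_dim, action_dim = 2, 4, 2
q_ensemble = nn.ModuleList([
    nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    for _ in range(n)
])

def implicit_uncertainty(s, a):
    """Return (Q, epistemic proxy): the ensemble minimum and the ensemble variance."""
    sa = torch.cat([s, a], dim=-1)
    qs = torch.stack([q(sa).squeeze(-1) for q in q_ensemble], dim=0)  # (n, batch)
    return qs.min(dim=0).values, qs.var(dim=0, unbiased=False)

q_min, eps = implicit_uncertainty(torch.randn(8, state_dim), torch.randn(8, action_dim))
print(q_min.shape, eps.shape)
```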
195
+
196
+ ## 4.3.2 Explicit Epistemic Uncertainty
197
+
198
Adopting the framework for epistemic uncertainty from Jain et al. (2021), we derive an estimate for epistemic uncertainty based on the Bellman residual error. More formally, we first define the single step epistemic uncertainty for a Q-value estimate as:

$$\delta_{t}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathcal{U}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})-\mathcal{A}\big(Q_{\phi}^{\pi}|\{\mathbf{s}_{t},\mathbf{a}_{t}\}\big)=l\big(\mathbb{E}\big[Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\big]\big)\tag{14}$$

This is simply the expected Bellman residual error projected through the squared error loss. The exact derivation is available in Appendix A.1.
202
+
203
Then, we propose using the root of the discounted sum of single step epistemic uncertainties as a measure for total epistemic uncertainty given a state and action pair:

$$\mathcal{E}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})=\left[\mathbb{E}_{\pi,\mathcal{D}}\left[\sum_{i=t}^{T}\gamma^{i-t}|\delta_{i}(\mathbf{s}_{i},\mathbf{a}_{i})|\right]\right]^{\frac{1}{2}}\tag{15}$$
209
+
210
+ The intuition here is that δt is a biased estimate of epistemic uncertainty. To obtain a more holistic view of epistemic uncertainty, we instead take the discounted sum of all single step epistemic uncertainty estimates as the true measure. In our experiments, we found that taking the root of this value results in more stable training dynamics.
211
+
212
+ The value of Eϕ can be estimated on an auxiliary output of the Q-network itself, and learned by minimizing the residual loss in equation 16:
213
+
214
$$\mathcal{L}_{\mathcal{E}}(\mathbf{s}_{t},\mathbf{a}_{t})=l\left(\mathcal{E}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})-\left(\delta_{t}(\mathbf{s}_{t},\mathbf{a}_{t})+\gamma\mathcal{E}_{\phi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})^{2}\right)^{\frac{1}{2}}\right)\tag{16}$$

The resulting loss function for the Q-value network for a single state-action pair can then be derived from equation 1 and equation 16:

$$\mathcal{L}_{\mathcal{E},Q}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathcal{L}_{\mathcal{E}}(\mathbf{s}_{t},\mathbf{a}_{t})+\mathcal{L}_{Q}(\mathbf{s}_{t},\mathbf{a}_{t})\tag{17}$$
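Below is a sketch of how an auxiliary uncertainty output can be trained against the bootstrap target of equation 16. The two-headed critic, the softplus non-negativity constraint, and the detached target are illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CriticWithUncertainty(nn.Module):
    """Critic with an auxiliary non-negative output for E_phi(s, a)."""
    def __init__(self, state_dim=4, action_dim=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU())
        self.q_head = nn.Linear(64, 1)
        self.eps_head = nn.Linear(64, 1)

    def forward(self, s, a):
        h = self.body(torch.cat([s, a], dim=-1))
        return self.q_head(h).squeeze(-1), F.softplus(self.eps_head(h)).squeeze(-1)

def uncertainty_loss(critic, s, a, r, s_next, a_next, gamma=0.99):
    q, eps = critic(s, a)
    with torch.no_grad():
        q_next, eps_next = critic(s_next, a_next)
        delta = (q - (r + gamma * q_next)) ** 2              # single-step estimate (eq. 14)
        target = torch.sqrt(delta + gamma * eps_next ** 2)   # bootstrap target of eq. 16
    return F.mse_loss(eps, target)

critic = CriticWithUncertainty()
s, a = torch.randn(8, 4), torch.randn(8, 2)
r, s_next, a_next = torch.randn(8), torch.randn(8, 4), torch.randn(8, 2)
print(uncertainty_loss(critic, s, a, r, s_next, a_next))
```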
222
+
223
+ ## 4.4 Algorithm
224
+
225
We present the full pseudocode for CCGE in Algorithm 1. The standard parameters for an actor-critic reinforcement learning algorithm are first initialized. Then, we select a confidence scale that controls how confident the algorithm should be about the learning policy's actions. For most tasks, a value of 1 has been found to work well, while smaller values near 0.1 may result in better performance for harder exploration tasks. During environment rollout, actions are taken in the environment according to the oracle policy or learning policy depending on k. Similarly, during policy optimization, the learning policy either performs supervised learning against the oracle policy's suggested actions or standard reinforcement learning depending on k. In Algorithm 1, UpdateCritic is the standard reinforcement learning value function update, usually done by minimizing equation 1 when using Implicit Epistemic Uncertainty, or equation 17 for Explicit Epistemic Uncertainty.

Algorithm 1 Critic Confidence Guided Exploration (CCGE)

    Select discount factor γ, learning rates ηϕ, ηπ
    Select size of Q-value network ensemble n
    Select confidence scale λ ≥ 0
    Initialize parameter vectors θ, ϕ
    Initialize actor and critic networks πθ, Qϕ
    Initialize or hardcode oracle π¯
    for number of episodes do
        Initialize st=0 ∼ P(·)
        while env not done do
            Sample at from πθ(·|st)
            Sample a¯t from π¯(·|st)
            Compute k using Qϕ(st, at), Qϕ(st, a¯t), Eϕ(st, at), Eϕ(st, a¯t)
            Override at ← a¯t if k ≥ λ
            Sample rt, st+1 from ρ(·|st, at)
            Store transition tuple {st, at, rt, st+1, a¯t} in D
        end while
        for {st, at, rt, st+1, a¯t} in D do
            Update critic Qϕ ← UpdateCritic(st, at, rt, Qϕ)
            Update actor parameters θ ← θ − ηπ∇θLCCGE
        end for
    end for
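The control flow of Algorithm 1 can be condensed as below. Everything here (the dummy environment, random stand-in policies, and the `compute_k` helper) is a placeholder to show the rollout-time override and where the switched loss would run, not the authors' implementation.

```python
import random

# --- Hypothetical stand-ins so the skeleton runs on its own -----------------
class DummyEnv:
    def reset(self): return 0.0
    def step(self, a): return random.random(), 1.0, random.random() < 0.1  # s', r, done

learner_action = lambda s: random.random()
oracle_action = lambda s: random.random()
compute_k = lambda s, a, a_bar: random.random() * 2  # eq. 11, via the critic in practice

def run_episode(env, lam=1.0):
    """Collect one episode, overriding with the oracle action when k >= lambda."""
    buffer, s, done = [], env.reset(), False
    while not done:
        a, a_bar = learner_action(s), oracle_action(s)
        if compute_k(s, a, a_bar) >= lam:    # rollout-time override (eq. 12)
            a = a_bar
        s_next, r, done = env.step(a)
        buffer.append((s, a, r, s_next, a_bar))
        s = s_next
    return buffer

for transition in run_episode(DummyEnv()):
    pass  # here the critic update and the switched actor loss (eq. 10) would run

print("collected one episode")
```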
233
+
234
+ ## 5 Experiments
235
+
236
+ We implement CCGE on SAC, specifically SACv2 (Haarnoja et al., 2018b), and then evaluate its behaviour and performance on a variety of environments against a range of other algorithms. We use environments from Gymnasium (formerly Gym) (Brockman et al., 2016), D4RL derived environments from Gymnasium Robotics
237
+ (Fu et al., 2020), and waypoint environments from PyFlyt (Tai & Wong, 2023). Following guidelines from Agarwal et al. (2021), we report 50% Interquartile Means (IQMs) with bootstrapped confidence intervals. For each configuration, we train for 1 million environment timesteps and aggregate results based on 50 seeds per configuration and calculate evaluation scores based on 100 rollouts every 10,000 timesteps. The results of our experiments are shown in Figure 1 and Figure 2 with all relevant hyperparameters and network setups recorded in Appendix D, E, and F. The experiments are aimed at answering the following questions:
238
1. Does CCGE benefit most from the supervision signal or the guidance from actions taken by the oracle?

2. How does CCGE compare against the baseline algorithm with no supervision?

3. How important is the performance of the oracle policy to the performance of CCGE?

4. Is CCGE sensitive to different methods of epistemic uncertainty estimation?

5. Can we bootstrap the oracle using the learning policy continuously?

6. How does CCGE perform on hard exploration tasks in comparison to other state-of-the-art algorithms?
243
+
244
+ ![7_image_0.png](7_image_0.png)
245
+ Figure 1: Learning curves of CCGE and SAC on Gym Mujoco environments. For the CCGE runs, the run names are written as CCGE_{Oracle Number}_{Epistemic Uncertainty Estimation Type}. The oracle policy labelled B denotes the bootstrapped oracle policy.
246
+
247
+ ![7_image_1.png](7_image_1.png)
248
+
249
+ Figure 2: Learning curves of CCGE, AWAC and JSRL on three Gymnasium Robotics and two PyFlyt environments.
250
+
251
+ ## 5.1 Supervision Or Guidance
252
+
253
+ ![8_image_0.png](8_image_0.png)
254
+
255
Figure 3: Learning curves of the guidance only, supervision only, and full (guidance plus supervision) variants of CCGE on Walker2d-v4, Ant-v4, PyFlyt/QuadX-Waypoints-v0 and AdroitHandHammerSparse-v1.
256
Algorithmically, CCGE uses two techniques to incorporate the oracle policy - a supervision signal induced during the policy improvement phase, and guidance via taking the oracle's actions directly during policy rollout. We conduct a series of experiments using Walker2d-v4, Ant-v4, PyFlyt/QuadX-Waypoints-v0 and AdroitHandHammerSparse-v1 with either only guidance, only supervision, or both enabled to determine the effect that either component has on CCGE's performance. The results are shown in Figure 3. In all cases, using the supervision only variant of CCGE produces the worst performance, with performance similar to the base SAC algorithm in the case of Walker2d-v4 and Ant-v4. CCGE benefits most from oracle guidance, with the guidance only variant performing similarly to the full algorithm in all dense reward environments. For sparse reward environments, the supervision component in the full algorithm produces a minimal but non-negligible learning improvement in terms of final performance in PyFlyt/QuadX-Waypoints-v0 and initial rate of improvement in AdroitHandHammerSparse-v1.
257
+
258
## 5.2 CCGE Ablations
259
+
260
To answer questions 2, 3, and 4, we use the continuous control, dense reward robotics tasks from the MuJoCo suite of Gymnasium environments (Brockman et al., 2016; Todorov et al., 2012). More concretely, we use Hopper-v4, Walker2d-v4, HalfCheetah-v4 and Ant-v4. These environments are canonically similar to those used by Haarnoja et al. (2018b) and various other works. For each environment, we obtain two oracles - Oracle 1 and Oracle 2 - using SAC trained for $250 \times 10^3$ and $500 \times 10^3$ steps respectively. Both oracles have different final performances that are generally lower than what is obtainable using SAC trained to convergence. For each oracle, we evaluate the performance of CCGE using two separate methods of estimating epistemic uncertainty - the implicit and explicit methods detailed in Sections 4.3.1 and 4.3.2.
261
+
262
+ This results in a total of four CCGE runs per environment, which we use to compare results against SAC.
263
+
264
+ We label CCGE with the first oracle as CCGE_1 and with the second, better performing oracle as CCGE_2.
265
+
266
To distinguish between different forms of epistemic uncertainty estimation, we further append either _Im or _Ex to the algorithm names to denote usage of either the implicit or explicit forms of epistemic uncertainty estimation.
267
+
268
## 5.2.1 CCGE vs. No CCGE
269
+
270
Compared against the baseline SAC algorithm, CCGE outperforms SAC in all cases when using the right oracle policy. In some cases, CCGE significantly outperforms SAC by leveraging the oracle policy to escape local minima early on. The obvious example here is the Ant-v4 environment. Here, SAC tends to learn a standing configuration very early on, achieving an evaluation score of about 1000. It takes approximately $300 \times 10^3$ more transitions before the algorithm achieves a stable walking policy that does not fall. Using the right oracle policy (in this case, a mostly walking oracle with an evaluation score of 2100), CCGE learns a successful walking configuration in approximately 15% of the number of transitions that it takes for SAC. The performance of CCGE in the Ant-v4 environment seems to degrade over time. One possible reason for this could be catastrophic forgetting due to the limited capacity replay buffer (Isele & Cosgun, 2018), and this is explored more in Appendix C.
271
+
272
+ ## 5.2.2 Oracle Performance Sensitivity
273
+
274
Most reinforcement learning algorithms that bootstrap off imitation learning are improved by having higher quality data. CCGE is no exception to this dependency, as a change in oracle policy directly dictates the learning performance of the algorithm. In particular, a different oracle policy for HalfCheetah-v4 results in CCGE performing either better or worse than SAC. Similarly, when initialized with an oracle policy stuck in a local minimum in Ant-v4, CCGE causes the learning policy to learn slightly slower than SAC due to it repeatedly reverting to the suboptimal oracle policy when uncertain.
275
+
276
Despite this, the learning policy escapes the local minimum in about the same amount of time as SAC. This suggests that while CCGE with a bad oracle policy can hamper learning performance, it does not limit the exploration rate of the learning policy in general.
277
+
278
+ ## 5.2.3 Different Epistemic Uncertainty Estimation Techniques
279
+
280
+ We use the same confidence scale λ (introduced in Section 4.1) for both implementations. The results from Figure 1 show that while choice of epistemic uncertainty estimation technique does impact learning performance, the impact is far smaller than a change of oracle policy. More concretely, implicit epistemic uncertainty estimation works slightly better in Walker2d-v4 and HalfCheetah-v4, but performs worse in Ant-v4. As long as CCGE has access to a reasonable measure of epistemic uncertainty, CCGE would benefit much more from better performing oracle policies than better epistemic uncertainty estimation techniques.
281
+
282
+ ## 5.3 Bootstrapped Oracle
283
+
284
+ One advantage of CCGE's formulation is that it does not make any assumptions about the oracle policy. In fact, it does not even require the state-conditioned distribution of actions from the oracle policy to be stationary as required by algorithms such as Implicit Q Learning (IQL) (Kostrikov et al., 2023). Therefore, CCGE can function even when the oracle policy is continuously improving. To test this theory, we evaluated a variant of CCGE where the oracle policy's weights are updated to match the learning policy's weights whenever the learning policy's evaluation performance surpassed that of the oracle policy. We use CCGE
285
+ with explicit epistemic uncertainty estimation, with the same MuJoCo suite setup as done previously and the oracle policy initialized using Oracle 2. The resulting runs are labelled as CCGE_B_Ex. Surprisingly, we found that using a bootstrapped oracle in this scenario did not provide any meaningful improvement in performance.
286
+
287
+ ## 5.4 Hard Exploration Tasks
288
+
289
+ We evaluate the performance of CCGE on hard exploration tasks against Advantage Weighted Actor Critic (AWAC) and JSRL using an SAC backbone - two algorithms suitable for this setting of learning from prior data in conjunction with environment interaction. Their performances are evaluated on the AdroitHandDoorSparse-v1, AdroitHandPenSparse-v1 and AdroitHandHammerSparse-v1 tasks from Gymnasium Robotics, as well as the Fixedwing-Waypoints-v0 and QuadX-Waypoints-v0 tasks from PyFlyt. Hard exploration tasks describe environments where rewards are flat everywhere except when a canonical goal has been achieved. For example, in AdroitHandDoorSparse-v1 from Gymnasium Robotics, the reward is -0.1 at every timestep and 10 whenever the goal of opening the door completely has been achieved. Similarly, in PyFlyt the reward is -0.1 at every timestep and 100 when the agent has reached a waypoint.
290
+
291
The oracles for the AdroitHand tasks were obtained using SAC trained in the dense reward setup until the evaluation performance reached 0.6, requiring about $200 \times 10^3$ to $400 \times 10^3$ environment steps in the dense reward setup. The oracle for Fixedwing-Waypoints-v0 was obtained similarly - using SAC until the agent reaches 3 waypoints during evaluation, which happened after approximately $300 \times 10^3$ environment steps.
292
+
293
For QuadX-Waypoints-v0, a cascaded Proportional Integral Derivative (PID) controller (Kada & Ghazzawi, 2011) which partially solves this environment is deployed as the oracle policy. The variant of CCGE here uses explicit epistemic uncertainty estimation. When compared with other competitive methods on the Gymnasium Robotics and PyFlyt tasks, CCGE demonstrates competitive sample efficiency and final performance on most tasks. This implies that CCGE can effectively explore and learn from environments with sparse rewards, which can be challenging for traditional RL algorithms. Notably, CCGE excels on environments that require more exploration than incremental improvement, such as AdroitHandPenSparse-v1, AdroitHandDoorSparse-v1, and PyFlyt/QuadX-Waypoints-v0. Its underlying SAC's maximum entropy paradigm allows for aggressive exploration, which enables CCGE to quickly surpass the performance of the oracle policy. This is in contrast to AWAC, which struggles to perform much better than the oracle policy due to more conservative exploration. However, on environments with a more narrow range of optimal action sequences, such as AdroitHandHammerSparse-v1 and Fixedwing-Waypoints-v0, AWAC's more conservative policy updates result in better overall learning performance and final performance. Interestingly, JSRL fails to perform well in most tasks except one. This could be due to a few factors, such as its reliance on the underlying IQL backbone or the potential idea that simply starting from good starting states is insufficient in guaranteeing good performance.
295
+
296
+ ## 6 Conclusion
297
+
298
+ This paper introduces a new approach for RL called Critic Confidence Guided Exploration (CCGE), which seeks to address the challenges of exploration during optimization. The approach bootstraps from an oracle policy and uses the critic's prediction error or variance as a proxy for uncertainty to determine when to learn from the oracle and when to optimize the learned value function. We empirically evaluate CCGE on several benchmark robotics tasks from Gymnasium, evaluating its learning behaviour under different oracle policies and different methods of measuring uncertainty. We further test CCGE's performance on various sparse reward environments from Gymnasium Robotics and PyFlyt and find that CCGE offers competitive sample efficiency and final performance on most tasks against other, well performing algorithms, such as Advantage Weighted Actor Critic (AWAC) and Jump Start Reinforcement Learning (JSRL). Notably, CCGE excels on environments that require more exploration than incremental improvement, while other methods perform better on tasks with a more narrow range of optimal action sequences. We expect that CCGE will motivate more research in this direction, where leveraging an oracle in reinforcement learning is guided by an intrinsic heuristic. In the future, we would like to explore scenarios with multiple oracle policies, as well as the potential of utilizing CCGE in a multi-agent setting, where agents utilize other agents in the environment as oracles.
299
+
300
+ ## References
301
+
302
+ Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. *Advances in Neural Information Processing* Systems, 34, 2021.
303
+
304
Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 14927–14937. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/aab085461de182608ee9f607f3f7d18f-Paper.pdf.
306
+
307
+ Michael Bain and Claude Sammut. A framework for behavioural cloning. In *Machine Intelligence 15*, pp.
308
+
309
+ 103–129, 1995.
310
+
311
+ Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning.
312
+
313
+ In *International Conference on Machine Learning*, pp. 449–458. PMLR, 2017.
314
+
315
+ Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48, 2009.
316
+
317
+ Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. *arXiv preprint arXiv:1606.01540*, 2016.
318
+
319
+ Jonathan Chang, Masatoshi Uehara, Dhruv Sreenivas, Rahul Kidambi, and Wen Sun. Mitigating covariate shift in imitation learning via offline data with partial coverage. Advances in Neural Information Processing Systems, 34:965–979, 2021.
320
+
321
+ Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. Posterior network: Uncertainty estimation without ood samples via density-based pseudo-counts. *Advances in Neural Information Processing Systems*,
322
+ 33:1356–1367, 2020.
323
+
324
+ Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, and Stephan Günnemann. Natural posterior network: Deep bayesian predictive uncertainty for exponential family distributions. arXiv preprint arXiv:2105.04471, 2021.
325
+
326
+ Bertrand Charpentier, Ransalu Senanayake, Mykel Kochenderfer, and Stephan Günnemann. Disentangling epistemic and aleatoric uncertainty in reinforcement learning. *arXiv preprint arXiv:2206.01558*, 2022.
327
+
328
+ Chang Chen, Yi-Fu Wu, Jaesik Yoon, and Sungjin Ahn. Transdreamer: Reinforcement learning with transformer world models. *arXiv preprint arXiv:2202.09481*, 2022.
329
+
330
+ William R Clements, Bastien Van Delft, Benoît-Marie Robaglia, Reda Bahi Slaoui, and Sébastien Toth.
331
+
332
+ Estimating risk and uncertainty in deep reinforcement learning. *arXiv preprint arXiv:1905.09638*, 2019.
333
+
334
+ Will Dabney, Georg Ostrovski, David Silver, and Rémi Munos. Implicit quantile networks for distributional reinforcement learning. In *International conference on machine learning*, pp. 1096–1105. PMLR, 2018.
335
+
336
+ Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines, 2017.
337
+
338
+ Yaakov Engel, Shie Mannor, and Ron Meir. Reinforcement learning with gaussian processes. In *Proceedings* of the 22nd international conference on Machine learning, pp. 201–208, 2005.
339
+
340
+ Paul Festor, Giulia Luise, Matthieu Komorowski, and A Aldo Faisal. Enabling risk-aware reinforcement learning for medical interventions through uncertainty decomposition. *arXiv preprint arXiv:2109.07827*, 2021.
341
+
342
+ Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep datadriven reinforcement learning, 2020.
343
+
344
+ Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration.
345
+
346
+ In *International conference on machine learning*, pp. 2052–2062. PMLR, 2019.
347
+
348
+ Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *international conference on machine learning*, pp. 1050–1059. PMLR, 2016.
349
+
350
+ Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P Adams, and Sergey Levine. Why generalization in rl is difficult: Epistemic pomdps and implicit partial observability. Advances in Neural Information Processing Systems, 34:25502–25515, 2021.
351
+
352
+ Ivo Grondman, Lucian Busoniu, Gabriel AD Lopes, and Robert Babuska. A survey of actor-critic reinforcement learning: Standard and natural policy gradients. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6):1291–1307, 2012.
353
+
354
+ Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning:
355
+ Solving long-horizon tasks via imitation and reinforcement learning. *arXiv preprint arXiv:1910.11956*,
356
+ 2019.
357
+
358
+ Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pp. 1861–1870. PMLR, 2018a.
359
+
360
+ Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. *arXiv* preprint arXiv:1812.05905, 2018b.
361
+
362
+ Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. *arXiv preprint arXiv:1912.01603*, 2019.
363
+
364
+ Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. *arXiv preprint arXiv:2010.02193*, 2020.
365
+
366
+ Hado Hasselt. Double q-learning. *Advances in neural information processing systems*, 23, 2010.
367
+
368
+ Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning from demonstrations. In *Proceedings of the* AAAI Conference on Artificial Intelligence, volume 32, 2018.
369
+
370
+ Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. *Machine Learning*, 110(3):457–506, 2021.
371
+
372
+ David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
373
+
374
+ Moksh Jain, Salem Lahlou, Hadi Nekoei, Victor Butoi, Paul Bertin, Jarrid Rector-Brooks, Maksym Korablyov, and Yoshua Bengio. Deup: Direct epistemic uncertainty prediction. *arXiv preprint* arXiv:2102.08501, 2021.
375
+
376
+ Belkacem Kada and Y Ghazzawi. Robust pid controller design for an uav flight control system. In Proceedings of the World congress on Engineering and Computer Science, volume 2, pp. 1–6, 2011.
377
+
378
+ Rahul Kidambi, Jonathan Chang, and Wen Sun. Mobile: Model-based imitation learning from observation alone. *Advances in Neural Information Processing Systems*, 34:28598–28611, 2021.
379
+
380
+ Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, and KeeEung Kim. Demodice: Offline imitation learning with supplementary imperfect demonstrations. In *International Conference on Learning Representations*, 2022.
381
+
382
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015.

Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. In *International Conference on Learning Representations*, 2023.
383
+
384
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 33:1179–1191, 2020.

Malte Kuss and Carl Rasmussen. Gaussian processes in reinforcement learning. *Advances in Neural Information Processing Systems*, 16, 2003.
385
+
386
+ Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017.
387
+
388
+ Jonathan Wilder Lavington, Sharan Vaswani, and Mark Schmidt. Improved policy optimization for online imitation learning. In *Conference on Lifelong Learning Agents*, pp. 1146–1173. PMLR, 2022.
389
+
390
+ Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018.
391
+
392
Björn Lütjens, Michael Everett, and Jonathan P How. Safe reinforcement learning with model uncertainty estimates. In *2019 International Conference on Robotics and Automation (ICRA)*, pp. 8662–8668. IEEE, 2019.

Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. *Advances in Neural Information Processing Systems*, 31, 2018.

Hermann G Matthies. Quantifying uncertainty: Modern computational representation of probability and applications. In *Extreme Man-Made and Natural Hazards in Dynamics of Structures*, pp. 105–135. Springer, 2007.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015.

Thomas Moerland, Joost Broekens, and Catholijn Jonker. Efficient exploration with double uncertain value networks. In *NIPS 2017: Thirty-First Conference on Neural Information Processing Systems*, pp. 1–17, 2017.

Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. In *2018 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 6292–6299. IEEE, 2018.

Ashvin Nair, Murtaza Dalal, Abhishek Gupta, and Sergey Levine. Accelerating online reinforcement learning with offline datasets. *arXiv preprint arXiv:2006.09359*, 2020.

Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, and Andreas Krause. Information-directed exploration for deep reinforcement learning. In *International Conference on Learning Representations*, 2018.

Ian Osband. Risk versus uncertainty in deep learning: Bayes, bootstrap and the dangers of dropout. In *NIPS Workshop on Bayesian Deep Learning*, volume 192, 2016.

Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. *Advances in Neural Information Processing Systems*, 31, 2018.

Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In *International Conference on Machine Learning*, pp. 2778–2787. PMLR, 2017.

Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. *arXiv preprint arXiv:1910.00177*, 2019.

Antonin Raffin, Ashley Hill, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, and Noah Dormann. Stable Baselines3, 2019.

Michael T Rosenstein, Andrew G Barto, Jennie Si, Andy Barto, Warren Powell, and Donald Wunsch. Supervised actor-critic reinforcement learning. *Learning and Approximate Dynamic Programming: Scaling Up to the Real World*, pp. 359–380, 2004.

Tim GJ Rudner, Cong Lu, Michael Osborne, Yarin Gal, and Yee Teh. On pathologies in KL-regularized reinforcement learning from expert demonstrations. *Advances in Neural Information Processing Systems*, 34, 2021.

Jürgen Schmidhuber. Curious model-building control systems. In *Proc. International Joint Conference on Neural Networks*, pp. 1458–1463, 1991.

Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. *Nature*, 588(7839):604–609, 2020.

Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In *International Conference on Learning Representations*, 2019.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016.

Aaron Sonabend, Junwei Lu, Leo Anthony Celi, Tianxi Cai, and Peter Szolovits. Expert-supervised reinforcement learning for offline policy learning and evaluation. *Advances in Neural Information Processing Systems*, 33:18967–18977, 2020.

Richard S Sutton and Andrew G Barto. *Reinforcement Learning: An Introduction*. MIT Press, 2018.

Jun Jet Tai and Jim Wong. PyFlyt - UAV flight simulator gymnasium environments for reinforcement learning research, 2023. URL http://github.com/jjshoots/PyFlyt.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In *2012 IEEE/RSJ International Conference on Intelligent Robots and Systems*, pp. 5026–5033. IEEE, 2012.

Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, et al. Jump-start reinforcement learning. *arXiv preprint arXiv:2204.02372*, 2022.

Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. *arXiv preprint arXiv:1911.11361*, 2019.

Junyu Xuan, Jie Lu, Zheng Yan, and Guangquan Zhang. Bayesian deep reinforcement learning via deep kernel learning. *International Journal of Computational Intelligence Systems*, 2018.

Xiaoqin Zhang and Huimin Ma. Pretraining deep actor-critic reinforcement learning algorithms with expert demonstrations. *arXiv preprint arXiv:1801.10459*, 2018.
## A Explicit Epistemic Uncertainty

In this section, we derive epistemic uncertainty in terms of Q-value networks for any $\{\mathbf{s}_t, \mathbf{a}_t\}$ pair. This derivation follows the formalism from Jain et al. (2021) briefly covered in Section 3.4. Unless otherwise specified, all expectations in this section are taken over $\mathbf{s}_{t+1}, r_t \sim \rho_{\text{env}}(\cdot|\mathbf{s}_t, \mathbf{a}_t)$, $\mathbf{a}_{t+1} \sim \pi_\theta(\cdot|\mathbf{s}_{t+1})$.

## A.1 Single Step Epistemic Uncertainty

For a Q-value estimator, following the definition in equation 3, the *single step total uncertainty* can be written as the expected total loss for $Q^\pi_\phi$:

$$\mathcal{U}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbb{E}\big[l(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1}))\big]\tag{18}$$

The expectation here is taken over $\{\mathbf{s}_{t+1}, r_t\} \sim \mathcal{D}|\{\mathbf{s}_t, \mathbf{a}_t\}$, $\mathbf{a}_{t+1} \sim \pi(\cdot|\mathbf{s}_{t+1})$. Likewise, we can define a *single step aleatoric uncertainty* for an estimated Q-value by extending equation 5 and equation 6. Recall that the aleatoric uncertainty is defined over the target distribution (in this case $\mathbb{E}[r_t + \gamma Q^\pi_\phi(\mathbf{s}_{t+1}, \mathbf{a}_{t+1})]$), and assuming that the target distribution is Gaussian, the aleatoric uncertainty simply becomes the variance in the data. Using this assumption, we can write the aleatoric uncertainty as the variance of the target Q-value estimate:

$$\mathcal{A}(Q_{\phi}^{\pi}|\{\mathbf{s}_{t},\mathbf{a}_{t}\})=\sigma_{|\mathbf{s}_{t},\mathbf{a}_{t}}^{2}\left(r_{t}+\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\right)\tag{19}$$

Finally, we denote the epistemic uncertainty for Q-value estimates across one time step for a given state and action as $\delta_t(\mathbf{s}_t, \mathbf{a}_t)$. Following equation 7 and using equation 18, this is defined as:

$$\begin{aligned}\delta_{t}(\mathbf{s}_{t},\mathbf{a}_{t})&=\mathcal{U}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})-\mathcal{A}(Q_{\phi}^{\pi}|\{\mathbf{s}_{t},\mathbf{a}_{t}\})\\&=\mathbb{E}\big[l(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1}))\big]-\sigma_{|\mathbf{s}_{t},\mathbf{a}_{t}}^{2}\left(r_{t}+\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\right)\end{aligned}\tag{20}$$

Taking the definition of variance as $\sigma^2(x) = \mathbb{E}[l(x)] - l(\mathbb{E}[x])$, the first term in equation 20 can be expanded into:

$$\begin{aligned}\mathbb{E}\big[l(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1}))\big]=&\;\sigma_{|\mathbf{s}_{t},\mathbf{a}_{t}}^{2}\left(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\right)\\&+l\big(\mathbb{E}[Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})]\big)\end{aligned}\tag{21}$$

Since $\sigma^2(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t}))=0$ given $\{\mathbf{s}_t, \mathbf{a}_t\}$, $\sigma^2(x+C)=\sigma^2(x)$ for constant $C$, and $\sigma^2(-x)=\sigma^2(x)$, equation 21 evaluates to:

$$\begin{aligned}\mathbb{E}\big[l(Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1}))\big]=&\;\sigma_{|\mathbf{s}_{t},\mathbf{a}_{t}}^{2}\left(-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\right)+l\big(\mathbb{E}[Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})]\big)\\=&\;\sigma_{|\mathbf{s}_{t},\mathbf{a}_{t}}^{2}\left(r_{t}+\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\right)+l\big(\mathbb{E}[Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})]\big)\end{aligned}\tag{22}$$

Putting equation 22 into equation 20, we obtain:

$$\delta_{t}(\mathbf{s}_{t},\mathbf{a}_{t})=l\big(\mathbb{E}[Q_{\phi}^{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})-r_{t}-\gamma Q_{\phi}^{\pi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})]\big)\tag{23}$$

Simply put, the single step epistemic uncertainty for a Q-value estimator is the expected Bellman residual error projected through the loss function.
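In practice (see also the note at the end of Appendix A.3), the expectation inside equation 23 is approximated with a single sampled transition, and $l$ is the squared error. Below is a minimal PyTorch-style sketch of that single-sample estimate; the Q-network interface and batch layout are illustrative assumptions, not the exact implementation used in the paper.

```python
import torch

def single_step_epistemic_uncertainty(q_net, batch, gamma=0.99):
    """Single-sample estimate of delta_t in equation 23, with l(x) = x^2.

    Assumes `q_net(s)` returns Q-values for all discrete actions and that
    actions in `batch` are long tensors of shape (B, 1). With only one
    sampled transition per (s, a), the expectation in equation 23 collapses
    to the squared TD error, which (as noted in Appendix A.3) overestimates
    the true epistemic uncertainty.
    """
    s, a, r, s_next, a_next, done = batch
    with torch.no_grad():
        q_next = q_net(s_next).gather(1, a_next).squeeze(1)
        target = r + gamma * (1.0 - done) * q_next
        residual = q_net(s).gather(1, a).squeeze(1) - target
    return residual.pow(2)
```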
## A.2 N-Step Epistemic Uncertainty

The single step epistemic uncertainty is only a measure of epistemic uncertainty of the Q-value predicted against its target value. For RL methods which rely on bootstrapping, the target value consists of a sampled reward and an estimated Q-value of the next time step. To obtain a more reliable estimate for epistemic uncertainty, it is important to account for the epistemic uncertainty in the target value itself. This train of thought leads very naturally to estimating epistemic uncertainty using the discounted sum of single step epistemic uncertainties, which we refer to as $\mathcal{E}_\phi$:

$$\mathcal{E}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbb{E}_{\pi,\mathcal{D}}\left[\sum_{i=t}^{T}\gamma^{i-t}|\delta_{i}(\mathbf{s}_{i},\mathbf{a}_{i})|\right]\tag{24}$$
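For intuition, equation 24 can be estimated along a single sampled trajectory as a discounted return over the single step uncertainties. The short sketch below (plain Python, names hypothetical) computes that Monte Carlo estimate for every step of a completed episode.

```python
def discounted_uncertainty_return(deltas, gamma=0.99):
    """Monte Carlo estimate of equation 24 for one completed trajectory.

    `deltas` holds the single step epistemic uncertainty estimates delta_t
    collected along the trajectory, ordered by time.
    """
    estimates, running = [], 0.0
    for delta in reversed(deltas):
        running = abs(delta) + gamma * running
        estimates.append(running)
    estimates.reverse()
    return estimates  # estimates[t] approximates E_phi(s_t, a_t)
```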
## A.3 Learning The N-Step Epistemic Uncertainty

In our experiments, we found that having neural networks learn equation 24 leads to very unstable training due to value blowup. Instead, we propose learning its square root, which results in much more stable training:

$$\mathcal{E}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})=\left(\mathbb{E}_{\pi,\mathcal{D}}\left[\sum_{i=t}^{T}\gamma^{i-t}|\delta_{i}(\mathbf{s}_{i},\mathbf{a}_{i})|\right]\right)^{\frac{1}{2}}\tag{25}$$

This quantity can be learnt via bootstrapping, in the same fashion that Q-value estimators are learnt, using the following recursive form:

$$\mathcal{E}_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})=\left(|\delta_{t}(\mathbf{s}_{t},\mathbf{a}_{t})|+\gamma\left(\mathcal{E}_{\phi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\right)^{2}\right)^{\frac{1}{2}}\tag{26}$$

Equation 26 provides a metric for epistemic uncertainty that can be learned via bootstrapping alongside Q-value networks.

One detail is that equation 23 requires taking the expectation of the Bellman residual error projected through the squared error loss. In practice, except for the simplest of environments, this is not possible as it requires access to the reward and next state distribution for a given state and action pair. In our experiments, we take the expectation over single state transition samples, and note that this results in inflated estimates of epistemic uncertainty. In effect, the resulting learnt quantity is much closer to equation 18.
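A minimal sketch of how equation 26 might be turned into a bootstrapped regression target for a separate uncertainty network is given below; the network `e_net`, the batch layout, and the mean-squared-error regression loss are illustrative assumptions rather than the exact implementation used in the paper.

```python
import torch
import torch.nn.functional as F

def uncertainty_loss(e_net, q_net, batch, gamma=0.99):
    """Regression loss for a bootstrapped estimate of equation 26.

    `e_net(s)` and `q_net(s)` are assumed to output per-action values for a
    discrete action space; actions are long tensors of shape (B, 1).
    """
    s, a, r, s_next, a_next, done = batch
    with torch.no_grad():
        # Single-sample estimate of |delta_t| (squared TD error, see A.1/A.3).
        q_next = q_net(s_next).gather(1, a_next).squeeze(1)
        td_error = q_net(s).gather(1, a).squeeze(1) - (r + gamma * (1.0 - done) * q_next)
        delta = td_error.pow(2)
        # Bootstrapped target following equation 26.
        e_next = e_net(s_next).gather(1, a_next).squeeze(1)
        target = (delta + gamma * (1.0 - done) * e_next.pow(2)).sqrt()
    prediction = e_net(s).gather(1, a).squeeze(1)
    return F.mse_loss(prediction, target)
```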
## A.4 Explicit Epistemic Uncertainty Estimates On Gymnasium Environments

We implement DQN with explicit epistemic uncertainty estimation on four Gym environments of increasing difficulty and complexity (Brockman et al., 2016): CartPole, Acrobot, MountainCar, and LunarLander. The goal is to study how this measurement behaves throughout the learning process. Note that we do not use this measurement of epistemic uncertainty to motivate exploration or mitigate risk as in other works; the goal is simply to study its behaviour during a standard training run of DQN. We aggregate results over 150 runs using the varying hyperparameters shown in Table 1.

On CartPole in Fig. 4, the epistemic uncertainty starts small, increases and then decreases, while evaluation performance exhibits a mostly upward trend. This is perhaps expected: since state diversity (and therefore reward diversity) starts small and increases during training, model uncertainty follows the same initial trend. Eventually, model uncertainty falls as Q-value predictions become more accurate through sufficient exploration and Q-value network updates, leading to better performance and lower model uncertainty. This trend is similarly observed in Acrobot and MountainCar, albeit to a lesser extent. Conversely, it is not observed for the more challenging LunarLander, where the F-value increases monotonically and plateaus in aggregate.

Analyzing results from individual runs reveals that in environments with rewards that vary fairly smoothly, such as CartPole and LunarLander (Figure 5(a)), the F-value can either signal performance collapse, as in the CartPole example, or indicate state exploration activity, as in LunarLander. In environments with sparse rewards, such as Acrobot and MountainCar (Figure 5(b)), the F-value is indicative of agent learning progress: we observe spikes in uncertainty when the model discovers crucial checkpoints in the environment. For Acrobot, this occurs when the upright position is reached and the reward penalty stops. In MountainCar, this occurs when the agent first reaches the top of the mountain, which completes the environment. More examples of similar plots are shown in Appendix A.5.
![16_image_0.png](16_image_0.png)

Figure 4: Aggregate episodic mean epistemic uncertainty and evaluation scores across four environments using DQN.

![16_image_1.png](16_image_1.png)

Figure 5: Examples of episodic mean epistemic uncertainty behaviour on non-sparse reward environments using DQN.
| Table 1: DQN Hyperparameters for CartPole, Acrobot, MountainCar and LunarLander | |
|---|---|
| Parameter | Value |
| **Constants** | |
| optimizer | AdamW (Loshchilov & Hutter, 2018; Kingma & Ba, 2015) |
| number of hidden layers (all networks) | 2 |
| number of neurons per layer (all networks) | 64 |
| non-linearity | ReLU |
| number of evaluation episodes | 50 |
| evaluation frequency | every 10 × 10³ steps |
| total environment steps: CartPole | 250 × 10³ |
| total environment steps: Acrobot | 250 × 10³ |
| total environment steps: MountainCar | 1 × 10⁶ |
| total environment steps: LunarLander | 1 × 10⁶ |
| replay buffer size: CartPole | 100 × 10³ |
| replay buffer size: Acrobot | 100 × 10³ |
| replay buffer size: MountainCar | 200 × 10³ |
| replay buffer size: LunarLander | 200 × 10³ |
| **Ranges** | |
| minibatch size | {128, 256, 512} |
| max gradient norm | [0.25, 1.00] |
| learning rate | [0.0001, 0.001] |
| exploration ratio | [0.05, 0.15] |
| discount factor | [0.980, 0.999] |
| gradient steps before target network update | [500, 2000] |

## A.5 Examples Of Epistemic Uncertainty Behaviour On Individual Runs Of Gym Environments
![18_image_0.png](18_image_0.png)

Figure 6: Episodic mean epistemic uncertainty and evaluation performance curves on various runs of CartPole.

![19_image_0.png](19_image_0.png)

Figure 7: Episodic mean epistemic uncertainty and evaluation performance curves on various runs of Acrobot.

![20_image_0.png](20_image_0.png)

Figure 8: Episodic mean epistemic uncertainty and evaluation performance curves on various runs of MountainCar.

![21_image_0.png](21_image_0.png)

Figure 9: Episodic mean epistemic uncertainty and evaluation performance curves on various runs of LunarLander.
## B Choosing Confidence Scale

CCGE introduces one hyperparameter on top of the base RL algorithm: the confidence scale λ. We perform a series of experiments to study the effect that this hyperparameter has on the learning behaviour of the algorithm. To do so, we perform 100 different runs with λ ∈ [0, 5], and compute mean values over 200k timestep intervals. We plot the guidance ratio (the proportion of transitions where the learning policy requests guidance from the oracle policy) as well as the average evaluation performance for the PyFlyt/QuadX-Waypoints-v0 and Walker2d-v4 environments. The results are shown in Fig. 10.

![22_image_0.png](22_image_0.png)

Figure 10: Mean guidance ratio and mean evaluation performance of CCGE on one sparse and one dense reward environment. The results are averaged over 200k timestep intervals; the lines correspond to 100 different runs using a range of values for λ.
The environments for these experiments, while not exhaustive, were chosen to study the effect of λ on CCGE in both a dense and a sparse reward setting. As expected, low values of λ result in higher guidance ratios, and this is especially true early in training (before 400k timesteps). At later stages of training, the effect of λ is almost nullified, likely because the learning policy's performance surpasses that of the oracle policy in all states. In sparse reward environments, a high guidance ratio leads to better evaluation and learning performance, while in dense reward settings this effect does not appear to be present.

As an overarching recommendation, we suggest a small λ (λ ≳ 0) for sparse reward settings; higher values seem to have no effect on performance in well-formed dense reward environments.
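The guidance ratio reported above is simply the fraction of recent transitions in which the learner deferred to the oracle. A small bookkeeping sketch of how such a metric could be tracked during training is shown below; the class and window size are hypothetical conveniences for illustration, not part of CCGE's implementation.

```python
from collections import deque

class GuidanceRatioTracker:
    """Tracks the fraction of recent transitions where the learning policy
    requested guidance from the oracle policy, as plotted in Fig. 10."""

    def __init__(self, window: int = 200_000):
        self.events = deque(maxlen=window)  # one flag per environment transition

    def update(self, requested_guidance: bool) -> None:
        self.events.append(1.0 if requested_guidance else 0.0)

    @property
    def ratio(self) -> float:
        return sum(self.events) / max(len(self.events), 1)
```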
## C Exploring Performance Degradation In Ant-v4

The performance of CCGE starts degrading after about 300k timesteps. We suspect that this is not an issue of instability in CCGE, but rather a result of catastrophic forgetting due to the limited capacity of the FIFO replay buffer. Several additional experiments were performed to test this hypothesis. Here, we train both SAC and CCGE in Ant-v4 for 2 million timesteps, with replay buffer sizes of 3 × 10⁵ and 5 × 10⁵.

In addition, we also implement a global distribution matching replay buffer (Isele & Cosgun, 2018) for both replay buffer sizes and both learning algorithms. The IQM results for each configuration, displayed in Fig. 11, were aggregated using 20 random initial seeds.
![23_image_0.png](23_image_0.png)

Figure 11: Learning curves of CCGE and SAC on the Ant-v4 environment, using different replay buffer sizes and different replay buffer forgetting techniques, trained for 2 million timesteps. Runs with the _Ext extension denote experiments done with a replay buffer size of 5 × 10⁵, and runs with the _RR extension denote experiments where global distribution matching was used as a forgetting technique.
From the results, while the performance of CCGE in the default configuration does degrade to the peak level of SAC, the performance of SAC also degrades by a significant amount when training is continued for an extended period of time. Utilizing a larger replay buffer size does aid in reducing this performance degradation in CCGE, but seems to hurt performance in SAC. When using global distribution matching, a technique for circumventing catastrophic forgetting, the performance degradation of both CCGE and SAC is much less severe, in line with the results obtained by Isele & Cosgun (2018). These results pose an interesting question for future research: whether CCGE can be used to reduce the required replay buffer size through an oracle policy. They also suggest that the performance degradation is not necessarily instability, nor an artifact of CCGE's implementation alone, as it is also present in SAC.
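Global distribution matching keeps the buffer contents representative of everything the agent has experienced rather than only the most recent transitions; one common way to realise this is reservoir sampling. The sketch below is a minimal illustration under that assumption, not the exact buffer used in these experiments.

```python
import random

class ReservoirReplayBuffer:
    """Replay buffer whose contents approximate a uniform sample over all
    transitions seen so far, unlike a FIFO buffer that only keeps the most
    recent ones."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.storage = []
        self.seen = 0  # total number of transitions ever added

    def add(self, transition) -> None:
        self.seen += 1
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            # Each incoming transition replaces a stored one with
            # probability capacity / seen, keeping the sample uniform.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.storage[idx] = transition

    def sample(self, batch_size: int):
        return random.sample(self.storage, batch_size)
```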
## D SAC And CCGE Hyperparameters For MuJoCo Tasks

| Table 2: SAC and CCGE Hyperparameters for Hopper-v4, Walker2d-v4, HalfCheetah-v4 and Ant-v4 | |
|---|---|
| Parameter | Value |
| **Constants** | |
| optimizer | AdamW (Loshchilov & Hutter, 2018; Kingma & Ba, 2015) |
| learning rate | 4e-4 |
| batch size | 256 |
| number of hidden layers (all networks) | 2 |
| number of neurons per layer (all networks) | 256 |
| non-linearity | ReLU |
| number of evaluation episodes | 100 |
| evaluation frequency | every 10 × 10³ environment steps |
| total environment steps | 1 × 10⁶ |
| replay buffer size | 300 × 10³ |
| target entropy | −dim(A) |
| discount factor (γ) | 0.99 |
| **For CCGE only** | |
| confidence scale (λ) | 1.0 |
## E AWAC, JSRL, And CCGE Hyperparameters For AdroitHand Tasks
| AdroitHandHammer-v1. | |
|---|---|
| Parameter | Value |
| **Constants** | |
| optimizer | AdamW (Loshchilov & Hutter, 2018; Kingma & Ba, 2015) |
| learning rate | 4e-4 |
| batch size | 512 |
| number of hidden layers (all networks) | 2 |
| number of neurons per layer (all networks) | 128 |
| non-linearity | ReLU |
| number of evaluation episodes | 100 |
| evaluation frequency | every 10 × 10³ environment steps |
| total environment steps | 1 × 10⁶ |
| replay buffer size | 300 × 10³ |
| target entropy | −dim(A) |
| discount factor (γ) | 0.92 |
| **For CCGE only** | |
| confidence scale (λ_CCGE) | 1.0 |
| **For AWAC only** | |
| number of demonstration transitions | 100 × 10³ |
| pretrain epochs | 10 |
| Lagrangian multiplier (λ_AWAC) | 0.3 |
We utilize the same SAC backbone for all three algorithms. For JSRL, we utilize JSRL Random as described in the original paper (Uchendu et al., 2022), reportedly a more performant version of JSRL, which allows the oracle policy to act for a random number of timesteps in each episode before the learning policy takes over.
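For reference, the JSRL Random behaviour described above amounts to a rollout loop of roughly the following shape; this is a hedged sketch using the Gymnasium API, with the policy callables and step limit as placeholders rather than the authors' implementation.

```python
import random

def jsrl_random_episode(env, oracle_policy, learner_policy, max_steps=1000):
    """One JSRL-Random style episode: the oracle acts for a randomly chosen
    prefix of the episode, then hands control over to the learning policy."""
    handover = random.randint(0, max_steps)
    obs, _ = env.reset()
    for t in range(max_steps):
        policy = oracle_policy if t < handover else learner_policy
        action = policy(obs)
        obs, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            break
```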
## F AWAC, JSRL, And CCGE Hyperparameters For PyFlyt Tasks

Part of the observation space in the PyFlyt Waypoints environments uses the Sequence space from Gymnasium. As a result, using a vanilla neural network to represent the actor and critic is insufficient due to the non-constant observation shapes. We utilize the network architectures illustrated in Figure 12, which take inspiration from graph neural networks to process the waypoint observations and the agent state separately. All algorithms utilize the same architecture. For JSRL, we use JSRL Random, the more performant version of JSRL as described in the original paper (Uchendu et al., 2022).
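As a concrete illustration of this idea, the sketch below embeds each waypoint with a shared MLP and pools the embeddings into a fixed-size vector before concatenating it with the agent state. The layer sizes, pooling choice, and module names are assumptions for illustration and do not reproduce Figure 12 exactly.

```python
import torch
import torch.nn as nn

class WaypointActor(nn.Module):
    """Actor that copes with a variable number of waypoints by encoding each
    waypoint with a shared MLP and aggregating the embeddings with a
    permutation-invariant pooling step."""

    def __init__(self, state_dim: int, waypoint_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.waypoint_encoder = nn.Sequential(
            nn.Linear(waypoint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, waypoints: torch.Tensor) -> torch.Tensor:
        # state: (B, state_dim); waypoints: (B, N, waypoint_dim), where N may
        # differ between batches.
        embeddings = self.waypoint_encoder(waypoints)   # (B, N, hidden)
        pooled = embeddings.max(dim=1).values           # (B, hidden)
        return self.head(torch.cat([state, pooled], dim=-1))
```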
![25_image_0.png](25_image_0.png)

Figure 12: Block diagram of the architectures of the actor and critic used for PyFlyt experiments.
| PyFlyt/QuadX-Waypoints-v0. | |
|---|---|
| Parameter | Value |
| **Constants** | |
| optimizer | AdamW (Loshchilov & Hutter, 2018; Kingma & Ba, 2015) |
| learning rate | 4e-4 |
| batch size | 1024 |
| number of neurons per layer (all networks) | 128 |
| non-linearity | ReLU |
| number of evaluation episodes | 100 |
| evaluation frequency | every 10 × 10³ environment steps |
| total environment steps | 1 × 10⁶ |
| replay buffer size | 300 × 10³ |
| target entropy | −dim(A) |
| discount factor (γ) | 0.99 |
| **For CCGE only** | |
| confidence scale (λ_CCGE) | 0.1 |
| **For AWAC only** | |
| number of demonstration transitions | 100 × 10³ |
| pretrain epochs | 10 |
| Lagrangian multiplier (λ_AWAC) | 0.3 |
XRpt5JYF8m/XRpt5JYF8m_meta.json ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 27,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 27,
14
+ "code": 0,
15
+ "table": 4,
16
+ "equations": {
17
+ "successful_ocr": 47,
18
+ "unsuccessful_ocr": 18,
19
+ "equations": 65
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }