RedTachyon committed • Commit 04fd90c
Parent(s): 3e095d8

Upload folder using huggingface_hub

Files changed:
- U4MnDySGhy/10_image_0.png +3 -0
- U4MnDySGhy/11_image_0.png +3 -0
- U4MnDySGhy/1_image_0.png +3 -0
- U4MnDySGhy/6_image_0.png +3 -0
- U4MnDySGhy/6_image_1.png +3 -0
- U4MnDySGhy/9_image_0.png +3 -0
- U4MnDySGhy/9_image_1.png +3 -0
- U4MnDySGhy/U4MnDySGhy.md +397 -0
- U4MnDySGhy/U4MnDySGhy_meta.json +25 -0
U4MnDySGhy/10_image_0.png ADDED (Git LFS)
U4MnDySGhy/11_image_0.png ADDED (Git LFS)
U4MnDySGhy/1_image_0.png ADDED (Git LFS)
U4MnDySGhy/6_image_0.png ADDED (Git LFS)
U4MnDySGhy/6_image_1.png ADDED (Git LFS)
U4MnDySGhy/9_image_0.png ADDED (Git LFS)
U4MnDySGhy/9_image_1.png ADDED (Git LFS)
U4MnDySGhy/U4MnDySGhy.md ADDED
@@ -0,0 +1,397 @@
# Measuring And Mitigating Interference In Reinforcement Learning

Anonymous authors
Paper under double-blind review

## Abstract

Catastrophic interference is common in many network-based learning systems, and many proposals exist for mitigating it. Before overcoming interference we must understand it better. In this work, we provide a definition and novel measure of interference for value-based reinforcement learning methods such as Fitted Q-Iteration and DQN. We systematically evaluate our measure of interference, showing that it correlates with instability in control performance across a variety of network architectures. Our new interference measure allows us to ask novel scientific questions about commonly used deep learning architectures and to study learning algorithms which mitigate interference. Lastly, we outline a class of algorithms which we call online-aware that are designed to mitigate interference, and show that they do reduce interference according to our measure and that they improve stability and performance in several classic control environments.
## 1 Introduction

A successful reinforcement learning (RL) agent must generalize - learn from one part of the state space to behave well in another. Generalization not only makes learning more efficient but is also essential for RL problems with large state spaces. For such problems, an agent does not have the capacity to individually represent every state and must rely on a function approximator - such as a neural network - to generalize its knowledge across many states. While generalization can improve learning by allowing an agent to make accurate predictions in new states, learning predictions of new states can also lead to inaccurate predictions in unseen or even previously seen states. If the agent attempts to generalize across two states that require vastly different behavior, learning in one state can interfere with the knowledge of the other. This phenomenon is commonly called *interference*¹ or forgetting in RL (Bengio et al., 2020; Goodrich, 2015; Liu et al., 2019; Kirkpatrick et al., 2017; Riemer et al., 2018).

The conventional wisdom is that interference is particularly problematic in RL, even single-task RL, because (a) when an agent explores, it processes a sequence of observations, which are likely to be temporally correlated; (b) the agent continually changes its policy, changing the distribution of samples over time; and (c) most algorithms use bootstrap targets (as in temporal difference learning), making the update targets non-stationary. All of these issues are related to having data and targets that are not iid. When learning from a stream of temporally correlated data, as in RL, the learner might fit the learned function to recent data and potentially overwrite previous learning—for example, the estimated values.

To better contextualize the impacts of interference on single-task RL, consider the tiny two-room gridworld problem shown in Figure 1. In the first room, the optimal policy navigates to the bottom-right as fast as possible, starting from the top-left. In the second room, the optimal policy is the opposite: navigating to the top-left as fast as possible, starting from the bottom-right. The agent is given its position in the room and the room ID number, thus the problem is fully observable. However, the agent has no control over which room it operates in. We can see catastrophic interference if we train a DQN agent in room one for a while and then move the agent to room two. The agent simply overrides its knowledge of the values for room one the longer it trains in room two. Indeed, we see DQN's performance in room one completely collapse. In this case the interference is caused by DQN's neural network. We contrast this to a simple tile-coding representation (fixed basis) with a linear Q-learning agent. The tile coding represents these two rooms with completely separate features; as a result, there is no interference and performance in room one remains high even when learning in room two.

![1_image_0.png](1_image_0.png)

Figure 1: The Two-Room environment: a simple diagnostic MDP to highlight the impacts of interference. The inset of the diagram above shows how the two rooms of the environment are not connected. The optimal policy and value function are opposite in each room, as indicated by the color gradient. Each room is a grid of size 20 × 20. One third of the way through training the agent is teleported from room 1 to room 2. A DQN agent experiences significant interference upon switching to the second room, negatively impacting its performance in the first room (as shown by offline return), even though the observation enables perfect disambiguation of the rooms.

¹The term *interference* comes from early work in neural networks (McCloskey & Cohen, 1989; French, 1993; 1999). McCloskey & Cohen (1989) showed that neural networks, when trained on successive supervised learning tasks using gradient descent, overwrote the knowledge of earlier tasks with that of newer tasks.

It is difficult to verify this conventional wisdom in more complex settings, as there is no established online measure of interference for RL. There has been significant progress quantifying interference in supervised learning (Chaudhry et al., 2018; Fort et al., 2019; Kemker et al., 2018; Riemer et al., 2018), with some empirical work even correlating interference and properties of task sequences (Nguyen et al., 2019), and investigations into (un)forgettable examples in classification (Toneva et al., 2019). In RL, recent efforts have focused on generalization and transfer, rather than characterizing or measuring interference. Learning on new environments often results in drops in performance on previously learned environments (Farebrother et al., 2018; Packer et al., 2018; Rajeswaran et al., 2017; Cobbe et al., 2018). DQN-based agents can hit performance plateaus in Atari, presumably due to interference. In fact, if the learning process is segmented in the right way, the interference can be more precisely characterized with TD errors across different game contexts (Fedus et al., 2020). Unfortunately this analysis cannot be done online as learning progresses.

Finally, recent work investigated several different possible measures of interference, but did not land on a clear measure (Bengio et al., 2020).

Interference classically refers to an update negatively impacting the agent's previous learning—eroding the agent's knowledge stored in the value function. Therefore it makes sense to first characterize interference in the value function updates, instead of the policy or return. In most systems the value estimates and actions change on every time step, conflating many different sources of non-stationarity, stochasticity, and error. If an update to the value function interferes, the result of that update might not manifest in the policy's performance for several time steps, if at all. We therefore focus on measuring interference for approximate policy iteration algorithms: those that fix the policy for some number of steps (an iteration) and only update the value estimates.

We specifically conduct experiments on a class of algorithms we call Deep Q-iteration. One instance—with target networks—is almost the same as DQN but additionally keeps the behavior policy fixed within an iteration. The goal is to remove as many confounding factors as possible to make progress on understanding interference in RL. This class of algorithms allows us to investigate an algorithm similar to DQN; investigate the utility of target networks; and define a sensible interference measure by keeping more factors of variation constant within an iteration.

The contributions in this work are as follows. (1) We define interference at different granularities to capture interference within and across iterations for this class of value-based algorithms. (2) We justify using differences in squared TD errors across states, before and after an update, as an effective and computationally efficient approximation of this interference definition. (3) We empirically verify the utility of our interference metric by showing that it correlates with instability in control performance across architectures and optimization choices. (4) We leverage this easy-to-compute measure to outline a class of algorithms that mitigate interference. We demonstrate that these *online-aware* algorithms can improve stability in control by minimizing the interference metric. We conclude this work by highlighting limitations and important next steps.
## 2 Problem Formulation

In reinforcement learning (RL), an agent interacts with its environment, receiving observations and selecting actions to maximize a reward signal. We assume the environment can be formalized as a Markov decision process (MDP). An MDP is a tuple $(\mathcal{S}, \mathcal{A}, \Pr, R, \gamma)$ where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of actions, $\Pr: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ is the transition probability, $R: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is the reward function, and $\gamma \in [0, 1)$ is a discount factor. The goal of the agent is to find a policy $\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$ to maximize the expected discounted sum of rewards.

Value-based methods find this policy using an approximate policy iteration (API) approach, where the agent iteratively estimates the action-values for the current policy and then greedifies. The action-value function $Q^{\pi}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ for policy $\pi$ is $Q^{\pi}(s, a) := \mathbb{E}[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \mid S_t = s, A_t = a]$, where $R_{t+1} := R(S_t, A_t, S_{t+1})$, $S_{t+1} \sim \Pr(\cdot \mid S_t, A_t)$, and $A_t \sim \pi(\cdot \mid S_t)$. The Bellman operator for action values $\mathcal{T}^{\pi}: \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|} \to \mathbb{R}^{|\mathcal{S}| \times |\mathcal{A}|}$ is defined by $(\mathcal{T}^{\pi} Q)(s, a) := \sum_{s' \in \mathcal{S}} \Pr(s' \mid s, a)\big[R(s, a, s') + \gamma \sum_{a' \in \mathcal{A}} \pi(a' \mid s') Q(s', a')\big]$. This operator can be used to obtain $Q^{\pi}$ because $Q^{\pi}$ is the unique solution of the Bellman equation $\mathcal{T}^{\pi} Q^{\pi} = Q^{\pi}$. Temporal difference (TD) learning algorithms are built on this operator, as the sampled TD error $\delta$ in expectation equals $\mathcal{T}^{\pi} Q - Q$. We can use neural networks to learn an approximation $Q_{\theta}$ to the action-values, with parameters $\theta$. Under certain conditions, the API procedure—consisting of a policy evaluation step to get $Q_{\theta}$, followed by greedifying to get a new policy, and repeating—eventually converges to a nearly optimal value function (Sutton & Barto, 2018).

We investigate a particular API algorithm that is similar to Deep Q-learning (DQN), which we call Deep Q-iteration. The only difference to DQN is that the behavior policy is held fixed during each evaluation phase. In this algorithm there is an explicit evaluation phase for a fixed target policy, where the agent has several steps $T_{\text{eval}}$ to improve its value estimates. More specifically, on iteration $k$ with current action-value estimates $Q_k$, the target policy is greedy, $\pi_k(s) = \arg\max_{a \in \mathcal{A}} Q_k(s, a)$, and the behavior is $\epsilon$-greedy. For each step in the iteration, a mini-batch update from a replay buffer is performed, using the update equation

$$\Delta\theta := \delta_t \nabla_{\theta} Q_{\theta_t}(S_t, A_t)$$

for temporal difference (TD) error $\delta_t$. This TD error can either be computed without a target network, $\delta_t := R_{t+1} + \gamma Q_{\theta_t}(S_{t+1}, \pi_k(S_{t+1})) - Q_{\theta_t}(S_t, A_t)$, or with a target network, $\delta_t := R_{t+1} + \gamma Q_k(S_{t+1}, \pi_k(S_{t+1})) - Q_{\theta_t}(S_t, A_t)$. The procedure is summarized in Algorithm 1.

We exactly recover DQN by setting the behavior policy² to be $\epsilon$-greedy in $Q_{\theta_t}$ rather than $Q_k$. We opt to analyze this slightly modified algorithm, Deep Q-iteration, to avoid confounding factors due to the policy changing at each step. The definitions of interference developed in the next section, however, directly apply to DQN as well. For the controlled Two Rooms example in the introduction, we used our measure for a DQN agent. However, when moving to more complex, less controlled scenarios, this changing data distribution may impact outcomes in unexpected ways. Therefore, to control for this factor, we focus in this work on Deep Q-iteration algorithms where the data-gathering policy also remains fixed during each iteration.

²Notice that the typical bootstrap target $\max_{a'} Q_k(S_{t+1}, a')$ in DQN is in fact equivalent to the Deep Q-iteration update with a target network, because $\max_{a'} Q_k(S_{t+1}, a') = Q_k(S_{t+1}, \pi_k(S_{t+1}))$. The scalar $T_{\text{eval}}$ is the target network refresh frequency. We can also recover Double DQN (Hasselt et al., 2016), though it deviates just a bit more from Deep Q-iteration. It similarly uses the Deep Q-iteration update with a target network, but the target policy is greedy in $Q_{\theta_t}$ rather than $Q_k$. The resulting TD error is instead $\delta_t := R_{t+1} + \gamma Q_k(S_{t+1}, \arg\max_{a'} Q_{\theta_t}(S_{t+1}, a')) - Q_{\theta_t}(S_t, A_t)$.

Algorithm 1 Deep Q-iteration (DQI)

Initialize weights $\theta_0$. Initialize an empty buffer.
for $t \leftarrow 0, 1, 2, \ldots$ do
  If $t \bmod T_{\text{eval}} = 0$ then $Q_k \leftarrow Q_{\theta_t}$, update $\pi_k$ to be greedy w.r.t. $Q_k$ and $b_k$ to be $\epsilon$-greedy
  Choose $a_t \sim b_k(s_t)$, observe $(s_{t+1}, r_{t+1})$, and add the transition $(s_t, a_t, s_{t+1}, r_{t+1})$ to the buffer
  Sample a set of transitions $B_t$ from the buffer and update the weights:

$$\theta_{t+1} \leftarrow \theta_{t} + \frac{\alpha}{|B_{t}|} \sum_{(s,a,r,s^{\prime}) \in B_{t}} \delta(\theta_{t}; s, a, r, s^{\prime}) \nabla_{\theta} Q_{\theta_{t}}(s, a),$$

  where *without* a target network: $\delta(\theta_t; s, a, r, s') = r + \gamma Q_{\theta_t}(s', \pi_k(s')) - Q_{\theta_t}(s, a)$,
  and *with* a target network: $\delta(\theta_t; s, a, r, s') = r + \gamma \max_{a'} Q_k(s', a') - Q_{\theta_t}(s, a) = r + \gamma Q_k(s', \pi_k(s')) - Q_{\theta_t}(s, a)$.
end for
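To make the mini-batch update in Algorithm 1 concrete, the following is a minimal PyTorch-style sketch of a single Deep Q-iteration update step. It is illustrative only, not the paper's implementation: the network objects, the batch layout, and the `done` handling are assumptions.

```python
import torch

def dqi_update(q_net, target_net, optimizer, batch, gamma=0.99, use_target_net=True):
    """One Deep Q-iteration mini-batch update (illustrative sketch).

    batch: tensors s [B, d], a [B] (long), r [B], s_next [B, d], done [B].
    target_net holds Q_k; the target policy pi_k is greedy w.r.t. Q_k in both variants.
    """
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q_{theta_t}(s, a)
    with torch.no_grad():
        a_next = target_net(s_next).argmax(dim=1)                 # pi_k(s') = argmax_a' Q_k(s', a')
        bootstrap = target_net if use_target_net else q_net       # Q_k vs. Q_{theta_t} bootstrap
        q_next = bootstrap(s_next).gather(1, a_next.unsqueeze(1)).squeeze(1)
        target = r + gamma * (1.0 - done) * q_next
    td_error = target - q_sa
    loss = 0.5 * td_error.pow(2).mean()                           # mean squared TD error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return td_error.detach()                                      # handy for the interference measure later
```

Setting `use_target_net=False` gives the variant without a target network; either way the behavior and target policies are only refreshed every $T_{\text{eval}}$ steps, which is the only difference from a standard DQN update.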
The central question in this work is how generalization in $Q_{\theta}$ impacts the behavior of Deep Q-iteration. Intuitively, updates to $Q_{\theta}$ in some states may *interfere* with the accuracy of the values in other states. We formalize this notion in the next section, and in the following sections empirically connect the level of interference to performance.
## 3 Defining Interference For Value Estimation Algorithms

In this section, we define the interference measure that will be used when comparing Deep Q-iteration algorithms in the coming sections. Deep Q-iteration alternates between policy evaluation and policy improvement, where one cycle of policy evaluation and improvement is called an iteration. To explain this measure, we first need to define interference during the evaluation phase of an iteration. We then discuss interference at four different levels of granularity, the coarsest of which we use for our experiments. We start at the lowest level to build intuition for the final definition of interference.

Within each iteration—in each evaluation phase—we can ask: did the agent's knowledge about its value estimates improve or degrade? The evaluation phase is more similar to a standard prediction problem, where the goal is simply to improve the estimates of the action-values towards a clear target. In the case of Deep Q-iteration with target networks, it attempts to minimize the distance to the target function $\mathbb{E}[R + \max_{a'} Q_k(S', a') \mid S = s, A = a]$. More generally, Deep Q-iteration, with or without target networks, attempts to reduce the squared expected TD error $\mathbb{E}[\delta(\theta) \mid S = s, A = a]^2$. Without target networks, the expected TD error is the Bellman error: $\mathbb{E}[\delta(\theta) \mid S = s, A = a] = \mathcal{T}^{\pi} Q_{\theta}(s, a) - Q_{\theta}(s, a)$, where $\mathcal{T}^{\pi} Q(s, a) = \mathbb{E}_{\pi}[R + \gamma Q(S', A') \mid S = s, A = a]$. A natural criterion for whether value estimates improved is to estimate if the expected TD error decreased after an update.

Arguably, the actual goal for policy evaluation within an iteration is to get closer to the true $Q^{\pi}(s, a)$. Reducing the expected TD error is a surrogate for this goal. We could instead consider interference by looking at whether an update made our estimate closer to or further from $Q^{\pi}(s, a)$. But we opt to use expected TD errors, because we are evaluating if the agent improved its estimates under its own objective—did its update interfere with its own goal rather than an objective truth. Further, we have the additional benefit that theory shows clear connections between the value error to $Q^{\pi}(s, a)$ and the Bellman error. The Bellman error provides an upper bound on the value error (Williams, 1993), and using Bellman errors is sufficient to obtain performance bounds for API (Munos, 2003; 2007; Farahmand et al., 2010).

**Accuracy Change** At the most fine-grained level, we can ask if an update, going from $\theta_t$ to $\theta_{t+1}$, resulted in interference for a specific point $(s, a)$. The change in accuracy at $(s, a)$ after an update is

$$\text{Accuracy Change}((s, a), \theta_t, \theta_{t+1}) := \mathbb{E}[\delta(\theta_{t+1}) \mid S = s, A = a]^2 - \mathbb{E}[\delta(\theta_t) \mid S = s, A = a]^2,$$

where a negative number reflects that accuracy improved. The change resulted in interference if this number is positive, and zero interference if it is negative.

**Update Interference** At a less fine-grained level, we can ask if the update generally improved our accuracy—our knowledge in our value estimates—across points:

$$\text{Update Interference}(\theta_t, \theta_{t+1}) := \max\left(\mathbb{E}_{(S,A)\sim d}\left[\text{Accuracy Change}((S, A), \theta_t, \theta_{t+1})\right],\ 0\right),$$

where $(s, a)$ are sampled according to a distribution $d$, such as from a buffer of collected experience.

Both Accuracy Change and Update Interference are about one step. At an even higher level, we can ask how much interference we have across multiple steps, both within an iteration and across multiple iterations.

**Iteration Interference** reflects if there was significant interference in updating during the evaluation phase (an iteration). We define Iteration Interference for iteration $k$ using an expectation over Update Interference in the iteration:

$$\text{Iteration Interference}(k) := \mathbb{E}[X] \quad \text{for } X = \text{Update Interference}(\theta_{T,k}, \theta_{T+1,k}),$$

where $T$ is a uniformly sampled time step in iteration $k$.

**Interference Across Iterations** reflects if an agent has many iterations with significant interference. Here, it becomes more sensible to consider upper percentiles rather than averages. Even a few iterations with significant interference could destabilize learning; an average over the steps might wash out those few significant steps. We therefore take expectations over only the top $\alpha$ percentage of values. In finance, this is typically called the expected tail loss or conditional value at risk. Previous work in RL (Chan et al., 2020) has used conditional value at risk to measure the long-term risk of RL algorithms. For iteration index $K$, which is a random variable,

$$\text{Interference Across Iterations} := \mathbb{E}[X \mid X \geq \text{Percentile}_{0.9}(X)] \quad \text{for } X = \text{Iteration Interference}(K),$$

where the iteration index $K$ is uniformly distributed and $\text{Percentile}_{0.9}(X)$ is the 0.9-percentile of the distribution of $X$. Other percentiles could be considered, where smaller percentiles average over more values and a percentile of 0.5 gives the median.
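As a small illustration (not the authors' code), the expected-tail aggregation above can be computed from a vector of per-iteration interference values; the variable names below are assumptions.

```python
import numpy as np

def expected_tail(values, percentile=0.9):
    """Mean of the values at or above the given percentile (expected tail loss / CVaR-style)."""
    values = np.asarray(values, dtype=float)
    threshold = np.quantile(values, percentile)
    return values[values >= threshold].mean()

# iteration_interference[k] holds an estimate of Iteration Interference(k).
iteration_interference = np.zeros(400)  # placeholder values
interference_across_iterations = expected_tail(iteration_interference, percentile=0.9)
```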
These definitions are quite generic, assuming only that the algorithm attempts to reduce the expected TD error (Bellman error) to estimate the action-values. Calculating Update Interference, however, requires computing an expectation over TD errors, which in many cases is intractable to calculate. To solve this issue, we need an approximation to Update Interference, which we describe in the next section.
## 4 Approximating Update Interference

The difficulty in computing the Update Interference is that it relies on computing the expected TD error. With a simulator, these expectations can in fact be estimated. For small experiments, therefore, the exact Accuracy Change could be computed. For larger and more complex environments, the cost to estimate Accuracy Change is most likely prohibitive, and approximations are needed. In this section, we motivate the use of squared TD errors as a reasonable approximation.

The key issue is that, even though we can get an unbiased sample of the TD errors, the square of these TD errors does not correspond to the squared expected TD error (Bellman error). Instead, there is a residual term that reflects the variance of the targets (Antos et al., 2008):

$$\mathbb{E}[\delta(\theta)^2 \mid S = s, A = a] = \mathbb{E}[\delta(\theta) \mid S = s, A = a]^2 + \text{Var}[R + Q_{\theta}(S', A') \mid S = s, A = a],$$

where the expectation is over $(R, S', A')$ for the given $(s, a)$, and $A'$ is sampled from the current policy we are evaluating. When we consider the difference in TD errors after an update, for $(s, a)$, we get

$$\begin{aligned}
\mathbb{E}[\delta(\theta_{t+1})^2 \mid S = s, A = a] - \mathbb{E}[\delta(\theta_t)^2 \mid S = s, A = a] &= \mathbb{E}[\delta(\theta_{t+1}) \mid S = s, A = a]^2 - \mathbb{E}[\delta(\theta_t) \mid S = s, A = a]^2 \\
&\quad + \text{Var}[R + Q_{\theta_{t+1}}(S', A') \mid S = s, A = a] - \text{Var}[R + Q_{\theta_t}(S', A') \mid S = s, A = a].
\end{aligned}$$

For a given $(s, a)$, we would not expect the variance of the target to change significantly. When subtracting the squared TD errors, therefore, we expect these residual variance terms to nearly cancel. When further averaged across $(s, a)$, it is even more likely for this term to be negligible. There are actually two cases where the squared TD error is an unbiased estimate of the squared expected TD error. First, if the environment is deterministic, then this variance is already zero and there is no approximation. Second, when we use target networks, the bootstrap target is actually $R + Q_k(S', A')$ for both. The difference in squared TD errors measures how much closer $Q_{\theta_{t+1}}(s, a)$ is to the target after the update. Namely, $\delta(\theta_{t+1}) = R + Q_k(S', A') - Q_{\theta_{t+1}}(s, a)$. Consequently,

$$\begin{aligned}
\mathbb{E}[\delta(\theta_{t+1})^{2} \mid S=s, A=a] - \mathbb{E}[\delta(\theta_{t})^{2} \mid S=s, A=a] &= \mathbb{E}[\delta(\theta_{t+1}) \mid S=s, A=a]^{2} - \mathbb{E}[\delta(\theta_{t}) \mid S=s, A=a]^{2} \\
&\quad + \text{Var}[R+Q_{k}(S^{\prime},A^{\prime}) \mid S=s, A=a] - \text{Var}[R+Q_{k}(S^{\prime},A^{\prime}) \mid S=s, A=a] \\
&= \mathbb{E}[\delta(\theta_{t+1}) \mid S=s, A=a]^{2} - \mathbb{E}[\delta(\theta_{t}) \mid S=s, A=a]^{2}.
\end{aligned}$$

It is straightforward to obtain a sample-average approximation of $\mathbb{E}[\delta(\theta_{t+1})^2 \mid S = s, A = a] - \mathbb{E}[\delta(\theta_t)^2 \mid S = s, A = a]$. We sample $B$ transitions $(s_i, a_i, r_i, s'_i)$ from our buffer, to get samples of $\delta^2(\theta_{t+1}, S, A, R, S') - \delta^2(\theta_t, S, A, R, S')$. This provides the following approximation for Update Interference:

$$\text{Update Interference}(\theta_{t}, \theta_{t+1}) \approx \max\left(\frac{1}{B}\sum_{i=1}^{B}\left[\delta^{2}(\theta_{t+1}, s_{i}, a_{i}, r_{i}, s_{i}^{\prime}) - \delta^{2}(\theta_{t}, s_{i}, a_{i}, r_{i}, s_{i}^{\prime})\right],\ 0\right).$$
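A minimal sketch of this estimator (illustrative, not the authors' implementation): given the parameters before and after an update and a batch of transitions from a held-out buffer, compute the mean change in squared TD error and clip it at zero. The helper `td_errors` mirrors the TD error with a target network from Algorithm 1; all names here are assumptions.

```python
import torch

def td_errors(q_net, target_net, s, a, r, s_next, done, gamma=0.99):
    """Per-transition TD errors delta(theta; s, a, r, s') using a target network Q_k."""
    with torch.no_grad():
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        q_next = target_net(s_next).max(dim=1).values
        return r + gamma * (1.0 - done) * q_next - q_sa

def update_interference(q_net_before, q_net_after, target_net, batch, gamma=0.99):
    """Approximate Update Interference: mean increase in squared TD error, clipped at zero."""
    s, a, r, s_next, done = batch
    delta_before = td_errors(q_net_before, target_net, s, a, r, s_next, done, gamma)
    delta_after = td_errors(q_net_after, target_net, s, a, r, s_next, done, gamma)
    change = (delta_after.pow(2) - delta_before.pow(2)).mean()
    return torch.clamp(change, min=0.0)
```

In practice this requires keeping a copy of the network from just before the update (for example via `copy.deepcopy`), and, as noted above, it only needs forward passes.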
The use of TD errors for interference is related to previous interference measures based on *gradient alignment*. To see why, notice that if we perform an update using one transition $(s_t, a_t, r_t, s'_t)$, then the interference of that update to $(s, a, r, s')$ is $\delta^2(\theta_{t+1}, s, a, r, s') - \delta^2(\theta_t, s, a, r, s')$. Using a Taylor series expansion, we get the following first-order approximation, assuming a small stepsize $\alpha$:

$$\begin{aligned}
\delta^{2}(\theta_{t+1}, s, a, r, s^{\prime}) - \delta^{2}(\theta_{t}, s, a, r, s^{\prime}) &\approx \nabla_{\theta}\delta^{2}(\theta_{t}; s, a, r, s^{\prime})^{\top}(\theta_{t+1}-\theta_{t}) \\
&= -\alpha \nabla_{\theta}\delta^{2}(\theta_{t}; s, a, r, s^{\prime})^{\top} \nabla_{\theta}\delta^{2}(\theta_{t}; s_{t}, a_{t}, r_{t}, s^{\prime}_{t}).
\end{aligned}\tag{1}$$

This approximation corresponds to negative *gradient alignment*, which has been used to learn neural networks that are more robust to interference (Lopez-Paz et al., 2017; Riemer et al., 2018). The idea is to encourage gradient alignment to be positive, since having this dot product greater than zero indicates transfer between two samples. Other work used gradient cosine similarity to measure the level of transferability between tasks (Du et al., 2018), and to measure the level of interference between objectives (Schaul et al., 2019).

A somewhat similar measure was used to measure generalization in reinforcement learning (Achiam et al., 2019), using the dot product of the gradients of Q functions, $\nabla_{\theta} Q_{\theta_t}(s_t, a_t)^{\top} \nabla_{\theta} Q_{\theta_t}(s, a)$. This measure neglects gradient direction, and so measures both positive generalization as well as interference. Gradient alignment has a few disadvantages compared to using differences in the squared TD errors. First, as described above, it is actually a first-order approximation of the difference, introducing further approximation. Second, it is actually more costly to measure, since it requires computing gradients and taking dot products. Computing Update Interference on a buffer of data only requires one forward pass over each transition. Gradient alignment, on the other hand, needs one forward pass and one backward pass for each transition. Finally, in our experiments we will see that optimizing for gradient alignment is not as effective for mitigating interference as the algorithms that reduce Update Interference.
## 5 Measuring Interference & Performance Degradation

Given a measure for interference, we can now ask if interference correlates with degradation in performance, and study what factors affect both interference and this degradation. We define *performance degradation* at each iteration as the difference between the best performance achieved before this iteration and the performance after the policy improvement step. Similar definitions have been used to measure catastrophic forgetting in the multi-task supervised learning community (Serra et al., 2018; Chaudhry et al., 2019).

![6_image_0.png](6_image_0.png)

Figure 2: Correlation plot of interference and degradation with M = 200. Each point represents one algorithm for one run. We clip interference to be less than 1 to show all points in the plot. The results for M ∈ {100, 400} are qualitatively similar to the result for M = 200.

Let $\mathbb{E}_{(s,a)\sim d_0}[Q^{\pi_{k+1}}(s, a)]$ be the agent performance after the policy improvement step at iteration $k$, where $d_0$ is the start-state distribution and a random action is taken in the first step. We estimate this value using 50 rollouts. Performance Degradation due to iteration $k$ is defined as

$$\text{Iteration Degradation}(k) := \max_{i=1,\ldots,k} \mathbb{E}_{(s,a)\sim d_{0}}[Q^{\pi_{i}}(s,a)] - \mathbb{E}_{(s,a)\sim d_{0}}[Q^{\pi_{k+1}}(s,a)].$$

As before, we take the expected tail over all iterations. If a few iterations involve degradation, even if most do not, we should still consider degradation to be high. We therefore define Degradation across iterations as

$$\text{Degradation} := \mathbb{E}[X \mid X \geq \text{Percentile}_{0.9}(X)] \quad \text{for } X = \text{Iteration Degradation}(K).$$
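As an illustrative sketch (not the authors' evaluation code), Iteration Degradation and Degradation can be computed from the per-iteration performance estimates; `performance[k]` is assumed to hold the 50-rollout estimate of $\mathbb{E}_{(s,a)\sim d_0}[Q^{\pi_{k+1}}(s,a)]$.

```python
import numpy as np

def iteration_degradation(performance):
    """performance[k]: estimated return of the policy produced after iteration k."""
    performance = np.asarray(performance, dtype=float)
    best_so_far = np.maximum.accumulate(performance)
    # Best return achieved by any earlier policy minus the return of the current policy.
    return best_so_far[:-1] - performance[1:]

def degradation(performance, percentile=0.9):
    """Expected tail (mean of the top 10%) of Iteration Degradation across iterations."""
    x = iteration_degradation(performance)
    threshold = np.quantile(x, percentile)
    return x[x >= threshold].mean()
```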
It might seem like Degradation could be an alternative measure of Interference. A central thesis in this paper, however, is that Interference is about estimated quantities, like values, that represent the agent's knowledge of the world. The policy itself—and so the performance—may not immediately change even with inaccuracies introduced into the value estimates. Further, in some cases the agent may even choose to forgo reward, to explore and learn more; the performance may temporarily be worse, even in the absence of interference in the agent's value estimates.

We empirically show that Interference Across Iterations is correlated with Degradation, by measuring these two quantities for a variety of agents with different buffer sizes and numbers of hidden nodes. We perform the experiment in two classic environments: Cartpole and Acrobot. In Cartpole, the agent tries to keep a pole balanced, with a positive reward per step. We chose Cartpole because RL agents have been shown to exhibit catastrophic forgetting in this environment (Goodrich, 2015). In Acrobot, the agent has to swing up from a resting position to reach the goal, receiving a negative reward per step until termination. We chose Acrobot because it exhibits different learning dynamics than Cartpole: instead of starting from a good location, it has to explore to reach the goal.

We ran several agents to induce a variety of different learning behaviors. We generated many agents by varying buffer size ∈ {1000, 5000, 10000}, number of steps in one iteration M ∈ {100, 200, 400}, and hidden layer size ∈ {64, 128, 256, 512} with two hidden layers. Each algorithm performed 400 iterations. Interference Across Iterations and Degradation are computed over the last 200 iterations. A buffer for measuring Interference is obtained using reservoir sampling from a larger batch of data, to provide a reasonably diverse set of transitions. Each hyperparameter combination is run 10 times, resulting in 360 evaluated agents for Deep Q-iteration without target networks and 360 with target networks.

We show the correlation plot between Interference and Degradation in Figure 2. For DQI *with* target networks, there is a strong correlation between our measure of interference and performance degradation. For DQI *without* target networks, we actually found that the agents were generally unstable, with many suffering from maximal degradation. Measuring interference for algorithms that are not learning well is not particularly informative, because there is not necessarily any knowledge to interfere with.

![6_image_1.png](6_image_1.png)

We note a few clear outcomes. (1) Neural networks with a larger hidden layer size tend to have higher interference and degradation. (2) DQI with target networks has lower-magnitude Interference and less degradation than DQI without target networks on both environments. Target networks are used in most deep RL algorithms to improve training stability, and the result demonstrates that using target networks can indeed reduce interference and improve stability. This result is unsurprising, though never explicitly verified to the best of our knowledge. It also serves as a sanity check on the approach, and supports the use of this measure for investigating the role of other algorithm properties that might impact interference.
## 6 Mitigating Interference Via Online-Aware Meta Learning

With the interference measures developed and a better understanding of some of the factors that affect interference, we now consider how to mitigate interference. In this section, we outline and empirically investigate a class of algorithms, which we call *online-aware* algorithms, that are designed to mitigate interference.
## 6.1 Online-Aware Algorithms

We first discuss an objective to learn a neural network that explicitly mitigates interference. We then outline a class of algorithms that optimize this objective. Let $\theta$ be the network parameters and $U^{n}_{B}(\theta)$ be an inner update operator that updates $\theta$ using the set of transitions in $B$, $n$ times. For example, $U^{n}_{B}(\theta)$ could consist of sampling mini-batches $B_i$ from $B$ for each of the $i = 1, \ldots, n$ DQI updates. The goal of online-aware learning is to update the network parameters to minimize interference for multiple steps into the future: find a direction $g_t$ at time step $t$ to minimize the $n$-step-ahead Update Interference

$$\mathbb{E}_{B}\left[\sum_{i=1}^{|B|}\delta_{i}(U_{B}^{n}(\theta_{t}-g_{t}))^{2}-\delta_{i}(\theta_{t})^{2}\right].$$

Formally, we can describe the online-aware objective as

$$J(\theta)=\mathbb{E}_{B}[L_{B}(U_{B}^{n}(\theta))] \;\;\;\mathrm{where}\;\; L_{B}(\theta)=\frac{1}{|B|}\sum_{i=1}^{|B|}\delta_{i}(\theta)^{2}.$$

We refer to the class of algorithms which optimize the online-aware objective as online-aware algorithms, and provide pseudocode in Algorithm 2. Note that this objective not only minimizes interference but also maximizes transfer (it promotes positive rather than negative generalization).

The objective can be optimized by meta-learning algorithms including MAML (Finn et al., 2017), a second-order method that computes gradients through the inner update gradients to update the meta-parameters, or a first-order method such as Reptile (Nichol et al., 2018). Reptile is more computationally efficient since it does not involve computing higher-order terms, and only needs to perform the $n$ inner updates and then the simple meta update.

Algorithm 2 is not a new algorithm, but rather is representative of a general class of algorithms which explicitly mitigate interference. It incorporates several existing meta-learning algorithms. The choice of meta-parameters, inner update operator, and meta update rules results in many variants of this online-aware algorithm. The two most related approaches, OML (Javed & White, 2019) and MER (Riemer et al., 2018), can be viewed as instances of such an algorithm. OML was proposed as an offline supervised learning algorithm, but the strategy can be seen as an instance of online-aware learning where the inner update operator updates only the last few layers at each step on a correlated sequence of data, whereas the initial layers are treated as meta-parameters and updated using the second-order method proposed in MAML. MER, on the other hand, uses the first-order method proposed in Reptile to update the entire network as the meta-parameters. During the inner loop, MER updates the entire network with stochastic samples. MER introduces within-batch and across-batch meta updates; this difference to the online-aware framework is largely only about smoothing updates to the meta-parameters. In fact, if the stepsize for the across-batch meta update is set to one, then the approach corresponds to our algorithm with multiple meta updates per step. For a stepsize less than one, the across-batch meta update averages past meta-parameters. MER also uses other deep RL techniques such as prioritizing current samples and reservoir sampling.

Algorithm 2 Deep Q-iteration with Online-aware Meta Learning

Initialize an empty buffer. Initialize weights $\theta_0$.
for $t \leftarrow 0, 1, 2, \ldots$ do
  If $t \bmod T_{\text{eval}} = 0$ then $Q_k \leftarrow Q_{\theta_t}$, update $\pi_k$ to be greedy w.r.t. $Q_k$ and $b_k$ to be $\epsilon$-greedy
  Choose $a_t \sim b_k(s_t)$, observe $(s_{t+1}, r_{t+1})$, and add the transition to the buffer $B$
  $\theta_{t,0} \leftarrow \theta_t$
  for $i \leftarrow 1, 2, \ldots, n$ do
    Sample a set of transitions $B_{i-1}$ from the buffer
    $\theta_{t,i} \leftarrow U_{B_{i-1}}(\theta_{t,i-1})$  # Inner update
  end for
  Meta update by the second-order MAML method:
    Sample a set of transitions $B$ from the buffer
    $\theta_{t+1} = \theta_t + \frac{\alpha}{|B|}\sum_{j} \delta_{j}(\theta_{t,n}) \nabla_{\theta_t} Q_{\theta_{t,n}}(s_j, a_j)$
  or by the first-order Reptile method:
    $\theta_{t+1} = \theta_t + \alpha(\theta_{t,n} - \theta_t)$
end for
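To make the Reptile variant of Algorithm 2 concrete, here is a minimal PyTorch-style sketch of one online-aware meta step. It is illustrative only, not the authors' implementation: the buffer's `sample()` method, the batch layout, and the hyperparameter names are assumptions, and the inner update is the plain DQI mini-batch update.

```python
import copy
import torch

def online_aware_step(q_net, target_net, buffer, n_inner=10, inner_lr=1e-3, meta_lr=0.1, gamma=0.99):
    """One online-aware update with the first-order Reptile meta rule (sketch)."""
    # Inner loop: start from theta_t and apply n DQI updates to obtain theta_{t,n}.
    inner_net = copy.deepcopy(q_net)
    inner_opt = torch.optim.SGD(inner_net.parameters(), lr=inner_lr)
    for _ in range(n_inner):
        s, a, r, s_next, done = buffer.sample()                    # assumed mini-batch sampler
        q_sa = inner_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
        loss = 0.5 * (target - q_sa).pow(2).mean()
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()
    # Reptile meta update: theta_{t+1} = theta_t + alpha * (theta_{t,n} - theta_t).
    with torch.no_grad():
        for p, p_inner in zip(q_net.parameters(), inner_net.parameters()):
            p.add_(meta_lr * (p_inner - p))
```

With `n_inner=1` and `meta_lr=1.0` this collapses to a single ordinary DQI update, which is one way to see that the meta step only changes how far ahead the update looks.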
## 6.2 Experimental Setup

We aim to empirically answer the question: do these online-aware algorithms mitigate interference and performance degradation? We focus on an instance of the online-aware algorithm where the meta updates are performed with the first-order Reptile method. This instance can be viewed as a variant of MER using one big batch (Riemer et al., 2018, Algorithm 6). We have also tried the online-aware algorithm using MAML, and meta-learning only a subset of the network parameters similar to Javed & White (2019), but we found that the online-aware algorithm using Reptile outperforms the MAML and OML variants consistently across the environments we tested.

To answer the question, we compare baseline algorithms to online-aware (OA) algorithms, where the baseline algorithm is DQI with or without target networks. OA treats the entire network as the meta-parameters and uses the first-order Reptile method, shown in Algorithm 2. The inner update operator uses randomly sampled mini-batches to compute the update in Algorithm 1. To fairly compare algorithms, we restrict all algorithms to perform only one update to the network parameters per step, and all algorithms use similar amounts of data to compute the update. We also include two other baselines: *Large*, which is DQI with 10 to 40 times larger batch sizes so that the agent sees more samples per step, and GA, which directly maximizes gradient alignment from Equation 1 within DQI³.

³In fact, both MAML and Reptile approximately maximize the inner product between gradients of different mini-batches (Nichol et al., 2018).
## 6.3 Experiments for DQI Without Target Networks

We first consider DQI without target networks, which we found in the previous section suffered from more interference than DQI with target networks. We should expect an online-aware update to have the biggest impact in this setting. Figure 3 summarizes the results on Acrobot and Cartpole. We can see that OA significantly mitigates interference and performance degradation, and improves control performance. Large (light blue) and GA (green) do not mitigate interference nearly as well. In fact, Large generally performs quite poorly and in two cases actually increases interference. Our results indicate that the online-aware algorithms are capable of mitigating interference, whereas simply processing more data or directly maximizing gradient alignment is not sufficient to mitigate interference.

![9_image_0.png](9_image_0.png)

Figure 3: Acrobot and Cartpole results for DQI without target nets. Return, Interference and Degradation are computed for the last 200 iterations, and averaged over 30 runs with one standard error.

Further insight can be found by investigating data from individual runs. The previous results aggregate performance and return over runs, which can remove much of the interesting structure in the data. Looking closer, Figure 4(a) shows the return per run (left) and iteration interference per run (right) in Acrobot, revealing that vanilla DQI without target nets (in blue) experienced considerable problems learning and maintaining stable performance. OA (in red), in comparison, was substantially more stable and reached higher performance. Overall, OA also exhibits far less interference.

![9_image_1.png](9_image_1.png)

Figure 4: Learning curves and Iteration Interference, one run per subplot, for DQI *without* target networks (blue) compared to adding the online-aware objective (red).
## 6.4 Experiments for DQI With Target Networks

In this section, we investigate the utility of OA for DQI with target networks. Again, it is unlikely to be particularly useful to add OA in settings where the interference is low. In the previous section, in Figure 2, we found that DQI with target networks had higher interference with a larger hidden layer size (512). We therefore test the benefits of OA for this larger network size in this experiment.

![10_image_0.png](10_image_0.png)

Figure 5: Acrobot and Cartpole results for DQI with target nets. Return, Interference and Degradation are computed for the last 200 iterations, and averaged over 30 runs with one standard error.

Figure 5 summarizes the results on Acrobot and Cartpole. We can see that the addition of OA to DQI with target networks helps notably in Cartpole, and only slightly in Acrobot. This is in stark contrast to the last section, where there was a large gain in Acrobot when adding OA. This outcome makes sense: when adding OA to DQI *without* target networks, the agent went from failure to learning reasonably well, whereas in the case of DQI *with* target networks, the agent was already learning reasonably. Nonetheless, the addition of the OA objective does still provide improvement. In Cartpole, the improvement is more substantial. Again, looking at the previous correlation plots in Figure 2, we can see that a hidden layer size of 512 resulted in more interference in Cartpole, and more Performance Degradation; there was more room for OA to be beneficial in Cartpole. When looking at the individual runs in Figure 5, we can see that DQI has some drops in performance, whereas the OA variant is much more stable.

A few other outcomes are notable. The larger network (10x the size) was actually better than OA in Acrobot but did worse than the base algorithm (DQI with hidden layer sizes of 512) in Cartpole. The most consistent performance was with OA. Further, except for the larger network, there was a clear correspondence between interference and performance: OA reduced interference the most and performed the best, GA was next in terms of both, and then finally the baseline with no additions.
## 7 Conclusion

In this paper, we proposed a definition of interference for value-based methods that fix the target policy for each iteration. We justified the use of squared TD errors to approximate this interference and showed that this interference measure is correlated with control performance. In this empirical study across agents, we found that target networks can significantly reduce interference, and that bigger hidden layers resulted in higher interference in our environments. Lastly, we discussed a framework for online-aware learning for Deep Q-iteration, where a neural network is explicitly trained to mitigate interference. We concluded with experiments on classical reinforcement learning environments that showed the efficacy of online-aware algorithms in improving stability and lowering our measure of interference. This was particularly the case for Deep Q-iteration without target networks, where interference was the highest. These online-aware algorithms also exhibit lower performance degradation across most of the tested environments.

![11_image_0.png](11_image_0.png)

Figure 6: Learning curves and Iteration Interference, one run per subplot, for DQI *with* target networks (blue) compared to adding the online-aware objective (red).

There are several limitations in this work. We did not carefully control for other factors that could impact performance, like exploration or the distribution of data in the replay buffer. DQI without target networks performed poorly in Acrobot under many hyperparameter settings, making it difficult to measure interference. Later, by including online-aware learning, the performance significantly improved, suggesting interference was indeed the culprit. But it was difficult to perfectly identify, at least using only our measure. The correlation plots themselves indicate there are other factors, beyond interference, driving performance degradation. Another important limitation is that we only examined Deep Q-iteration algorithms, which fix the behavior during each iteration. Allowing this behavior to update on each step, to be $\epsilon$-greedy with respect to the current action-values, would give us DQN. An important next step is to analyze this algorithm, and other extensions of Deep Q-iteration.

Finally, these results highlight several promising avenues for improving stability in RL. One surprising outcome was the instability, within a run, of a standard method like Deep Q-iteration. The learning curve was quite standard, and without examining individual runs, this instability would not be obvious. This motivates re-examining many reinforcement learning algorithms based on alternative measures, like degradation and other measures of stability. It also highlights that there are exciting opportunities to significantly improve reinforcement learning algorithms by leveraging online-aware learning.
## References

Joshua Achiam, Ethan Knight, and Pieter Abbeel. Towards characterizing divergence in deep q-learning. *arXiv:1903.08894*, 2019.

András Antos, Csaba Szepesvári, and Rémi Munos. Learning near-optimal policies with bellman-residual minimization based fitted policy iteration and a single sample path. *Machine Learning*, 2008.

Emmanuel Bengio, Joelle Pineau, and Doina Precup. Interference and generalization in temporal difference learning. In *International Conference on Machine Learning*, 2020.

Stephanie C.Y. Chan, Anoop Korattikara, Sam Fishman, John Canny, and Sergio Guadarrama. Measuring the reliability of reinforcement learning algorithms. In *International Conference on Learning Representations*, 2020.

Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In *Proceedings of the European Conference on Computer Vision*, 2018.

Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc'Aurelio Ranzato. On tiny episodic memories in continual learning. *arXiv preprint arXiv:1902.10486*, 2019.

Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. *arXiv preprint arXiv:1812.02341*, 2018.

Yunshu Du, Wojciech M Czarnecki, Siddhant M Jayakumar, Razvan Pascanu, and Balaji Lakshminarayanan. Adapting auxiliary losses using gradient similarity. *arXiv preprint arXiv:1812.02224*, 2018.

Amir-massoud Farahmand, Csaba Szepesvári, and Rémi Munos. Error propagation for approximate policy and value iteration. In *Advances in Neural Information Processing Systems*, 2010.

Jesse Farebrother, Marlos C Machado, and Michael Bowling. Generalization and regularization in dqn. *arXiv preprint arXiv:1810.00123*, 2018.

William Fedus, Dibya Ghosh, John D Martin, Marc G Bellemare, Yoshua Bengio, and Hugo Larochelle. On catastrophic interference in atari 2600 games. *arXiv preprint arXiv:2002.12499*, 2020.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *International Conference on Machine Learning*, 2017.

Stanislav Fort, Paweł Krzysztof Nowak, and Srini Narayanan. Stiffness: A new perspective on generalization in neural networks. *arXiv preprint arXiv:1901.09491*, 2019.

Robert M French. Using semi-distributed representations to overcome catastrophic forgetting in connectionist networks. Technical report, 1993.

Robert M French. Catastrophic forgetting in connectionist networks. *Trends in Cognitive Sciences*, 1999.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In *International Conference on Artificial Intelligence and Statistics*, 2011.

Benjamin Frederick Goodrich. Neuron clustering for mitigating catastrophic forgetting in supervised and reinforcement learning. 2015.

Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q-learning. In *Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence*, AAAI'16, pp. 2094–2100. AAAI Press, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In *IEEE International Conference on Computer Vision*, 2015.

Khurram Javed and Martha White. Meta-learning representations for continual learning. In *Advances in Neural Information Processing Systems*, 2019.

Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In *Proceedings of the AAAI Conference on Artificial Intelligence*, 2018.

James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the National Academy of Sciences*, 2017.

Vincent Liu, Raksha Kumaraswamy, Lei Le, and Martha White. The utility of sparse representations for control in reinforcement learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 4384–4391, 2019.

David Lopez-Paz et al. Gradient episodic memory for continual learning. In *Advances in Neural Information Processing Systems*, 2017.

Michael McCloskey and Neal J Cohen. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. *Psychology of Learning and Motivation*, 1989.

Rémi Munos. Error bounds for approximate policy iteration. In *International Conference on Machine Learning*, 2003.

Rémi Munos. Performance bounds in l_p-norm for approximate value iteration. *SIAM Journal on Control and Optimization*, 2007.

Cuong V Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan, and Stefano Soatto. Toward understanding catastrophic forgetting in continual learning. *arXiv preprint arXiv:1908.01091*, 2019.

Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. *arXiv preprint arXiv:1803.02999*, 2018.

Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krähenbühl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning. *arXiv preprint arXiv:1810.12282*, 2018.

Aravind Rajeswaran, Kendall Lowrey, Emanuel V Todorov, and Sham M Kakade. Towards generalization and simplicity in continuous control. In *Advances in Neural Information Processing Systems*, 2017.

Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. In *International Conference on Learning Representations*, 2018.

Tom Schaul, Diana Borsa, Joseph Modayil, and Razvan Pascanu. Ray interference: a source of plateaus in deep reinforcement learning. *arXiv:1904.11455*, 2019.

Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In *International Conference on Machine Learning*, 2018.

Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. 2018.

Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. In *International Conference on Learning Representations*, 2019.

Ronald J Williams. Tight performance bounds on greedy policies based on imperfect value functions. Technical report, 1993.
## A Experimental Details

## A.1 Experiment Setup

We experiment with two environments: Cartpole and Acrobot from the OpenAI gym (https://gym.openai.com/). We set the maximum number of steps per episode to 500, and use a discount factor $\gamma = 0.99$ in all environments.

We use 50 Monte Carlo rollouts to estimate the performance of the policy at each iteration, that is, $\mathbb{E}_{(s,a)\sim d_0}[Q^{\pi_k}(s, a)]$. For evaluating the TD error difference, we use a reservoir buffer of size 1000, which approximates uniform sampling from all past transitions.
## A.2 Network Architecture And Hyperparameters

For all experiments, we use a three-layer neural network with ReLU activation (Glorot et al., 2011) and He initialization (He et al., 2015) to initialize the network. For the Adam and RMSprop optimizers, we use the default values for the hyperparameters except the step size.

For the experiments in Section 5, we generate the set of hyperparameters by choosing each parameter from the set:

- Batch size = 64
- Step size α = 0.0003
- Number of iterations = 400
- Optimizer = Adam
- Buffer size ∈ {1000, 5000, 10000}
- Hidden size ∈ {64, 128, 256, 512}
- Number of steps in an iteration M ∈ {100, 200, 400}
- Network architecture ∈ {action-output, action-input}

For the experiments in Sections 6.3 and 6.4, all algorithms use a buffer size of 10000, 100 steps in an iteration, and 400 iterations. DQI without target nets uses a hidden size of 128, and DQI with target nets uses a hidden size of 512. The best parameters are chosen based on the average performance of the policies over the last 200 iterations.

Baseline. We sweep the hyperparameters for DQI in the range:

- Batch size = 64
- Optimizer ∈ {Adam, RMSprop}
- Step size α ∈ {0.003, 0.001, 0.0006, 0.0003, 0.0001, 0.00001}

DQI with large batch size. For the baseline *Large*, we find the best batch size in the range:

- Batch size ∈ {640, 1280, 2560}

Online-aware DQI. In our experiments, we sweep over the hyperparameters in the set:

- Inner update optimizer = SGD
- α ∈ {1.0, 0.3, 0.1, 0.03}
- α_inner ∈ {0.01, 0.001, 0.0001, 0.00001}
- Number of inner updates K ∈ {5, 10, 20}

DQI maximizing gradient alignment (GA). When updating the parameters, we draw two mini-batch samples $B_1$ and $B_2$ and add a regularization term to the loss function:

$$-\lambda\left[\frac{1}{|B_{1}|}\sum_{i\in B_{1}}\nabla_{\theta}\delta_{i}^{2}(\theta)\right]^{\top}\left[\frac{1}{|B_{2}|}\sum_{j\in B_{2}}\nabla_{\theta}\delta_{j}^{2}(\theta)\right],$$

normalized by the number of parameters in the network (a sketch of this regularizer is given after the list below). In our experiments, we sweep over the hyperparameters in the set:

- Optimizer ∈ {Adam, RMSprop}
- Step size α ∈ {0.003, 0.001, 0.0006, 0.0003, 0.0001, 0.00001}
- λ ∈ {10.0, 1.0, 0.1, 0.01, 0.001}
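A minimal PyTorch-style sketch of this regularizer (illustrative, not the authors' code); `td_loss` is an assumed helper returning the mean squared TD error on a mini-batch, and `create_graph=True` is what allows the penalty itself to be backpropagated when it is added to the training loss.

```python
import torch

def ga_regularizer(q_net, td_loss, batch1, batch2, lam=1.0):
    """Negative gradient-alignment penalty between two mini-batches (sketch).

    td_loss(q_net, batch) -> mean squared TD error on the batch (assumed helper).
    Returns a scalar term to add to the training loss.
    """
    params = [p for p in q_net.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(td_loss(q_net, batch1), params, create_graph=True)
    g2 = torch.autograd.grad(td_loss(q_net, batch2), params, create_graph=True)
    dot = sum((a * b).sum() for a, b in zip(g1, g2))
    n_params = sum(p.numel() for p in params)
    # Subtract lambda * (grad_1 . grad_2) / (number of parameters) to encourage positive alignment.
    return -lam * dot / n_params
```

Because it differentiates through gradients, this term needs an extra backward pass per batch, which is the cost discussed in Section 4.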
U4MnDySGhy/U4MnDySGhy_meta.json ADDED
@@ -0,0 +1,25 @@
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 16,
    "ocr_stats": {
        "ocr_pages": 0,
        "ocr_failed": 0,
        "ocr_success": 0,
        "ocr_engine": "none"
    },
    "block_stats": {
        "header_footer": 16,
        "code": 0,
        "table": 0,
        "equations": {
            "successful_ocr": 9,
            "unsuccessful_ocr": 0,
            "equations": 9
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}