RedTachyon committed on
Commit
b6d9e08
1 Parent(s): 63b565e

Upload folder using huggingface_hub

NT9zgedd3I/10_image_0.png ADDED

Git LFS Details

  • SHA256: b47f49a02b248bedbb70586b46d2f1e050e1f299a7e3138055ba9291e029f27f
  • Pointer size: 130 Bytes
  • Size of remote file: 12.4 kB
NT9zgedd3I/10_image_1.png ADDED

Git LFS Details

  • SHA256: d0de93fd301b2b7294a1864b544f6b22b2e5ac914751e31098ca2872747742a9
  • Pointer size: 130 Bytes
  • Size of remote file: 11.7 kB
NT9zgedd3I/11_image_0.png ADDED

Git LFS Details

  • SHA256: 55fb544f574a8f9b8ec249667e2eb9c5e797d6b47963cf62a66ba3ca3cad0c6b
  • Pointer size: 130 Bytes
  • Size of remote file: 18.7 kB
NT9zgedd3I/11_image_1.png ADDED

Git LFS Details

  • SHA256: bc56b731584f5f5423edb7e9c44e403252125094beff207cf861049aca79c6ff
  • Pointer size: 130 Bytes
  • Size of remote file: 11.6 kB
NT9zgedd3I/11_image_2.png ADDED

Git LFS Details

  • SHA256: 209eef0e6348c4344ae39ee5b29cab31599f4a6ef491e83f31b6f50db342bc43
  • Pointer size: 129 Bytes
  • Size of remote file: 9.57 kB
NT9zgedd3I/28_image_0.png ADDED

Git LFS Details

  • SHA256: d203eb69923b38dbaa1500dda687dc688e51cdae8b454ec8ac018513347b50ab
  • Pointer size: 130 Bytes
  • Size of remote file: 15.4 kB
NT9zgedd3I/28_image_1.png ADDED

Git LFS Details

  • SHA256: 254e2d800d0073da522d074010461268317b04a297a9e0ac15d935498273d24d
  • Pointer size: 130 Bytes
  • Size of remote file: 13.2 kB
NT9zgedd3I/29_image_0.png ADDED

Git LFS Details

  • SHA256: f7d3ba63d795f948a9bc13af2c47ba4df50cdb94a822942aaa8b460cf673ca0e
  • Pointer size: 130 Bytes
  • Size of remote file: 21.6 kB
NT9zgedd3I/4_image_0.png ADDED

Git LFS Details

  • SHA256: 438bcd710ca9319fc5aa6d16d4100f04cac85fffc890791052b161827bc64d55
  • Pointer size: 130 Bytes
  • Size of remote file: 20.4 kB
NT9zgedd3I/7_image_0.png ADDED

Git LFS Details

  • SHA256: c1fc891c8a54921d709c0ecd9ba1060911a17db751f3e81a67e511ebe0588ffc
  • Pointer size: 130 Bytes
  • Size of remote file: 22.7 kB
NT9zgedd3I/8_image_0.png ADDED

Git LFS Details

  • SHA256: 9ae9d2d128b45b860dbb88faa1aad8e4e29d7a5fdf178ec45d4f1e1d1498243b
  • Pointer size: 130 Bytes
  • Size of remote file: 64.6 kB
NT9zgedd3I/9_image_0.png ADDED

Git LFS Details

  • SHA256: 6813f9fefafc15e21f94a0ae3b2dec9c482116e82cad93198b9f1424b05127ab
  • Pointer size: 130 Bytes
  • Size of remote file: 14 kB
NT9zgedd3I/9_image_1.png ADDED

Git LFS Details

  • SHA256: 06e94fb4f7979653e402898a85b9887cbddfc874b5ba514b6f53f836248c0a25
  • Pointer size: 130 Bytes
  • Size of remote file: 15.1 kB
NT9zgedd3I/NT9zgedd3I.md ADDED
@@ -0,0 +1,1253 @@
1
# Learning To Switch Among Agents In A Team Via 2-Layer Markov Decision Processes
2
+
3
Vahid Balazadeh *vahid@cs.toronto.edu* University of Toronto
Abir De *abir@cse.iitb.ac.in* Indian Institute of Technology Bombay
Adish Singla *adishs@mpi-sws.org* Max Planck Institute for Software Systems
Manuel Gomez Rodriguez *manuelgr@mpi-sws.org* Max Planck Institute for Software Systems
Reviewed on OpenReview: *https://openreview.net/forum?id=NT9zgedd3I*
4
+
5
+ ## Abstract
6
+
7
+ Reinforcement learning agents have been mostly developed and evaluated under the assumption that they will operate in a fully autonomous manner—they will take all actions. In this work, our goal is to develop algorithms that, by learning to switch control between agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team via a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and the environment's transition probabilities to find a sequence of switching policies. The total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps and, whenever multiple teams of agents operate in a similar environment, our algorithm greatly benefits from maintaining shared confidence bounds for the environments' transition probabilities and it enjoys a better regret bound than problem-agnostic algorithms. Simulation experiments illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms.
8
+
9
+ ## 1 Introduction
10
+
11
+ In recent years, reinforcement learning (RL) agents have achieved, or even surpassed, human performance in a variety of computer games by taking decisions autonomously, without human intervention (Mnih et al.,
12
2015; Silver et al., 2016; 2017; Vinyals et al., 2019). Motivated by these success stories, there has been tremendous excitement about the possibility of using RL agents to operate fully autonomous cyberphysical systems, especially in the context of autonomous driving. Unfortunately, a number of technical, societal, and legal challenges have so far prevented this possibility from becoming a reality.
13
+
14
In this work, we argue that existing RL agents may still enhance the operation of cyberphysical systems if deployed under lower automation levels. For example, if we let RL agents take some of the actions and leave the remaining ones to human agents, the resulting performance may be better than the performance either of them would achieve on their own (Raghu et al., 2019a; De et al., 2020; Wilder et al., 2020). Once we depart from full automation, we need to address the following question: when should we switch control between machine and human agents? Here, we look at this problem from a theoretical perspective and develop an online algorithm that automatically learns to switch control among multiple agents in a team. However, to fulfill this goal, we need to address several challenges:
15
- *Level of automation.* In each application, what is considered an appropriate and tolerable load for each agent may differ (European Parliament, 2006). Therefore, we would like our algorithms to provide mechanisms to adjust the amount of control for each agent (*i.e.*, the level of automation) during a given time period.
16
+
17
+ - *Number of switches.* Consider two different switching patterns resulting in the same amount of agent control and equivalent performance. Then, we would like our algorithms to favor the pattern with the least number of switches. For example, in a team consisting of human and machine agents, every time a machine defers (takes) control to (from) a human, there is an additional cognitive load for the human (Brookhuis et al., 2001).
18
+
19
- *Unknown agent policies.* The spectrum of human abilities spans a broad range (Macadam, 2003). As a result, there is a wide variety of potential human policies. Here, we would like our algorithms to learn personalized switching policies that, over time, adapt to the particular humans (and machines) they are dealing with.
20
+
21
- *Disentangling agents' policies and environment dynamics.* We would like our algorithms to learn to disentangle the influence of the agents' policies and the environment dynamics on the switching policies. By doing so, they could be used to efficiently find multiple personalized switching policies for different teams of agents operating in similar environments (*e.g.*, multiple semi-autonomous vehicles with different human drivers).
22
+
23
To tackle the above challenges, we first formally define the problem of learning to switch control among agents in a team using a 2-layer Markov decision process (Figure 1). Here, the team can be composed of any number of machine or human agents, and the agents' policies, as well as the transition probabilities of the environment, may be unknown. In our formulation, we assume that all agents follow Markovian policies¹, in line with other theoretical models of human decision making (Townsend et al., 2000; Daw &
24
Dayan, 2014; McGhan et al., 2015). Under this definition, the problem reduces to finding the switching policy that provides an optimal trade-off between the environmental cost, the amount of agent control, and the number of switches. Then, we develop an online learning algorithm, which we refer to as UCRL2-MC², that uses upper confidence bounds on the agents' policies and the transition probabilities of the environment to find a sequence of switching policies whose total regret with respect to the optimal switching policy is sublinear in the number of learning steps. In addition, we also demonstrate that the same algorithm can be used to find multiple sequences of switching policies across several independent teams of agents operating in similar environments, where it greatly benefits from maintaining shared confidence bounds for the transition probabilities of the environments and enjoys a better regret bound than UCRL2, a very well known reinforcement learning algorithm that we view as the most natural competitor. Finally, we perform a variety of simulation experiments in the standard RiverSwim environment as well as an obstacle avoidance task, where we consider multiple teams of agents (drivers) composed of one human and one machine agent.
25
+
26
+ Our results illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic alternatives.
27
+
28
+ Before we proceed further, we would like to point out that, at a broader level, our methodology and theoretical results are applicable to the problem of switching control between agents following Markovian policies. As long as the agent policies are Markovian, our results do not distinguish between machine and human agents.
29
+
30
+ In this context, we view teams of human and machine agents as one potential application of our work, which we use as a motivating example throughout the paper. However, we would also like to acknowledge that a practical deployment of our methodology in a real application with human and machine agents would require considering a wide range of additional practical aspects (*e.g.*, transparency, explainability, and visualization). Moreover, one may also need to explicitly model the difference in reaction times between human and machine agents. Finally, there may be scenarios in which it might be beneficial to allow a human operator to switch control. Such considerations are out of the scope of our work.
31
+
32
1In certain cases, it is possible to convert a non-Markovian human policy into a Markovian one by changing the state representation (Daw & Dayan, 2014). Addressing the problem of learning to switch control among agents in a team in a semi-Markovian setting is left as a very interesting avenue for future work.
33
+
34
+ 2UCRL2 with Multiple Confidence sets.
35
+
36
+ ## 2 Related Work
37
+
38
+ One can think of applying existing RL algorithms (Jaksch et al., 2010; Osband et al., 2013; Osband &
39
+ Van Roy, 2014; Gopalan & Mannor, 2015), such as UCRL2 or Rmax, to find switching policies. However, these problem-agnostic algorithms are unable to exploit the specific structure of our problem. More specifically, our algorithm computes the confidence intervals separately over the agents' policies and the transition probabilities of the environment, instead of computing a single confidence interval, as problem-agnostic algorithms do. As a consequence, our algorithm learns to switch more efficiently across multiple teams of agents, as shown in Section 6.
40
+
41
+ There is a rapidly increasing line of work on learning to defer decisions in the machine learning literature (Bartlett & Wegkamp, 2008; Cortes et al., 2016; Geifman et al., 2018; Ramaswamy et al., 2018; Geifman
42
+ & El-Yaniv, 2019; Liu et al., 2019; Raghu et al., 2019a;b; Thulasidasan et al., 2019; De et al., 2020; 2021; Mozannar & Sontag, 2020; Wilder et al., 2020; Shekhar et al., 2021). However, previous work has typically focused on supervised learning. More specifically, it has developed classifiers that learn to defer by considering the defer action as an additional label value, by training an independent classifier to decide about deferred decisions, or by reducing the problem to a combinatorial optimization problem. Moreover, except for a few recent notable exceptions (Raghu et al., 2019a; De et al., 2020; 2021; Mozannar & Sontag, 2020; Wilder et al.,
43
2020), they do not consider that there is a human decision maker who takes a decision whenever the classifier defers it. In contrast, we focus on reinforcement learning, and develop algorithms that learn to switch control between multiple agents, including human agents. Recently, Jacq et al. (2022) introduced a new framework called lazy-MDPs to decide when reinforcement learning agents should act optimally. They propose to augment existing MDPs with a new default action and encourage agents to defer decision-making to a default policy in non-critical states. Though their lazy-MDP is similar to our augmented 2-layer MDP framework, our approach is designed to switch optimally between possibly multiple agents, each having its own policy.
44
+
45
Our work is also connected to research on understanding switching behavior and switching costs in the context of human-computer interaction (Czerwinski et al., 2000; Horvitz & Apacible, 2003; Iqbal & Bailey, 2007; Kotowick & Shah, 2018; Janssen et al., 2019), which has been sometimes referred to as "adjustable autonomy" (Mostafa et al., 2019). At a technical level, our work advances the state of the art in adjustable autonomy by introducing an algorithm with provable guarantees to efficiently find the optimal switching policy in a setting in which the dynamics of the environment and the agents' policies are unknown (*i.e.*, there is uncertainty about them). Moreover, our work also relates to a recent line of research that combines deep reinforcement learning with opponent modeling to robustly switch between multiple machine policies (Everett
46
+ & Roberts, 2018; Zheng et al., 2018). However, this line of research does not consider the presence of human agents, and there are no theoretical guarantees on the performance of the proposed algorithms.
47
+
48
+ Furthermore, our work contributes to an extensive body of work on human-machine collaboration (Stone et al., 2010; Taylor et al., 2011; Walsh et al., 2011; Barrett & Stone, 2012; Macindoe et al., 2012; Torrey &
49
+ Taylor, 2013; Nikolaidis et al., 2015; Hadfield-Menell et al., 2016; Nikolaidis et al., 2017; Grover et al., 2018; Haug et al., 2018; Reddy et al., 2018; Wilson & Daugherty, 2018; Brown & Niekum, 2019; Kamalaruban et al.,
50
+ 2019; Radanovic et al., 2019; Tschiatschek et al., 2019; Ghosh et al., 2020; Strouse et al., 2021). However, rather than developing algorithms that learn to switch control between humans and machines, previous work has predominantly considered settings in which the machine and the human interact with each other.
51
+
52
Finally, one can think of using the option framework and the notion of macro-actions and micro-actions to formulate the problem of learning to switch (Sutton et al., 1999). However, the option framework is designed to address different levels of temporal abstraction in RL by defining macro-actions that correspond to sub-tasks
53
+ (skills). In our problem, each agent is not necessarily optimized to act for a specific task or sub-goal but for the whole environment/goal. Also, in our problem, we do not necessarily have control over all agents to learn the optimal policy for each agent, while in the option framework, a primary direction is to learn optimal options for each sub-task. In other words, even though we can mathematically refer to each agent policy as an option, they are not conceptually the same.
54
+
55
## 3 Switching Control Among Agents As A 2-Layer MDP
56
+
57
Given a team of agents $\mathcal{D}$, at each time step $t \in \{1, \dots, L\}$, our (cyberphysical) system is characterized by a state $s_t \in \mathcal{S}$, where $\mathcal{S}$ is a finite state space, and a control switch $d_t \in \mathcal{D}$, which determines who takes an action $a_t \in \mathcal{A}$, where $\mathcal{A}$ is a finite action space. In the above, the switch value is given by a (deterministic and time-varying) switching policy $d_t = \pi_t(s_t, d_{t-1})$³. More specifically, if $d_t = d$, the action $a_t$ is sampled from agent $d$'s policy $p_d(a_t \mid s_t)$. Moreover, given a state $s_t$ and an action $a_t$, the state $s_{t+1}$ is sampled from a transition probability $p(s_{t+1} \mid s_t, a_t)$. Here, we assume that the agents' policies and the transition probabilities may be unknown. Finally, given an initial state and switch value $(s_1, d_0)$ and a trajectory $\tau = \{(s_t, d_t, a_t)\}_{t=1}^{L}$ of states, switch values and actions, we define the total cost $c(\tau \mid s_1, d_0)$ as:
60
+
61
+ $$c(\tau\,|\,s_{1},d_{0})=\sum_{t=1}^{L}[c_{e}(s_{t},a_{t})+c_{c}(d_{t})+c_{x}(d_{t},d_{t-1})],\tag{1}$$
62
63
+
64
where $c_e(s_t, a_t)$ is the environment cost of taking action $a_t$ at state $s_t$, $c_c(d_t)$ is the cost of giving control to agent $d_t$, $c_x(d_t, d_{t-1})$ is the cost of switching from $d_{t-1}$ to $d_t$, and $L$ is the time horizon⁴. Then, our goal is to find the optimal switching policy $\pi^* = (\pi^*_1, \dots, \pi^*_L)$ that minimizes the expected cost, *i.e.*,
70
+
71
$$\pi^{*}=\operatorname*{argmin}_{\pi}\mathbb{E}\left[c(\tau\mid s_{1},d_{0})\right],\tag{2}$$
73
+ where the expectation is taken over all the trajectories induced by the switching policy given the agents' policies.
74
+
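As a quick illustration of the objective, the following Python sketch accumulates the three cost terms of Eq. 1 along a given trajectory. The function and variable names are ours, not part of the paper.

```python
def trajectory_cost(trajectory, d0, c_e, c_c, c_x):
    """Total cost c(tau | s_1, d_0) of Eq. 1 for trajectory = [(s_1, d_1, a_1), ..., (s_L, d_L, a_L)].

    c_e(s, a): environment cost, c_c(d): cost of giving control to agent d,
    c_x(d, d_prev): cost of switching from d_prev to d.
    """
    total, d_prev = 0.0, d0
    for s, d, a in trajectory:
        total += c_e(s, a) + c_c(d) + c_x(d, d_prev)
        d_prev = d
    return total
```

For instance, choosing $c_x(d, d') = \mathbb{I}[d \neq d']$ would make the last term simply count the number of switches.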
75
+ To solve the above problem, one could just resort to problem-agnostic RL algorithms, such as UCRL2 or Rmax, over a standard Markov decision process (MDP), defined as
76
+
77
$${\mathcal{M}}=({\mathcal{S}}\times{\mathcal{D}},{\mathcal{D}},\bar{P},\bar{C},L),\tag{4}$$
79
+ where *S × D* is an augmented state space, the set of actions D is just the switch values, the transition dynamics P¯ at time t are given by
80
+
81
+ $$p(s_{t+1},d_{t}\,|\,s_{t},d_{t-1})=\mathbb{I}[\pi_{t}(s_{t},d_{t-1})=d_{t}]\times\sum_{a\in\mathcal{A}}p(s_{t+1}\,|\,s_{t},a)p_{d_{t}}(a\,|\,s_{t}),\tag{3}$$
82
+
83
+ the immediate cost C¯ at time t is given by
84
+
85
+ $$\tilde{c}(s_{t},d_{t-1})=\mathbb{E}_{a_{t}\sim p_{\pi_{t}(s_{t},d_{t-1})}(\cdot\mid s_{t})}\left[c_{e}(s_{t},a_{t})\right]+c_{c}(\pi_{t}(s_{t},d_{t-1}))+c_{x}(\pi_{t}(s_{t},d_{t-1}),d_{t-1}).$$
86
+
87
+ Here, note that, by using conditional expectations, we can compute the average cost of a trajectory, given by Eq. 1, from the above immediate costs. However, these algorithms would not exploit the structure of the problem. More specifically, they would not use the observed agents' actions to improve the estimation of the transition dynamics over time.
88
+
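For concreteness, the reduction above amounts to marginalizing the selected agent's policy out of the environment dynamics, as in Eq. 3. Below is a minimal NumPy sketch of this construction; the array layout and names are our own assumptions, not the paper's code.

```python
import numpy as np

def augmented_transition(pi_t, p_env, p_agents):
    """Transition of the augmented MDP in Eq. 3 (a sketch).

    pi_t[s, d_prev]     : switch value chosen by the (deterministic) switching policy
    p_env[s, a, s_next] : p(s' | s, a)
    p_agents[d, s, a]   : p_d(a | s)
    Returns P_bar[s, d_prev, s_next, d] = p(s', d | s, d_prev).
    """
    n_s, n_d = pi_t.shape
    P_bar = np.zeros((n_s, n_d, n_s, n_d))
    for s in range(n_s):
        for d_prev in range(n_d):
            d = int(pi_t[s, d_prev])                   # I[pi_t(s, d_prev) = d]
            # Marginalize the chosen agent's action: sum_a p(s'|s, a) p_d(a|s).
            P_bar[s, d_prev, :, d] = p_env[s].T @ p_agents[d, s]
    return P_bar
```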
89
To avoid the above shortcoming, we will resort instead to a 2-layer MDP where taking an action $d_t$ in state $(s_t, d_{t-1})$ leads first to an intermediate state $(s_t, a_t) \in \mathcal{S} \times \mathcal{A}$ with probability $p_{d_t}(a_t \mid s_t)$ and immediate cost $c_{d_t}(s_t, d_{t-1}) = c_c(d_t) + c_x(d_t, d_{t-1})$, and then to a final state $(s_{t+1}, d_t) \in \mathcal{S} \times \mathcal{D}$ with probability $\mathbb{I}[\pi_t(s_t, d_{t-1}) = d_t] \cdot p(s_{t+1} \mid s_t, a_t)$ and immediate cost $c_e(s_t, a_t)$. More formally, the 2-layer MDP is defined by the following 8-tuple:

$${\mathcal{M}}=({\mathcal{S}}\times{\mathcal{D}},{\mathcal{S}}\times{\mathcal{A}},{\mathcal{D}},P_{{\mathcal{D}}},P,C_{{\mathcal{D}}},C_{e},L)\tag{5}$$

where $\mathcal{S}\times\mathcal{D}$ is the final state space, $\mathcal{S}\times\mathcal{A}$ is the intermediate state space, the set of actions $\mathcal{D}$ is the switch values, the transition dynamics $P_{\mathcal{D}}$ and $P$ at time $t$ are given by $p_{d_t}(a_t \mid s_t)$ and $\mathbb{I}[\pi_t(s_t, d_{t-1}) = d_t] \cdot p(s_{t+1} \mid s_t, a_t)$, and the immediate costs $C_{\mathcal{D}}$ and $C_e$ at time $t$ are given by $c_{d_t}(s_t, d_{t-1})$ and $c_e(s_t, a_t)$, respectively.
100
+
101
+ The above 2-layer MDP will allow us to estimate separately the agents' policies pd(· | s) and the transition probability p(· | *s, a*) of the environment using both the intermediate and final states and design an algorithm that improves the regret that problem-agnostic RL algorithms achieve in our problem.
102
+
103
3Note that, by making the switching policy dependent on the previous switch value $d_{t-1}$, we can account for the switching cost.
4The specific choice of environment cost $c_e(\cdot, \cdot)$, control cost $c_c(\cdot)$ and switching cost $c_x(\cdot, \cdot)$ is application dependent.
104
+
105
106
+
107
108
+
109
+ ![4_image_0.png](4_image_0.png)
110
+
111
Figure 1: Transitions of a 2-layer Markov decision process (MDP) from state $(s, d)$ to state $(s', d')$ after selecting agent $d'$. Here, $d'$ and $d$ denote the current and previous agents in control. In the first layer (switching layer), the switching policy chooses agent $d'$, which takes an action according to its action policy $p_{d'}$. Then, in the action layer, the environment transitions to the next state $s'$ based on the taken action, according to the transition probability $p$.
112
+
113
+ ## 4 Learning To Switch In A Team Of Agents
114
+
115
Since we may know neither the agents' policies nor the transition probabilities, we need to trade off exploitation, *i.e.*, minimizing the expected cost, and exploration, *i.e.*, learning about the agents' policies and the transition probabilities. To this end, we look at the problem from the perspective of episodic learning and proceed as follows.
116
+
117
We consider $K$ independent subsequent episodes of length $L$ and denote the aggregate length of all episodes as $T = KL$. Each of these episodes corresponds to a realization of the same finite horizon 2-layer Markov decision process, introduced in Section 3, with state spaces $\mathcal{S} \times \mathcal{A}$ and $\mathcal{S} \times \mathcal{D}$, set of actions $\mathcal{D}$, true agent policies $P^*_{\mathcal{D}}$, true environment transition probability $P^*$, and immediate costs $C_{\mathcal{D}}$ and $C_e$. However, since we do not know the true agent policies and environment transition probabilities, just before each episode $k$ starts, our goal is to find a switching policy $\pi^k$ with desirable properties in terms of total regret $R(T)$, which is given by:
121
+
122
$$R(T)=\sum_{k=1}^{K}\left[\mathbb{E}_{\tau\sim\pi^{k},P_{\mathcal{D}}^{*},P^{*}}\left[c(\tau\,|\,s_{1},d_{0})\right]-\mathbb{E}_{\tau\sim\pi^{*},P_{\mathcal{D}}^{*},P^{*}}\left[c(\tau\,|\,s_{1},d_{0})\right]\right],\tag{6}$$
123
+
124
where $\pi^*$ is the optimal switching policy under the true agent policies and environment transition probabilities.
126
+
127
+ To achieve our goal, we apply the principle of optimism in the face of uncertainty, *i.e.*,
128
+
129
$$\pi^{k}=\operatorname*{argmin}_{\pi}\operatorname*{min}_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}}\operatorname*{min}_{P\in\mathcal{P}^{k}}\mathbb{E}_{\tau\sim\pi,P_{\mathcal{D}},P}\left[c(\tau\mid s_{1},d_{0})\right],\tag{7}$$
131
where $\mathcal{P}^k_{\mathcal{D}}$ is a $(|\mathcal{S}|\times|\mathcal{D}|\times L)$-rectangular confidence set, *i.e.*, $\mathcal{P}^k_{\mathcal{D}} = \times_{s,d,t}\,\mathcal{P}^k_{\cdot\,|\,d,s,t}$, and $\mathcal{P}^k$ is a $(|\mathcal{S}|\times|\mathcal{A}|\times L)$-rectangular confidence set, *i.e.*, $\mathcal{P}^k = \times_{s,a,t}\,\mathcal{P}^k_{\cdot\,|\,s,a,t}$. Here, note that the confidence sets are constructed using data gathered during the first $k-1$ episodes and allow for time-varying agent policies $p_d(\cdot \mid s, t)$ and transition probabilities $p(\cdot \mid s, a, t)$.
141
+
142
143
+
144
However, to solve Eq. 7, we first need to explicitly define the confidence sets. To this end, we first define the empirical distributions $\hat{p}^k_d(\cdot \mid s)$ and $\hat{p}^k(\cdot \mid s, a)$ just before episode $k$ starts as:
148
+
149
$$\hat{p}_{d}^{k}(a\,|\,s)=\begin{cases}\frac{N_{k}(s,d,a)}{N_{k}(s,d)}&\text{if }N_{k}(s,d)\neq0\\ \frac{1}{|\mathcal{A}|}&\text{otherwise,}\end{cases}\tag{8}$$

$$\hat{p}^{k}(s^{\prime}\,|\,s,a)=\begin{cases}\frac{N_{k}^{\prime}(s,a,s^{\prime})}{N_{k}^{\prime}(s,a)}&\text{if }N_{k}^{\prime}(s,a)\neq0\\ \frac{1}{|\mathcal{S}|}&\text{otherwise,}\end{cases}\tag{9}$$
151
+ where
152
+
153
$$N_{k}(s,d)=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,d_{t}=d\ \text{in episode}\ l),\quad N_{k}(s,d,a)=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,a_{t}=a,d_{t}=d\ \text{in episode}\ l),$$
$$N_{k}^{\prime}(s,a)=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,a_{t}=a\ \text{in episode}\ l),\quad N_{k}^{\prime}(s,a,s^{\prime})=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,a_{t}=a,s_{t+1}=s^{\prime}\ \text{in episode}\ l).$$
154
+
155
Then, similarly as in Jaksch et al. (2010), we opt for $L^1$ confidence sets⁵, *i.e.*,
157
+
158
+ $$\begin{array}{l}{{{\mathcal P}_{\cdot\mid d,s,t}^{k}(\delta)=\left\{\,p_{d}:||p_{d}(\cdot\mid s,t)-\hat{p}_{d}^{k}(\cdot\mid s)||_{1}\leq\beta_{\mathcal D}^{k}(s,d,\delta)\right\},}}\\ {{{\mathcal P}_{\cdot\mid s,a,t}^{k}(\delta)=\left\{\,p:||p(\cdot\mid s,a,t)-\hat{p}^{k}(\cdot\mid s,a)||_{1}\leq\beta^{k}(s,a,\delta)\right\},}}\end{array}$$
159
+
160
+ for all d ∈ D, s ∈ S, a ∈ A and t ∈ [L], where δ is a given parameter,
161
+
162
$$\beta_{\mathcal{D}}^{k}(s,d,\delta)=\sqrt{\frac{2\log\left(\frac{(k-1)\,L\,|\mathcal{S}||\mathcal{D}|\,2^{|\mathcal{A}|}}{\delta}\right)}{\max\{1,N_{k}(s,d)\}}}\quad\text{and}\quad\beta^{k}(s,a,\delta)=\sqrt{\frac{2\log\left(\frac{(k-1)\,L\,|\mathcal{S}||\mathcal{A}|\,2^{|\mathcal{S}|}}{\delta}\right)}{\max\{1,N_{k}^{\prime}(s,a)\}}}.$$
163
+
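To make the above estimators and radii concrete, here is a minimal Python sketch that maintains the counts of Eqs. 8–9 and returns the empirical distributions together with $L^1$ confidence radii. The class, its method names, and the exact union-bound constant inside the logarithm are our own illustrative choices rather than the paper's implementation.

```python
import numpy as np

class ConfidenceSets:
    """Counts, empirical estimates (Eqs. 8-9) and L1 confidence radii (a sketch)."""

    def __init__(self, n_states, n_actions, n_agents, horizon, delta):
        self.horizon, self.delta = horizon, delta
        self.n_sda = np.zeros((n_states, n_agents, n_actions))  # N_k(s, d, a)
        self.n_sas = np.zeros((n_states, n_actions, n_states))  # N'_k(s, a, s')

    def update(self, s, d, a, s_next):
        """Record one observed tuple (s_t, d_t, a_t, s_{t+1})."""
        self.n_sda[s, d, a] += 1
        self.n_sas[s, a, s_next] += 1

    def agent_policy(self, s, d, k):
        """Empirical policy and L1 radius for agent d at state s before episode k."""
        counts = self.n_sda[s, d]
        n, n_actions = counts.sum(), counts.shape[0]
        p_hat = counts / n if n > 0 else np.full(n_actions, 1.0 / n_actions)
        n_states, n_agents = self.n_sda.shape[0], self.n_sda.shape[1]
        # Illustrative union-bound constant; the paper's exact expression may differ.
        log_term = (np.log(max(k - 1, 1) * self.horizon * n_states * n_agents / self.delta)
                    + n_actions * np.log(2.0))
        return p_hat, np.sqrt(2.0 * log_term / max(1.0, n))

    def transition(self, s, a, k):
        """Empirical transition distribution and L1 radius for (s, a) before episode k."""
        counts = self.n_sas[s, a]
        n, n_states = counts.sum(), counts.shape[0]
        p_hat = counts / n if n > 0 else np.full(n_states, 1.0 / n_states)
        n_actions = self.n_sas.shape[1]
        log_term = (np.log(max(k - 1, 1) * self.horizon * n_states * n_actions / self.delta)
                    + n_states * np.log(2.0))
        return p_hat, np.sqrt(2.0 * log_term / max(1.0, n))
```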
164
+ Next, given the switching policy π and the transition dynamics PD and P, we define the value function as
165
+
166
$$V_{t\mid P_{\mathcal{D}},P}^{\pi}(s,d)=\mathbb{E}\bigg[\sum_{\tau=t}^{L}c_{e}(s_{\tau},a_{\tau})+c_{c}(d_{\tau})+c_{x}(d_{\tau},d_{\tau-1})\,\Big|\,s_{t}=s,d_{t-1}=d\bigg],\tag{10}$$
167
+
168
where the expectation is taken over all the trajectories induced by the switching policy given the agents' policies. Then, for each episode $k$, we define the optimal value function $v^k_t(s,d)$ as
170
+
171
$$v_{t}^{k}(s,d)=\min_{\pi}\min_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}(\delta)}\min_{P\in\mathcal{P}^{k}(\delta)}V_{t|P_{\mathcal{D}},P}^{\pi}(s,d).\tag{11}$$
172
+
173
Then, we are ready to use the following key theorem, which gives a solution to Eq. 7 (proven in Appendix A):

Theorem 1. *For any episode $k$, the optimal value function $v^k_t(s,d)$ satisfies the following recursive equation:*
175
+
176
$$v_{t}^{k}(s,d)=\min_{d_{t}\in\mathcal{D}}\Big[c_{d_{t}}(s,d)+\min_{p_{d_{t}}\in\mathcal{P}^{k}_{\cdot\mid d_{t},s,t}}\sum_{a\in\mathcal{A}}p_{d_{t}}(a\mid s,t)\times\Big(c_{e}(s,a)+\min_{p\in\mathcal{P}^{k}_{\cdot\mid s,a,t}}\mathbb{E}_{s^{\prime}\sim p(\cdot\mid s,a,t)}\big[v_{t+1}^{k}(s^{\prime},d_{t})\big]\Big)\Big],\tag{12}$$
177
+
178
*with $v^k_{L+1}(s,d) = 0$ for all $s \in \mathcal{S}$ and $d \in \mathcal{D}$. Moreover, if $d^*_t$ is the solution to the minimization problem on the RHS of the above recursive equation, then $\pi^k_t(s,d) = d^*_t$.*
185
The above result readily implies that, just before each episode $k$ starts, we can find the optimal switching policy $\pi^k = (\pi^k_1, \dots, \pi^k_L)$ using dynamic programming, starting with $v^k_{L+1}(s,d) = 0$ for all $s \in \mathcal{S}$ and $d \in \mathcal{D}$.
188
+
189
Moreover, similarly as in Strehl & Littman (2008), we can solve the inner minimization problems in Eq. 12 analytically using Lemma 7 in Appendix B. To this end, we first find the optimal $p(\cdot \mid s, a, t)$ for all $s \in \mathcal{S}$ and $a \in \mathcal{A}$
190
5This choice will result in a sequence of switching policies with desirable properties in terms of total regret.
191
+
192
ALGORITHM 1: UCRL2-MC
1: **Input:** cost functions $C_{\mathcal{D}}$ and $C_e$, confidence parameter $\delta$
2: $\{N_k, N'_k\} \leftarrow$ InitializeCounts()
3: **for** $k = 1, \dots, K$ **do**
4: $\quad \{\hat{p}^k_d\}, \hat{p}^k \leftarrow$ UpdateDistribution($\{N_k, N'_k\}$)
5: $\quad \mathcal{P}^k_{\mathcal{D}}, \mathcal{P}^k \leftarrow$ UpdateConfidenceSets($\{\hat{p}^k_d\}, \hat{p}^k, \delta$)
6: $\quad \pi^k \leftarrow$ GetOptimal($\mathcal{P}^k_{\mathcal{D}}, \mathcal{P}^k, C_{\mathcal{D}}, C_e$)
7: $\quad (s_1, d_0) \leftarrow$ InitializeConditions()
8: $\quad$ **for** $t = 1, \dots, L$ **do**
9: $\quad\quad d_t \leftarrow \pi^k_t(s_t, d_{t-1})$
10: $\quad\quad a_t \sim p_{d_t}(\cdot \mid s_t)$
11: $\quad\quad s_{t+1} \sim P(\cdot \mid s_t, a_t)$
12: $\quad\quad \{N_k, N'_k\} \leftarrow$ UpdateCounts($(s_t, d_t, a_t, s_{t+1}), \{N_k, N'_k\}$)
13: $\quad$ **end for**
14: **end for**
15: **Return** $\pi^K$
214
and then we find the optimal $p_{d_t}(\cdot \mid s, t)$ for all $d_t \in \mathcal{D}$. Algorithm 1 summarizes the whole procedure, which we refer to as UCRL2-MC.
216
+
217
Within the algorithm, the function GetOptimal(·) finds the optimal policy $\pi^k$ using dynamic programming, as described above, and UpdateDistribution(·) computes Eqs. 8 and 9. Moreover, it is important to notice that, in lines 8–10, the switching policy $\pi^k$ is actually deployed, the true agents take actions on the true environment and, as a result, action and state transition data from the true agents and the true environment is gathered.
218
+
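As a concrete illustration of GetOptimal(·), the sketch below performs the backward induction of Eq. 12 and solves each inner minimization over an $L^1$ ball by shifting probability mass towards the cheapest successor, in the spirit of Strehl & Littman (2008). It assumes the hypothetical ConfidenceSets helper sketched above; all names are ours and this is only a sketch, not the authors' implementation.

```python
import numpy as np

def optimistic_min(p_hat, beta, values):
    """Minimize sum_i p[i] * values[i] over distributions p with ||p - p_hat||_1 <= beta:
    add up to beta/2 mass to the cheapest outcome, then remove the surplus from the
    most expensive outcomes."""
    p = p_hat.copy()
    best = int(np.argmin(values))
    p[best] = min(1.0, p_hat[best] + beta / 2.0)
    surplus = p.sum() - 1.0
    for i in np.argsort(values)[::-1]:          # most expensive outcomes first
        if surplus <= 0:
            break
        if i == best:
            continue
        removed = min(p[i], surplus)
        p[i] -= removed
        surplus -= removed
    return float(p @ np.asarray(values, dtype=float))

def get_optimal(conf, k, cost_agent, cost_env, n_states, n_agents, n_actions, L):
    """Backward induction for Eq. 12 (a sketch of GetOptimal).

    cost_agent[d_prev, d] = c_c(d) + c_x(d, d_prev), cost_env[s, a] = c_e(s, a);
    conf exposes agent_policy(s, d, k) and transition(s, a, k) as sketched above.
    """
    v = np.zeros((L + 2, n_states, n_agents))            # v^k_{L+1}(s, d) = 0
    policy = np.zeros((L + 1, n_states, n_agents), dtype=int)
    for t in range(L, 0, -1):
        for s in range(n_states):
            for d_prev in range(n_agents):
                best_val, best_d = np.inf, 0
                for d in range(n_agents):
                    # Optimistic value of each action a if agent d takes control.
                    q = np.empty(n_actions)
                    for a in range(n_actions):
                        p_hat, beta = conf.transition(s, a, k)
                        q[a] = cost_env[s, a] + optimistic_min(p_hat, beta, v[t + 1, :, d])
                    # Optimistic expectation over agent d's (uncertain) action choice.
                    pd_hat, beta_d = conf.agent_policy(s, d, k)
                    val = cost_agent[d_prev, d] + optimistic_min(pd_hat, beta_d, q)
                    if val < best_val:
                        best_val, best_d = val, d
                v[t, s, d_prev] = best_val
                policy[t, s, d_prev] = best_d
    return policy, v
```

The switching policy of episode $k$ is then read off as $\pi^k_t(s, d) =$ policy[t, s, d].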
219
Next, the following theorem shows that the sequence of policies $\{\pi^k\}_{k=1}^{K}$ found by Algorithm 1 achieves a total regret that is sublinear with respect to the number of steps, as defined in Eq. 6 (proven in Appendix A):
222
Theorem 2. *Assume we use Algorithm 1 to find the switching policies $\pi^k$. Then, with probability at least $1 - \delta$, it holds that*
223
+
224
$$R(T)\leq\rho_{1}L{\sqrt{|{\mathcal{A}}||{\mathcal{S}}||{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}}+\rho_{2}L|{\mathcal{S}}|{\sqrt{|{\mathcal{A}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{A}}|T}{\delta}}\right)}}\tag{13}$$
226
+ where ρ1, ρ2 > 0 *are constants.*
227
The above regret bound suggests that our algorithm may achieve higher regret than standard UCRL2 (Jaksch et al., 2010), one of the most popular problem-agnostic RL algorithms. More specifically, one can readily show that, if we use UCRL2 to find the switching policies $\pi^k$ (refer to Appendix C), then, with probability at least $1 - \delta$, it holds that
228
+
229
$$R(T)\leq\rho L|{\mathcal{S}}|{\sqrt{|{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}}\tag{14}$$
233
where $\rho$ is a constant. Then, if we omit constant and logarithmic factors and assume the size of the team of agents is smaller than the size of the state space, *i.e.*, $|\mathcal{D}| < |\mathcal{S}|$, we have that, for UCRL2, the regret bound is $\tilde{O}(L|\mathcal{S}|\sqrt{|\mathcal{D}|T})$ while, for UCRL2-MC, it is $\tilde{O}(L|\mathcal{S}|\sqrt{|\mathcal{A}|T})$.
234
+
235
That being said, in practice, we have found that our algorithm achieves comparable regret with respect to UCRL2, as shown in Figure 4. In addition, after applying our algorithm on a specific team of agents and environment, we can reuse the confidence intervals over the transition probability $p(\cdot \mid s, a)$ we have learned to find the optimal switching policy for a different team of agents operating in a similar environment. In contrast, after applying UCRL2, we would only have a confidence interval over the conditional probability defined by Eq. 3, which would be of little use to find the optimal switching policy for a different team of agents. In the following section, we will build on this insight by considering several independent teams of agents operating in similar environments. We will demonstrate that, whenever we aim to find multiple sequences of
236
+
237
+ ![7_image_0.png](7_image_0.png)
238
+
239
+ Figure 2: Three examples of environment realizations with different initial traffic level γ0.
240
+ switching policies for these independent teams, a straightforward variation of UCRL2-MC greatly benefits from maintaining shared confidence bounds for the transition probabilities of the environments and enjoys a better regret bound than UCRL2.
241
+
242
+ Remarks. For ease of exposition, we have assumed that both the machine and human agents follow arbitrary Markov policies that do not change due to switching. However, our theoretical results still hold if we lift this assumption—we just need to define the agents' policies as pd(at|st, dt, dt−1) and construct separate confidence sets based on the switch values.
243
+
244
+ ## 5 Learning To Switch Across Multiple Teams Of Agents
245
+
246
+ In this section, rather than finding a sequence of switching policies for a single team of agents, we aim to find multiple sequences of switching policies across several independent teams operating in similar environments.
247
+
248
We will analyze our algorithm in scenarios where it can maintain shared confidence bounds for the transition probabilities of the environments across these independent teams. For instance, when the learning algorithm is deployed in centralized settings, it is possible to collect data across independent teams to maintain shared confidence intervals on the common parameters (i.e., the environment's transition probabilities in our problem setting). This setting fits a variety of real applications; most prominently, think of a car manufacturer continuously collecting driving data from millions of human drivers, wishing to learn a different switching policy for each driver to implement a personalized semi-autonomous driving system. As in the previous section, we look at the problem from the perspective of episodic learning and proceed as follows.
249
+
250
Given $N$ independent teams of agents $\{\mathcal{D}_i\}_{i=1}^{N}$, we consider $K$ independent subsequent episodes of length $L$ per team and denote the aggregate length of all of these episodes as $T = KL$. For each team of agents $\mathcal{D}_i$, every episode corresponds to a realization of a finite horizon 2-layer Markov decision process with state spaces $\mathcal{S} \times \mathcal{A}$ and $\mathcal{S} \times \mathcal{D}_i$, set of actions $\mathcal{D}_i$, true agent policies $P^*_{\mathcal{D}_i}$, true environment transition probability $P^*$, and immediate costs $C_{\mathcal{D}_i}$ and $C_e$. Here, note that all the teams operate in a similar environment, *i.e.*, $P^*$ is shared across teams, and, without loss of generality, they share the same costs. Then, our goal is to find the switching policies $\pi^k_i$ with desirable properties in terms of total regret $R(T, N)$, which is given by:
259
+
260
$$R(T,N)=\sum_{i=1}^{N}\sum_{k=1}^{K}\left[\mathbb{E}_{\tau\sim\pi_{i}^{k},P_{\mathcal{D}_{i}}^{*},P^{*}}\left[c(\tau\mid s_{1},d_{0})\right]-\mathbb{E}_{\tau\sim\pi_{i}^{*},P_{\mathcal{D}_{i}}^{*},P^{*}}\left[c(\tau\mid s_{1},d_{0})\right]\right]\tag{15}$$
261
+
262
where $\pi^*_i$ is the optimal switching policy for team $i$, under the true agent policies and environment transition probability.
265
+
266
To achieve our goal, we just run $N$ instances of UCRL2-MC (Algorithm 1), each with a different confidence set $\mathcal{P}^k_{\mathcal{D}_i}(\delta)$ for the agents' policies, as in the case of a single team of agents, but with a shared confidence set $\mathcal{P}^k(\delta)$ for the environment transition probability. Then, we have the following key corollary, which readily follows from Theorem 2:
270
+
271
+ ![8_image_0.png](8_image_0.png)
272
+
273
+ Figure 3: Trajectories induced by the switching policies found by Algorithm 1. The blue and orange segments indicate machine and human control, respectively. In both panels, we train Algorithm 1 within the same sequence of episodes, where the initial traffic level of each episode is sampled uniformly from
274
+ {no-car, light, heavy}, and show three episodes with different initial traffic levels. The results indicate that, in the latter episodes, the algorithm has learned to switch to the human driver in heavier traffic levels.
275
Corollary 3. *Assume we use $N$ instances of Algorithm 1 to find the switching policies $\pi^k_i$ using a shared confidence set for the environment transition probability. Then, with probability at least $1 - \delta$, it holds that*
276
+
277
+ $$R(T,N)\leq\rho_{1}NL\sqrt{|{\cal A}||{\cal S}||{\cal D}|T\log\left(\frac{|{\cal S}||{\cal D}|T}{\delta}\right)}+\rho_{2}L|{\cal S}|\sqrt{|{\cal A}|NT\log\left(\frac{|{\cal S}||{\cal A}|T}{\delta}\right)}\tag{16}$$
278
+
279
+ where ρ1, ρ2 > 0 *are constants.*
280
The above result suggests that our algorithm may achieve lower regret than UCRL2 in a scenario with multiple teams of agents operating in similar environments. This is because, under UCRL2, the confidence sets for the conditional probability defined by Eq. 3 cannot be shared across teams. More specifically, if we use $N$ instances of UCRL2 to find the switching policies $\pi^k_i$, then, with probability at least $1 - \delta$, it holds that
282
+
283
+ $$R(T,N)\leq\rho N L|{\mathcal{S}}|{\sqrt{|{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}}$$
284
+
285
where $\rho$ is a constant. Then, if we omit constant and logarithmic factors and assume $|\mathcal{D}_i| < |\mathcal{S}|$ for all $i \in [N]$, we have that, for UCRL2, the regret bound is $\tilde{O}(NL|\mathcal{S}|\sqrt{|\mathcal{D}|T})$ while, for UCRL2-MC, it is $\tilde{O}(L|\mathcal{S}|\sqrt{|\mathcal{A}|TN} + NL\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T})$. Importantly, in practice, we have found that UCRL2-MC does achieve a significantly lower regret than UCRL2, as shown in Figure 5.
286
+
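In a centralized deployment, sharing the confidence set for the transition probabilities simply means that all teams update one common count table, while each team keeps its own counts for its agents' policies. A minimal sketch of this bookkeeping, with illustrative sizes and names of our own choosing:

```python
import numpy as np

n_teams, n_states, n_actions, n_agents = 10, 100, 3, 2   # illustrative sizes

# Shared transition counts N'_k(s, a, s'): every team's data tightens the same
# confidence set P^k(delta) for the environment.
shared_transitions = np.zeros((n_states, n_actions, n_states))

# Per-team agent-policy counts N_k(s, d, a): one table (and confidence set) per team.
team_policy_counts = [np.zeros((n_states, n_agents, n_actions)) for _ in range(n_teams)]

def record(team, s, d, a, s_next):
    """Log one tuple (s_t, d_t, a_t, s_{t+1}) gathered while the given team is deployed."""
    team_policy_counts[team][s, d, a] += 1
    shared_transitions[s, a, s_next] += 1
```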
287
## 6 Experiments

## 6.1 Obstacle Avoidance
288
+
289
+ We perform a variety of simulations in obstacle avoidance, where teams of agents (drivers) consist of one human agent (H) and one machine agent (M), i.e., D = {H, M}. We consider a lane driving environment with three lanes and infinite rows, where the type of each individual cell (*i.e.*, road, car, stone or grass) in row r is sampled independently at random with a probability that depends on the traffic level γr, which can take three discrete values, γr ∈ {no-car, light, heavy}. The traffic level of each row γr+1 is sampled at random with a probability that depends on the traffic level of the previous row γr. The probability of each cell type based on traffic level, as well as the conditional distribution of traffic levels can be found in Appendix D.
290
+
291
+ At any given time t, we assume that whoever is in control—be it the machine or the human—can take three different actions A = {left, straight, right}. Action left steers the car to the left of the current lane,
292
+
293
+ ![9_image_1.png](9_image_1.png)
294
+
295
+ ![9_image_0.png](9_image_0.png)
296
+
297
Figure 4: Total regret of the trajectories induced by the switching policies found by Algorithm 1 and those induced by a variant of UCRL2 in comparison with the trajectories induced by a machine driver and a human driver in a setting with a single team of agents. In all panels, we run K = 20,000 episodes. For Algorithm 1 and the variant of UCRL2, the regret is sublinear with respect to the number of time steps whereas, for the machine and the human drivers, the regret is linear.
298
action right steers it to the right and action straight leaves the car in the current lane. If the car is already on the leftmost (rightmost) lane when taking action left (right), then the lane remains unchanged. Irrespective of the action taken, the car always moves forward. The goal of the cyberphysical system is to drive the car from an initial state at time t = 1 until the end of the episode t = L with the minimum total cost.
299
+
300
+ In our experiments, we set L = 10. Figure 2 shows three examples of environment realizations.
301
+
302
+ State space. To evaluate the switching policies found by Algorithm 1, we experiment with a *sensor-based* state space, where the state values are the type of the current cell and the three cells the car can move into in the next time step, as well as the current traffic level—we assume the agents (be it a human or a machine)
303
can measure the traffic level. For example, assume at time t the traffic is light, the car is on a road cell and, if it moves forward left, it hits a stone, if it moves forward straight, it hits a car, and, if it moves forward right, it drives over grass, then its state value is st = (light, road, stone, car, grass). Moreover, if the car is on the leftmost (rightmost) lane, then we set the value of the third (fifth) dimension in st to ∅. Therefore, under this state representation, the resulting MDP has $\sim 3 \times 5^4$ states.
304
+
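A minimal sketch of how this sensor-based state space can be enumerated; the traffic and cell labels follow the text, while the encoding and names are our own.

```python
from itertools import product

TRAFFIC = ["no-car", "light", "heavy"]
CELL = ["road", "car", "stone", "grass", None]   # None stands for a blocked (empty) cell

# Traffic level plus the current cell and the three reachable cells: 3 * 5**4 = 1875 states.
STATES = list(product(TRAFFIC, CELL, CELL, CELL, CELL))
STATE_INDEX = {s: i for i, s in enumerate(STATES)}

s_example = ("light", "road", "stone", "car", "grass")   # the example state from the text
print(len(STATES), STATE_INDEX[s_example])
```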
305
+ Cost and human/machine policies. We consider a state-dependent environment cost ce(st, at) = ce(st)
306
+ that depends on the type of the cell the car is on at state st, *i.e.*, ce(st) = 0 if the type of the current cell is road, ce(st) = 2 if it is grass, ce(st) = 4 if it is stone and ce(st) = 10 if it is car. Moreover, in all simulations, we use a machine policy that has been trained using a standard RL algorithm on environment realizations with γ0 = no-car. In other words, the machine policy is trained to perform well under a low traffic level.
307
+
308
Moreover, we consider that all the humans pick which action to take (left, straight or right) according to a noisy estimate of the environment cost of the three cells that the car can move into in the next time step.
309
+
310
+ More specifically, each human model H computes a noisy estimate of the cost cˆe(s) = ce(s) + s of each of the three cells the car can move into, where s ∼ N(0, σH), and picks the action that moves the car to the cell with the lowest noisy estimate6. As a result, human drivers are generally more reliable than the machine under high traffic levels, however, the machine is more reliable than humans under low traffic level, where its policy is near-optimal (See Appendix E for a comparison of the human and machine performance). Finally, we consider that only the car driven by our system moves in the environment.
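The simulated human driver described above can be sketched in a few lines of Python; the cell costs follow the values given earlier, whereas the function name and the convention of treating a blocked cell as unavailable are our own assumptions.

```python
import numpy as np

CELL_COST = {"road": 0, "grass": 2, "stone": 4, "car": 10}   # environment cost c_e per cell type

def human_action(front_cells, sigma_h, rng=np.random.default_rng()):
    """Noisy-cost human model: score the reachable cells (left, straight, right)
    with c_e(cell) + N(0, sigma_h) and pick the cheapest one."""
    noisy = [np.inf if cell is None else CELL_COST[cell] + rng.normal(0.0, sigma_h)
             for cell in front_cells]
    return int(np.argmin(noisy))   # 0 = left, 1 = straight, 2 = right
```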
311
+
312
+ ## 6.1.1 Results
313
+
314
+ First, we focus on a single team of one machine M and one human model H, with σH = 2, and use Algorithm 1 to find a sequence of switching policies with sublinear regret. At the beginning of each episode, the initial traffic level γ0 is sampled uniformly at random.
315
+
316
+ 6Note that, in our theoretical results, we have no assumption other than the Markov property regarding the human policy.
317
+
318
+ ![10_image_0.png](10_image_0.png)
319
+
320
+ ![10_image_1.png](10_image_1.png)
321
+
322
+ Figure 5: Total regret of the trajectories induced by the switching policies found by N instances of Algorithm 1 and those induced by N instances of a variant of UCRL2 in a setting with N team of agents. In both panels, each instance of Algorithm 1 shares the same confidence set for the environment transition probabilities and we run K = 5000 episodes. The sequence of policies found by Algorithm 1 outperform those found by the variant of UCRL2 in terms of total regret, in agreement with Corollary 3.
323
We look at the trajectories induced by the switching policies found by our algorithm across different episodes for different values of the switching cost $c_x$ and the cost of human control $c_c(H)$⁷. Figure 3 summarizes the results, which show that, in the latter episodes, the algorithm has learned to rely on the machine (blue segments) whenever the traffic level is low and switches to the human driver when the traffic level increases. Moreover, whenever the amount of human control and the number of switches are not penalized (*i.e.*, $c_x = c_c(H) = 0$), the algorithm switches to the human more frequently whenever the traffic level is high to reduce the environment cost. See Appendix F for a comparison of the human control rate in environments with different initial traffic levels.
326
+
327
+ In addition, we compare the performance achieved by Algorithm 1 with three baselines: (i) a variant of UCRL2 (Jaksch et al., 2010) adapted to our finite horizon setting (see Appendix C), (ii) a human agent, and
328
+ (iii) a machine agent. As a measure of performance, we use the total regret, as defined in Eq. 6. Figure 4 summarizes the results for two different values of switching cost cx and cost of human control cc(H). The results show that both our algorithm and UCRL2 achieve sublinear regret with respect to the number of time steps and their performance is comparable in agreement with Theorem 2. In contrast, whenever the human or the machine drive on their own, they suffer linear regret, due to a lack of exploration.
329
+
330
Next, we consider N = 10 independent teams of agents, $\{\mathcal{D}_i\}_{i=1}^{N}$, operating in a similar lane driving environment. Each team $\mathcal{D}_i$ is composed of a different human model $H_i$, with $\sigma_{H_i}$ sampled uniformly from $(0, 4)$, and the same machine driver M. Then, to find a sequence of switching policies for each of the teams, we run N instances of Algorithm 1 with a shared confidence set for the environment transition probabilities.
334
+
335
+ We compare the performance of our algorithm against the same variant of UCRL2 used in the experiments with a single team of agents in terms of the total regret defined in Eq. 15. Here, note that the variant of UCRL2 does not maintain a shared confidence set for the environment transition probabilities across teams but instead creates a confidence set for the conditional probability defined by Eq. 3 for each team. Figure 5 summarizes the results for a sequence for different values of the switching cost cx and cost of human control cc(H), which shows that, in agreement with Corollary 3, our method outperforms UCRL2 significantly.
336
+
337
+ ## 6.2 Riverswim
338
+
339
In addition to the obstacle avoidance task, we consider the standard *RiverSwim* task (Strehl & Littman, 2008). The MDP states and transition probabilities are shown in Figure 6. The cost of taking an action in states s2 to s5 equals 1, while it equals 0.995 and 0 in states s1 and s6, respectively. Each episode ends after L = 20
340
+
341
+ 7Here, we assume the cost of machine control cc(M) = 0.
342
+
343
+ ![11_image_0.png](11_image_0.png)
344
+
345
+ Figure 6: RiverSwim. Continuous (dashed) arrows show the transitions after taking actions right (left).
346
+
347
+ The optimal policy is to always take action right.
348
+
349
+ ![11_image_2.png](11_image_2.png)
350
+
351
+ ![11_image_1.png](11_image_1.png)
352
+
353
Figure 7: (a) Ratio of the total regret of UCRL2-MC to that of UCRL2 for different numbers of teams. (b) Total regret of the trajectories induced by the switching policies found by UCRL2-MC and those induced by UCRL2 in a setting with N = 100 teams of agents.
354
+ steps. We set the switching cost and cost of agent control to zero for all the simulations in this section, i.e.,
355
+ cx(·, ·) = cc(·) = 0. The set D consists of agents that choose action right with some probability value p, which may differ for different agents. In the following part, we investigate the effect of increasing the number of teams on the regret bound in the multiple teams of agents setting. See Appendix G for more simulations to study the impact of action size and number of agents in each team on the total regret.
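The agents in $\mathcal{D}$ can be modeled with a few lines of Python; the factory below is our own illustrative construction, not the paper's code.

```python
import numpy as np

def riverswim_agent(p_right, rng=np.random.default_rng()):
    """State-independent agent that takes action 'right' with probability p_right
    and 'left' otherwise."""
    return lambda state: "right" if rng.random() < p_right else "left"

# e.g., a team of two complementary agents, as in the experiments reported below
team = [riverswim_agent(0.8), riverswim_agent(0.2)]
```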
356
+
357
+ ## 6.2.1 Results
358
+
359
+ We consider N independent teams of agents, each consisting of two agents with the probability p and 1 − p of choosing action right, where p is chosen uniformly at random for each team. We run the simulations for N = {3, 4, *· · ·* , 10} teams of agents. For each N, we run both UCRL2-MC and UCRL2 for 20,000 episodes and repeat each experiment 5 times. Figure 7 (a) summarizes our results, showing the advantage of the shared confidence bounds on the environment transition probabilities in our algorithm against its problem-agnostic version. To better illustrate the performance of UCRL2-MC, we also run an experiment with N = 100 teams of agents for 10,000 episodes and compare the total regret of our algorithm to UCRL2. Figure 7 (b) shows that our algorithm significantly outperforms UCRL2.
360
+
361
+ ## 7 Conclusions And Future Work
362
+
363
We have formally defined the problem of learning to switch control among agents in a team via a 2-layer Markov decision process and then developed UCRL2-MC, an online learning algorithm with desirable provable guarantees. Moreover, we have performed a variety of simulation experiments on the standard RiverSwim task and obstacle avoidance to illustrate our theoretical results and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms. Our work opens up many interesting avenues for future work. For example, we have assumed that the agents' policies are fixed. However, there are reasons to believe that simultaneously optimizing the agents' policies and the switching policy may lead to superior performance (De et al., 2020; 2021; Wilder et al., 2020; Wu et al., 2020). In our work, we have assumed that the state space is discrete and the horizon is finite. It would be very interesting to lift these assumptions and develop approximate value iteration methods to solve the learning to switch problem. Finally, it would be interesting to evaluate our algorithm using real human agents in a variety of tasks.
364
+
365
+ Acknowledgments. Gomez-Rodriguez acknowledges support from the European Research Council (ERC)
366
+ under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 945719).
367
+
368
+ ## References
369
+
370
+ Samuel Barrett and Peter Stone. An analysis framework for ad hoc teamwork tasks. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 357–364, 2012.
371
+
372
+ P. Bartlett and M. Wegkamp. Classification with a reject option using a hinge loss. *JMLR*, 2008.
373
+
374
+ K. Brookhuis, D. De Waard, and W. Janssen. Behavioural impacts of advanced driver assistance systems–an overview. *European Journal of Transport and Infrastructure Research*, 1(3), 2001.
375
+
376
+ Daniel S Brown and Scott Niekum. Machine teaching for inverse reinforcement learning: Algorithms and applications. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 7749–7758, 2019.
377
+
378
+ C. Cortes, G. DeSalvo, and M. Mohri. Learning with rejection. In ALT, 2016. Mary Czerwinski, Edward Cutrell, and Eric Horvitz. Instant messaging and interruption: Influence of task type on performance. In *OZCHI 2000 conference proceedings*, volume 356, pp. 361–367, 2000.
379
+
380
+ Nathaniel D. Daw and Peter Dayan. The algorithmic anatomy of model-based evaluation. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1655):20130478, 2014.
381
+
382
+ A. De, P. Koley, N. Ganguly, and M. Gomez-Rodriguez. Regression under human assistance. In *AAAI*, 2020. Abir De, Nastaran Okati, Ali Zarezade, and Manuel Gomez-Rodriguez. Classification under human assistance.
383
+
384
+ In *AAAI*, 2021.
385
+
386
+ European Parliament. Regulation (EC) No 561/2006. *http://data.europa.eu/eli/reg/2006/561/2015-03-02*,
387
+ 2006.
388
+
389
+ R. Everett and S. Roberts. Learning against non-stationary agents with opponent modelling and deep reinforcement learning. In *2018 AAAI Spring Symposium Series*, 2018.
390
+
391
+ Y. Geifman and R. El-Yaniv. Selectivenet: A deep neural network with an integrated reject option. *arXiv* preprint arXiv:1901.09192, 2019.
392
+
393
+ Y. Geifman, G. Uziel, and R. El-Yaniv. Bias-reduced uncertainty estimation for deep neural classifiers. In ICLR, 2018.
394
+
395
A. Ghosh, S. Tschiatschek, H. Mahdavi, and A. Singla. Towards deployment of robust cooperative ai agents: An algorithmic framework for learning adaptive policies. In *AAMAS*, 2020.

Aditya Gopalan and Shie Mannor. Thompson sampling for learning parameterized markov decision processes. In *Conference on Learning Theory*, pp. 861–898, 2015.

A. Grover, M. Al-Shedivat, J. Gupta, Y. Burda, and H. Edwards. Learning policy representations in multiagent systems. In *ICML*, 2018.

D. Hadfield-Menell, S. Russell, P. Abbeel, and A. Dragan. Cooperative inverse reinforcement learning. In *NIPS*, 2016.

L. Haug, S. Tschiatschek, and A. Singla. Teaching inverse reinforcement learners via features and demonstrations. In *NeurIPS*, 2018.

Eric Horvitz and Johnson Apacible. Learning and reasoning about interruption. In *Proceedings of the 5th International Conference on Multimodal Interfaces*, pp. 20–27, 2003.

Shamsi T Iqbal and Brian P Bailey. Understanding and developing models for detecting and differentiating breakpoints during interactive tasks. In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems*, pp. 697–706, 2007.

Alexis Jacq, Johan Ferret, Olivier Pietquin, and Matthieu Geist. Lazy-mdps: Towards interpretable reinforcement learning by learning when to act. In *AAMAS*, 2022.

T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. *Journal of Machine Learning Research*, 2010.

Christian P Janssen, Shamsi T Iqbal, Andrew L Kun, and Stella F Donker. Interrupted by my car? Implications of interruption and interleaving research for automated vehicles. *International Journal of Human-Computer Studies*, 130:221–233, 2019.

Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, and Adish Singla. Interactive teaching algorithms for inverse reinforcement learning. In *IJCAI*, 2019.

Kyle Kotowick and Julie Shah. Modality switching for mitigation of sensory adaptation and habituation in personal navigation systems. In *23rd International Conference on Intelligent User Interfaces*, pp. 115–127, 2018.

Z. Liu, Z. Wang, P. Liang, R. Salakhutdinov, L. Morency, and M. Ueda. Deep gamblers: Learning to abstain with portfolio theory. In *NeurIPS*, 2019.

C. Macadam. Understanding and modeling the human driver. *Vehicle System Dynamics*, 40(1-3):101–134, 2003.

O. Macindoe, L. Kaelbling, and T. Lozano-Pérez. Pomcop: Belief space planning for sidekicks in cooperative games. In *AIIDE*, 2012.

Catharine L. R. McGhan, Ali Nasir, and Ella M. Atkins. Human intent prediction using markov decision processes. *Journal of Aerospace Information Systems*, 12(5):393–397, 2015.

V. Mnih et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529, 2015.

Salama A Mostafa, Mohd Sharifuddin Ahmad, and Aida Mustapha. Adjustable autonomy: a systematic literature review. *Artificial Intelligence Review*, 51(2):149–186, 2019.

Hussein Mozannar and David Sontag. Consistent estimators for learning to defer to an expert. In *ICML*, 2020.

S. Nikolaidis, R. Ramakrishnan, K. Gu, and J. Shah. Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In *HRI*, 2015.

S. Nikolaidis, J. Forlizzi, D. Hsu, J. Shah, and S. Srinivasa. Mathematical models of adaptation in human-robot collaboration. *arXiv preprint arXiv:1707.02586*, 2017.

Ian Osband and Benjamin Van Roy. Near-optimal reinforcement learning in factored mdps. In *Advances in Neural Information Processing Systems*, pp. 604–612, 2014.

Ian Osband, Daniel Russo, and Benjamin Van Roy. (more) efficient reinforcement learning via posterior sampling. In *Advances in Neural Information Processing Systems*, pp. 3003–3011, 2013.

Goran Radanovic, Rati Devidze, David C. Parkes, and Adish Singla. Learning to collaborate in markov decision processes. In *ICML*, 2019.

M. Raghu, K. Blumer, G. Corrado, J. Kleinberg, Z. Obermeyer, and S. Mullainathan. The algorithmic automation problem: Prediction, triage, and human effort. *arXiv preprint arXiv:1903.12220*, 2019a.

M. Raghu, K. Blumer, R. Sayres, Z. Obermeyer, B. Kleinberg, S. Mullainathan, and J. Kleinberg. Direct uncertainty prediction for medical second opinions. In *ICML*, 2019b.

H. Ramaswamy, A. Tewari, and S. Agarwal. Consistent algorithms for multiclass classification with an abstain option. *Electronic J. of Statistics*, 2018.

Siddharth Reddy, Anca D Dragan, and Sergey Levine. Shared autonomy via deep reinforcement learning. *arXiv preprint arXiv:1802.01744*, 2018.

Shubhanshu Shekhar, Mohammad Ghavamzadeh, and Tara Javidi. Active learning for classification with abstention. *IEEE Journal on Selected Areas in Information Theory*, 2(2):705–719, 2021.

D. Silver et al. Mastering the game of go with deep neural networks and tree search. *Nature*, 529(7587):484, 2016.

D. Silver et al. Mastering the game of go without human knowledge. *Nature*, 550(7676):354, 2017.

Peter Stone, Gal A Kaminka, Sarit Kraus, and Jeffrey S Rosenschein. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In *Twenty-Fourth AAAI Conference on Artificial Intelligence*, 2010.

A. Strehl and M. Littman. An analysis of model-based interval estimation for markov decision processes. *Journal of Computer and System Sciences*, 74(8):1309–1331, 2008.

DJ Strouse, Kevin McKee, Matt Botvinick, Edward Hughes, and Richard Everett. Collaborating with humans without human data. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. *Artificial Intelligence*, 112(1-2):181–211, 1999.

Matthew E Taylor, Halit Bener Suay, and Sonia Chernova. Integrating reinforcement learning with human demonstrations of varying ability. In *The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 2*, pp. 617–624. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof. Combating label noise in deep learning using abstention. *arXiv preprint arXiv:1905.10964*, 2019.

Lisa Torrey and Matthew Taylor. Teaching on a budget: Agents advising agents in reinforcement learning. In *Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems*, pp. 1053–1060, 2013.

James T. Townsend, Kam M. Silva, Jesse Spencer-Smith, and Michael J. Wenger. Exploring the relations between categorization and decision making with regard to realistic face stimuli. *Pragmatics & Cognition*, 8(1):83–105, 2000.

S. Tschiatschek, A. Ghosh, L. Haug, R. Devidze, and A. Singla. Learner-aware teaching: Inverse reinforcement learning with preferences and constraints. In *NeurIPS*, 2019.

O. Vinyals et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, pp. 1–5, 2019.

Thomas J Walsh, Daniel K Hewlett, and Clayton T Morrison. Blending autonomous exploration and apprenticeship learning. In *Advances in Neural Information Processing Systems*, pp. 2258–2266, 2011.

Bryan Wilder, Eric Horvitz, and Ece Kamar. Learning to complement humans. In *IJCAI*, 2020.

H. Wilson and P. Daugherty. Collaborative intelligence: humans and ai are joining forces. *Harvard Business Review*, 2018.

Bohan Wu, Jayesh K Gupta, and Mykel Kochenderfer. Model primitives for hierarchical lifelong reinforcement learning. *Autonomous Agents and Multi-Agent Systems*, 34(1):1–38, 2020.

Y. Zheng, Z. Meng, J. Hao, Z. Zhang, T. Yang, and C. Fan. A deep bayesian policy reuse approach against non-stationary agents. In *NeurIPS*, 2018.
## A Proofs

## A.1 Proof Of Theorem 1

We first define $\mathcal{P}^{k}_{\mathcal{D}\,|\,\cdot,t^{+}} := \times_{s\in\mathcal{S},d\in\mathcal{D},t'\in\{t,\dots,L\}}\mathcal{P}^{k}_{\cdot\,|\,d,s,t'}$, $\mathcal{P}^{k}_{\cdot\,|\,\cdot,t^{+}} := \times_{s\in\mathcal{S},a\in\mathcal{A},t'\in\{t,\dots,L\}}\mathcal{P}^{k}_{\cdot\,|\,s,a,t'}$ and $\pi_{t^{+}} = \{\pi_{t},\dots,\pi_{L}\}$. Next, we derive a lower bound on the optimistic value function $v^{k}_{t}(s,d)$:

$$\begin{aligned}
v^{k}_{t}(s,d)&=\min_{\pi}\min_{P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}}\min_{P\in\mathcal{P}^{k}}V^{\pi}_{t|P_{\mathcal{D}},P}(s,d)=\min_{\pi_{t^{+}}}\min_{P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}}\min_{P\in\mathcal{P}^{k}}V^{\pi}_{t|P_{\mathcal{D}},P}(s,d)\\
&\stackrel{(i)}{=}\min_{\pi_{t}(s,d)}\ \min_{p_{\pi_{t}(s,d)}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|\pi_{t}(s,d),s,t}}\ \min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^{k}_{\cdot|s,\cdot,t}}\ \min_{\pi_{(t+1)^{+}},\,P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}},\,P\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}}}\\
&\qquad\Big[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^{\pi}_{t+1|P_{\mathcal{D}},P}(s',\pi_{t}(s,d))\Big)\Big]\\
&\stackrel{(ii)}{\geq}\min_{\pi_{t}(s,d)}\ \min_{p_{\pi_{t}(s,d)}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|\pi_{t}(s,d),s,t}}\ \min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^{k}_{\cdot|s,\cdot,t}}\Big[c_{\pi_{t}(s,d)}(s,d)\\
&\qquad+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}\min_{\pi_{(t+1)^{+}},\,P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}},\,P\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}}}V^{\pi}_{t+1|P_{\mathcal{D}},P}(s',\pi_{t}(s,d))\Big)\Big]\\
&=\min_{d_{t}}\Big[c_{d_{t}}(s,d)+\min_{p_{d_{t}}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|d_{t},s,t}}\sum_{a\in\mathcal{A}}p_{d_{t}}(a|s,t)\cdot\Big(c_{e}(s,a)+\min_{p(\cdot|s,a,t)\in\mathcal{P}^{k}_{\cdot|s,a,t}}\mathbb{E}_{s'\sim p(\cdot|s,a,t)}v^{k}_{t+1}(s',d_{t})\Big)\Big],
\end{aligned}$$

where (i) follows from Lemma 8 and (ii) follows from the fact that $\min_{a}\mathbb{E}[X(a)]\geq\mathbb{E}[\min_{a}X(a)]$. Next, we provide an upper bound on the optimistic value function $v^{k}_{t}(s,d)$:

$$\begin{aligned}
v^{k}_{t}(s,d)&=\min_{\pi}\min_{P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}}\min_{P\in\mathcal{P}^{k}}V^{\pi}_{t|P_{\mathcal{D}},P}(s,d)\\
&\stackrel{(i)}{=}\min_{\pi_{t}(s,d)}\ \min_{p_{\pi_{t}(s,d)}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|\pi_{t}(s,d),s,t}}\ \min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^{k}_{\cdot|s,\cdot,t}}\ \min_{\pi_{(t+1)^{+}},\,P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}},\,P\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}}}\\
&\qquad\Big[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^{\pi}_{t+1|P_{\mathcal{D}},P}(s',\pi_{t}(s,d))\Big)\Big]\\
&\stackrel{(ii)}{\leq}\min_{\pi_{t}(s,d)}\ \min_{p_{\pi_{t}(s,d)}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|\pi_{t}(s,d),s,t}}\ \min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^{k}_{\cdot|s,\cdot,t}}\Big[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^{\pi^{*}}_{t+1|P^{*}_{\mathcal{D}},P^{*}}(s',\pi_{t}(s,d))\Big)\Big]\\
&\stackrel{(iii)}{=}\min_{\pi_{t}(s,d)}\ \min_{p_{\pi_{t}(s,d)}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|\pi_{t}(s,d),s,t}}\ \min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^{k}_{\cdot|s,\cdot,t}}\Big[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}v^{k}_{t+1}(s',\pi_{t}(s,d))\Big)\Big]\\
&=\min_{d_{t}}\Big[c_{d_{t}}(s,d)+\min_{p_{d_{t}}(\cdot|s,t)\in\mathcal{P}^{k}_{\cdot|d_{t},s,t}}\sum_{a\in\mathcal{A}}p_{d_{t}}(a|s,t)\cdot\Big(c_{e}(s,a)+\min_{p(\cdot|s,a,t)\in\mathcal{P}^{k}_{\cdot|s,a,t}}\mathbb{E}_{s'\sim p(\cdot|s,a,t)}v^{k}_{t+1}(s',d_{t})\Big)\Big].
\end{aligned}$$

Here, (i) follows from Lemma 8, and (ii) follows from the fact that

$$\begin{aligned}
&\min_{\pi_{(t+1)^{+}},\,P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}},\,P\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}}}\Big[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^{\pi}_{t+1|P_{\mathcal{D}},P}(s',\pi_{t}(s,d))\Big)\Big]\\
&\qquad\leq c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\cdot|s,t)}\Big(c_{e}(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^{\pi}_{t+1|P_{\mathcal{D}},P}(s',\pi_{t}(s,d))\Big)\quad\forall\pi,\ P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}},\ P\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}},
\end{aligned}\tag{17}$$

and if we set $\pi_{(t+1)^{+}}=\{\pi^{*}_{t+1},\dots,\pi^{*}_{L}\}$, $P_{\mathcal{D}}=P^{*}_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}}$ and $P=P^{*}\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}}$, where

$$\{\pi^{*}_{t+1},\dots,\pi^{*}_{L}\},\,P^{*}_{\mathcal{D}},\,P^{*}=\operatorname*{argmin}_{\pi_{(t+1)^{+}},\,P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}|\cdot,(t+1)^{+}},\,P\in\mathcal{P}^{k}_{\cdot|\cdot,(t+1)^{+}}}V^{\pi}_{t+1|P_{\mathcal{D}},P}(s',\pi_{t}(s,d)),\tag{18}$$

then equality (iii) holds. Since the upper and lower bounds are the same, we can conclude that the optimistic value function satisfies Eq. 12, which completes the proof.

## A.2 Proof Of Theorem 2

In this proof, we assume that $c_{e}(s,a)+c_{c}(d)+c_{x}(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$. Throughout the proof, we will omit the subscripts $P^{*}_{\mathcal{D}},P^{*}$ in $V_{t\,|\,P^{*}_{\mathcal{D}},P^{*}}$ and write $V_{t}$ instead in the case of the true agent policies $P^{*}_{\mathcal{D}}$ and the true transition probabilities $P^{*}$. Then, we define the following quantities:

$$P^{k}_{\mathcal{D}}=\operatorname*{argmin}_{P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}(\delta)}\min_{P\in\mathcal{P}^{k}(\delta)}V^{\pi^{k}}_{1|P_{\mathcal{D}},P}(s_{1},d_{0}),\tag{19}$$

$$P^{k}=\operatorname*{argmin}_{P\in\mathcal{P}^{k}(\delta)}V^{\pi^{k}}_{1|P^{k}_{\mathcal{D}},P}(s_{1},d_{0}),\tag{20}$$

$$\Delta_{k}=V^{\pi^{k}}_{1}(s_{1},d_{0})-V^{\pi^{*}}_{1}(s_{1},d_{0}),\tag{21}$$

where, recall from Eq. 7 that, $\pi^{k}=\operatorname*{argmin}_{\pi}\min_{P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}}\min_{P\in\mathcal{P}^{k}}V^{\pi}_{1|P_{\mathcal{D}},P}(s_{1},d_{0})$; and $\Delta_{k}$ denotes the regret of episode $k$. Hence, we have

$$R(T)=\sum_{k=1}^{K}\Delta_{k}=\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})+\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k}).\tag{22}$$

Next, we split the analysis into two parts. We first bound $\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})$ and then bound $\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})$.

- *Computing the bound on* $\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})$

First, we note that

$$\Delta_{k}=V_{1}^{\pi^{k}}(s_{1},d_{0})-V_{1}^{\pi^{*}}(s_{1},d_{0})\leq V_{1}^{\pi^{k}}(s_{1},d_{0})-V_{1|P_{\mathcal{D}}^{k},P^{k}}^{\pi^{k}}(s_{1},d_{0}).\tag{23}$$

This is because

$$V_{1|P_{\mathcal{D}}^{k},P^{k}}^{\pi^{k}}(s_{1},d_{0})\stackrel{(i)}{=}\min_{\pi}\min_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}}\min_{P\in\mathcal{P}^{k}}V_{1|P_{\mathcal{D}},P}^{\pi}(s_{1},d_{0})\stackrel{(ii)}{\leq}\min_{\pi}V_{1|P_{\mathcal{D}}^{*},P^{*}}^{\pi}(s_{1},d_{0})=V_{1}^{\pi^{*}}(s_{1},d_{0}),\tag{24}$$

where (i) follows from Eqs. 19 and 20, and (ii) holds because the true transition probabilities satisfy $P^{*}_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}$ and $P^{*}\in\mathcal{P}^{k}$. Next, we use Lemma 4 (Appendix B) to bound $\sum_{k=1}^{K}\big(V_{1}^{\pi^{k}}(s_{1},d_{0})-V_{1|P_{\mathcal{D}}^{k},P^{k}}^{\pi^{k}}(s_{1},d_{0})\big)$:

$$\sum_{k=1}^{K}\Big(V_{1}^{\pi^{k}}(s_{1},d_{0})-V_{1|P_{\mathcal{D}}^{k},P^{k}}^{\pi^{k}}(s_{1},d_{0})\Big)\leq\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right].\tag{25}$$

Since by assumption $c_{e}(s,a)+c_{c}(d)+c_{x}(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$, the worst-case regret is bounded by $T$. Therefore, we have that:

$$\begin{aligned}
\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})
&\leq\min\left\{T,\ \sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]+\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]\right\}\\
&\leq\min\left\{T,\ \sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]\right\}+\min\left\{T,\ \sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]\right\},
\end{aligned}\tag{26}$$

where the last inequality follows from Lemma 9. Now, we aim to bound the first term on the RHS of the above inequality:

$$\begin{aligned}
\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]
&\stackrel{(i)}{=}L\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{L}\min\left\{1,\sqrt{\frac{2\log\big(\frac{((k-1)L)^{7}|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\big)}{\max\{1,N_{k}(s_{t},d_{t})\}}}\right\}\,\Bigg|\,s_{1},d_{0}\right]\\
&\stackrel{(ii)}{\leq}L\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{L}\min\left\{1,\sqrt{\frac{2\log\big(\frac{(KL)^{7}|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\big)}{\max\{1,N_{k}(s_{t},d_{t})\}}}\right\}\,\Bigg|\,s_{1},d_{0}\right]\\
&\stackrel{(iii)}{\leq}2\sqrt{2}\,L\sqrt{2\log\Big(\tfrac{(KL)^{7}|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\Big)}\sqrt{|\mathcal{S}||\mathcal{D}|KL}+2L^{2}|\mathcal{S}||\mathcal{D}|\\
&\leq2\sqrt{2}\,L\sqrt{14|\mathcal{A}|\log\Big(\tfrac{KL|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}\sqrt{|\mathcal{S}||\mathcal{D}|KL}+2L^{2}|\mathcal{S}||\mathcal{D}|\tag{27}\\
&=\sqrt{112}\,L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|KL\log\Big(\tfrac{KL|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}+2L^{2}|\mathcal{S}||\mathcal{D}|,\tag{28}
\end{aligned}$$

where (i) follows by replacing $\beta^{k}_{\mathcal{D}}(s_{t},d_{t},\delta)$ with its definition, (ii) follows from the fact that $(k-1)L\leq KL$, and (iii) follows from Lemma 5, in which we put $\mathcal{W}:=\mathcal{S}\times\mathcal{D}$, $c:=\sqrt{2\log\big(\tfrac{(KL)^{7}|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\big)}$ and $T_{k}=(w_{k,1},\dots,w_{k,L}):=((s_{1},d_{1}),\dots,(s_{L},d_{L}))$. Now, due to Eq. 28, we have the following:

$$\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]\right\}\leq\min\left\{T,\ \sqrt{112}\,L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}+2L^{2}|\mathcal{S}||\mathcal{D}|\right\}.\tag{29}$$

Now, if $T\leq2L^{2}|\mathcal{S}||\mathcal{A}||\mathcal{D}|\log\big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\big)$, then

$$T^{2}\leq2L^{2}|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)\implies T\leq\sqrt{2}\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)},$$

and if $T>2L^{2}|\mathcal{S}||\mathcal{A}||\mathcal{D}|\log\big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\big)$, then

$$2L^{2}|\mathcal{S}||\mathcal{D}|<\frac{\sqrt{2L^{2}|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\big)}}{|\mathcal{A}|\log\big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\big)}\leq\sqrt{2}\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}.\tag{30}$$

Thus, the minimum in Eq. 29 is less than

$$(\sqrt{2}+\sqrt{112})\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\Big(\tfrac{|\mathcal{S}||\mathcal{D}|T}{\delta}\Big)}<12L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\Big(\tfrac{|\mathcal{S}||\mathcal{D}|T}{\delta}\Big)}.\tag{31}$$

A similar analysis can be done for the second term on the RHS of Eq. 26, which shows that

$$\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\Big|\,s_{1},d_{0}\right]\right\}\leq12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{A}|}{\delta}\Big)}.\tag{32}$$

Combining Eqs. 26, 31 and 32, we can bound the first term of the total regret as follows:

$$\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})\leq12L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{A}|}{\delta}\Big)}.\tag{33}$$

- *Computing the bound on* $\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})$

Here, we use a similar approach to Jaksch et al. (2010). Note that

$$\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})=\sum_{k=1}^{\lfloor\sqrt{K/L}\rfloor}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})+\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k}).\tag{34}$$

Now, our goal is to show that the second term on the RHS of the above equation vanishes with high probability. If we succeed, then, with high probability, $\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})$ equals the first term on the RHS, and we will be done because

$$\sum_{k=1}^{\lfloor\sqrt{K/L}\rfloor}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})\leq\sum_{k=1}^{\lfloor\sqrt{K/L}\rfloor}\Delta_{k}\stackrel{(i)}{\leq}\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor L\leq\sqrt{KL},\tag{35}$$

where (i) follows from the fact that $\Delta_{k}\leq L$ since we assumed the cost of each step $c_{e}(s,a)+c_{c}(d)+c_{x}(d,d')\leq1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$, and $d,d'\in\mathcal{D}$.

To prove that $\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})=0$ with high probability, we proceed as follows. By applying Lemma 6 to $P^{*}_{\mathcal{D}}$ and $P^{*}$, we have

$$\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k})\leq\frac{\delta}{2t_{k}^{6}},\qquad\Pr(P^{*}\not\in\mathcal{P}^{k})\leq\frac{\delta}{2t_{k}^{6}}.\tag{36}$$

Thus,

$$\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})\leq\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k})+\Pr(P^{*}\not\in\mathcal{P}^{k})\leq\frac{\delta}{t_{k}^{6}},\tag{37}$$

where $t_{k}=(k-1)L$ is the end time of episode $k-1$. Therefore, it follows that

$$\begin{aligned}
\Pr\left(\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})=0\right)
&=\Pr\left(\forall k:\left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor+1\leq k\leq K;\ P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k}\right)\\
&=1-\Pr\left(\exists k:\left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor+1\leq k\leq K;\ P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k}\right)\\
&\stackrel{(i)}{\geq}1-\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})
\stackrel{(ii)}{\geq}1-\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\frac{\delta}{t_{k}^{6}}\\
&\stackrel{(iii)}{\geq}1-\sum_{t=\sqrt{KL}}^{KL}\frac{\delta}{t^{6}}\geq1-\int_{\sqrt{KL}}^{KL}\frac{\delta}{t^{6}}\,dt\geq1-\frac{\delta}{5(KL)^{5/4}},
\end{aligned}\tag{38}$$

where (i) follows from a union bound, (ii) follows from Eq. 37, and (iii) holds using that $t_{k}=(k-1)L$. Hence, with probability at least $1-\frac{\delta}{5(KL)^{5/4}}$, we have that

$$\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})=0.\tag{39}$$

If we combine the above equation and Eq. 35, we can conclude that, with probability at least $1-\frac{\delta}{5T^{5/4}}$, we have that

$$\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})\leq\sqrt{T},\tag{40}$$

where $T=KL$. Next, if we combine Eqs. 33 and 40, we have

$$\begin{aligned}
R(T)&=\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})+\sum_{k=1}^{K}\Delta_{k}\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\lor P^{*}\not\in\mathcal{P}^{k})\\
&\leq12L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{A}|}{\delta}\Big)}+\sqrt{T}\\
&\leq13L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{D}|}{\delta}\Big)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\Big(\tfrac{T|\mathcal{S}||\mathcal{A}|}{\delta}\Big)}.
\end{aligned}\tag{41}$$

Finally, since $\sum_{T=1}^{\infty}\frac{\delta}{5T^{5/4}}\leq\delta$, the above inequality holds with probability at least $1-\delta$. This concludes the proof.
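
To get a quick sense of the scale of the bound in Eq. 41, the following sketch evaluates it numerically. It is purely illustrative; the function and argument names are ours rather than part of the paper.

```python
import math

def theorem2_regret_bound(S, A, D, L, T, delta):
    """Evaluate the right-hand side of Eq. 41 for given problem sizes.

    S, A, D are |S|, |A|, |D|; L is the episode length; T = K * L is the
    total number of steps; delta is the confidence parameter in (0, 1).
    """
    term_switching = 13 * L * math.sqrt(A * S * D * T * math.log(T * S * D / delta))
    term_environment = 12 * L * S * math.sqrt(A * T * math.log(T * S * A / delta))
    return term_switching + term_environment

# Example: 10 states, 2 actions, 3 agents, horizon 20, 20,000 episodes.
print(theorem2_regret_bound(S=10, A=2, D=3, L=20, T=20 * 20_000, delta=0.05))
```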
734
+
735
+ ## B Useful Lemmas
736
+
737
Lemma 4. *Suppose $P_{\mathcal{D}}$ and $P$ are the true transitions and $P_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}$, $P\in\mathcal{P}^{k}$ for episode $k$. Then, for an arbitrary policy $\pi^{k}$ and arbitrary $P^{k}_{\mathcal{D}}\in\mathcal{P}^{k}_{\mathcal{D}}$, $P^{k}\in\mathcal{P}^{k}$, it holds that*

$$V_{1|P_{\mathcal{D}},P}^{\pi^{k}}(s,d)-V_{1|P_{\mathcal{D}}^{k},P^{k}}^{\pi^{k}}(s,d)\leq L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\Big|\,s_{1}=s,d_{0}=d\right],\tag{42}$$

*where the expectation is taken over the MDP with policy $\pi^{k}$ under the true transitions $P_{\mathcal{D}}$ and $P$.*

Proof. For ease of notation, let $v^{k}_{t}:=V^{\pi^{k}}_{t\,|\,P_{\mathcal{D}},P}$, $v^{k}_{t\,|\,k}:=V^{\pi^{k}}_{t\,|\,P^{k}_{\mathcal{D}},P^{k}}$ and $c^{\pi}_{t}(s,d)=c_{\pi^{k}_{t}(s,d)}(s,d)$. We also define $d'=\pi^{k}_{1}(s,d)$. From Eq. 68, we have

$$v_{1}^{k}(s,d)=c_{1}^{\pi}(s,d)+\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}(a\,|\,s)\cdot\left(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p(s'|s,a)\cdot v_{2}^{k}(s',d')\right),\tag{43}$$

$$v_{1\,|\,k}^{k}(s,d)=c_{1}^{\pi}(s,d)+\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}^{k}(a\,|\,s)\cdot\left(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p^{k}(s'|s,a)\cdot v_{2\,|\,k}^{k}(s',d')\right).\tag{44}$$

Now, using the above equations, we rewrite $v^{k}_{1}(s,d)-v^{k}_{1\,|\,k}(s,d)$ as

$$\begin{aligned}
v_{1}^{k}(s,d)-v_{1\,|\,k}^{k}(s,d)
&=\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}(a|s)\Big(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p(s'|s,a)\,v_{2}^{k}(s',d')\Big)-\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}^{k}(a|s)\Big(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p^{k}(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')\Big)\\
&\stackrel{(i)}{=}\sum_{a\in\mathcal{A}}\Big[p_{\pi_{1}^{k}(s,d)}(a|s)-p_{\pi_{1}^{k}(s,d)}^{k}(a|s)\Big]\underbrace{\Big(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p^{k}(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')\Big)}_{\leq L}\\
&\qquad+\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}(a|s)\sum_{s'\in\mathcal{S}}\Big[p(s'|s,a)\,v_{2}^{k}(s',d')-p^{k}(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')\Big]\\
&\stackrel{(ii)}{\leq}L\sum_{a\in\mathcal{A}}\Big[p_{\pi_{1}^{k}(s,d)}(a|s)-p_{\pi_{1}^{k}(s,d)}^{k}(a|s)\Big]+\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}(a|s)\sum_{s'\in\mathcal{S}}\Big[p(s'|s,a)\,v_{2}^{k}(s',d')-p^{k}(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')\Big]\\
&\stackrel{(iii)}{=}L\sum_{a\in\mathcal{A}}\Big[p_{\pi_{1}^{k}(s,d)}(a|s)-p_{\pi_{1}^{k}(s,d)}^{k}(a|s)\Big]+\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}(a|s)\sum_{s'\in\mathcal{S}}p(s'|s,a)\Big[v_{2}^{k}(s',d')-v_{2\,|\,k}^{k}(s',d')\Big]\\
&\qquad+\sum_{a\in\mathcal{A}}p_{\pi_{1}^{k}(s,d)}(a|s)\sum_{s'\in\mathcal{S}}\Big[p(s'|s,a)-p^{k}(s'|s,a)\Big]\underbrace{v_{2\,|\,k}^{k}(s',d')}_{\leq L}\\
&\stackrel{(iv)}{\leq}\mathbb{E}_{a\sim p_{\pi_{1}^{k}(s,d)}(\cdot|s),\,s'\sim p(\cdot|s,a)}\Big[v_{2}^{k}(s',d')-v_{2\,|\,k}^{k}(s',d')\Big]+L\,\mathbb{E}_{a\sim p_{\pi_{1}^{k}(s,d)}(\cdot|s)}\left[\sum_{s'\in\mathcal{S}}\Big(p(s'|s,a)-p^{k}(s'|s,a)\Big)\right]\\
&\qquad+L\sum_{a\in\mathcal{A}}\Big[p_{\pi_{1}^{k}(s,d)}(a|s)-p_{\pi_{1}^{k}(s,d)}^{k}(a|s)\Big],
\end{aligned}\tag{45}$$

where (i) follows by adding and subtracting the term $p_{\pi_{1}^{k}(s,d)}(a|s)\big(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p^{k}(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')\big)$, (ii) follows from the fact that $c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p^{k}(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')\leq L$, since, by assumption, $c_{e}(s,a)+c_{c}(d)+c_{x}(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$. Similarly, (iii) follows by adding and subtracting $p(s'|s,a)\,v_{2\,|\,k}^{k}(s',d')$, and (iv) follows from the fact that $v_{2\,|\,k}^{k}\leq L$. By assumption, both $P_{\mathcal{D}}$ and $P^{k}_{\mathcal{D}}$ lie in the confidence set $\mathcal{P}^{k}_{\mathcal{D}}(\delta)$, so

$$\sum_{a\in\mathcal{A}}\Big[p_{\pi_{1}^{k}(s,d)}(a\,|\,s)-p_{\pi_{1}^{k}(s,d)}^{k}(a\,|\,s)\Big]\leq\min\{1,\beta_{\mathcal{D}}^{k}(s,d'=\pi_{1}^{k}(s,d),\delta)\}.\tag{46}$$

Similarly,

$$\sum_{s'\in\mathcal{S}}\Big[p(s'\,|\,s,a)-p^{k}(s'\,|\,s,a)\Big]\leq\min\{1,\beta^{k}(s,a,\delta)\}.\tag{47}$$

If we combine Eq. 46 and Eq. 47 in Eq. 45, for all $s\in\mathcal{S}$ it holds that

$$\begin{aligned}
v_{1}^{k}(s,d)-v_{1\,|\,k}^{k}(s,d)\leq\ &\mathbb{E}_{a\sim p_{\pi_{1}^{k}(s,d)}(\cdot|s),\,s'\sim p(\cdot|s,a)}\Big[v_{2}^{k}(s',d')-v_{2\,|\,k}^{k}(s',d')\Big]\\
&+L\,\mathbb{E}_{a\sim p_{\pi_{1}^{k}(s,d)}(\cdot|s)}\big[\min\{1,\beta^{k}(s,a,\delta)\}\big]+L\,\min\{1,\beta_{\mathcal{D}}^{k}(s,d'=\pi_{1}^{k}(s,d),\delta)\}.
\end{aligned}\tag{48}$$

Similarly, for all $s\in\mathcal{S}$, $d\in\mathcal{D}$, we can show

$$\begin{aligned}
v_{2}^{k}(s,d)-v_{2\,|\,k}^{k}(s,d)\leq\ &\mathbb{E}_{a\sim p_{\pi_{2}^{k}(s,d)}(\cdot|s),\,s'\sim p(\cdot|s,a)}\Big[v_{3}^{k}(s',\pi_{2}^{k}(s,d))-v_{3\,|\,k}^{k}(s',\pi_{2}^{k}(s,d))\Big]\\
&+L\,\mathbb{E}_{a\sim p_{\pi_{2}^{k}(s,d)}(\cdot|s)}\big[\min\{1,\beta^{k}(s,a,\delta)\}\big]+L\,\min\{1,\beta_{\mathcal{D}}^{k}(s,\pi_{2}^{k}(s,d),\delta)\}.
\end{aligned}\tag{49}$$

Hence, by induction, we have

$$v_{1}^{k}(s,d)-v_{1\,|\,k}^{k}(s,d)\leq L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\Big|\,s_{1}=s,d_{0}=d\right],\tag{50}$$

where the expectation is taken over the MDP with policy $\pi^{k}$ under the true transitions $P_{\mathcal{D}}$ and $P$. $\square$
1037
Lemma 5. *Let $\mathcal{W}$ be a finite set and $c$ be a constant. For $k\in[K]$, suppose $T_{k}=(w_{k,1},w_{k,2},\dots,w_{k,H})$ is a random variable with distribution $P(\cdot|w_{k,1})$, where $w_{k,i}\in\mathcal{W}$. Then,*

$$\sum_{k=1}^{K}\mathbb{E}_{T_{k}\sim P(\cdot|w_{k,1})}\left[\sum_{t=1}^{H}\min\Big\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\Big\}\right]\leq2H|\mathcal{W}|+2\sqrt{2}c\sqrt{|\mathcal{W}|KH},\tag{51}$$

*with $N_{k}(w):=\sum_{j=1}^{k-1}\sum_{t=1}^{H}\mathbb{I}(w_{j,t}=w)$.*

Proof. The proof is adapted from Osband et al. (2013). We first note that

$$\begin{aligned}
\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\min\Big\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\Big\}\right]
&=\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})\leq H)\min\Big\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\Big\}\right]\\
&\quad+\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\min\Big\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\Big\}\right]\\
&\leq\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})\leq H)\cdot1\right]+\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\cdot\frac{c}{\sqrt{N_{k}(w_{k,t})}}\right].
\end{aligned}\tag{52}$$

Then, we bound the first term of the above equation:

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})\leq H)\right]=\mathbb{E}\left[\sum_{w\in\mathcal{W}}\#\{\text{times }w\text{ is observed while }N_{k}(w)\leq H\}\right]\leq|\mathcal{W}|\cdot2H=2H|\mathcal{W}|.\tag{53}$$

To bound the second term, we first define $n_{\tau}(w)$ as the number of times $w$ has been observed in the first $\tau$ steps, *i.e.*, if we are at the $t$-th index of trajectory $T_{k}$, then $\tau=t_{k}+t$, where $t_{k}=(k-1)H$, and note that

$$n_{t_{k}+t}(w)\leq N_{k}(w)+t,\tag{54}$$

because we will observe $w$ at most $t\in\{1,\dots,H\}$ times within trajectory $T_{k}$. Now, if $N_{k}(w)>H$, we have that

$$n_{t_{k}+t}(w)+1\leq N_{k}(w)+t+1\leq N_{k}(w)+H+1\leq2N_{k}(w).\tag{55}$$

Hence, we have

$$\mathbb{I}(N_{k}(w_{k,t})>H)\,(n_{t_{k}+t}(w_{k,t})+1)\leq2N_{k}(w_{k,t})\implies\frac{\mathbb{I}(N_{k}(w_{k,t})>H)}{N_{k}(w_{k,t})}\leq\frac{2}{n_{t_{k}+t}(w_{k,t})+1}.\tag{56}$$

Then, using the above equation, we can bound the second term in Eq. 52:

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\frac{c}{\sqrt{N_{k}(w_{k,t})}}\right]=\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}c\sqrt{\frac{\mathbb{I}(N_{k}(w_{k,t})>H)}{N_{k}(w_{k,t})}}\right]\stackrel{(i)}{\leq}\sqrt{2}c\,\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_{k}+t}(w_{k,t})+1}}\right],\tag{57}$$

where (i) follows from Eq. 56.

Next, we can further bound $\mathbb{E}\big[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\tfrac{1}{n_{t_{k}+t}(w_{k,t})+1}}\big]$ as follows:

$$\begin{aligned}
\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_{k}+t}(w_{k,t})+1}}\right]
&=\mathbb{E}\left[\sum_{\tau=1}^{KH}\sqrt{\frac{1}{n_{\tau}(w_{\tau})+1}}\right]
\stackrel{(i)}{=}\mathbb{E}\left[\sum_{w\in\mathcal{W}}\sum_{\nu=0}^{N_{K+1}(w)-1}\sqrt{\frac{1}{\nu+1}}\right]\\
&\leq\sum_{w\in\mathcal{W}}\mathbb{E}\left[\int_{0}^{N_{K+1}(w)}\frac{dx}{\sqrt{x}}\right]
=\sum_{w\in\mathcal{W}}\mathbb{E}\left[2\sqrt{N_{K+1}(w)}\right]\\
&\stackrel{(ii)}{\leq}\mathbb{E}\left[2\sqrt{|\mathcal{W}|\sum_{w\in\mathcal{W}}N_{K+1}(w)}\right]
\stackrel{(iii)}{=}\mathbb{E}\left[2\sqrt{|\mathcal{W}|KH}\right]=2\sqrt{|\mathcal{W}|KH},
\end{aligned}\tag{58}$$

where (i) follows from summing over the different $w\in\mathcal{W}$ instead of over time and from the fact that we observe each $w$ exactly $N_{K+1}(w)$ times after $K$ trajectories, (ii) follows from Jensen's inequality, and (iii) follows from the fact that $\sum_{w\in\mathcal{W}}N_{K+1}(w)=KH$. Next, we combine Eqs. 57 and 58 to obtain

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\frac{c}{\sqrt{N_{k}(w_{k,t})}}\right]\leq\sqrt{2}c\times2\sqrt{|\mathcal{W}|KH}=2\sqrt{2}c\sqrt{|\mathcal{W}|KH}.\tag{59}$$

Further, plugging Eqs. 53 and 59 into Eq. 52, we obtain

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\min\Big\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\Big\}\right]\leq2H|\mathcal{W}|+2\sqrt{2}c\sqrt{|\mathcal{W}|KH}.\tag{60}$$

This concludes the proof.

Lemma 6. *Let $\mathcal{W}$ be a finite set and $\mathcal{P}_{t}(\delta):=\{p:\forall w\in\mathcal{W},\ \|p(\cdot|w)-\hat{p}_{t}(\cdot|w)\|_{1}\leq\beta_{t}(w,\delta)\}$ be a $|\mathcal{W}|$-rectangular confidence set over probability distributions $p^{*}(\cdot|w)$ with $m$ outcomes, where $\hat{p}_{t}(\cdot|w)$ is the empirical estimate of $p^{*}(\cdot|w)$. Suppose at each time $\tau$ we observe a state $w_{\tau}=w$ and a sample from $p^{*}(\cdot|w)$. If*

$$\beta_{t}(w,\delta)=\sqrt{\frac{2\log\Big(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\Big)}{\max\{1,N_{t}(w)\}}}\quad\text{with}\quad N_{t}(w)=\sum_{\tau=1}^{t}\mathbb{I}(w_{\tau}=w),$$

*then the true distributions $p^{*}$ lie in the confidence set $\mathcal{P}_{t}(\delta)$ with probability at least $1-\frac{\delta}{2t^{6}}$.*

Proof. We adapt the proof of Lemma 17 in Jaksch et al. (2010). We note that

$$\begin{aligned}
\Pr(p^{*}\not\in\mathcal{P}_{t})&\stackrel{(i)}{=}\Pr\left(\bigcup_{w\in\mathcal{W}}\Big\{\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\beta_{t}(w,\delta)\Big\}\right)
\stackrel{(ii)}{\leq}\sum_{w\in\mathcal{W}}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\big(\tfrac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\big)}{\max\{1,N_{t}(w)\}}}\right)\\
&\stackrel{(iii)}{\leq}\sum_{w\in\mathcal{W}}\sum_{n=0}^{t}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\big(\tfrac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\big)}{\max\{1,n\}}}\right),
\end{aligned}$$

where (i) follows from the definition of the confidence set, *i.e.*, the probability distributions do not lie in the confidence set if there is at least one state $w$ for which $\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\beta_{t}(w,\delta)$, (ii) follows from the definition of $\beta_{t}(w,\delta)$ and a union bound over all $w\in\mathcal{W}$, and (iii) follows from a union bound over all possible values of $N_{t}(w)$. To continue, we split the sum into $n=0$ and $n>0$:

$$\begin{aligned}
\sum_{w\in\mathcal{W}}\sum_{n=0}^{t}\Pr(\cdots)
&=\underbrace{\sum_{w\in\mathcal{W}}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{2\log\Big(\tfrac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\Big)}\right)}_{n=0}
+\sum_{w\in\mathcal{W}}\sum_{n=1}^{t}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\big(\tfrac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\big)}{n}}\right)\\
&\stackrel{(i)}{=}0+\sum_{w\in\mathcal{W}}\sum_{n=1}^{t}\Pr(\cdots)
\stackrel{(ii)}{\leq}t\,|\mathcal{W}|\,2^{m}\exp\left(-\log\Big(\tfrac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\Big)\right)\leq\frac{\delta}{2t^{6}},
\end{aligned}$$

where (i) follows from the fact that $\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}<\sqrt{2\log\big(\tfrac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\big)}$ for the non-trivial cases; more specifically,

$$\delta<1,\ t\geq2\implies\sqrt{2\log\Big(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\Big)}>\sqrt{2\log(512)}>2\geq\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1},\tag{61}$$

and (ii) follows from the fact that, after observing $n$ samples, the $L^{1}$-deviation of the true distribution $p^{*}$ from the empirical one $\hat{p}$ over $m$ events is bounded by

$$\Pr\left(\|p^{*}(\cdot)-\hat{p}(\cdot)\|_{1}\geq\epsilon\right)\leq2^{m}\exp\left(-n\frac{\epsilon^{2}}{2}\right).\tag{62}$$
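
As a concrete illustration of how the confidence set of Lemma 6 is used in practice, the following sketch computes the radius $\beta_t(w,\delta)$ and checks the $L^1$ condition for a single conditioning state $w$. It is a minimal sketch under our own naming; only the formula for $\beta_t$ comes from the lemma.

```python
import numpy as np

def beta_t(t, n_visits, size_W, m, delta):
    """Confidence radius of Lemma 6 for one conditioning state w.

    t: current time step, n_visits: N_t(w), size_W: |W|,
    m: number of outcomes of p(.|w), delta: confidence parameter.
    """
    numerator = 2.0 * np.log((t ** 7) * size_W * (2 ** (m + 1)) / delta)
    return np.sqrt(numerator / max(1, n_visits))

def in_confidence_set(p_true, p_hat, t, n_visits, size_W, delta):
    """Check ||p*(.|w) - p_hat_t(.|w)||_1 <= beta_t(w, delta)."""
    radius = beta_t(t, n_visits, size_W, len(p_true), delta)
    return np.abs(np.asarray(p_true) - np.asarray(p_hat)).sum() <= radius
```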
1114
+
1115
Lemma 7. *Consider the following minimization problem:*

$$\begin{array}{ll}
\underset{x}{\text{minimize}} & \sum_{i=1}^{m}x_{i}w_{i}\\
\text{subject to} & \sum_{i=1}^{m}|x_{i}-b_{i}|\leq d,\ \ \sum_{i}x_{i}=1,\\
& x_{i}\geq0\ \ \forall i\in\{1,\ldots,m\},
\end{array}\tag{63}$$

*where $d\geq0$, $b_{i}\geq0\ \forall i\in\{1,\dots,m\}$, $\sum_{i}b_{i}=1$ and $0\leq w_{1}\leq w_{2}\leq\dots\leq w_{m}$. Then, the solution to the above minimization problem is given by:*

$$x_{i}^{*}=\begin{cases}\min\{1,b_{1}+\frac{d}{2}\}&\text{if }i=1,\\ b_{i}&\text{if }i>1\text{ and }\sum_{l=1}^{i}x_{l}^{*}\leq1,\\ 0&\text{otherwise.}\end{cases}\tag{64}$$

Proof. Suppose there is $\{x'_{i}:\ \sum_{i}x'_{i}=1,\ x'_{i}\geq0\}$ such that $\sum_{i}x'_{i}w_{i}<\sum_{i}x^{*}_{i}w_{i}$. Let $j\in\{1,\dots,m\}$ be the first index where $x'_{j}\neq x^{*}_{j}$; then it is clear that $x'_{j}>x^{*}_{j}$.

If $j=1$:

$$\sum_{i=1}^{m}|x_{i}^{\prime}-b_{i}|=|x_{1}^{\prime}-b_{1}|+\sum_{i=2}^{m}|x_{i}^{\prime}-b_{i}|>\frac{d}{2}+\sum_{i=2}^{m}(b_{i}-x_{i}^{\prime})=\frac{d}{2}+x_{1}^{\prime}-b_{1}>d.\tag{65}$$

If $j>1$:

$$\sum_{i=1}^{m}|x_{i}^{\prime}-b_{i}|=|x_{1}^{\prime}-b_{1}|+\sum_{i=j}^{m}|x_{i}^{\prime}-b_{i}|>\frac{d}{2}+\sum_{i=j+1}^{m}(b_{i}-x_{i}^{\prime})>\frac{d}{2}+x_{1}^{\prime}-b_{1}=d.\tag{66}$$

+ π t|PD,P defined in Eq. *10, we have that:*
1146
+
1147
+ V π t|PD,P (s, d) = cπt(s,d)(s, d) + X a∈A pπt(s,d)(a|s) · ce(s, a) + X s 0∈S p(s 0| s, a) · V π t+1|PD,P (s
1148
+ 0, πt(*s, d*))!(67)
1149
+ Proof.
1150
+
1151
+ $$V_{t|P_{\mathsf{D}},P}^{\pi}(s,d)\stackrel{(i)}{=}\bar{c}(s,d)+\sum_{s^{\prime}\in\mathcal{S}}p(s^{\prime},\pi_{t}(s,d)|(s,d))V_{t+1|P_{\mathsf{D}},P}^{\pi}(s^{\prime},\pi_{t}(s,d))$$
1152
+ $$\square$$
1153
+ $$(67)$$
1154
+ $$\begin{array}{ll}\underset{\boldsymbol{x}}{minimize}&\sum_{i=1}^{m}x_{i}w_{i}\\ \text{subject to}&\sum_{i=1}^{m}|x_{i}-b_{i}|\leq d,\ \sum_{i}x_{i}=1,\\ &x_{i}\geq0\ \forall i\in\{1,\ldots,m\},\end{array}$$
1155
+ $$\sum_{s\in A}p_{\pi_{1}(s,a)}(a\,|\,s)c_{\pi_{2}}(s,a)+c_{\pi}(\pi_{1}(s,d))+c_{\pi}(\pi_{1}(s,d),d)+\sum_{s^{\prime}\in S}p(s^{\prime}\,|\,s,a)p_{\pi_{1}(s,d)}(a\,|\,s)V_{\pi_{1}+1|p_{\pi_{2}},p}^{*}(s^{\prime},\pi_{1}(s,d))$$ $$\stackrel{{(iii)}}{{=}}c_{\pi_{1}(s,a)}(s,d)+\sum_{n\in A}p_{\pi_{1}(s,a)}(a|s)\cdot\left(c_{\pi_{1}}(s,a)+\sum_{s^{\prime}\in S}p(s^{\prime}\,|\,s,a)\cdot V_{\pi_{1}+1|p_{\pi_{2}},p}^{*}(s^{\prime},\pi_{1}(s,d))\right),\tag{68}$$
1156
+
1157
+ where (i) is the standard Bellman equation in the standard MDP defined with dynamics 3 and costs 4, (ii)
1158
+ follows by replacing c¯ and p with equations 3 and 4, and (iii) follows by cd0 (*s, d*) = cc(d 0) + cx(d 0, d).
1159
+
1160
+ Lemma 9. min{T, a + b} ≤ min{*T, a*} + min{T, b} for T, a, b ≥ 0. Proof. Assume that a ≤ b ≤ a + b. Then,
1161
+
1162
+ $$\min\{T,a+b\}=\left\{\begin{array}{ll}T\leq a+b=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq b\leq T\leq a+b\\ T\leq a+T=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq T\leq b\leq a+b\\ T\leq2T=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ T\leq a\leq b\leq a+b\\ a+b=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq b\leq a+b\leq T\end{array}\right.\tag{69}$$
1163
+
1164
## C Implementation Of UCRL2 In Finite Horizon Setting

ALGORITHM 2: Modified UCRL2 algorithm for a finite horizon MDP $M = (\mathcal{S}, \mathcal{A}, P, C, L)$.

Require: Cost $C = [c(s,a)]$, confidence parameter $\delta \in (0,1)$.

1: $(\{N_k(s,a)\}, \{N_k(s,a,s')\}) \leftarrow$ InitializeCounts()
2: **for** $k = 1, \dots, K$ **do**
3: &nbsp;&nbsp; **for** $s, s' \in \mathcal{S}$, $a \in \mathcal{A}$ **do**
4: &nbsp;&nbsp;&nbsp;&nbsp; **if** $N_k(s,a) \neq 0$ **then** $\hat{p}_k(s'|s,a) \leftarrow \frac{N_k(s,a,s')}{N_k(s,a)}$ **else** $\hat{p}_k(s'|s,a) \leftarrow \frac{1}{|\mathcal{S}|}$
5: &nbsp;&nbsp;&nbsp;&nbsp; $\beta_k(s,a,\delta) \leftarrow \sqrt{\frac{14|\mathcal{S}|\log\left(\frac{2(k-1)L|\mathcal{A}||\mathcal{S}|}{\delta}\right)}{\max\{1, N_k(s,a)\}}}$
6: &nbsp;&nbsp; **end for**
7: &nbsp;&nbsp; $\pi^k \leftarrow$ ExtendedValueIteration($\hat{p}_k, \beta_k, C$)
8: &nbsp;&nbsp; $s_0 \leftarrow$ InitialConditions()
9: &nbsp;&nbsp; **for** $t = 0, \dots, L-1$ **do**
10: &nbsp;&nbsp;&nbsp;&nbsp; Take action $a_t = \pi^k_t(s_t)$, and observe next state $s_{t+1}$.
11: &nbsp;&nbsp;&nbsp;&nbsp; $N_k(s_t, a_t) \leftarrow N_k(s_t, a_t) + 1$
12: &nbsp;&nbsp;&nbsp;&nbsp; $N_k(s_t, a_t, s_{t+1}) \leftarrow N_k(s_t, a_t, s_{t+1}) + 1$
13: &nbsp;&nbsp; **end for**
14: **end for**
15: **Return** $\pi^K$
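
The following sketch mirrors lines 3–5 of Algorithm 2, i.e., building the empirical transition model and the confidence radii from visit counts. It is an illustrative implementation under our own naming, not the exact code used in the experiments.

```python
import numpy as np

def empirical_model(N_sa, N_sas, L, k, delta):
    """Empirical transitions p_hat_k and confidence radii beta_k (Algorithm 2, lines 3-5).

    N_sa[s, a] and N_sas[s, a, s'] are visit counts accumulated over episodes 1..k-1.
    """
    S, A = N_sa.shape
    p_hat = np.full((S, A, S), 1.0 / S)          # uniform fallback for unvisited pairs
    visited = N_sa > 0
    p_hat[visited] = N_sas[visited] / N_sa[visited][:, None]
    log_term = np.log(2 * max(1, (k - 1) * L) * A * S / delta)  # max(1, .) guards k = 1
    beta_k = np.sqrt(14 * S * log_term / np.maximum(1, N_sa))
    return p_hat, beta_k
```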
1182
ALGORITHM 3: ExtendedValueIteration, used in Algorithm 2.

Require: Empirical transition distribution $\hat{p}(\cdot|s,a)$, cost $c(s,a)$, and confidence interval $\beta(s,a,\delta)$.

1: $\pi \leftarrow$ InitializePolicy(), $v \leftarrow$ InitializeValueFunction()
2: $n \leftarrow |\mathcal{S}|$
3: **for** $t = L-1, \dots, 0$ **do**
4: &nbsp;&nbsp; **for** $s \in \mathcal{S}$ **do**
5: &nbsp;&nbsp;&nbsp;&nbsp; **for** $a \in \mathcal{A}$ **do**
6: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $s'_1, \dots, s'_n \leftarrow$ Sort($v_{t+1}$) &nbsp; \# $v_{t+1}(s'_1) \leq \dots \leq v_{t+1}(s'_n)$
7: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $p(s'_1) \leftarrow \min\{1, \hat{p}(s'_1|s,a) + \frac{\beta(s,a,\delta)}{2}\}$
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $p(s'_i) \leftarrow \hat{p}(s'_i|s,a)$ $\forall\, 1 < i \leq n$
9: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $l \leftarrow n$
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; **while** $\sum_{s'_i \in \mathcal{S}} p(s'_i) > 1$ **do**
11: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $p(s'_l) = \max\{0, 1 - \sum_{s'_i \neq s'_l} p(s'_i)\}$
12: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $l \leftarrow l - 1$
13: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; **end while**
14: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $q(s,a) = c(s,a) + \mathbb{E}_{s' \sim p}[v_{t+1}(s')]$
15: &nbsp;&nbsp;&nbsp;&nbsp; **end for**
16: &nbsp;&nbsp;&nbsp;&nbsp; $v_t(s) \leftarrow \min_{a \in \mathcal{A}} q(s,a)$
17: &nbsp;&nbsp;&nbsp;&nbsp; $\pi_t(s) \leftarrow \arg\min_{a \in \mathcal{A}} q(s,a)$
18: &nbsp;&nbsp; **end for**
19: **end for**
20: **Return** $\pi$
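
The core of Algorithm 3 is the inner minimization of lines 6–13, which is exactly the problem solved in closed form by Lemma 7: shift up to $\beta/2$ of probability mass towards the successor state with the lowest value and remove the excess from the highest-value states. The sketch below implements this step and the surrounding backward induction; it is a minimal reference implementation under our own naming and indexing conventions, not the exact experimental code.

```python
import numpy as np

def optimistic_distribution(p_hat_sa, beta_sa, v_next):
    """Inner step of Algorithm 3 (lines 6-13), i.e., the solution of Lemma 7."""
    order = np.argsort(v_next)                # ascending: order[0] has the lowest value
    p = p_hat_sa.astype(float).copy()
    p[order[0]] = min(1.0, p[order[0]] + beta_sa / 2.0)
    for idx in reversed(order[1:]):           # remove excess mass from high-value states
        excess = p.sum() - 1.0
        if excess <= 0:
            break
        p[idx] = max(0.0, p[idx] - excess)
    return p

def extended_value_iteration(p_hat, beta, cost, horizon):
    """Backward induction over the optimistic (cost-minimizing) model."""
    S, A, _ = p_hat.shape
    v = np.zeros((horizon + 1, S))
    pi = np.zeros((horizon, S), dtype=int)
    for t in range(horizon - 1, -1, -1):
        for s in range(S):
            q = np.empty(A)
            for a in range(A):
                p = optimistic_distribution(p_hat[s, a], beta[s, a], v[t + 1])
                q[a] = cost[s, a] + p @ v[t + 1]
            v[t, s] = q.min()
            pi[t, s] = q.argmin()
    return pi, v
```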
1198
+ 19: **end for** 20: **Return** π
1199
+
1200
+ ## D Distribution Of Cell Types And Traffic Levels In The Lane Driving Environment
1201
+
1202
| Traffic level | road | grass | stone | car |
|---------------|------|-------|-------|-----|
| no-car        | 0.7  | 0.2   | 0.1   | 0   |
| light         | 0.6  | 0.2   | 0.1   | 0.1 |
| heavy         | 0.5  | 0.2   | 0.1   | 0.2 |
1207
+
1208
+ Table 1: Probability of cell types based on traffic level.
1209
+
1210
| Previous level | no-car | light | heavy |
|----------------|--------|-------|-------|
| no-car         | 0.99   | 0.01  | 0     |
| light          | 0.01   | 0.98  | 0.01  |
| heavy          | 0      | 0.01  | 0.99  |
1215
+
1216
+ Table 2: Probability of traffic levels based on the previous row.
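
For concreteness, the following sketch samples one new row of the lane driving environment using Tables 1 and 2. The row width of three cells and the function names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

CELL_TYPES = ["road", "grass", "stone", "car"]
CELL_PROBS = {                      # Table 1: cell type given traffic level
    "no-car": [0.7, 0.2, 0.1, 0.0],
    "light":  [0.6, 0.2, 0.1, 0.1],
    "heavy":  [0.5, 0.2, 0.1, 0.2],
}
TRAFFIC_LEVELS = ["no-car", "light", "heavy"]
TRAFFIC_TRANSITIONS = {             # Table 2: next traffic level given the previous one
    "no-car": [0.99, 0.01, 0.00],
    "light":  [0.01, 0.98, 0.01],
    "heavy":  [0.00, 0.01, 0.99],
}

def sample_row(prev_traffic, width=3):
    """Sample the traffic level of the next row and then its cell types."""
    traffic = rng.choice(TRAFFIC_LEVELS, p=TRAFFIC_TRANSITIONS[prev_traffic])
    cells = list(rng.choice(CELL_TYPES, size=width, p=CELL_PROBS[traffic]))
    return traffic, cells

print(sample_row("light"))
```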
1217
+
1218
## E Performance of the human and machine agents in obstacle avoidance task
1219
+
1220
+ ![28_image_0.png](28_image_0.png)
1221
+
1222
+ Figure 8: Performance of the machine policy, a human policy with σH = 2, and the optimal policy in terms of total cost. In panel (a), the episodes start with an initial traffic level γ0 = no-car and, in panel (b), the episodes start with an initial traffic level γ0 ∈ {light, heavy}.
1223
+
1224
## F The amount of human control for different initial traffic levels
1225
+
1226
+ ![28_image_1.png](28_image_1.png)
1227
+
1228
Figure 9: The human control rate when using the UCRL2-MC switching algorithm for different initial traffic levels. For each traffic level, we sample 500 environments and average the human control rate over them. A higher traffic level results in more human control, as the human agent is more reliable in heavier traffic.
1231
+
1232
+ ![29_image_0.png](29_image_0.png)
1233
+
1234
+ Figure 10: Ratio of UCRL2-MC regret to UCRL2 for (a) a set of action sizes and (b) different numbers of agents. By increasing the action space size, the performance of UCRL2-MC gets worse but remains within the same scale. In addition, UCRL2-MC outperforms UCRL2 in environments with a larger number of agents.
1235
+
1236
+ ## G Additional Experiments
1237
+
1238
+ In this section, we run additional experiments in the RiverSwim environment to investigate the effect of action space size and the number of agents in a team on the total regret.
1239
+
1240
+ ## G.1 Action Space Size
1241
+
1242
To study the effect of the action space size on the total regret, we artificially increase the number of actions by planning m steps ahead. More concretely, we consider a new MDP in which each time step consists of m steps of the original RiverSwim MDP, and the switching policy decides on all m steps at once. The number of actions in the new MDP increases to 2^m, while the state space remains unchanged. We consider a setting with a single team of two agents with p = 0 and p = 1, i.e., one agent always takes action right and the other always takes action left. We run the simulations for 20,000 episodes with m ∈ {1, 2, 3, 4}, i.e., with action space sizes of 2, 4, 8, and 16, and repeat each experiment 5 times. We compare the performance of our algorithm against UCRL2 in terms of total regret. Figure 10 (a) summarizes our results: the performance of UCRL2-MC gets worse as the number of actions increases, since the regret bound directly depends on the action space size (Theorem 2). However, the regret ratio still remains within the same scale even after doubling the number of actions. One reason is that our algorithm only needs to learn *the actions taken by the agents* to find the optimal switching policy. If the agents' policies include only a small subset of actions, our algorithm will maintain a small regret bound even in environments with a huge action space. Therefore, we believe a more careful analysis can improve our regret bound by making it a function of the agents' action space instead of the whole action space size.
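
A minimal sketch of the m-step construction described above is given below. It assumes a generic single-step environment interface with `num_actions`, `step`, and `reset`; these names and the wrapper itself are illustrative, not the exact code used in our experiments.

```python
from itertools import product

class MultiStepActionWrapper:
    """Wrap a single-step environment so that one decision fixes m primitive actions.

    With two primitive actions (as in RiverSwim), the wrapped action space has
    2**m joint actions while the state space is unchanged.
    """

    def __init__(self, env, m):
        self.env = env
        self.m = m
        self.joint_actions = list(product(range(env.num_actions), repeat=m))

    @property
    def num_actions(self):
        return len(self.joint_actions)

    def reset(self):
        return self.env.reset()

    def step(self, joint_action_index):
        total_reward, state = 0.0, None
        for a in self.joint_actions[joint_action_index]:
            state, reward = self.env.step(a)
            total_reward += reward
        return state, total_reward
```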
1243
+
1244
+ ## G.2 Number Of Agents
1245
+
1246
+ Here, our goal is to examine the impact of the number of agents on the total regret achieved by our algorithm.
1247
+
1248
To this end, we consider the original RiverSwim MDP (i.e., two actions) with a single team of n agents, where we run our simulations for n ∈ {3, 4, . . . , 10} and 20,000 episodes for each n. We choose p, i.e., the probability of taking action right, for the n agents as $\{0, \frac{1}{n-1}, \dots, \frac{n-2}{n-1}, 1\}$. As shown in Figure 10 (b), UCRL2-MC outperforms UCRL2 as the number of agents increases. This agrees with Theorem 2, as our derived regret bound mainly depends on the action space size |A|, while the UCRL2 regret bound depends on the number of agents |D|.
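
The team construction used in this experiment can be summarized by the short sketch below; the action encoding (1 = right, 0 = left) and the function names are our own illustrative choices.

```python
import numpy as np

def make_team(n, seed=0):
    """Return n agent policies for RiverSwim; agent i takes action `right`
    with probability i / (n - 1), i.e., p ranges over {0, 1/(n-1), ..., 1}."""
    rng = np.random.default_rng(seed)
    probs = [i / (n - 1) for i in range(n)]

    def make_agent(p):
        return lambda state: int(rng.random() < p)   # 1 = right, 0 = left

    return [make_agent(p) for p in probs]

agents = make_team(4)   # p = 0, 1/3, 2/3, 1
```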
NT9zgedd3I/NT9zgedd3I_meta.json ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 30,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 30,
14
+ "code": 0,
15
+ "table": 2,
16
+ "equations": {
17
+ "successful_ocr": 114,
18
+ "unsuccessful_ocr": 16,
19
+ "equations": 130
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }