
Learning To Switch Among Agents In A Team Via 2-Layer Markov Decision Processes

Vahid Balazadeh vahid@cs.toronto.edu University of Toronto
Abir De abir@cse.iitb.ac.in Indian Institute of Technology Bombay
Adish Singla adishs@mpi-sws.org Max Planck Institute for Software Systems
Manuel Gomez Rodriguez manuelgr@mpi-sws.org Max Planck Institute for Software Systems
Reviewed on OpenReview: https://openreview.net/forum?id=NT9zgedd3I

Abstract

Reinforcement learning agents have been mostly developed and evaluated under the assumption that they will operate in a fully autonomous manner: they will take all actions. In this work, our goal is to develop algorithms that, by learning to switch control between agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team via a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and the environment's transition probabilities to find a sequence of switching policies. The total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps and, whenever multiple teams of agents operate in a similar environment, our algorithm greatly benefits from maintaining shared confidence bounds for the environments' transition probabilities and it enjoys a better regret bound than problem-agnostic algorithms. Simulation experiments illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms.

1 Introduction

In recent years, reinforcement learning (RL) agents have achieved, or even surpassed, human performance in a variety of computer games by taking decisions autonomously, without human intervention (Mnih et al., 2015; Silver et al., 2016; 2017; Vinyals et al., 2019). Motivated by these success stories, there has been tremendous excitement about the possibility of using RL agents to operate fully autonomous cyberphysical systems, especially in the context of autonomous driving. Unfortunately, a number of technical, societal, and legal challenges have so far prevented this possibility from becoming a reality.

In this work, we argue that existing RL agents may still enhance the operation of cyberphysical systems if deployed under lower automation levels. For example, if we let RL agents take some of the actions and leave the remaining ones to human agents, the resulting performance may be better than the performance either of them would achieve on their own (Raghu et al., 2019a; De et al., 2020; Wilder et al., 2020). Once we depart from full automation, we need to address the following question: when should we switch control between machine and human agents? In this work, we look at this problem from a theoretical perspective and develop an online algorithm that automatically learns to optimally switch control among multiple agents in a team. To fulfill this goal, however, we need to address several challenges:

  • Level of automation. In each application, what is considered an appropriate and tolerable load for each agent may differ (European Parliament, 2006). Therefore, we would like our algorithms to provide mechanisms to adjust the amount of control given to each agent (i.e., the level of automation) during a given time period.

  • Number of switches. Consider two different switching patterns that result in the same amount of agent control and equivalent performance. Then, we would like our algorithms to favor the pattern with the smaller number of switches. For example, in a team consisting of human and machine agents, every time a machine defers (takes) control to (from) a human, there is an additional cognitive load for the human (Brookhuis et al., 2001).

  • Unknown agent policies. The spectrum of human abilities spans a broad range (Macadam, 2003). As a result, there is a wide variety of potential human policies. Here, we would like our algorithms to learn personalized switching policies that, over time, adapt to the particular humans (and machines) they are dealing with.

  • Disentangling agents' policies and environment dynamics. We would like our algorithms to learn to disentangle the influence of the agents' policies from that of the environment dynamics on the switching policies. By doing so, they could be used to efficiently find multiple personalized switching policies for different teams of agents operating in similar environments (e.g., multiple semi-autonomous vehicles with different human drivers).

To tackle the above challenges, we first formally define the problem of learning to switch control among agents in a team using a 2-layer Markov decision process (Figure 1). Here, the team can be composed of any number of machine or human agents, and the agents' policies, as well as the transition probabilities of the environment, may be unknown. In our formulation, we assume that all agents follow Markovian policies1, similarly to other theoretical models of human decision making (Townsend et al., 2000; Daw & Dayan, 2014; McGhan et al., 2015). Under this definition, the problem reduces to finding the switching policy that provides an optimal trade-off between the environmental cost, the amount of agent control, and the number of switches. Then, we develop an online learning algorithm, which we refer to as UCRL2-MC2, that uses upper confidence bounds on the agents' policies and the transition probabilities of the environment to find a sequence of switching policies whose total regret with respect to the optimal switching policy is sublinear in the number of learning steps. In addition, we also demonstrate that the same algorithm can be used to find multiple sequences of switching policies across several independent teams of agents operating in similar environments, where it greatly benefits from maintaining shared confidence bounds for the transition probabilities of the environments and enjoys a better regret bound than UCRL2, a well-known reinforcement learning algorithm that we view as the most natural competitor. Finally, we perform a variety of simulation experiments in the standard RiverSwim environment as well as an obstacle avoidance task, where we consider multiple teams of agents (drivers), each composed of one human and one machine agent.

Our results illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic alternatives.

Before we proceed further, we would like to point out that, at a broader level, our methodology and theoretical results are applicable to the problem of switching control between agents following Markovian policies. As long as the agent policies are Markovian, our results do not distinguish between machine and human agents.

In this context, we view teams of human and machine agents as one potential application of our work, which we use as a motivating example throughout the paper. However, we would also like to acknowledge that a practical deployment of our methodology in a real application with human and machine agents would require considering a wide range of additional practical aspects (e.g., transparency, explainability, and visualization). Moreover, one may also need to explicitly model the difference in reaction times between human and machine agents. Finally, there may be scenarios in which it might be beneficial to allow a human operator to switch control. Such considerations are out of the scope of our work.

1In certain cases, it is possible to convert a non-Markovian human policy into a Markovian one by changing the state representation (Daw & Dayan, 2014). Addressing the problem of learning to switch control among agents in a team in a semi-Markovian setting is left as a very interesting avenue for future work.

2UCRL2 with Multiple Confidence sets.

2 Related Work

One can think of applying existing RL algorithms (Jaksch et al., 2010; Osband et al., 2013; Osband & Van Roy, 2014; Gopalan & Mannor, 2015), such as UCRL2 or Rmax, to find switching policies. However, these problem-agnostic algorithms are unable to exploit the specific structure of our problem. More specifically, our algorithm computes the confidence intervals separately over the agents' policies and the transition probabilities of the environment, instead of computing a single confidence interval, as problem-agnostic algorithms do. As a consequence, our algorithm learns to switch more efficiently across multiple teams of agents, as shown in Section 6.

There is a rapidly increasing line of work on learning to defer decisions in the machine learning literature (Bartlett & Wegkamp, 2008; Cortes et al., 2016; Geifman et al., 2018; Ramaswamy et al., 2018; Geifman & El-Yaniv, 2019; Liu et al., 2019; Raghu et al., 2019a;b; Thulasidasan et al., 2019; De et al., 2020; 2021; Mozannar & Sontag, 2020; Wilder et al., 2020; Shekhar et al., 2021). However, previous work has typically focused on supervised learning. More specifically, it has developed classifiers that learn to defer by considering the defer action as an additional label value, by training an independent classifier to decide about deferred decisions, or by reducing the problem to a combinatorial optimization problem. Moreover, except for a few notable recent exceptions (Raghu et al., 2019a; De et al., 2020; 2021; Mozannar & Sontag, 2020; Wilder et al., 2020), it does not consider that there is a human decision maker who takes a decision whenever the classifier defers it. In contrast, we focus on reinforcement learning, and develop algorithms that learn to switch control between multiple agents, including human agents. Recently, Jacq et al. (2022) introduced a new framework called lazy-MDPs to decide when reinforcement learning agents should act. They propose to augment existing MDPs with a new default action and to encourage agents to defer decision-making to a default policy in non-critical states. Though their lazy-MDP is similar to our augmented 2-layer MDP framework, our approach is designed to switch optimally between possibly multiple agents, each with its own policy.

Our work is also connected to research on understanding switching behavior and switching costs in the context of human-computer interaction (Czerwinski et al., 2000; Horvitz & Apacible, 2003; Iqbal & Bailey, 2007; Kotowick & Shah, 2018; Janssen et al., 2019), which has sometimes been referred to as "adjustable autonomy" (Mostafa et al., 2019). At a technical level, our work advances the state of the art in adjustable autonomy by introducing an algorithm with provable guarantees to efficiently find the optimal switching policy in a setting in which the dynamics of the environment and the agents' policies are unknown (i.e., there is uncertainty about them). Moreover, our work also relates to a recent line of research that combines deep reinforcement learning with opponent modeling to robustly switch between multiple machine policies (Everett & Roberts, 2018; Zheng et al., 2018). However, this line of research does not consider the presence of human agents, and there are no theoretical guarantees on the performance of the proposed algorithms.

Furthermore, our work contributes to an extensive body of work on human-machine collaboration (Stone et al., 2010; Taylor et al., 2011; Walsh et al., 2011; Barrett & Stone, 2012; Macindoe et al., 2012; Torrey & Taylor, 2013; Nikolaidis et al., 2015; Hadfield-Menell et al., 2016; Nikolaidis et al., 2017; Grover et al., 2018; Haug et al., 2018; Reddy et al., 2018; Wilson & Daugherty, 2018; Brown & Niekum, 2019; Kamalaruban et al., 2019; Radanovic et al., 2019; Tschiatschek et al., 2019; Ghosh et al., 2020; Strouse et al., 2021). However, rather than developing algorithms that learn to switch control between humans and machines, previous work has predominantly considered settings in which the machine and the human interact with each other.

Finally, one can think of using the options framework and the notions of macro-actions and micro-actions to formulate the problem of learning to switch (Sutton et al., 1999). However, the options framework is designed to address different levels of temporal abstraction in RL by defining macro-actions that correspond to sub-tasks (skills). In our problem, each agent is not necessarily optimized to act for a specific sub-task or sub-goal but for the whole environment/goal. Also, in our problem, we do not necessarily have control over all agents to learn the optimal policy for each agent, while in the options framework, a primary direction is to learn optimal options for each sub-task. In other words, even though we can mathematically refer to each agent policy as an option, they are not conceptually the same.

3 Switching Control Among Agents As A 2-Layer MDP

Given a team of agents D, at each time step t ∈ {1, . . . , L}, our (cyberphysical) system is characterized by a state st ∈ S, where S is a finite state space, and a control switch dt ∈ D, which determines who takes an action at ∈ A, where A is a finite action space. In the above, the switch value is given by a (deterministic and time-varying) switching policy dt = πt(st, dt−1)3. More specifically, if dt = d, the action at is sampled from agent d's policy pd(at | st). Moreover, given a state st and an action at, the next state st+1 is sampled from a transition probability p(st+1 | st, at). Here, we assume that the agents' policies and the transition probabilities may be unknown. Finally, given an initial state and switch value (s1, d0) and a trajectory τ = {(st, dt, at)}Lt=1 of states, switch values and actions, we define the total cost c(τ | s1, d0) as:

c(\tau\,|\,s_{1},d_{0})=\sum_{t=1}^{L}\left[c_{e}(s_{t},a_{t})+c_{c}(d_{t})+c_{x}(d_{t},d_{t-1})\right],\tag{1}

where ce(st, at) is the environment cost of taking action at at state st, cc(dt) is the cost of giving control to agent dt, cx(dt, dt−1) is the cost of switching from dt−1 to dt, and L is the time horizon4. Then, our goal is to find the optimal switching policy π∗ = (π∗1, . . . , π∗L) that minimizes the expected cost, i.e.,

\pi^{*}=\operatorname*{argmin}_{\pi}\mathbb{E}\left[c(\tau\mid s_{1},d_{0})\right],\tag{2}

where the expectation is taken over all the trajectories induced by the switching policy given the agents' policies.
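To make the interaction protocol and the total cost in Eq. 1 concrete, the following minimal Python sketch samples one trajectory under a given switching policy and accumulates its cost. All names (rollout_cost, agents, env_step, and the cost handles) are illustrative placeholders, not the paper's implementation.

```python
def rollout_cost(pi, agents, env_step, c_e, c_c, c_x, s1, d0, L, rng):
    """Sample one trajectory under switching policy `pi` and return its total
    cost as in Eq. 1 (illustrative sketch).
      pi[t](s, d_prev)    -> agent index d_t (the switch value)
      agents[d](s, rng)   -> action a_t sampled from p_d(. | s)
      env_step(s, a, rng) -> next state sampled from p(. | s, a)
    """
    s, d_prev, total = s1, d0, 0.0
    for t in range(L):
        d = pi[t](s, d_prev)            # switching layer: who is in control
        a = agents[d](s, rng)           # that agent's (possibly unknown) policy
        total += c_e(s, a) + c_c(d) + c_x(d, d_prev)
        s = env_step(s, a, rng)         # environment transition
        d_prev = d
    return total
```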

To solve the above problem, one could just resort to problem-agnostic RL algorithms, such as UCRL2 or Rmax, over a standard Markov decision process (MDP), defined as

{\mathcal{M}}=({\mathcal{S}}\times{\mathcal{D}},{\mathcal{D}},\bar{P},\bar{C},L),

where S × D is an augmented state space, the set of actions D is just the set of switch values, the transition dynamics P̄ at time t are given by

p(st+1,dtβ€‰βˆ£β€‰st,dtβˆ’1)=I[Ο€t(st,dtβˆ’1)=dt]Γ—βˆ‘a∈Ap(st+1β€‰βˆ£β€‰st,a)pdt(aβ€‰βˆ£β€‰st),(3)p(s_{t+1},d_{t}\,|\,s_{t},d_{t-1})=\mathbb{I}[\pi_{t}(s_{t},d_{t-1})=d_{t}]\times\sum_{a\in\mathcal{A}}p(s_{t+1}\,|\,s_{t},a)p_{d_{t}}(a\,|\,s_{t}),\tag{3}

the immediate cost CΒ― at time t is given by

c~(st,dtβˆ’1)=Eat∼pΟ€t(st,dtβˆ’1)(β‹…βˆ£st)[ce(st,at)]+cc(Ο€t(st,dtβˆ’1))+cx(Ο€t(st,dtβˆ’1),dtβˆ’1).\tilde{c}(s_{t},d_{t-1})=\mathbb{E}_{a_{t}\sim p_{\pi_{t}(s_{t},d_{t-1})}(\cdot\mid s_{t})}\left[c_{e}(s_{t},a_{t})\right]+c_{c}(\pi_{t}(s_{t},d_{t-1}))+c_{x}(\pi_{t}(s_{t},d_{t-1}),d_{t-1}).

Here, note that, by using conditional expectations, we can compute the average cost of a trajectory, given by Eq. 1, from the above immediate costs. However, these algorithms would not exploit the structure of the problem. More specifically, they would not use the observed agents' actions to improve the estimation of the transition dynamics over time.
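Concretely, a problem-agnostic learner only ever observes the composed transition of Eq. 3, in which the agent's action is already marginalized out. The sketch below illustrates that composed transition; the array layouts and function names are assumptions made for illustration.

```python
import numpy as np

def flat_transition(p_env, p_agent, pi_t, s, d_prev):
    """Transition of the augmented (flat) MDP in Eq. 3 (illustrative sketch).
    p_env[s, a, s']  : environment transition probabilities p(s' | s, a)
    p_agent[d, s, a] : agent d's policy p_d(a | s)
    pi_t(s, d_prev)  : switching policy, returns the agent put in control
    Returns an array over pairs (s', d); the agent's action is summed out,
    which is exactly the information a problem-agnostic learner discards.
    """
    n_s = p_env.shape[0]
    n_d = p_agent.shape[0]
    out = np.zeros((n_s, n_d))
    d = pi_t(s, d_prev)
    # sum_a p(s' | s, a) p_d(a | s), placed at the chosen switch value d
    out[:, d] = p_env[s].T @ p_agent[d, s]
    return out
```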


To avoid the above shortcoming, we will resort instead to a 2-layer MDP where taking an action dt in state (st, dt−1) leads first to an intermediate state (st, at) ∈ S × A with probability pdt(at | st) and immediate cost cdt(st, dt−1) = cc(dt) + cx(dt, dt−1), and then to a final state (st+1, dt) ∈ S × D with probability I[πt(st, dt−1) = dt] · p(st+1 | st, at) and immediate cost ce(st, at). More formally, the 2-layer MDP is defined by the following 8-tuple:

{\mathcal{M}}=({\mathcal{S}}\times{\mathcal{D}},{\mathcal{S}}\times{\mathcal{A}},{\mathcal{D}},P_{{\mathcal{D}}},P,C_{{\mathcal{D}}},C_{e},L),\tag{5}

where S × D is the final state space, S × A is the intermediate state space, the set of actions D is the set of switch values, the transition dynamics PD and P at time t are given by pdt(at | st) and I[πt(st, dt−1) = dt] · p(st+1 | st, at), and the immediate costs CD and Ce at time t are given by cdt(st, dt−1) and ce(st, at), respectively.
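The sketch below illustrates one step of this 2-layer MDP, reusing the illustrative handles from the earlier rollout sketch; it makes explicit that the intermediate state (s, a) is observed, which is what later lets us estimate the agents' policies and the environment transitions separately.

```python
def two_layer_step(pi_t, agents, env_step, c_e, c_c, c_x, s, d_prev, rng):
    """One step of the 2-layer MDP of Eq. 5 (illustrative sketch).
    Taking action d_t in final state (s, d_prev) first reaches the
    intermediate state (s, a_t) with cost c_{d_t}(s, d_prev), and then the
    next final state (s', d_t) with cost c_e(s, a_t).
    """
    d_t = pi_t(s, d_prev)
    cost_switch = c_c(d_t) + c_x(d_t, d_prev)   # c_{d_t}(s, d_prev)
    a_t = agents[d_t](s, rng)                   # layer 1: intermediate state (s, a_t)
    cost_env = c_e(s, a_t)
    s_next = env_step(s, a_t, rng)              # layer 2: final state (s', d_t)
    return (s, a_t), (s_next, d_t), cost_switch + cost_env
```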

The above 2-layer MDP will allow us to estimate separately the agents' policies pd(· | s) and the transition probability p(· | s, a) of the environment, using both the intermediate and final states, and to design an algorithm that improves the regret that problem-agnostic RL algorithms achieve in our problem.

3Note that, by making the switching policy dependent on the previous switch value dt−1, we can account for the switching cost.

4The specific choice of environment cost ce(·, ·), control cost cc(·) and switching cost cx(·, ·) is application dependent.




Figure 1: Transitions of a 2-layer Markov decision process (MDP) from state (s, d) to state (s', d') after selecting agent d'. Here, d' and d denote the current and previous agents in control. In the first layer (switching layer), the switching policy chooses agent d', which takes an action according to its action policy pd'. Then, in the action layer, the environment transitions to the next state s' based on the taken action, according to the transition probability p.

4 Learning To Switch In A Team Of Agents

Since we may not know the agents' policies nor the transition probabilities, we need to trade off exploitation, i.e., minimizing the expected cost, and exploration, i.e., learning about the agents' policies and the transition probabilities. To this end, we look at the problem from the perspective of episodic learning and proceed as follows.

We consider K independent subsequent episodes of length L and denote the aggregate length of all episodes as T = KL. Each of these episodes corresponds to a realization of the same finite horizon 2-layer Markov decision process, introduced in Section 3, with state spaces S × A and S × D, set of actions D, true agent policies P∗D, true environment transition probability P∗, and immediate costs CD and Ce. However, since we do not know the true agent policies and environment transition probabilities, just before each episode k starts, our goal is to find a switching policy π^k with desirable properties in terms of the total regret R(T), which is given by:

R(T)=\sum_{k=1}^{K}\left[\mathbb{E}_{\tau\sim\pi^{k},P_{\mathcal{D}}^{*},P^{*}}\left[c(\tau\,|\,s_{1},d_{0})\right]-\mathbb{E}_{\tau\sim\pi^{*},P_{\mathcal{D}}^{*},P^{*}}\left[c(\tau\,|\,s_{1},d_{0})\right]\right],\tag{6}

where π∗ is the optimal switching policy under the true agent policies and environment transition probabilities.

To achieve our goal, we apply the principle of optimism in the face of uncertainty, i.e.,

\pi^{k}=\operatorname*{argmin}_{\pi}\operatorname*{min}_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}}\operatorname*{min}_{P\in\mathcal{P}^{k}}\mathbb{E}_{\tau\sim\pi,P_{\mathcal{D}},P}\left[c(\tau\mid s_{1},d_{0})\right],\tag{7}

where P^k_D is a (|S|×|D|×L)-rectangular confidence set, i.e., P^k_D = ×_{s,d,t} P^k_{· | d,s,t}, and P^k is a (|S|×|A|×L)-rectangular confidence set, i.e., P^k = ×_{s,a,t} P^k_{· | s,a,t}. Here, note that the confidence sets are constructed using data gathered during the first k − 1 episodes and allow for time-varying agent policies pd(· | s, t) and transition probabilities p(· | s, a, t).


However, to solve Eq. 7, we first need to explicitly define the confidence sets. To this end, we define the empirical distributions p̂^k_d(· | s) and p̂^k(· | s, a) just before episode k starts as:

\hat{p}_{d}^{k}(a\,|\,s)=\begin{cases}\frac{N_{k}(s,d,a)}{N_{k}(s,d)}&\text{if }N_{k}(s,d)\neq0\\ \frac{1}{|\mathcal{A}|}&\text{otherwise,}\end{cases}\tag{8}

\hat{p}^{k}(s^{\prime}\,|\,s,a)=\begin{cases}\frac{N_{k}^{\prime}(s,a,s^{\prime})}{N_{k}^{\prime}(s,a)}&\text{if }N_{k}^{\prime}(s,a)\neq0\\ \frac{1}{|\mathcal{S}|}&\text{otherwise,}\end{cases}\tag{9}

where

N_{k}(s,d)=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,d_{t}=d\text{ in episode }l),\qquad N_{k}(s,d,a)=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,a_{t}=a,d_{t}=d\text{ in episode }l),

N_{k}^{\prime}(s,a)=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,a_{t}=a\text{ in episode }l),\qquad N_{k}^{\prime}(s,a,s^{\prime})=\sum_{l=1}^{k-1}\sum_{t\in[L]}\mathbb{I}(s_{t}=s,a_{t}=a,s_{t+1}=s^{\prime}\text{ in episode }l).
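The empirical estimates of Eqs. 8 and 9 are simple count ratios. The sketch below computes them from the visit counts defined above; the array layouts (N_sda, N_sas) are assumptions made for illustration.

```python
import numpy as np

def empirical_estimates(N_sda, N_sas, n_actions, n_states):
    """Empirical agent policies and environment transitions (Eqs. 8 and 9),
    from visit counts gathered during the first k-1 episodes (illustrative).
    N_sda[s, d, a]  : number of times agent d took action a in state s
    N_sas[s, a, s'] : number of times the environment moved s -> s' under a
    """
    N_sd = N_sda.sum(axis=2, keepdims=True)
    p_hat_d = np.where(N_sd > 0, N_sda / np.maximum(N_sd, 1), 1.0 / n_actions)
    N_sa = N_sas.sum(axis=2, keepdims=True)
    p_hat = np.where(N_sa > 0, N_sas / np.maximum(N_sa, 1), 1.0 / n_states)
    return p_hat_d, p_hat
```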

Then, similarly as in Jaksch et al. (2010), we opt for L1 confidence sets5, i.e.,

Pβ‹…βˆ£d,s,tk(Ξ΄)={ pd:∣∣pd(β‹…βˆ£s,t)βˆ’p^dk(β‹…βˆ£s)∣∣1≀βDk(s,d,Ξ΄)},Pβ‹…βˆ£s,a,tk(Ξ΄)={ p:∣∣p(β‹…βˆ£s,a,t)βˆ’p^k(β‹…βˆ£s,a)∣∣1≀βk(s,a,Ξ΄)},\begin{array}{l}{{{\mathcal P}_{\cdot\mid d,s,t}^{k}(\delta)=\left\{\,p_{d}:||p_{d}(\cdot\mid s,t)-\hat{p}_{d}^{k}(\cdot\mid s)||_{1}\leq\beta_{\mathcal D}^{k}(s,d,\delta)\right\},}}\\ {{{\mathcal P}_{\cdot\mid s,a,t}^{k}(\delta)=\left\{\,p:||p(\cdot\mid s,a,t)-\hat{p}^{k}(\cdot\mid s,a)||_{1}\leq\beta^{k}(s,a,\delta)\right\},}}\end{array}

for all d ∈ D, s ∈ S, a ∈ A and t ∈ [L], where δ is a given parameter,

\beta_{\mathcal{D}}^{k}(s,d,\delta)=\sqrt{\frac{2\log\left(\frac{(k-1)\,L\,|\mathcal{S}|\,|\mathcal{D}|\,2^{|\mathcal{A}|}}{\delta}\right)}{\max\{1,N_{k}(s,d)\}}}\quad\text{and}\quad\beta^{k}(s,a,\delta)=\sqrt{\frac{2\log\left(\frac{(k-1)\,L\,|\mathcal{S}|\,|\mathcal{A}|\,2^{|\mathcal{S}|}}{\delta}\right)}{\max\{1,N_{k}^{\prime}(s,a)\}}}.

Next, given the switching policy π and the transition dynamics PD and P, we define the value function as

V_{t\mid P_{\mathcal{D}},P}^{\pi}(s,d)=\mathbb{E}\bigg{[}\sum_{\tau=t}^{L}c_{e}(s_{\tau},a_{\tau})+c_{c}(d_{\tau})+c_{x}(d_{\tau},d_{\tau-1})\,\Big{|}\,s_{t}=s,d_{t-1}=d\bigg{]},\tag{10}

where the expectation is taken over all the trajectories induced by the switching policy given the agents' policies. Then, for each episode k, we define the optimal value function v k t (s, d) as

v_{t}^{k}(s,d)=\min_{\pi}\min_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}(\delta)}\min_{P\in\mathcal{P}^{k}(\delta)}V_{t|P_{\mathcal{D}},P}^{\pi}(s,d).\tag{11}

Then, we are ready to use the following key theorem, which gives a solution to Eq. 7 (proven in Appendix A):

Theorem 1. For any episode k, the optimal value function v^k_t(s, d) satisfies the following recursive equation:

v_{t}^{k}(s,d)=\min_{d_{t}\in\mathcal{D}}\Big{[}c_{d_{t}}(s,d)+\min_{p_{d_{t}}(\cdot\,|\,s,t)\in\mathcal{P}^{k}_{\cdot\,|\,d_{t},s,t}}\sum_{a\in\mathcal{A}}p_{d_{t}}(a\,|\,s,t)\times\Big{(}c_{e}(s,a)+\min_{p(\cdot\,|\,s,a,t)\in\mathcal{P}^{k}_{\cdot\,|\,s,a,t}}\mathbb{E}_{s^{\prime}\sim p(\cdot\,|\,s,a,t)}\big{[}v_{t+1}^{k}(s^{\prime},d_{t})\big{]}\Big{)}\Big{]},\tag{12}

with v^k_{L+1}(s, d) = 0 for all s ∈ S and d ∈ D. Moreover, if d^∗_t is the solution to the minimization problem on the RHS of the above recursive equation, then π^k_t(s, d) = d^∗_t.

The above result readily implies that, just before each episode k starts, we can find the optimal switching policy π^k = (π^k_1, . . . , π^k_L) using dynamic programming, starting with v^k_{L+1}(s, d) = 0 for all s ∈ S and d ∈ D.

Moreover, similarly as in Strehl & Littman (2008), we can solve the inner minimization problems in Eq. 12 analytically using Lemma 7 in Appendix B. To this end, we first find the optimal p(· | s, a, t) for all s ∈ S and a ∈ A, and then we find the optimal pdt(· | s, t) for all dt ∈ D. Algorithm 1 summarizes the whole procedure, which we refer to as UCRL2-MC.

5This choice will result in a sequence of switching policies with desirable properties in terms of total regret.

ALGORITHM 1: UCRL2-MC
1: Input: cost functions CD and Ce, δ
2: {Nk, N′k} ← InitializeCounts()
3: for k = 1, . . . , K do
4:   {p̂^k_d}, p̂^k ← UpdateDistribution({Nk, N′k})
5:   P^k_D, P^k ← UpdateConfidenceSets({p̂^k_d}, p̂^k, δ)
6:   π^k ← GetOptimal(P^k_D, P^k, CD, Ce)
7:   (s1, d0) ← InitializeConditions()
8:   for t = 1, . . . , L do
9:     dt ← π^k_t(st, dt−1)
10:    at ∼ pdt(· | st)
11:    st+1 ∼ P(· | st, at)
12:    N ← UpdateCounts((st, dt, at, st+1), {Nk, N′k})
13:  end for
14: end for
15: Return π^K

Within the algorithm, the function GetOptimal(·) finds the optimal policy π^k using dynamic programming, as described above, and UpdateDistribution(·) computes Eqs. 8 and 9. Moreover, it is important to notice that, in lines 8–10, the switching policy π^k is actually deployed, the true agents take actions in the true environment and, as a result, action and state transition data from the true agents and the true environment is gathered.
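The GetOptimal(·) step can be implemented as backward dynamic programming over Eq. 12, solving each inner minimization over an L1 ball in closed form (in the spirit of Strehl & Littman, 2008). The sketch below is a minimal, unoptimized illustration under assumed array layouts (p_hat_d[s, d, a], p_hat[s, a, s'], beta_d[s, d], beta[s, a], c_d[d_prev, d], c_e[s, a]); it is not the authors' code.

```python
import numpy as np

def optimistic_dist(p_hat, values, beta):
    """Closed-form solution of min_p <p, values> over the L1 ball
    {p : ||p - p_hat||_1 <= beta, p a distribution}: add beta/2 mass to the
    smallest-value element and remove the surplus from the largest ones.
    """
    p = p_hat.astype(float).copy()
    best = int(np.argmin(values))
    p[best] = min(1.0, p[best] + beta / 2.0)
    for j in np.argsort(values)[::-1]:   # remove mass from worst elements first
        if j == best:
            continue
        excess = p.sum() - 1.0
        if excess <= 1e-12:
            break
        p[j] = max(0.0, p[j] - excess)
    return p

def get_optimal(p_hat_d, beta_d, p_hat, beta, c_d, c_e, L):
    """Backward dynamic programming for Eq. 12 (illustrative sketch).
    Returns the switching policy pi[t, s, d_prev] -> d and values v[t, s, d_prev].
    c_d[d_prev, d] plays the role of c_{d_t}(s, d) = c_c(d) + c_x(d, d_prev).
    """
    n_s, n_d, n_a = p_hat_d.shape
    v = np.zeros((L + 1, n_s, n_d))
    pi = np.zeros((L, n_s, n_d), dtype=int)
    for t in range(L - 1, -1, -1):
        for s in range(n_s):
            for d_prev in range(n_d):
                best_val, best_d = np.inf, 0
                for d in range(n_d):
                    # optimistic cost-to-go of each action if agent d is in control
                    q = np.array([c_e[s, a]
                                  + optimistic_dist(p_hat[s, a], v[t + 1, :, d], beta[s, a]) @ v[t + 1, :, d]
                                  for a in range(n_a)])
                    # optimistic choice of agent d's policy within its own L1 ball
                    p_opt = optimistic_dist(p_hat_d[s, d], q, beta_d[s, d])
                    val = c_d[d_prev, d] + p_opt @ q
                    if val < best_val:
                        best_val, best_d = val, d
                v[t, s, d_prev], pi[t, s, d_prev] = best_val, best_d
    return pi, v
```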

Next, the following theorem shows that the sequence of policies {π^k}^K_{k=1} found by Algorithm 1 achieves a total regret that is sublinear with respect to the number of steps, as defined in Eq. 6 (proven in Appendix A):

Theorem 2. Assume we use Algorithm 1 to find the switching policies π^k. Then, with probability at least 1 − δ, it holds that

R(T)\leq\rho_{1}L{\sqrt{|{\mathcal{A}}||{\mathcal{S}}||{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}}+\rho_{2}L|{\mathcal{S}}|{\sqrt{|{\mathcal{A}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{A}}|T}{\delta}}\right)}},\tag{13}

where ρ1, ρ2 > 0 are constants.

The above regret bound suggests that our algorithm may achieve higher regret than standard UCRL2 (Jaksch et al., 2010), one of the most popular problem-agnostic RL algorithms. More specifically, one can readily show that, if we use UCRL2 to find the switching policies π^k (refer to Appendix C), then, with probability at least 1 − δ, it holds that

R(T)\leq\rho L|{\mathcal{S}}|{\sqrt{|{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}},\tag{14}

where ρ is a constant. Then, if we omit constant and logarithmic factors and assume the size of the team of agents is smaller than the size of the state space, i.e., |D| < |S|, we have that, for UCRL2, the regret bound is Õ(L|S|√(|D|T)) while, for UCRL2-MC, it is Õ(L|S|√(|A|T)).

That being said, in practice, we have found that our algorithm achieves comparable regret to UCRL2, as shown in Figure 4. In addition, after applying our algorithm to a specific team of agents and environment, we can reuse the confidence intervals over the transition probability p(· | s, a) we have learned to find the optimal switching policy for a different team of agents operating in a similar environment. In contrast, after applying UCRL2, we would only have a confidence interval over the conditional probability defined by Eq. 3, which would be of little use for finding the optimal switching policy for a different team of agents. In the following section, we will build on this insight by considering several independent teams of agents operating in similar environments. We will demonstrate that, whenever we aim to find multiple sequences of switching policies for these independent teams, a straightforward variation of UCRL2-MC greatly benefits from maintaining shared confidence bounds for the transition probabilities of the environments and enjoys a better regret bound than UCRL2.

Figure 2: Three examples of environment realizations with different initial traffic level γ0.

Remarks. For ease of exposition, we have assumed that both the machine and human agents follow arbitrary Markov policies that do not change due to switching. However, our theoretical results still hold if we lift this assumption: we just need to define the agents' policies as pd(at | st, dt, dt−1) and construct separate confidence sets based on the switch values.

5 Learning To Switch Across Multiple Teams Of Agents

In this section, rather than finding a sequence of switching policies for a single team of agents, we aim to find multiple sequences of switching policies across several independent teams operating in similar environments.

We will analyze our algorithm in scenarios where it can maintain shared confidence bounds for the transition probabilities of the environments across these independent teams. For instance, when the learning algorithm is deployed in a centralized setting, it is possible to collect data across independent teams to maintain shared confidence intervals on the common parameters (i.e., the environment's transition probabilities in our problem setting). This setting fits a variety of real applications; most prominently, think of a car manufacturer that continuously collects driving data from millions of human drivers and wishes to learn a different switching policy for each driver to implement a personalized semi-autonomous driving system. Similarly as in the previous section, we look at the problem from the perspective of episodic learning and proceed as follows.
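In such a centralized setting, the shared statistics can be as simple as a single table of environment transition counts that every per-team learner updates and reads; a minimal sketch (class and method names are hypothetical):

```python
import numpy as np

class SharedEnvCounts:
    """Illustrative sketch of the centralized setting: each team keeps its own
    agent counts, but all N learners read and update this single table of
    environment counts, so the shared confidence set for p(. | s, a) shrinks
    with data gathered by every team.
    """
    def __init__(self, n_states, n_actions):
        self.N_sas = np.zeros((n_states, n_actions, n_states))

    def update(self, s, a, s_next):
        self.N_sas[s, a, s_next] += 1

    def visits(self, s, a):
        return self.N_sas[s, a].sum()
```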

Given N independent teams of agents {Di}N_{i=1}, we consider K independent subsequent episodes of length L per team and denote the aggregate length of all of these episodes as T = KL. For each team of agents Di, every episode corresponds to a realization of a finite horizon 2-layer Markov decision process with state spaces S × A and S × Di, set of actions Di, true agent policies P∗_{Di}, true environment transition probability P∗, and immediate costs C_{Di} and Ce. Here, note that all the teams operate in a similar environment, i.e., P∗ is shared across teams, and, without loss of generality, they share the same costs. Then, our goal is to find switching policies π^k_i with desirable properties in terms of the total regret R(T, N), which is given by:

R(T,N)=\sum_{i=1}^{N}\sum_{k=1}^{K}\left[\mathbb{E}_{\tau\sim\pi_{i}^{k},P_{\mathcal{D}_{i}}^{*},P^{*}}\left[c(\tau\mid s_{1},d_{0})\right]-\mathbb{E}_{\tau\sim\pi_{i}^{*},P_{\mathcal{D}_{i}}^{*},P^{*}}\left[c(\tau\mid s_{1},d_{0})\right]\right],\tag{15}

where π∗_i is the optimal switching policy for team i, under the true agent policies and environment transition probability.

To achieve our goal, we just run N instances of UCRL2-MC (Algorithm 1), each with a different confidence set P^k_{Di}(δ) for the agents' policies, similarly as in the case of a single team of agents, but with a shared confidence set P^k(δ) for the environment transition probability. Then, we have the following key corollary, which readily follows from Theorem 2:


Figure 3: Trajectories induced by the switching policies found by Algorithm 1. The blue and orange segments indicate machine and human control, respectively. In both panels, we train Algorithm 1 within the same sequence of episodes, where the initial traffic level of each episode is sampled uniformly from {no-car, light, heavy}, and show three episodes with different initial traffic levels. The results indicate that, in the latter episodes, the algorithm has learned to switch to the human driver in heavier traffic.

Corollary 3. Assume we use N instances of Algorithm 1 to find the switching policies π^k_i using a shared confidence set for the environment transition probability. Then, with probability at least 1 − δ, it holds that

R(T,N)≀ρ1NL∣A∣∣S∣∣D∣Tlog⁑(∣S∣∣D∣TΞ΄)+ρ2L∣S∣∣A∣NTlog⁑(∣S∣∣A∣TΞ΄)(16)R(T,N)\leq\rho_{1}NL\sqrt{|{\cal A}||{\cal S}||{\cal D}|T\log\left(\frac{|{\cal S}||{\cal D}|T}{\delta}\right)}+\rho_{2}L|{\cal S}|\sqrt{|{\cal A}|NT\log\left(\frac{|{\cal S}||{\cal A}|T}{\delta}\right)}\tag{16}

where ρ1, ρ2 > 0 are constants.

The above result suggests that our algorithm may achieve lower regret than UCRL2 in a scenario with multiple teams of agents operating in similar environments. This is because, under UCRL2, the confidence sets for the conditional probability defined by Eq. 3 cannot be shared across teams. More specifically, if we use N instances of UCRL2 to find the switching policies π^k_i, then, with probability at least 1 − δ, it holds that

R(T,N)≀ρNL∣S∣∣D∣Tlog⁑(∣S∣∣D∣TΞ΄)R(T,N)\leq\rho N L|{\mathcal{S}}|{\sqrt{|{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}}

where ρ is a constant. Then, if we omit constant and logarithmic factors and assume |Di| < |S| for all i ∈ [N], we have that, for UCRL2, the regret bound is Õ(NL|S|√(|D|T)) while, for UCRL2-MC, it is Õ(L|S|√(|A|TN) + NL√(|A||S||D|T)). Importantly, in practice, we have found that UCRL2-MC does achieve significantly lower regret than UCRL2, as shown in Figure 5.
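To see when the shared confidence set pays off, it is instructive to read the two bounds above on a per-team basis, omitting constants and logarithmic factors (this is only a rough reading of the stated bounds, not an additional result):

\frac{R(T,N)}{N}=\tilde{O}\big(L|\mathcal{S}|\sqrt{|\mathcal{D}|T}\big)\ \text{for UCRL2},\qquad\frac{R(T,N)}{N}=\tilde{O}\big(L|\mathcal{S}|\sqrt{|\mathcal{A}|T/N}+L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T}\big)\ \text{for UCRL2-MC},

so the per-team bound of UCRL2 does not improve with N, whereas the environment term of UCRL2-MC shrinks as √N. Hence, for sufficiently large N, UCRL2-MC has the smaller per-team bound whenever |A||S||D| < |S|²|D|, i.e., whenever |A| < |S|.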

6 Experiments

6.1 Obstacle Avoidance

We perform a variety of simulations in obstacle avoidance, where teams of agents (drivers) consist of one human agent (H) and one machine agent (M), i.e., D = {H, M}. We consider a lane driving environment with three lanes and infinitely many rows, where the type of each individual cell (i.e., road, car, stone or grass) in row r is sampled independently at random with a probability that depends on the traffic level γr, which can take three discrete values, γr ∈ {no-car, light, heavy}. The traffic level γr+1 of each row is sampled at random with a probability that depends on the traffic level γr of the previous row. The probability of each cell type given the traffic level, as well as the conditional distribution of traffic levels, can be found in Appendix D.

At any given time t, we assume that whoever is in control (be it the machine or the human) can take three different actions A = {left, straight, right}. Action left steers the car to the left of the current lane, action right steers it to the right, and action straight leaves the car in the current lane. If the car is already on the leftmost (rightmost) lane when taking action left (right), then the lane remains unchanged. Irrespective of the action taken, the car always moves forward. The goal of the cyberphysical system is to drive the car from an initial state at time t = 1 until the end of the episode t = L with minimum total cost.

Figure 4: Total regret of the trajectories induced by the switching policies found by Algorithm 1 and those induced by a variant of UCRL2, in comparison with the trajectories induced by a machine driver and a human driver, in a setting with a single team of agents. In all panels, we run K = 20,000 episodes. For Algorithm 1 and the variant of UCRL2, the regret is sublinear with respect to the number of time steps whereas, for the machine and the human drivers, the regret is linear.

In our experiments, we set L = 10. Figure 2 shows three examples of environment realizations.
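For concreteness, the lane update implied by the description above can be written as follows (a minimal sketch; the function and argument names are illustrative):

```python
def move(lane, action, n_lanes=3):
    """Lane update for the obstacle-avoidance environment (illustrative).
    The car always advances one row; `action` only changes the lane, and
    steering off the leftmost/rightmost lane leaves the lane unchanged.
    """
    if action == "left":
        return max(0, lane - 1)
    if action == "right":
        return min(n_lanes - 1, lane + 1)
    return lane  # "straight"
```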

State space. To evaluate the switching policies found by Algorithm 1, we experiment with a sensor-based state space, where the state comprises the type of the current cell, the types of the three cells the car can move into in the next time step, and the current traffic level (we assume the agents, be it a human or a machine, can measure the traffic level). For example, assume that at time t the traffic is light, the car is on a road cell and, if it moves forward left, it hits a stone, if it moves forward straight, it hits a car, and, if it moves forward right, it drives over grass; then its state value is st = (light, road, stone, car, grass). Moreover, if the car is on the leftmost (rightmost) lane, then we set the value of the third (fifth) dimension of st to ∅. Therefore, under this state representation, the resulting MDP has ∼3 × 5^4 states.
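The sketch below builds this sensor-based state tuple; the names and the EMPTY marker (standing in for the ∅ value above) are illustrative assumptions.

```python
EMPTY = None  # placeholder for the off-road side next to the boundary lanes

def sensor_state(traffic, cell_type, row_ahead, lane, n_lanes=3):
    """Sensor-based state of Section 6.1 (illustrative sketch).
    `row_ahead` holds the cell types of the next row; the last three entries
    of the state are the cells reachable by left / straight / right, with the
    off-road side replaced by the empty marker.
    """
    left = row_ahead[lane - 1] if lane > 0 else EMPTY
    straight = row_ahead[lane]
    right = row_ahead[lane + 1] if lane < n_lanes - 1 else EMPTY
    return (traffic, cell_type, left, straight, right)
```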

Cost and human/machine policies. We consider a state-dependent environment cost ce(st, at) = ce(st) that depends on the type of the cell the car is on at state st, i.e., ce(st) = 0 if the type of the current cell is road, ce(st) = 2 if it is grass, ce(st) = 4 if it is stone and ce(st) = 10 if it is car. Moreover, in all simulations, we use a machine policy that has been trained using a standard RL algorithm on environment realizations with γ0 = no-car. In other words, the machine policy is trained to perform well under a low traffic level.

Moreover, we consider that all humans pick which action to take (left, straight or right) according to a noisy estimate of the environment cost of the three cells the car can move into in the next time step. More specifically, each human model H computes a noisy estimate of the cost ĉe(s) = ce(s) + εs of each of the three cells the car can move into, where εs ∼ N(0, σH), and picks the action that moves the car to the cell with the lowest noisy estimate6. As a result, human drivers are generally more reliable than the machine under high traffic levels; however, the machine is more reliable than humans under low traffic levels, where its policy is near-optimal (see Appendix E for a comparison of the human and machine performance). Finally, we consider that only the car driven by our system moves in the environment.
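A minimal sketch of this human model follows; the function name and dictionary layout are assumptions for illustration, and σH is treated as the noise scale.

```python
import numpy as np

def human_action(costs_ahead, sigma_H, rng):
    """Human model of Section 6.1 (illustrative sketch): pick the action whose
    reachable cell has the lowest noisy cost estimate c_e(s) + eps, with
    eps ~ N(0, sigma_H). `costs_ahead` maps each feasible action to the
    environment cost c_e of the cell it leads to.
    """
    noisy = {a: c + rng.normal(0.0, sigma_H) for a, c in costs_ahead.items()}
    return min(noisy, key=noisy.get)

# Example: road on the left (cost 0), a car straight ahead (cost 10), grass on the right (cost 2).
rng = np.random.default_rng(0)
print(human_action({"left": 0.0, "straight": 10.0, "right": 2.0}, sigma_H=2.0, rng=rng))
```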

6.1.1 Results

First, we focus on a single team of one machine M and one human model H, with σH = 2, and use Algorithm 1 to find a sequence of switching policies with sublinear regret. At the beginning of each episode, the initial traffic level γ0 is sampled uniformly at random.

6Note that, in our theoretical results, we make no assumption about the human policy other than the Markov property.


Figure 5: Total regret of the trajectories induced by the switching policies found by N instances of Algorithm 1 and those induced by N instances of a variant of UCRL2 in a setting with N teams of agents. In both panels, each instance of Algorithm 1 shares the same confidence set for the environment transition probabilities and we run K = 5000 episodes. The sequences of policies found by Algorithm 1 outperform those found by the variant of UCRL2 in terms of total regret, in agreement with Corollary 3.

We look at the trajectories induced by the switching policies found by our algorithm across different episodes for different values of the switching cost cx and the cost of human control cc(H)7. Figure 3 summarizes the results, which show that, in the latter episodes, the algorithm has learned to rely on the machine (blue segments) whenever the traffic level is low and to switch to the human driver when the traffic level increases. Moreover, whenever the amount of human control and the number of switches is not penalized (i.e., cx = cc(H) = 0), the algorithm switches to the human more frequently whenever the traffic level is high, to reduce the environment cost. See Appendix F for a comparison of the human control rate in environments with different initial traffic levels.

In addition, we compare the performance achieved by Algorithm 1 with three baselines: (i) a variant of UCRL2 (Jaksch et al., 2010) adapted to our finite horizon setting (see Appendix C), (ii) a human agent, and (iii) a machine agent. As a measure of performance, we use the total regret, as defined in Eq. 6. Figure 4 summarizes the results for two different values of switching cost cx and cost of human control cc(H). The results show that both our algorithm and UCRL2 achieve sublinear regret with respect to the number of time steps and their performance is comparable in agreement with Theorem 2. In contrast, whenever the human or the machine drive on their own, they suffer linear regret, due to a lack of exploration.

Next, we consider N = 10 independent teams of agents, {Di}N_{i=1}, operating in a similar lane driving environment. Each team Di is composed of a different human model Hi, with σHi sampled uniformly from (0, 4), and the same machine driver M. Then, to find a sequence of switching policies for each of the teams, we run N instances of Algorithm 1 with a shared confidence set for the environment transition probabilities.

We compare the performance of our algorithm against the same variant of UCRL2 used in the experiments with a single team of agents in terms of the total regret defined in Eq. 15. Here, note that the variant of UCRL2 does not maintain a shared confidence set for the environment transition probabilities across teams but instead creates a confidence set for the conditional probability defined by Eq. 3 for each team. Figure 5 summarizes the results for different values of the switching cost cx and the cost of human control cc(H), which show that, in agreement with Corollary 3, our method significantly outperforms UCRL2.

6.2 RiverSwim

In addition to the obstacle avoidance task, we consider the standard RiverSwim task (Strehl & Littman, 2008). The MDP states and transition probabilities are shown in Figure 6. The cost of taking an action in states s2 to s5 equals 1, while it equals 0.995 and 0 in states s1 and s6, respectively. Each episode ends after L = 20 steps.

7Here, we assume the cost of machine control cc(M) = 0.


Figure 6: RiverSwim. Continuous (dashed) arrows show the transitions after taking action right (left). The optimal policy is to always take action right.


Figure 7: (a) Ratio of the regret of UCRL2-MC to that of UCRL2 for different numbers of teams. (b) Total regret of the trajectories induced by the switching policies found by UCRL2-MC and those induced by UCRL2 in a setting with N = 100 teams of agents.

We set the switching cost and the cost of agent control to zero for all the simulations in this section, i.e., cx(·, ·) = cc(·) = 0. The set D consists of agents that choose action right with some probability p, which may differ across agents. In the following, we investigate the effect of increasing the number of teams on the regret bound in the setting with multiple teams of agents. See Appendix G for more simulations that study the impact of the action space size and the number of agents in each team on the total regret.
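Such agents are straightforward to simulate; the following sketch builds one (names and the two-agent example team are illustrative assumptions):

```python
import numpy as np

def make_riverswim_agent(p_right, rng):
    """Agent for the RiverSwim experiments (illustrative sketch): in every
    state it takes action `right` with probability p_right and `left`
    otherwise, matching the description of the set D above.
    """
    def policy(state):
        return "right" if rng.random() < p_right else "left"
    return policy

# Example two-agent team with complementary probabilities p and 1 - p.
rng = np.random.default_rng(0)
p = rng.uniform(0.0, 1.0)
team = [make_riverswim_agent(p, rng), make_riverswim_agent(1.0 - p, rng)]
```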

6.2.1 Results

We consider N independent teams of agents, each consisting of two agents that choose action right with probability p and 1 − p, respectively, where p is chosen uniformly at random for each team. We run the simulations for N ∈ {3, 4, . . . , 10} teams of agents. For each N, we run both UCRL2-MC and UCRL2 for 20,000 episodes and repeat each experiment 5 times. Figure 7(a) summarizes our results, showing the advantage of the shared confidence bounds on the environment transition probabilities in our algorithm over its problem-agnostic counterpart. To better illustrate the performance of UCRL2-MC, we also run an experiment with N = 100 teams of agents for 10,000 episodes and compare the total regret of our algorithm to that of UCRL2. Figure 7(b) shows that our algorithm significantly outperforms UCRL2.

7 Conclusions And Future Work

We have formally defined the problem of learning to switch control among agents in a team via a 2-layer Markov decision process and then developed UCRL2-MC, an online learning algorithm with desirable provable guarantees. Moreover, we have performed a variety of simulation experiments on the standard RiverSwim task and an obstacle avoidance task to illustrate our theoretical results and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms. Our work opens up many interesting avenues for future work. For example, we have assumed that the agents' policies are fixed. However, there are reasons to believe that simultaneously optimizing the agents' policies and the switching policy may lead to superior performance (De et al., 2020; 2021; Wilder et al., 2020; Wu et al., 2020). In our work, we have also assumed that the state space is discrete and the horizon is finite. It would be very interesting to lift these assumptions and develop approximate value iteration methods to solve the learning-to-switch problem. Finally, it would be interesting to evaluate our algorithm with real human agents in a variety of tasks.

Acknowledgments. Gomez-Rodriguez acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 945719).

References

Samuel Barrett and Peter Stone. An analysis framework for ad hoc teamwork tasks. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 357–364, 2012.

P. Bartlett and M. Wegkamp. Classification with a reject option using a hinge loss. JMLR, 2008.

K. Brookhuis, D. De Waard, and W. Janssen. Behavioural impacts of advanced driver assistance systems–an overview. European Journal of Transport and Infrastructure Research, 1(3), 2001.

Daniel S Brown and Scott Niekum. Machine teaching for inverse reinforcement learning: Algorithms and applications. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 7749–7758, 2019.

C. Cortes, G. DeSalvo, and M. Mohri. Learning with rejection. In ALT, 2016.

Mary Czerwinski, Edward Cutrell, and Eric Horvitz. Instant messaging and interruption: Influence of task type on performance. In OZCHI 2000 conference proceedings, volume 356, pp. 361–367, 2000.

Nathaniel D. Daw and Peter Dayan. The algorithmic anatomy of model-based evaluation. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1655):20130478, 2014.

A. De, P. Koley, N. Ganguly, and M. Gomez-Rodriguez. Regression under human assistance. In AAAI, 2020.

Abir De, Nastaran Okati, Ali Zarezade, and Manuel Gomez-Rodriguez. Classification under human assistance. In AAAI, 2021.

European Parliament. Regulation (EC) No 561/2006. http://data.europa.eu/eli/reg/2006/561/2015-03-02, 2006.

R. Everett and S. Roberts. Learning against non-stationary agents with opponent modelling and deep reinforcement learning. In 2018 AAAI Spring Symposium Series, 2018.

Y. Geifman and R. El-Yaniv. Selectivenet: A deep neural network with an integrated reject option. arXiv preprint arXiv:1901.09192, 2019.

Y. Geifman, G. Uziel, and R. El-Yaniv. Bias-reduced uncertainty estimation for deep neural classifiers. In ICLR, 2018.

A. Ghosh, S. Tschiatschek, H. Mahdavi, and A. Singla. Towards deployment of robust cooperative ai agents: An algorithmic framework for learning adaptive policies. In AAMAS, 2020.

Aditya Gopalan and Shie Mannor. Thompson sampling for learning parameterized markov decision processes. In Conference on Learning Theory, pp. 861–898, 2015.

A. Grover, M. Al-Shedivat, J. Gupta, Y. Burda, and H. Edwards. Learning policy representations in multiagent systems. In ICML, 2018.

D. Hadfield-Menell, S. Russell, P. Abbeel, and A. Dragan. Cooperative inverse reinforcement learning. In NIPS, 2016.

L. Haug, S. Tschiatschek, and A. Singla. Teaching inverse reinforcement learners via features and demonstrations. In NeurIPS, 2018.

Eric Horvitz and Johnson Apacible. Learning and reasoning about interruption. In Proceedings of the 5th international conference on Multimodal interfaces, pp. 20–27, 2003.

Shamsi T Iqbal and Brian P Bailey. Understanding and developing models for detecting and differentiating breakpoints during interactive tasks. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 697–706, 2007.

Alexis Jacq, Johan Ferret, Olivier Pietquin, and Matthieu Geist. Lazy-mdps: Towards interpretable reinforcement learning by learning when to act. In AAMAS, 2022.

T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 2010.

Christian P Janssen, Shamsi T Iqbal, Andrew L Kun, and Stella F Donker. Interrupted by my car? implications of interruption and interleaving research for automated vehicles. International Journal of Human-Computer Studies, 130:221–233, 2019.

Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, and Adish Singla. Interactive teaching algorithms for inverse reinforcement learning. In IJCAI, 2019.

Kyle Kotowick and Julie Shah. Modality switching for mitigation of sensory adaptation and habituation in personal navigation systems. In 23rd International Conference on Intelligent User Interfaces, pp. 115–127, 2018.

Z. Liu, Z. Wang, P. Liang, R. Salakhutdinov, L. Morency, and M. Ueda. Deep gamblers: Learning to abstain with portfolio theory. In NeurIPS, 2019.

C. Macadam. Understanding and modeling the human driver. Vehicle system dynamics, 40(1-3):101–134, 2003.

O. Macindoe, L. Kaelbling, and T. Lozano-PΓ©rez. Pomcop: Belief space planning for sidekicks in cooperative games. In AIIDE, 2012.

Catharine L. R. McGhan, Ali Nasir, and Ella M. Atkins. Human intent prediction using markov decision processes. Journal of Aerospace Information Systems, 12(5):393–397, 2015.

V. Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.

Salama A Mostafa, Mohd Sharifuddin Ahmad, and Aida Mustapha. Adjustable autonomy: a systematic literature review. Artificial Intelligence Review, 51(2):149–186, 2019.

Hussein Mozannar and David Sontag. Consistent estimators for learning to defer to an expert. In ICML, 2020.

S. Nikolaidis, R. Ramakrishnan, K. Gu, and J. Shah. Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In HRI, 2015.

S. Nikolaidis, J. Forlizzi, D. Hsu, J. Shah, and S. Srinivasa. Mathematical models of adaptation in human-robot collaboration. arXiv preprint arXiv:1707.02586, 2017.

Ian Osband and Benjamin Van Roy. Near-optimal reinforcement learning in factored mdps. In Advances in Neural Information Processing Systems, pp. 604–612, 2014.

Ian Osband, Daniel Russo, and Benjamin Van Roy. (more) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pp. 3003–3011, 2013.

Goran Radanovic, Rati Devidze, David C. Parkes, and Adish Singla. Learning to collaborate in markov decision processes. In ICML, 2019.

M. Raghu, K. Blumer, G. Corrado, J. Kleinberg, Z. Obermeyer, and S. Mullainathan. The algorithmic automation problem: Prediction, triage, and human effort. arXiv preprint arXiv:1903.12220, 2019a.

M. Raghu, K. Blumer, R. Sayres, Z. Obermeyer, B. Kleinberg, S. Mullainathan, and J. Kleinberg. Direct uncertainty prediction for medical second opinions. In ICML, 2019b.

H. Ramaswamy, A. Tewari, and S. Agarwal. Consistent algorithms for multiclass classification with an abstain option. Electronic J. of Statistics, 2018.

Siddharth Reddy, Anca D Dragan, and Sergey Levine. Shared autonomy via deep reinforcement learning. arXiv preprint arXiv:1802.01744, 2018.

Shubhanshu Shekhar, Mohammad Ghavamzadeh, and Tara Javidi. Active learning for classification with abstention. IEEE Journal on Selected Areas in Information Theory, 2(2):705–719, 2021.

D. Silver et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016.

D. Silver et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.

Peter Stone, Gal A Kaminka, Sarit Kraus, and Jeffrey S Rosenschein. Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.

A. Strehl and M. Littman. An analysis of model-based interval estimation for markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.

DJ Strouse, Kevin McKee, Matt Botvinick, Edward Hughes, and Richard Everett. Collaborating with humans without human data. In Advances in Neural Information Processing Systems, volume 34, 2021.

Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181–211, 1999.

Matthew E Taylor, Halit Bener Suay, and Sonia Chernova. Integrating reinforcement learning with human demonstrations of varying ability. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 617–624. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof. Combating label noise in deep learning using abstention. arXiv preprint arXiv:1905.10964, 2019.

Lisa Torrey and Matthew Taylor. Teaching on a budget: Agents advising agents in reinforcement learning. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems, pp. 1053–1060, 2013.

James T. Townsend, Kam M. Silva, Jesse Spencer-Smith, and Michael J. Wenger. Exploring the relations between categorization and decision making with regard to realistic face stimuli. Pragmatics & Cognition, 8(1):83–105, 2000.

S. Tschiatschek, A. Ghosh, L. Haug, R. Devidze, and A. Singla. Learner-aware teaching: Inverse reinforcement learning with preferences and constraints. In NeurIPS, 2019.

O. Vinyals et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, pp. 1–5, 2019.

Thomas J Walsh, Daniel K Hewlett, and Clayton T Morrison. Blending autonomous exploration and apprenticeship learning. In Advances in Neural Information Processing Systems, pp. 2258–2266, 2011.

Bryan Wilder, Eric Horvitz, and Ece Kamar. Learning to complement humans. In IJCAI, 2020.

H. Wilson and P. Daugherty. Collaborative intelligence: humans and ai are joining forces. Harvard Business Review, 2018.

Bohan Wu, Jayesh K Gupta, and Mykel Kochenderfer. Model primitives for hierarchical lifelong reinforcement learning. Autonomous Agents and Multi-Agent Systems, 34(1):1–38, 2020.

Y. Zheng, Z. Meng, J. Hao, Z. Zhang, T. Yang, and C. Fan. A deep bayesian policy reuse approach against non-stationary agents. In NeurIPS, 2018.

A Proofs

A.1 Proof Of Theorem 1

We first define P k D|.,t+ := Γ—s∈S,d∈D,t0∈{t,...,L}P k .|d,s,t0 , P k |.,t+ = Γ—s∈S,a∈A,t0∈{t,...,L}P k |s,a,t0 and Ο€t+ = {Ο€t*, . . . , Ο€*L}. Next, we get a lower bound the optimistic value function v k t (s, d) as follows: v k t (s, d) = min Ο€min PD∈PkD min P ∈Pk V Ο€ t|PD,P (s, d) = min Ο€t+min PD∈PkD min P ∈Pk V Ο€ t|PD,P (s, d)

(i) = min Ο€t(s,d) min pΟ€t(s,d)(.|s,t)∈Pk Β· | Ο€t(s,d),s,t min p(.|s,.,t)∈Pk Β· | s,Β·,t min Ο€(t+1)+ PD∈PkD | Β·,(t+1)+ P ∈Pk Β· | Β·,(t+1)+ hcΟ€t(s,d)(s, d) + Ea∼pΟ€t(s,d)(Β· | s,t) ce(s, a) + Es 0∼p(Β· | s,a,t)V Ο€ t+1|PD,P (s 0, Ο€t(s, d))i (ii) Β· | Β·,(t+1)+ (ii) β‰₯ min Ο€t(s,d) min pΟ€t(s,d)(.|s,t)∈Pk Β· | Ο€t(s,d),s,t min p(.|s,.,t)∈Pk Β· | s,Β·,t cΟ€t(s,d)(s, d) +Ea∼pΟ€t(s,d)(Β· | s,t) ce(s, a) + Es 0∼p(Β· | s,a,t) " min Ο€(t+1)+ min PD∈PkD | Β·,(t+1)+ min P ∈Pk Β· | Β·,(t+1)+ V Ο€ t+1|PD,P (s 0, Ο€t(s, d))#!# " cdt (s, d) + min pdt (.|s,t)∈Pk Β·|dt,s,t X a∈A pdt (a|s, t) Β· ce(s, a) + min p(.|s,a,t)∈Pk Β· | s,a,t Es 0∼p(Β· | s,a,t)v k t+1(s 0, dt) !# , = min dt where (i) follows from Lemma 8 and (ii) follows from the fact that mina E[X(a)] β‰₯ E[mina X(a)]. Next, we provide an upper bound of the optimistic value function v k t (s, d) as follows:

vtk(s,d)v_{t}^{k}(s,d)

= min Ο€min PD∈PkD min P ∈Pk V Ο€ t|PD,P (s, d)

min⁑πtmin⁑pΟ€t(s,d) (.∣s,t)∈P{\overset{(i)}{=}}\operatorname*{min}_{\pi_{t}}\quad\quad\quad\quad\quad\operatorname*{min}_{p\pi_{t}(s,d)\,(.|s,t)\in{\mathcal{P}}}

min Ο€(t+1)+ PD∈PkD | Β·,(t+1)+ P ∈Pk min⁑a,(β€‰βˆ£s,t)∈P+1n πt(s,d),s,tmin⁑p(β€‰βˆ£s,t)∈P+1n …,t\min_{a,(\,\mid s,t)\in\mathbb{P}^{n}_{+1}\,\pi_{t}(s,d),s,t}\min_{p(\,\mid s,t)\in\mathbb{P}^{n}_{+1}\,\ldots,t} $$\left.\left[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}{\alpha\sim p{\pi_{t}(s,d)}(,\mid s,t)}\left(c_{e}(s,a)+\mathbb{E}{s^{\prime}\sim p(,\mid s,a,t)}V{t+1\mid p_{\mathbb{D}},p}^{\pi}(s^{\prime},\pi_{t}(s,d))\right)\right]\right.\right.$$ Β· | Β·,(t+1)+ (ii) min⁑min⁑πt(s,d)pΟ€t(x,d)(β‹…βˆ£s,t)∈Pβˆ’βˆ£Ο€t(x,d),x,tk\begin{array}{r l}{\operatorname*{min}}&{{}\operatorname*{min}}\\ {\pi_{t}(s,d)}&{{}\quad p_{\pi_{t}(x,d)}(\cdot|s,t){\in}\mathcal{P}_{\mathrm{-}|\pi_{t}(x,d),x,t}^{k}}\end{array} min⁑p(:∣s,ti⟩∈Pti∣s,ti⟩s[cΟ€t(s,d)(s,d)+Ea∼pΟ€t(s,d)( βˆ£s,t)(ce(s,a)+Esβ€²βˆΌp( βˆ£s,a,t)Vt+1∣Pππ,Pππ′(sβ€²,Ο€t(s,d)))]\min_{p(:\mid s,t_{i}\rangle\in P^{s}_{t_{i}\mid s,t_{i}\rangle}}\left[c_{\pi_{t}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_{t}(s,d)}(\ \mid s,t)}\left(c_{e}(s,a)+\mathbb{E}_{s^{\prime}\sim p(\ \mid s,a,t)}V^{\pi^{\prime}}_{t+1\mid P^{\pi}_{\pi},P^{\pi}}(s^{\prime},\pi_{t}(s,d))\right)\right] ()min⁑πt(s,d)(\stackrel{i i i}{=})\operatorname*{min}_{\pi_{t}(s,d)} min⁑p(β‹…,β‹…),p(β‹…),p(β‹…)[cn1(s,d)(s,d)+Ea∼pa1(s,d)( βˆ£s,d)(cΞ½(s,a)+Eν∼p( βˆ£s,a,t)vt+1k(sβ€²,Ο€t(s,d)))]\min_{p(\cdot,\cdot),p(\cdot),p(\cdot)}\left[c_{n_{1}(s,d)}(s,d)+\mathbb{E}_{a\sim p_{a_{1}(s,d)}(\ \mid s,d)}\left(c_{\nu}(s,a)+\mathbb{E}_{\nu\sim p(\ \mid s,a,t)}v_{t+1}^{k}(s^{\prime},\pi_{t}(s,d))\right)\right] $$=\min_{d_{t}}\left[c_{a}(s,d)+\min_{p_{a_{1}(s,d)}\in\mathbb{P}{d{1},s,a_{1}}^{\nu}}\sum_{a_{i}\in A}p_{d_{i}}(a|s,t)\cdot\left(c_{\nu}(s,a)+\min_{p(\ \mid s,a,t)\in\mathbb{P}{\nu\sim p(\ \mid s,a,t)}}\mathbb{E}{\nu\sim p(\ \mid s,a,t)}v_{t+1}^{k}(s^{\prime},d_{t})\right)\right].$$ min⁑PΟ€t(s,d) (β‹…βˆ£s,t)∈Pβ‹…k\operatorname*{min}_{P\pi_{t}(s,d)\,(\cdot|s,t)\in{\mathcal{P}}_{\cdot}^{k}} Here, (i) follows from Lemma 8, (ii) follows from the fact that:

Here, (i) follows from Lemma 8 and (ii) follows from the fact that
$$\min_{\pi_{(t+1)+},\,P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}|\cdot,(t+1)+},\,P\in\mathcal{P}^k_{\cdot|\cdot,(t+1)+}}\Big[c_{\pi_t(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot|s,t)}\Big(c_e(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d))\Big)\Big]$$
$$\leq c_{\pi_t(s,d)}(s,d)+\mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot|s,t)}\Big(c_e(s,a)+\mathbb{E}_{s'\sim p(\cdot|s,a,t)}V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d))\Big)\quad\forall\,\pi,\ P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}|\cdot,(t+1)+},\ P\in\mathcal{P}^k_{\cdot|\cdot,(t+1)+}. \tag{17}$$
Moreover, equality (iii) holds if we set $\pi_{(t+1)+}=\{\pi^*_{t+1},\dots,\pi^*_L\}$, $P_\mathcal{D}=P^*_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}|\cdot,(t+1)+}$ and $P=P^*\in\mathcal{P}^k_{\cdot|\cdot,(t+1)+}$, where
$$\{\pi^*_{t+1},\dots,\pi^*_L\},P^*_\mathcal{D},P^*=\operatorname*{argmin}_{\pi_{(t+1)+},\,P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}|\cdot,(t+1)+},\,P\in\mathcal{P}^k_{\cdot|\cdot,(t+1)+}}V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d)). \tag{18}$$
Since the upper and lower bounds coincide, we conclude that the optimistic value function satisfies Eq. 12, which completes the proof. $\square$

A.2 Proof Of Theorem 2

In this proof, we assume that $c_e(s,a)+c_c(d)+c_x(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$. Throughout the proof, we will omit the subscripts $P^*_\mathcal{D},P^*$ in $V_{t|P^*_\mathcal{D},P^*}$ and simply write $V_t$ in the case of the true agent policies $P^*_\mathcal{D}$ and the true transition probabilities $P^*$. Then, we define the following quantities:
$$P^k_\mathcal{D}=\operatorname*{argmin}_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}(\delta)}\ \min_{P\in\mathcal{P}^k(\delta)}V^{\pi^k}_{1|P_\mathcal{D},P}(s_1,d_0), \tag{19}$$
$$P^k=\operatorname*{argmin}_{P\in\mathcal{P}^k(\delta)}V^{\pi^k}_{1|P^k_\mathcal{D},P}(s_1,d_0), \tag{20}$$
$$\Delta_k=V^{\pi^k}_1(s_1,d_0)-V^{\pi^*}_1(s_1,d_0), \tag{21}$$
where, recall from Eq. 7, $\pi^k=\operatorname*{argmin}_\pi\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\min_{P\in\mathcal{P}^k}V^\pi_{1|P_\mathcal{D},P}(s_1,d_0)$, and $\Delta_k$ denotes the regret of episode $k$. Hence, we have

$$R(T)=\sum_{k=1}^{K}\Delta_k=\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)+\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k). \tag{22}$$
Next, we split the analysis into two parts. We first bound $\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)$ and then bound $\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)$.

• Computing the bound on $\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)$. First, we note that

$$\Delta_k=V^{\pi^k}_1(s_1,d_0)-V^{\pi^*}_1(s_1,d_0)\leq V^{\pi^k}_1(s_1,d_0)-V^{\pi^k}_{1|P^k_\mathcal{D},P^k}(s_1,d_0). \tag{23}$$
This is because
$$V^{\pi^k}_{1|P^k_\mathcal{D},P^k}(s_1,d_0)\overset{(i)}{=}\min_\pi\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\min_{P\in\mathcal{P}^k}V^\pi_{1|P_\mathcal{D},P}(s_1,d_0)\overset{(ii)}{\leq}\min_\pi V^\pi_{1|P^*_\mathcal{D},P^*}(s_1,d_0)=V^{\pi^*}_1(s_1,d_0), \tag{24}$$
where (i) follows from Eqs. 19 and 20, and (ii) holds because the true transition probabilities satisfy $P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}$ and $P^*\in\mathcal{P}^k$. Next, we use Lemma 4 (Appendix B) to bound $\sum_{k=1}^{K}\big(V^{\pi^k}_1(s_1,d_0)-V^{\pi^k}_{1|P^k_\mathcal{D},P^k}(s_1,d_0)\big)$:

$$\sum_{k=1}^{K}\Big(V^{\pi^k}_1(s_1,d_0)-V^{\pi^k}_{1|P^k_\mathcal{D},P^k}(s_1,d_0)\Big)\leq\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1,d_0\right]. \tag{25}$$
Since, by assumption, $c_e(s,a)+c_c(d)+c_x(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$, the worst-case regret is bounded by $T$. Therefore, we have that:

$$\begin{aligned}
\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)&\leq\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right]+\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\}\\
&\leq\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\}+\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\}, \tag{26}
\end{aligned}$$
where the last inequality follows from Lemma 9. Now, we aim to bound the first term in the RHS of the above inequality:

$$\begin{aligned}
\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right]
&\overset{(i)}{=}L\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{L}\min\left\{1,\sqrt{\frac{2\log\left(\frac{((k-1)L)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}{\max\{1,N_k(s_t,d_t)\}}}\right\}\,\middle|\,s_1,d_0\right]\\
&\overset{(ii)}{\leq}L\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{L}\min\left\{1,\sqrt{\frac{2\log\left(\frac{(KL)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}{\max\{1,N_k(s_t,d_t)\}}}\right\}\,\middle|\,s_1,d_0\right]\\
&\overset{(iii)}{\leq}2\sqrt{2}\,L\sqrt{2\log\left(\frac{(KL)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}\sqrt{|\mathcal{S}||\mathcal{D}|KL}+2L^2|\mathcal{S}||\mathcal{D}| \tag{27}\\
&\leq2\sqrt{2}\,L\sqrt{14|\mathcal{A}|\log\left(\frac{KL|\mathcal{S}||\mathcal{D}|}{\delta}\right)}\sqrt{|\mathcal{S}||\mathcal{D}|KL}+2L^2|\mathcal{S}||\mathcal{D}|\\
&=\sqrt{112}\,L\sqrt{|\mathcal{A}|\log\left(\frac{KL|\mathcal{S}||\mathcal{D}|}{\delta}\right)|\mathcal{S}||\mathcal{D}|KL}+2L^2|\mathcal{S}||\mathcal{D}|, \tag{28}
\end{aligned}$$

where (i) follows by replacing $\beta^k_\mathcal{D}(s_t,d_t,\delta)$ with its definition, (ii) follows from the fact that $(k-1)L\leq KL$, and (iii) follows from Lemma 5, where we set $\mathcal{W}:=\mathcal{S}\times\mathcal{D}$, $c:=\sqrt{2\log\left(\frac{(KL)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}$ and $T_k=(w_{k,1},\dots,w_{k,L}):=((s_1,d_1),\dots,(s_L,d_L))$. Now, due to Eq. 28, we have the following:

$$\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\}\leq\min\left\{T,\sqrt{112}\,L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+2L^2|\mathcal{S}||\mathcal{D}|\right\}. \tag{29}$$

Now, if $T\leq2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)$, then

T2≀2L2∣S∣∣A∣∣D∣Tlog⁑(T∣S∣∣D∣δ)β€…β€ŠβŸΉβ€…β€ŠT≀2L∣S∣∣A∣∣D∣Tlog⁑(T∣S∣∣D∣δ)T^{2}\leq2L^{2}|{\mathcal{S}}||{\mathcal{A}}||{\mathcal{D}}|T\log\left({\frac{T|{\mathcal{S}}||{\mathcal{D}}|}{\delta}}\right)\implies T\leq{\sqrt{2}}L{\sqrt{|{\mathcal{S}}||{\mathcal{A}}||{\mathcal{D}}|T\log\left({\frac{T|{\mathcal{S}}||{\mathcal{D}}|}{\delta}}\right)}}

and if $T>2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)$, then

$$2L^2|\mathcal{S}||\mathcal{D}|<\frac{\sqrt{2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}}{|\mathcal{A}|\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}\leq\sqrt{2}\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}. \tag{30}$$
Thus, the minimum in Eq. 29 is less than

$$(\sqrt{2}+\sqrt{112})\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}<12L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}. \tag{31}$$
A similar analysis can be done for the second term of the RHS of Eq. 26, which would show that

$$\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\}\leq12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}. \tag{32}$$
Combining Eqs. 26, 31 and 32, we can bound the first term of the total regret as follows:

$$\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)\leq12L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}. \tag{33}$$

• Computing the bound on $\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)$. Here, we use a similar approach to Jaksch et al. (2010). Note that

$$\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)=\sum_{k=1}^{\lfloor\sqrt{K/L}\rfloor}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)+\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k). \tag{34}$$

Now, our goal is to show that the second term of the RHS of the above equation vanishes with high probability. If we succeed, then it holds that, with high probability, $\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)$ equals the first term of the RHS, and then we will be done because

$$\sum_{k=1}^{\lfloor\sqrt{K/L}\rfloor}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)\leq\sum_{k=1}^{\lfloor\sqrt{K/L}\rfloor}\Delta_k\overset{(i)}{\leq}\left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor L\leq\sqrt{KL}, \tag{35}$$

where (i) follows from the fact that $\Delta_k\leq L$, since we assumed the cost of each step satisfies $c_e(s,a)+c_c(d)+c_x(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$.

To prove that $\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)=0$ with high probability, we proceed as follows. By applying Lemma 6 to $P^*_\mathcal{D}$ and $P^*$, we have

$$\Pr(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D})\leq\frac{\delta}{2t_k^6},\qquad\Pr(P^*\notin\mathcal{P}^k)\leq\frac{\delta}{2t_k^6}. \tag{36}$$

Thus,

$$\Pr(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)\leq\Pr(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D})+\Pr(P^*\notin\mathcal{P}^k)\leq\frac{\delta}{t_k^6}, \tag{37}$$

where $t_k=(k-1)L$ is the end time of episode $k-1$. Therefore, it follows that

$$\begin{aligned}
\Pr\left(\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)=0\right)&=\Pr\left(\forall k:\ \left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor+1\leq k\leq K;\ P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k\right)\\
&=1-\Pr\left(\exists k:\ \left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor+1\leq k\leq K;\ P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k\right)\\
&\overset{(i)}{\geq}1-\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Pr\left(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k\right)\\
&\overset{(ii)}{\geq}1-\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\frac{\delta}{t_k^6}\\
&\overset{(iii)}{\geq}1-\sum_{t=\lfloor\sqrt{KL}\rfloor}^{KL}\frac{\delta}{t^6}\geq1-\int_{\sqrt{KL}}^{KL}\frac{\delta}{t^6}\,dt\geq1-\frac{\delta}{5(KL)^{5/4}}, \tag{38}
\end{aligned}$$
where (i) follows from a union bound, (ii) follows from Eq. 37 and (iii) holds using that $t_k=(k-1)L$. Hence, with probability at least $1-\frac{\delta}{5(KL)^{5/4}}$, we have that

$$\sum_{k=\lfloor\sqrt{K/L}\rfloor+1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)=0. \tag{39}$$
If we combine the above equation and Eq. 35, we can conclude that, with probability at least $1-\frac{\delta}{5T^{5/4}}$, we have that
$$\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)\leq\sqrt{T}, \tag{40}$$

where T = KL. Next, if we combine Eqs. 33 and 40, we have

$$\begin{aligned}
R(T)&=\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)+\sum_{k=1}^{K}\Delta_k\mathbb{I}(P^*_\mathcal{D}\notin\mathcal{P}^k_\mathcal{D}\vee P^*\notin\mathcal{P}^k)\\
&\leq12L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}+\sqrt{T}\\
&\leq13L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}. \tag{41}$$
\end{aligned}$$
Finally, since $\sum_{T=1}^{\infty}\frac{\delta}{5T^{5/4}}\leq\delta$, the above inequality holds with probability at least $1-\delta$. This concludes the proof. $\square$

B Useful Lemmas

Lemma 4. Suppose $P_\mathcal{D}$ and $P$ are the true transitions and $P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}$, $P\in\mathcal{P}^k$ for episode $k$. Then, for an arbitrary policy $\pi^k$ and arbitrary $P^k_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}$, $P^k\in\mathcal{P}^k$, it holds that
$$V^{\pi^k}_{1|P_\mathcal{D},P}(s,d)-V^{\pi^k}_{1|P^k_\mathcal{D},P^k}(s,d)\leq L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1=s,\,d_0=d\right], \tag{42}$$
where the expectation is taken over the MDP with policy $\pi^k$ under the true transitions $P_\mathcal{D}$ and $P$.

Proof. For ease of notation, let $v^k_t:=V^{\pi^k}_{t|P_\mathcal{D},P}$, $v^k_{t|k}:=V^{\pi^k}_{t|P^k_\mathcal{D},P^k}$ and $c^\pi_t(s,d):=c_{\pi^k_t(s,d)}(s,d)$. We also define $d'=\pi^k_1(s,d)$. From Eq. 68, we have

$$v^k_1(s,d)=c^\pi_1(s,d)+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\cdot\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p(s'|s,a)\cdot v^k_2(s',d')\right), \tag{43}$$
$$v^k_{1|k}(s,d)=c^\pi_1(s,d)+\sum_{a\in\mathcal{A}}p^k_{\pi^k_1(s,d)}(a\,|\,s)\cdot\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'|s,a)\cdot v^k_{2|k}(s',d')\right). \tag{44}$$
Now, using the above equations, we rewrite $v^k_1(s,d)-v^k_{1|k}(s,d)$ as

$$\begin{aligned}
v^k_1(s,d)-v^k_{1|k}(s,d)=\ &\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a|s)\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p(s'|s,a)\cdot v^k_2(s',d')\right)\\
&-\sum_{a\in\mathcal{A}}p^k_{\pi^k_1(s,d)}(a|s)\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'|s,a)\cdot v^k_{2|k}(s',d')\right)\\
\overset{(i)}{=}\ &\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a|s)-p^k_{\pi^k_1(s,d)}(a|s)\right]\underbrace{\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'|s,a)\cdot v^k_{2|k}(s',d')\right)}_{\leq L}\\
&+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a|s)\cdot\sum_{s'\in\mathcal{S}}\left[p(s'|s,a)v^k_2(s',d')-p^k(s'|s,a)v^k_{2|k}(s',d')\right]\\
\overset{(ii)}{\leq}\ &L\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a|s)-p^k_{\pi^k_1(s,d)}(a|s)\right]+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a|s)\cdot\sum_{s'\in\mathcal{S}}\left[p(s'|s,a)v^k_2(s',d')-p^k(s'|s,a)v^k_{2|k}(s',d')\right]\\
\overset{(iii)}{=}\ &L\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a|s)-p^k_{\pi^k_1(s,d)}(a|s)\right]+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a|s)\cdot\sum_{s'\in\mathcal{S}}p(s'|s,a)\left[v^k_2(s',d')-v^k_{2|k}(s',d')\right]\\
&+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a|s)\sum_{s'\in\mathcal{S}}\left[p(s'|s,a)-p^k(s'|s,a)\right]\underbrace{v^k_{2|k}(s',d')}_{\leq L}\\
\overset{(iv)}{\leq}\ &\mathbb{E}_{a\sim p_{\pi^k_1(s,d)}(\cdot|s),\,s'\sim p(\cdot|s,a)}\left[v^k_2(s',d')-v^k_{2|k}(s',d')\right]\\
&+L\,\mathbb{E}_{a\sim p_{\pi^k_1(s,d)}(\cdot|s)}\left[\sum_{s'\in\mathcal{S}}\left(p(s'|s,a)-p^k(s'|s,a)\right)\right]+L\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a|s)-p^k_{\pi^k_1(s,d)}(a|s)\right], \tag{45}
\end{aligned}$$
where (i) follows by adding and subtracting the term $p_{\pi^k_1(s,d)}(a|s)\big(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'|s,a)\cdot v^k_{2|k}(s',d')\big)$, (ii) follows from the fact that $c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'|s,a)\cdot v^k_{2|k}(s',d')\leq L$, since, by assumption, $c_e(s,a)+c_c(d)+c_x(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$. Similarly, (iii) follows by adding and subtracting $p(s'|s,a)v^k_{2|k}(s',d')$, and (iv) follows from the fact that $v^k_{2|k}\leq L$. By assumption, both $P_\mathcal{D}$ and $P^k_\mathcal{D}$ lie in the confidence set $\mathcal{P}^k_\mathcal{D}(\delta)$, so

$$\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a\,|\,s)-p^k_{\pi^k_1(s,d)}(a\,|\,s)\right]\leq\min\{1,\beta^k_\mathcal{D}(s,d'=\pi^k_1(s,d),\delta)\}. \tag{46}$$

Similarly,

$$\sum_{s'\in\mathcal{S}}\left[p(s'\,|\,s,a)-p^k(s'\,|\,s,a)\right]\leq\min\{1,\beta^k(s,a,\delta)\}. \tag{47}$$

If we combine Eq. 46 and Eq. 47 in Eq. 45, for all s ∈ S, it holds that

$$\begin{aligned}
v^k_1(s,d)-v^k_{1|k}(s,d)\leq\ &\mathbb{E}_{a\sim p_{\pi^k_1(s,d)}(\cdot|s),\,s'\sim p(\cdot|s,a)}\left[v^k_2(s',d')-v^k_{2|k}(s',d')\right]\\
&+L\,\mathbb{E}_{a\sim p_{\pi^k_1(s,d)}(\cdot|s)}\left[\min\{1,\beta^k(s,a,\delta)\}\right]\\
&+L\left[\min\{1,\beta^k_\mathcal{D}(s,d'=\pi^k_1(s,d),\delta)\}\right]. \tag{48}
\end{aligned}$$
Similarly, for all $s\in\mathcal{S}$, $d\in\mathcal{D}$, we can show that

$$\begin{aligned}
v^k_2(s,d)-v^k_{2|k}(s,d)\leq\ &\mathbb{E}_{a\sim p_{\pi^k_2(s,d)}(\cdot|s),\,s'\sim p(\cdot|s,a)}\left[v^k_3(s',\pi^k_2(s,d))-v^k_{3|k}(s',\pi^k_2(s,d))\right]\\
&+L\,\mathbb{E}_{a\sim p_{\pi^k_2(s,d)}(\cdot|s)}\left[\min\{1,\beta^k(s,a,\delta)\}\right]\\
&+L\left[\min\{1,\beta^k_\mathcal{D}(s,\pi^k_2(s,d),\delta)\}\right]. \tag{49}
\end{aligned}$$

Hence, by induction we have

$$v^k_1(s,d)-v^k_{1|k}(s,d)\leq L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1=s,\,d_0=d\right], \tag{50}$$
where the expectation is taken over the MDP with policy $\pi^k$ under the true transitions $P_\mathcal{D}$ and $P$. $\square$

Lemma 5. Let $\mathcal{W}$ be a finite set and $c$ be a constant. For $k\in[K]$, suppose $T_k=(w_{k,1},w_{k,2},\dots,w_{k,H})$ is a random variable with distribution $P(\cdot|w_{k,1})$, where $w_{k,i}\in\mathcal{W}$. Then,

$$\sum_{k=1}^{K}\mathbb{E}_{T_k\sim P(\cdot|w_{k,1})}\left[\sum_{t=1}^{H}\min\left\{1,\frac{c}{\sqrt{\max\{1,N_k(w_{k,t})\}}}\right\}\right]\leq2H|\mathcal{W}|+2\sqrt{2}\,c\sqrt{|\mathcal{W}|KH}, \tag{51}$$
with $N_k(w):=\sum_{j=1}^{k-1}\sum_{t=1}^{H}\mathbb{I}(w_{j,t}=w)$.

Proof. The proof is adapted from Osband et al. (2013). We first note that

$$\begin{aligned}
\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\min\left\{1,\frac{c}{\sqrt{\max\{1,N_k(w_{k,t})\}}}\right\}\right]=\ &\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})\leq H)\min\left\{1,\frac{c}{\sqrt{\max\{1,N_k(w_{k,t})\}}}\right\}\right]\\
&+\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})>H)\min\left\{1,\frac{c}{\sqrt{\max\{1,N_k(w_{k,t})\}}}\right\}\right]\\
\leq\ &\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})\leq H)\cdot1\right]+\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})>H)\cdot\frac{c}{\sqrt{N_k(w_{k,t})}}\right]. \tag{52}
\end{aligned}$$

Then, we bound the first term of the above equation

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})\leq H)\right]=\mathbb{E}\left[\sum_{w\in\mathcal{W}}\#\{\text{times }w\text{ is observed while }N_k(w)\leq H\}\right]\leq|\mathcal{W}|\cdot2H=2H|\mathcal{W}|. \tag{53}$$

To bound the second term, we first define $n_\tau(w)$ as the number of times $w$ has been observed in the first $\tau$ steps, i.e., if we are at the $t$-th index of trajectory $T_k$, then $\tau=t_k+t$, where $t_k=(k-1)H$, and note that

$$n_{t_k+t}(w)\leq N_k(w)+t, \tag{54}$$
because we can observe $w$ at most $t\in\{1,\dots,H\}$ times within trajectory $T_k$. Now, if $N_k(w)>H$, we have that
$$n_{t_k+t}(w)+1\leq N_k(w)+t+1\leq N_k(w)+H+1\leq2N_k(w). \tag{55}$$
Hence, we have

$$\mathbb{I}(N_k(w_{k,t})>H)\,(n_{t_k+t}(w_{k,t})+1)\leq2N_k(w_{k,t})\implies\frac{\mathbb{I}(N_k(w_{k,t})>H)}{N_k(w_{k,t})}\leq\frac{2}{n_{t_k+t}(w_{k,t})+1}. \tag{56}$$
Then, using the above equation, we can bound the second term in Eq. 52:
$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})>H)\frac{c}{\sqrt{N_k(w_{k,t})}}\right]=\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}c\sqrt{\frac{\mathbb{I}(N_k(w_{k,t})>H)}{N_k(w_{k,t})}}\right]\overset{(i)}{\leq}\sqrt{2}\,c\,\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_k+t}(w_{k,t})+1}}\right], \tag{57}$$


where (i) follows from Eq. 56.

Next, we can further bound $\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_k+t}(w_{k,t})+1}}\right]$ as follows:
$$\begin{aligned}
\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_k+t}(w_{k,t})+1}}\right]&=\mathbb{E}\left[\sum_{\tau=1}^{KH}\sqrt{\frac{1}{n_\tau(w_\tau)+1}}\right]\overset{(i)}{=}\mathbb{E}\left[\sum_{w\in\mathcal{W}}\sum_{\nu=0}^{N_{K+1}(w)}\sqrt{\frac{1}{\nu+1}}\right]\\
&\leq\sum_{w\in\mathcal{W}}\mathbb{E}\left[\int_{1}^{N_{K+1}(w)+1}\sqrt{\frac{1}{x}}\,dx\right]\leq\sum_{w\in\mathcal{W}}\mathbb{E}\left[2\sqrt{N_{K+1}(w)}\right]\\
&\overset{(ii)}{\leq}\mathbb{E}\left[2\sqrt{|\mathcal{W}|\sum_{w\in\mathcal{W}}N_{K+1}(w)}\right]\overset{(iii)}{=}\mathbb{E}\left[2\sqrt{|\mathcal{W}|KH}\right]=2\sqrt{|\mathcal{W}|KH}, \tag{58}
\end{aligned}$$
where (i) follows from summing over the different $w\in\mathcal{W}$ instead of over time and from the fact that we observe each $w$ exactly $N_{K+1}(w)$ times after $K$ trajectories, (ii) follows from Jensen's inequality and (iii) follows from the fact that $\sum_{w\in\mathcal{W}}N_{K+1}(w)=KH$. Next, we combine Eqs. 57 and 58 to obtain

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_k(w_{k,t})>H)\frac{c}{\sqrt{N_k(w_{k,t})}}\right]\leq\sqrt{2}\,c\times2\sqrt{|\mathcal{W}|KH}=2\sqrt{2}\,c\sqrt{|\mathcal{W}|KH}. \tag{59}$$

Further, we plug Eqs. 53 and 59 into Eq. 52 to obtain

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\min\left\{1,\frac{c}{\sqrt{\max\{1,N_k(w_{k,t})\}}}\right\}\right]\leq2H|\mathcal{W}|+2\sqrt{2}\,c\sqrt{|\mathcal{W}|KH}. \tag{60}$$

This concludes the proof.

Lemma 6. Let $\mathcal{W}$ be a finite set and $\mathcal{P}_t(\delta):=\{p:\forall w\in\mathcal{W},\ \|p(\cdot|w)-\hat{p}_t(\cdot|w)\|_1\leq\beta_t(w,\delta)\}$ be a $|\mathcal{W}|$-rectangular confidence set over probability distributions $p^*(\cdot|w)$ with $m$ outcomes, where $\hat{p}_t(\cdot|w)$ is the empirical estimate of $p^*(\cdot|w)$. Suppose at each time $\tau$ we observe a state $w_\tau=w$ and a sample from $p^*(\cdot|w)$. If
$$\beta_t(w,\delta)=\sqrt{\frac{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,N_t(w)\}}}$$
with $N_t(w)=\sum_{\tau=1}^{t}\mathbb{I}(w_\tau=w)$, then the true distributions $p^*$ lie in the confidence set $\mathcal{P}_t(\delta)$ with probability at least $1-\frac{\delta}{2t^6}$.

Proof. We adapt the proof of Lemma 17 in Jaksch et al. (2010). We note that

$$\begin{aligned}
\Pr(p^*\notin\mathcal{P}_t)&\overset{(i)}{=}\Pr\left(\bigcup_{w\in\mathcal{W}}\left\{\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\beta_t(w,\delta)\right\}\right)\\
&\overset{(ii)}{\leq}\sum_{w\in\mathcal{W}}\Pr\left(\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\sqrt{\frac{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,N_t(w)\}}}\right)\\
&\overset{(iii)}{\leq}\sum_{w\in\mathcal{W}}\sum_{n=0}^{t}\Pr\left(\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\sqrt{\frac{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,n\}}}\right),
\end{aligned}$$
where (i) follows from the definition of the confidence set, i.e., the probability distributions do not lie in the confidence set if there is at least one state $w$ for which $\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\beta_t(w,\delta)$, (ii) follows from the definition of $\beta_t(w,\delta)$ and a union bound over all $w\in\mathcal{W}$, and (iii) follows from a union bound over all possible values of $N_t(w)$. To continue, we split the sum into $n=0$ and $n>0$:

$$\begin{aligned}
\sum_{w\in\mathcal{W}}\sum_{n=0}^{t}\Pr\left(\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\sqrt{\frac{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,n\}}}\right)=\ &\underbrace{\sum_{w\in\mathcal{W}}\Pr\left(\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\sqrt{2\log\left(\tfrac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}\right)}_{n=0}\\
&+\sum_{w\in\mathcal{W}}\sum_{n=1}^{t}\Pr\left(\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\sqrt{\frac{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}{n}}\right)\\
\overset{(i)}{=}\ &0+\sum_{w\in\mathcal{W}}\sum_{n=1}^{t}\Pr\left(\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\geq\sqrt{\frac{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}{n}}\right)\\
\overset{(ii)}{\leq}\ &t|\mathcal{W}|2^m\exp\left(-\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)\right)\leq\frac{\delta}{2t^6},
\end{aligned}$$
where (i) follows from the fact that $\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1<\sqrt{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}$ for non-trivial cases. More specifically,

$$\delta<1,\ t\geq2\implies\sqrt{2\log\left(\frac{t^7|\mathcal{W}|2^{m+1}}{\delta}\right)}>\sqrt{2\log(512)}>2,$$
$$\|p^*(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\leq\sum_{i\in[m]}\left(p^*(i\,|\,w)+\hat{p}_t(i\,|\,w)\right)\leq2, \tag{61}$$

and (ii) follows from the fact that, after observing $n$ samples, the $L_1$-deviation of the true distribution $p^*$ from the empirical one $\hat{p}$ over $m$ events is bounded by:

$$\Pr\left(\|p^*(\cdot)-\hat{p}(\cdot)\|_1\geq\epsilon\right)\leq2^m\exp\left(-\frac{n\epsilon^2}{2}\right). \tag{62}$$
$\square$

Lemma 7. Consider the following minimization problem:

$$\begin{array}{ll}\underset{\boldsymbol{x}}{\text{minimize}} & \sum_{i=1}^{m}x_i w_i\\ \text{subject to} & \sum_{i=1}^{m}|x_i-b_i|\leq d,\ \ \sum_i x_i=1,\\ & x_i\geq0\ \ \forall i\in\{1,\dots,m\}, \end{array} \tag{63}$$

where $d\geq0$, $b_i\geq0$ for all $i\in\{1,\dots,m\}$, $\sum_i b_i=1$ and $0\leq w_1\leq w_2\leq\dots\leq w_m$. Then, the solution to the above minimization problem is given by:

$$x^*_i=\begin{cases}\min\{1,\,b_1+\frac{d}{2}\}&\text{if }i=1,\\ b_i&\text{if }i>1\text{ and }\sum_{l=1}^{i}x^*_l\leq1,\\ 0&\text{otherwise.}\end{cases} \tag{64}$$

Proof. Suppose there exists $\{x'_i\,;\,\sum_i x'_i=1,\ x'_i\geq0,\ \sum_i|x'_i-b_i|\leq d\}$ such that $\sum_i x'_i w_i<\sum_i x^*_i w_i$. Let $j\in\{1,\dots,m\}$ be the first index where $x'_j\neq x^*_j$; then, it is clear that $x'_j>x^*_j$.

If j = 1:

$$\sum_{i=1}^{m}|x'_i-b_i|=|x'_1-b_1|+\sum_{i=2}^{m}|x'_i-b_i|>\frac{d}{2}+\sum_{i=2}^{m}(b_i-x'_i)=\frac{d}{2}+x'_1-b_1>d. \tag{65}$$

If j > 1:

$$\sum_{i=1}^{m}|x'_i-b_i|=|x'_1-b_1|+\sum_{i=j}^{m}|x'_i-b_i|>\frac{d}{2}+\sum_{i=j+1}^{m}(b_i-x'_i)>\frac{d}{2}+x'_1-b_1=d. \tag{66}$$
Both cases contradict the condition $\sum_{i=1}^{m}|x'_i-b_i|\leq d$, which completes the proof. $\square$
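As a quick sanity check on Eq. 64, consider a small numerical instance (the numbers here are ours, chosen only for illustration): take $m=3$, $b=(0.5,0.3,0.2)$, $d=0.4$ and $w=(1,2,3)$. Then
$$x^*_1=\min\{1,\,0.5+0.2\}=0.7,\qquad x^*_2=b_2=0.3\ \ \big(\text{since }x^*_1+x^*_2=1\leq1\big),\qquad x^*_3=0,$$
which is feasible, since $\sum_i|x^*_i-b_i|=0.2+0+0.2=0.4\leq d$, and lowers the objective from $\sum_i b_i w_i=1.7$ to $\sum_i x^*_i w_i=1.3$.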

Lemma 8. For the value function $V^\pi_{t|P_\mathcal{D},P}$ defined in Eq. 10, we have that:
$$V^\pi_{t|P_\mathcal{D},P}(s,d)=c_{\pi_t(s,d)}(s,d)+\sum_{a\in\mathcal{A}}p_{\pi_t(s,d)}(a|s)\cdot\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\cdot V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d))\right). \tag{67}$$
Proof.

$$\begin{aligned}
V^\pi_{t|P_\mathcal{D},P}(s,d)&\overset{(i)}{=}\bar{c}(s,d)+\sum_{s'\in\mathcal{S}}p\big((s',\pi_t(s,d))\,|\,(s,d)\big)\,V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d))\\
&\overset{(ii)}{=}\sum_{a\in\mathcal{A}}p_{\pi_t(s,d)}(a\,|\,s)\,c_e(s,a)+c_c(\pi_t(s,d))+c_x(\pi_t(s,d),d)+\sum_{s'\in\mathcal{S}}\sum_{a\in\mathcal{A}}p(s'\,|\,s,a)\,p_{\pi_t(s,d)}(a\,|\,s)\,V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d))\\
&\overset{(iii)}{=}c_{\pi_t(s,d)}(s,d)+\sum_{a\in\mathcal{A}}p_{\pi_t(s,d)}(a|s)\cdot\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\cdot V^\pi_{t+1|P_\mathcal{D},P}(s',\pi_t(s,d))\right), \tag{68}
\end{aligned}$$

where (i) is the standard Bellman equation in the standard MDP defined with the dynamics in Eq. 3 and the costs in Eq. 4, (ii) follows by replacing $\bar{c}$ and $p$ with Eqs. 3 and 4, and (iii) follows from $c_{d'}(s,d)=c_c(d')+c_x(d',d)$. $\square$

Lemma 9. $\min\{T,a+b\}\leq\min\{T,a\}+\min\{T,b\}$ for $T,a,b\geq0$.

Proof. Assume, without loss of generality, that $a\leq b\leq a+b$. Then,

min⁑{T,a+b}={T≀a+b=min⁑{T,a}+min⁑{T,b}\mboxif  a≀b≀T≀a+bT≀a+T=min⁑{T,a}+min⁑{T,b}\mboxif  a≀T≀b≀a+bT≀2T=min⁑{T,a}+min⁑{T,b}\mboxif  T≀a≀b≀a+ba+b=min⁑{T,a}+min⁑{T,b}\mboxif  a≀b≀a+b≀T(69)\min\{T,a+b\}=\left\{\begin{array}{ll}T\leq a+b=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq b\leq T\leq a+b\\ T\leq a+T=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq T\leq b\leq a+b\\ T\leq2T=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ T\leq a\leq b\leq a+b\\ a+b=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq b\leq a+b\leq T\end{array}\right.\tag{69}

C Implementation Of UCRL2 In The Finite Horizon Setting

ALGORITHM 2: Modified UCRL2 algorithm for a finite horizon MDP $M=(\mathcal{S},\mathcal{A},P,C,L)$.

Require: Cost C = [c(s, a)], confidence parameter δ ∈ (0, 1).

1: $(\{N_k(s,a)\},\{N_k(s,a,s')\})\leftarrow$ InitializeCounts()
2: for $k=1,\dots,K$ do
3: for $s,s'\in\mathcal{S}$, $a\in\mathcal{A}$ do
4: if $N_k(s,a)\neq0$ then $\hat{p}_k(s'|s,a)\leftarrow\frac{N_k(s,a,s')}{N_k(s,a)}$ else $\hat{p}_k(s'|s,a)\leftarrow\frac{1}{|\mathcal{S}|}$
5: $\beta_k(s,a,\delta)\leftarrow\sqrt{\frac{14|\mathcal{S}|\log\left(\frac{2(k-1)L|\mathcal{A}||\mathcal{S}|}{\delta}\right)}{\max\{1,N_k(s,a)\}}}$
6: end for
7: $\pi^k\leftarrow$ ExtendedValueIteration($\hat{p}_k,\beta_k,C$)
8: $s_0\leftarrow$ InitialConditions()
9: for $t=0,\dots,L-1$ do
10: Take action $a_t=\pi^k_t(s_t)$ and observe the next state $s_{t+1}$.

11: $N_k(s_t,a_t)\leftarrow N_k(s_t,a_t)+1$
12: $N_k(s_t,a_t,s_{t+1})\leftarrow N_k(s_t,a_t,s_{t+1})+1$
13: end for
14: end for
15: Return $\pi^K$

ALGORITHM 3: ExtendedValueIteration, which is used in Algorithm 2.

Require: Empirical transition distribution $\hat{p}(\cdot|s,a)$, cost $c(s,a)$, and confidence interval $\beta(s,a,\delta)$.

1: $\pi\leftarrow$ InitializePolicy(), $v\leftarrow$ InitializeValueFunction()
2: $n\leftarrow|\mathcal{S}|$
3: for $t=L-1,\dots,0$ do
4: for $s\in\mathcal{S}$ do
5: for $a\in\mathcal{A}$ do
6: $s'_1,\dots,s'_n\leftarrow$ Sort($v_{t+1}$) &nbsp;&nbsp;# $v_{t+1}(s'_1)\leq\dots\leq v_{t+1}(s'_n)$
7: $p(s'_1)\leftarrow\min\{1,\hat{p}(s'_1|s,a)+\frac{\beta(s,a,\delta)}{2}\}$
8: $p(s'_i)\leftarrow\hat{p}(s'_i|s,a)\ \ \forall\,1<i\leq n$
9: $l\leftarrow n$
10: while $\sum_{s'_i\in\mathcal{S}}p(s'_i)>1$ do
11: $p(s'_l)\leftarrow\max\{0,1-\sum_{s'_i\neq s'_l}p(s'_i)\}$
12: $l\leftarrow l-1$
13: end while
14: $q(s,a)\leftarrow c(s,a)+\mathbb{E}_{s'\sim p}[v_{t+1}(s')]$
15: end for
16: $v_t(s)\leftarrow\min_{a\in\mathcal{A}}\{q(s,a)\}$
17: $\pi_t(s)\leftarrow\operatorname{argmin}_{a\in\mathcal{A}}\{q(s,a)\}$
18: end for
19: end for
20: Return $\pi$
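To make the pseudocode above concrete, the following Python sketch implements the finite-horizon extended value iteration of Algorithm 3, whose inner loop (lines 6–13) corresponds to the closed-form solution of Lemma 7. This is only an illustrative sketch under our own naming choices: the functions `extended_value_iteration` and `optimistic_transition` and the arguments `p_hat`, `beta` and `cost` are ours and do not refer to any released implementation.

```python
import numpy as np

def optimistic_transition(p_hat, beta, v_next):
    """Inner minimization of extended value iteration (cf. Lemma 7): move as much
    probability mass as the L1 ball of radius beta allows onto the successor state
    with the smallest value, removing mass from the most expensive states."""
    order = np.argsort(v_next)                 # v_next[order[0]] <= ... <= v_next[order[-1]]
    p = p_hat.copy()
    p[order[0]] = min(1.0, p_hat[order[0]] + beta / 2.0)
    l = len(p) - 1
    while p.sum() > 1.0:                       # renormalize from the most expensive states down
        i = order[l]
        p[i] = max(0.0, 1.0 - (p.sum() - p[i]))
        l -= 1
    return p

def extended_value_iteration(p_hat, beta, cost, L):
    """Finite-horizon extended value iteration.
    p_hat[s, a] : empirical next-state distribution, shape (|S|, |A|, |S|)
    beta[s, a]  : confidence radius, shape (|S|, |A|)
    cost[s, a]  : immediate cost, shape (|S|, |A|)"""
    n_states, n_actions = cost.shape
    v = np.zeros((L + 1, n_states))            # v[L] = 0 (terminal values)
    pi = np.zeros((L, n_states), dtype=int)
    for t in range(L - 1, -1, -1):
        for s in range(n_states):
            q = np.empty(n_actions)
            for a in range(n_actions):
                p = optimistic_transition(p_hat[s, a], beta[s, a], v[t + 1])
                q[a] = cost[s, a] + p @ v[t + 1]
            v[t, s] = q.min()
            pi[t, s] = q.argmin()
    return pi, v
```

Since costs are being minimized, the optimistic choice shifts probability mass, within the $L_1$ confidence ball, away from the successor states with the highest cost-to-go and onto the cheapest one.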

D Distribution Of Cell Types And Traffic Levels In The Lane Driving Environment

| Traffic level | road | grass | stone | car |
|---|---|---|---|---|
| no-car | 0.7 | 0.2 | 0.1 | 0 |
| light | 0.6 | 0.2 | 0.1 | 0.1 |
| heavy | 0.5 | 0.2 | 0.1 | 0.2 |

Table 1: Probability of cell types based on traffic level.

| Previous row | no-car | light | heavy |
|---|---|---|---|
| no-car | 0.99 | 0.01 | 0 |
| light | 0.01 | 0.98 | 0.01 |
| heavy | 0 | 0.01 | 0.99 |

Table 2: Probability of traffic levels based on the previous row.
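As a reading aid, the Python sketch below shows how a new row of the lane-driving environment could be generated from Tables 1 and 2. The dictionaries simply transcribe the two tables; the function name `sample_row` and its interface are our own illustrative choices, not the code used in the experiments.

```python
import numpy as np

# Table 1: probability of each cell type given the row's traffic level.
CELL_PROBS = {
    "no-car": {"road": 0.7, "grass": 0.2, "stone": 0.1, "car": 0.0},
    "light":  {"road": 0.6, "grass": 0.2, "stone": 0.1, "car": 0.1},
    "heavy":  {"road": 0.5, "grass": 0.2, "stone": 0.1, "car": 0.2},
}

# Table 2: probability of the next traffic level given the previous row's level.
TRAFFIC_PROBS = {
    "no-car": {"no-car": 0.99, "light": 0.01, "heavy": 0.00},
    "light":  {"no-car": 0.01, "light": 0.98, "heavy": 0.01},
    "heavy":  {"no-car": 0.00, "light": 0.01, "heavy": 0.99},
}

def sample_row(prev_traffic, width, rng=None):
    """Sample the next row's traffic level (Table 2) and then its cell types (Table 1)."""
    rng = rng or np.random.default_rng()
    levels, level_p = zip(*TRAFFIC_PROBS[prev_traffic].items())
    traffic = rng.choice(levels, p=level_p)
    cells, cell_p = zip(*CELL_PROBS[traffic].items())
    return traffic, rng.choice(cells, p=cell_p, size=width)
```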

E Performance of the human and machine agents in the obstacle avoidance task

28_image_0.png

Figure 8: Performance of the machine policy, a human policy with ΟƒH = 2, and the optimal policy in terms of total cost. In panel (a), the episodes start with an initial traffic level Ξ³0 = no-car and, in panel (b), the episodes start with an initial traffic level Ξ³0 ∈ {light, heavy}.

F The amount of human control for different initial traffic levels

28_image_1.png

Figure 9: Human control rate under the UCRL2-MC switching algorithm for different initial traffic levels. For each traffic level, we sample 500 environments and average the human control rate over them.

A higher traffic level results in more human control, as the human agent is more reliable in heavier traffic.

29_image_0.png

Figure 10: Ratio of the UCRL2-MC regret to the UCRL2 regret for (a) different action space sizes and (b) different numbers of agents. As the action space size increases, the performance of UCRL2-MC gets worse but remains within the same scale. In addition, UCRL2-MC outperforms UCRL2 in environments with a larger number of agents.

G Additional Experiments

In this section, we run additional experiments in the RiverSwim environment to investigate the effect of action space size and the number of agents in a team on the total regret.

G.1 Action Space Size

To study the effect of action space size on the total regret, we artificially increase the number of actions by planning $m$ steps ahead. More concretely, we consider a new MDP in which each time step consists of $m$ steps of the original RiverSwim MDP, and the switching policy decides on all $m$ steps at once; see the sketch after this paragraph for the corresponding macro-action construction. The number of actions in the new MDP increases to $2^m$, while the state space remains unchanged. We consider a setting with a single team of two agents with $p=0$ and $p=1$, i.e., one agent always takes action right and the other always takes action left. We run the simulations for 20,000 episodes with $m\in\{1,2,3,4\}$, i.e., with action space sizes of 2, 4, 8, and 16, and repeat each experiment 5 times. We compare the performance of our algorithm against UCRL2 in terms of total regret. Figure 10 (a) summarizes our results: the performance of UCRL2-MC degrades as the number of actions increases, since the regret bound directly depends on the action space size (Theorem 2). However, the regret ratio remains within the same scale even after doubling the number of actions. One reason is that our algorithm only needs to learn the actions taken by the agents in order to find the optimal switching policy. If the agents' policies use only a small subset of actions, our algorithm will maintain a small regret even in environments with a huge action space. Therefore, we believe a more careful analysis could improve our regret bound by making it a function of the agents' action space instead of the whole action space size.
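A minimal sketch of the macro-action construction described above, assuming the two primitive actions are labeled 0 (left) and 1 (right); the helper name `macro_actions` is ours:

```python
from itertools import product

def macro_actions(m):
    """Enumerate all 2^m sequences of primitive actions. Each sequence is executed
    as a single macro-action, i.e., the switching policy commits to the next m
    primitive steps of the RiverSwim MDP at once."""
    return list(product([0, 1], repeat=m))

# Example: macro_actions(2) -> [(0, 0), (0, 1), (1, 0), (1, 1)], i.e., 4 actions.
```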

G.2 Number Of Agents

Here, our goal is to examine the impact of the number of agents on the total regret achieved by our algorithm.

To this end, we consider the original RiverSwim MDP (i.e., two actions) with a single team of $n$ agents, where we run our simulations for $n\in\{3,4,\dots,10\}$ and 20,000 episodes for each $n$. We choose the probabilities $p$ of taking action right for the $n$ agents as $\{0,\frac{1}{n-1},\dots,\frac{n-2}{n-1},1\}$, as listed in the sketch below. As shown in Figure 10 (b), UCRL2-MC outperforms UCRL2 as the number of agents increases. This agrees with Theorem 2, since our derived regret bound mainly depends on the action space size $|\mathcal{A}|$, whereas the UCRL2 regret bound depends on the number of agents $|\mathcal{D}|$.
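A minimal sketch of how such a team can be written down (the helper name `make_team` is ours and purely illustrative):

```python
def make_team(n):
    """Return, for a team of n agents, each agent's probability of taking action
    `right`, evenly spaced between the always-left and always-right agents."""
    return [i / (n - 1) for i in range(n)]

# Example: make_team(5) -> [0.0, 0.25, 0.5, 0.75, 1.0]
```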