# Learning To Switch Among Agents In A Team Via 2-Layer Markov Decision Processes

Vahid Balazadeh *vahid@cs.toronto.edu*
University of Toronto

Abir De *abir@cse.iitb.ac.in*
Indian Institute of Technology Bombay

Adish Singla *adishs@mpi-sws.org*
Max Planck Institute for Software Systems

Manuel Gomez Rodriguez *manuelgr@mpi-sws.org*
Max Planck Institute for Software Systems

Reviewed on OpenReview: *https://openreview.net/forum?id=NT9zgedd3I*

## Abstract

Reinforcement learning agents have been mostly developed and evaluated under the assumption that they will operate in a fully autonomous manner—they will take all actions. In this work, our goal is to develop algorithms that, by learning to switch control between agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team via a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and the environment's transition probabilities to find a sequence of switching policies. The total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps and, whenever multiple teams of agents operate in a similar environment, our algorithm greatly benefits from maintaining shared confidence bounds for the environments' transition probabilities and it enjoys a better regret bound than problem-agnostic algorithms. Simulation experiments illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms.

## 1 Introduction

In recent years, reinforcement learning (RL) agents have achieved, or even surpassed, human performance in a variety of computer games by taking decisions autonomously, without human intervention (Mnih et al., 2015; Silver et al., 2016; 2017; Vinyals et al., 2019). Motivated by these success stories, there has been tremendous excitement about the possibility of using RL agents to operate fully autonomous cyberphysical systems, especially in the context of autonomous driving. Unfortunately, a number of technical, societal, and legal challenges have so far prevented this possibility from becoming a reality.

In this work, we argue that existing RL agents may still enhance the operation of cyberphysical systems if deployed under lower automation levels. For example, if we let RL agents take some of the actions and leave the remaining ones to human agents, the resulting performance may be better than the performance either of them would achieve on their own (Raghu et al., 2019a; De et al., 2020; Wilder et al., 2020). Once we depart from full automation, we need to address the following question: when should we switch control between machine and human agents? In this work, we look into this problem from a theoretical perspective and develop an online algorithm that learns to optimally switch control between multiple agents in a team automatically. However, to fulfill this goal, we need to address several challenges:
- *Level of automation.* In each application, what is considered an appropriate and tolerable load for each agent may differ (European Parliament, 2006). Therefore, we would like our algorithms to provide mechanisms to adjust the amount of control for each agent (*i.e.*, the level of automation) during a given time period.

- *Number of switches.* Consider two different switching patterns resulting in the same amount of agent control and equivalent performance. Then, we would like our algorithms to favor the pattern with the least number of switches. For example, in a team consisting of human and machine agents, every time a machine defers (takes) control to (from) a human, there is an additional cognitive load for the human (Brookhuis et al., 2001).

- *Unknown agent policies.* The spectrum of human abilities spans a broad range (Macadam, 2003). As a result, there is a wide variety of potential human policies. Here, we would like our algorithms to learn personalized switching policies that, over time, adapt to the particular humans (and machines) they are dealing with.

- *Disentangling agents' policies and environment dynamics.* We would like our algorithms to learn to disentangle the influence of the agents' policies and the environment dynamics on the switching policies. By doing so, they could be used to efficiently find multiple personalized switching policies for different teams of agents operating in similar environments (*e.g.*, multiple semi-autonomous vehicles with different human drivers).

To tackle the above challenges, we first formally define the problem of learning to switch control among agents in a team using a 2-layer Markov decision process (Figure 1). Here, the team can be composed of any number of machine or human agents, and the agents' policies, as well as the transition probabilities of the environment, may be unknown. In our formulation, we assume that all agents follow Markovian policies1, similarly to other theoretical models of human decision making (Townsend et al., 2000; Daw & Dayan, 2014; McGhan et al., 2015). Under this definition, the problem reduces to finding the switching policy that provides an optimal trade-off between the environmental cost, the amount of agent control, and the number of switches. Then, we develop an online learning algorithm, which we refer to as UCRL2-MC2, that uses upper confidence bounds on the agents' policies and the transition probabilities of the environment to find a sequence of switching policies whose total regret with respect to the optimal switching policy is sublinear in the number of learning steps. In addition, we also demonstrate that the same algorithm can be used to find multiple sequences of switching policies across several independent teams of agents operating in similar environments, where it greatly benefits from maintaining shared confidence bounds for the transition probabilities of the environments and enjoys a better regret bound than UCRL2, a well-known reinforcement learning algorithm that we view as the most natural competitor. Finally, we perform a variety of simulation experiments in the standard RiverSwim environment as well as an obstacle avoidance task, where we consider multiple teams of agents (drivers) composed of one human and one machine agent.

Our results illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic alternatives.

Before we proceed further, we would like to point out that, at a broader level, our methodology and theoretical results are applicable to the problem of switching control between agents following Markovian policies. As long as the agent policies are Markovian, our results do not distinguish between machine and human agents.

In this context, we view teams of human and machine agents as one potential application of our work, which we use as a motivating example throughout the paper. However, we would also like to acknowledge that a practical deployment of our methodology in a real application with human and machine agents would require considering a wide range of additional practical aspects (*e.g.*, transparency, explainability, and visualization). Moreover, one may also need to explicitly model the difference in reaction times between human and machine agents. Finally, there may be scenarios in which it might be beneficial to allow a human operator to switch control. Such considerations are out of the scope of our work.

1In certain cases, it is possible to convert a non-Markovian human policy into a Markovian one by changing the state representation (Daw & Dayan, 2014). Addressing the problem of learning to switch control among agents in a team in a semi-Markovian setting is left as a very interesting avenue for future work.

2UCRL2 with Multiple Confidence sets.

## 2 Related Work

One can think of applying existing RL algorithms (Jaksch et al., 2010; Osband et al., 2013; Osband &
Van Roy, 2014; Gopalan & Mannor, 2015), such as UCRL2 or Rmax, to find switching policies. However, these problem-agnostic algorithms are unable to exploit the specific structure of our problem. More specifically, our algorithm computes the confidence intervals separately over the agents' policies and the transition probabilities of the environment, instead of computing a single confidence interval, as problem-agnostic algorithms do. As a consequence, our algorithm learns to switch more efficiently across multiple teams of agents, as shown in Section 6.

There is a rapidly increasing line of work on learning to defer decisions in the machine learning literature (Bartlett & Wegkamp, 2008; Cortes et al., 2016; Geifman et al., 2018; Ramaswamy et al., 2018; Geifman
& El-Yaniv, 2019; Liu et al., 2019; Raghu et al., 2019a;b; Thulasidasan et al., 2019; De et al., 2020; 2021; Mozannar & Sontag, 2020; Wilder et al., 2020; Shekhar et al., 2021). However, previous work has typically focused on supervised learning. More specifically, it has developed classifiers that learn to defer by considering the defer action as an additional label value, by training an independent classifier to decide about deferred decisions, or by reducing the problem to a combinatorial optimization problem. Moreover, except for a few recent notable exceptions (Raghu et al., 2019a; De et al., 2020; 2021; Mozannar & Sontag, 2020; Wilder et al.,
2020), they do not consider that there is a human decision maker who takes a decision whenever the classifiers defer it. In contrast, we focus on reinforcement learning, and develop algorithms that learn to switch control between multiple agents, including human agents. Recently, Jacq et al. (2022) introduced a new framework, called lazy-MDPs, for deciding when reinforcement learning agents should act. They propose to augment existing MDPs with a new default action and encourage agents to defer decision-making to a default policy in non-critical states. Though their lazy-MDP is similar to our augmented 2-layer MDP framework, our approach is designed to switch optimally between possibly multiple agents, each having its own policy.

Our work is also connected to research on understanding switching behavior and switching costs in the context of human-computer interaction (Czerwinski et al., 2000; Horvitz & Apacible, 2003; Iqbal & Bailey, 2007; Kotowick & Shah, 2018; Janssen et al., 2019), which has sometimes been referred to as "adjustable autonomy" (Mostafa et al., 2019). At a technical level, our work advances the state of the art in adjustable autonomy by introducing an algorithm with provable guarantees that efficiently finds the optimal switching policy in a setting in which the dynamics of the environment and the agents' policies are unknown (*i.e.*, there is uncertainty about them). Moreover, our work also relates to a recent line of research that combines deep reinforcement learning with opponent modeling to robustly switch between multiple machine policies (Everett
& Roberts, 2018; Zheng et al., 2018). However, this line of research does not consider the presence of human agents, and there are no theoretical guarantees on the performance of the proposed algorithms.

Furthermore, our work contributes to an extensive body of work on human-machine collaboration (Stone et al., 2010; Taylor et al., 2011; Walsh et al., 2011; Barrett & Stone, 2012; Macindoe et al., 2012; Torrey &
Taylor, 2013; Nikolaidis et al., 2015; Hadfield-Menell et al., 2016; Nikolaidis et al., 2017; Grover et al., 2018; Haug et al., 2018; Reddy et al., 2018; Wilson & Daugherty, 2018; Brown & Niekum, 2019; Kamalaruban et al.,
2019; Radanovic et al., 2019; Tschiatschek et al., 2019; Ghosh et al., 2020; Strouse et al., 2021). However, rather than developing algorithms that learn to switch control between humans and machines, previous work has predominantly considered settings in which the machine and the human interact with each other.

Finally, one can think of using the options framework and the notion of macro-actions and micro-actions to formulate the problem of learning to switch (Sutton et al., 1999). However, the options framework is designed to address different levels of temporal abstraction in RL by defining macro-actions that correspond to sub-tasks (skills). In our problem, each agent is not necessarily optimized to act for a specific task or sub-goal but for the whole environment/goal. Also, in our problem, we do not necessarily have control over all agents to learn the optimal policy for each agent, while in the options framework, a primary direction is to learn optimal options for each sub-task. In other words, even though we can mathematically refer to each agent policy as an option, they are not conceptually the same.

## 3 Switching Control Among Agents As A 2-Layer MDP

Given a team of agents D, at each time step t ∈ {1*, . . . , L*}, our (cyberphysical) system is characterized by a state st ∈ S, where S is a finite state space, and a control switch dt ∈ D, which determines who takes an action at ∈ A, where A is a finite action space. In the above, the switch value is given by a (deterministic and time-varying) switching policy dt = πt(st, dt−1)3. More specifically, if dt = d, the action at is sampled from the agent d's policy pd(at | st). Moreover, given a state st and an action at, the state st+1 is sampled from a transition probability p(st+1 | st, at). Here, we assume that the agents' policies and the transition probabilities may be unknown. Finally, given an initial state and switch value (s1, d0) and a trajectory τ = {(st, dt, at) : t = 1, . . . , L} of states, switch values and actions, we define the total cost c(τ | s1, d0) as:

$$c(\tau\,|\,s_{1},d_{0})=\sum_{t=1}^{L}[c_{e}(s_{t},a_{t})+c_{c}(d_{t})+c_{x}(d_{t},d_{t-1})],\tag{1}$$

where ce(st, at) is the environment cost of taking action at at state st, cc(dt) is the cost of giving control to agent dt, cx(dt, dt−1) is the cost of switching from dt−1 to dt, and L is the time horizon4. Then, our goal is to find the optimal switching policy π∗ = (π∗1, . . . , π∗L) that minimizes the expected cost, *i.e.*,

$$\pi^{*}=\operatorname*{argmin}_{\pi}\mathbb{E}\left[c(\tau\mid s_{1},d_{0})\right],\tag{2}$$
where the expectation is taken over all the trajectories induced by the switching policy given the agents' policies.

To solve the above problem, one could just resort to problem-agnostic RL algorithms, such as UCRL2 or Rmax, over a standard Markov decision process (MDP), defined as

$${\mathcal{M}}=({\mathcal{S}}\times{\mathcal{D}},{\mathcal{D}},\bar{P},\bar{C},L),$$
where *S × D* is an augmented state space, the set of actions D is just the switch values, the transition dynamics P¯ at time t are given by

$$p(s_{t+1},d_{t}\,|\,s_{t},d_{t-1})=\mathbb{I}[\pi_{t}(s_{t},d_{t-1})=d_{t}]\times\sum_{a\in\mathcal{A}}p(s_{t+1}\,|\,s_{t},a)p_{d_{t}}(a\,|\,s_{t}),\tag{3}$$

the immediate cost C¯ at time t is given by

$$\bar{c}(s_{t},d_{t-1})=\mathbb{E}_{a_{t}\sim p_{\pi_{t}(s_{t},d_{t-1})}(\cdot\mid s_{t})}\left[c_{e}(s_{t},a_{t})\right]+c_{c}(\pi_{t}(s_{t},d_{t-1}))+c_{x}(\pi_{t}(s_{t},d_{t-1}),d_{t-1}).\tag{4}$$

Here, note that, by using conditional expectations, we can compute the average cost of a trajectory, given by Eq. 1, from the above immediate costs. However, these algorithms would not exploit the structure of the problem. More specifically, they would not use the observed agents' actions to improve the estimation of the transition dynamics over time.
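To make this entanglement concrete, the following minimal NumPy sketch (with illustrative array names, not the paper's implementation) builds the kernel of Eq. 3 that a problem-agnostic algorithm has to estimate as a single object; the indicator over the switching policy is omitted since it only selects the switch value chosen by π.

```python
import numpy as np

def augmented_dynamics(p_env, p_agents):
    """Entangled transition kernel of the standard MDP in Eq. 3 (a sketch).

    p_env[s, a, s_next]  : environment transition probability p(s' | s, a)
    p_agents[d, s, a]    : agent d's policy p_d(a | s)
    Returns P_bar[s, d, s_next] = sum_a p(s' | s, a) * p_d(a | s), i.e., the
    dynamics observed when switch value d is chosen in state s. A problem-
    agnostic algorithm estimates this product directly, so what it learns
    about the environment cannot be reused for a different team of agents.
    """
    return np.einsum('sap,dsa->sdp', p_env, p_agents)
```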

To avoid the above shortcoming, we will resort instead to a 2-layer MDP where taking an action dt in state (st, dt−1) leads first to an intermediate state (st, at) *∈ S × A* with probability pdt(at | st) and immediate cost cdt(st, dt−1) = cc(dt) + cx(dt, dt−1), and then to a final state (st+1, dt) *∈ S × D* with probability I[πt(st, dt−1) = dt] · p(st+1 | st, at) and immediate cost ce(st, at). More formally, the 2-layer MDP is defined by the following 8-tuple:

$${\mathcal{M}}=({\mathcal{S}}\times{\mathcal{D}},\,{\mathcal{S}}\times{\mathcal{A}},\,{\mathcal{D}},\,P_{{\mathcal{D}}},\,P,\,C_{{\mathcal{D}}},\,C_{e},\,L),\tag{5}$$

where S × D is the final state space, S × A is the intermediate state space, the set of actions D is the switch values, the transition dynamics PD and P at time t are given by pdt(at | st) and I[πt(st, dt−1) = dt] · p(st+1 | st, at), and the immediate costs CD and Ce at time t are given by cdt(st, dt−1) and ce(st, at), respectively.

The above 2-layer MDP will allow us to estimate separately the agents' policies pd(· | s) and the transition probability p(· | *s, a*) of the environment using both the intermediate and final states and design an algorithm that improves the regret that problem-agnostic RL algorithms achieve in our problem.

3Note that, by making the switching policy dependent on the previous switch value dt−1, we can account for the switching cost. 4The specific choice of environment cost ce(·, ·), control cost cc(·) and switching cost cx(·, ·) is application dependent.


![4_image_0.png](4_image_0.png)

Figure 1: Transitions of a 2-layer Markov Decision Process (MDP) from state (*s, d*) to state (s′, d′) after selecting agent d′. Here, d′ and d denote the current and previous agents in control. In the first layer (switching layer), the switching policy chooses agent d′, which takes an action according to its action policy pd′. Then, in the action layer, the environment transitions to the next state s′ based on the taken action, according to the transition probability p.

## 4 Learning To Switch In A Team Of Agents

Since we may not know the agents' policies nor the transition probabilities, we need to trade off exploitation, i.e., minimizing the expected cost, and exploration, *i.e.*, learning about the agents' policies and the transition probabilities. To this end, we look at the problem from the perspective of episodic learning and proceed as follows.

We consider K independent subsequent episodes of length L and denote the aggregate length of all episodes as T = KL. Each of these episodes corresponds to a realization of the same finite horizon 2-layer Markov decision process, introduced in Section 3, with state spaces *S × A* and *S × D*, set of actions D, true agent policies P∗D, true environment transition probability P∗, and immediate costs CD and Ce. However, since we do not know the true agent policies and environment transition probabilities, just before each episode k starts, our goal is to find a switching policy πk with desirable properties in terms of total regret R(T), which is given by:

$$R(T)=\sum_{k=1}^{K}\left[\mathbb{E}_{\tau\sim\pi^{k},P_{\mathcal{D}}^{*},P^{*}}\left[c(\tau\,|\,s_{1},d_{0})\right]-\mathbb{E}_{\tau\sim\pi^{*},P_{\mathcal{D}}^{*},P^{*}}\left[c(\tau\,|\,s_{1},d_{0})\right]\right],\tag{6}$$

where π∗ is the optimal switching policy under the true agent policies and environment transition probabilities.

To achieve our goal, we apply the principle of optimism in the face of uncertainty, *i.e.*,

$$\pi^{k}=\operatorname*{argmin}_{\pi}\,\min_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}}\,\min_{P\in\mathcal{P}^{k}}\mathbb{E}_{\tau\sim\pi,P_{\mathcal{D}},P}\left[c(\tau\mid s_{1},d_{0})\right],\tag{7}$$

where P^k_D is a (|S|×|D|×L)-rectangular confidence set, *i.e.*, P^k_D = ×_{s,d,t} P^k_{· | d,s,t}, and P^k is a (|S|×|A|×L)-rectangular confidence set, *i.e.*, P^k = ×_{s,a,t} P^k_{· | s,a,t}. Here, note that the confidence sets are constructed using data gathered during the first k − 1 episodes and allow for time-varying agent policies pd(· | s, t) and transition probabilities p(· | s, a, t).

However, to solve Eq. 7, we first need to explicitly define the confidence sets. To this end, we first define the empirical distributions p̂^k_d(· | s) and p̂^k(· | s, a) just before episode k starts as:

$$\hat{p}_{d}^{k}(a\,|\,s)=\begin{cases}\frac{N_{k}(s,d,a)}{N_{k}(s,d)}&\text{if }N_{k}(s,d)\neq0\\ \frac{1}{|\mathcal{A}|}&\text{otherwise,}\end{cases}\tag{8}$$

$$\hat{p}^{k}(s^{\prime}\,|\,s,a)=\begin{cases}\frac{N_{k}^{\prime}(s,a,s^{\prime})}{N_{k}^{\prime}(s,a)}&\text{if }N_{k}^{\prime}(s,a)\neq0\\ \frac{1}{|\mathcal{S}|}&\text{otherwise,}\end{cases}\tag{9}$$
where

$$N_{k}(s,d)=\sum_{l=1}^{k-1}\sum_{t\in[l]}\mathbb{I}(s_{t}=s,d_{t}=d\text{in episode}l),\,N_{k}(s,d,a)=\sum_{l=1}^{k-1}\sum_{t\in[l]}\mathbb{I}(s_{t}=s,a_{t}=a,d_{t}=d\text{in episode}l),$$ $$N_{k}^{\prime}(s,a)=\sum_{l=1}^{k-1}\sum_{t\in[l]}\mathbb{I}(s_{t}=s,a_{t}=a\text{in episode}l),\,N_{k}^{\prime}(s,a,s^{\prime})=\sum_{l=1}^{k-1}\sum_{t\in[l]}\mathbb{I}(s_{t}=s,a_{t}=a,s_{t+1}=s^{\prime}\text{in episode}l).$$
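For concreteness, here is a minimal NumPy sketch of these counters and the resulting empirical estimates of Eqs. 8 and 9; the class and method names are ours and purely illustrative.

```python
import numpy as np

class Counts:
    """Visit counters used to form the empirical distributions of Eqs. 8 and 9."""

    def __init__(self, n_S, n_A, n_D):
        self.n_S, self.n_A = n_S, n_A
        self.N_sda = np.zeros((n_S, n_D, n_A))   # N_k(s, d, a)
        self.N_sas = np.zeros((n_S, n_A, n_S))   # N'_k(s, a, s')

    def update(self, s, d, a, s_next):
        # Called once per step (line 12 of Algorithm 1 below).
        self.N_sda[s, d, a] += 1
        self.N_sas[s, a, s_next] += 1

    def empirical_policy(self, s, d):
        # Eq. 8: uniform over actions until agent d has been observed at s.
        n = self.N_sda[s, d].sum()
        return self.N_sda[s, d] / n if n > 0 else np.full(self.n_A, 1.0 / self.n_A)

    def empirical_transition(self, s, a):
        # Eq. 9: uniform over states until (s, a) has been visited.
        n = self.N_sas[s, a].sum()
        return self.N_sas[s, a] / n if n > 0 else np.full(self.n_S, 1.0 / self.n_S)
```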

Then, similarly as in Jaksch et al. (2010), we opt for L1 confidence sets5, *i.e.*,

$$\begin{array}{l}{{{\mathcal P}_{\cdot\mid d,s,t}^{k}(\delta)=\left\{\,p_{d}:||p_{d}(\cdot\mid s,t)-\hat{p}_{d}^{k}(\cdot\mid s)||_{1}\leq\beta_{\mathcal D}^{k}(s,d,\delta)\right\},}}\\ {{{\mathcal P}_{\cdot\mid s,a,t}^{k}(\delta)=\left\{\,p:||p(\cdot\mid s,a,t)-\hat{p}^{k}(\cdot\mid s,a)||_{1}\leq\beta^{k}(s,a,\delta)\right\},}}\end{array}$$

for all d ∈ D, s ∈ S, a ∈ A and t ∈ [L], where δ is a given parameter,

$$\beta_{\mathcal{D}}^{k}(s,d,\delta)=\sqrt{\frac{2\log\left(\frac{(k-1)L|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}{\max\{1,N_{k}(s,d)\}}}\quad\text{and}\quad\beta^{k}(s,a,\delta)=\sqrt{\frac{2\log\left(\frac{(k-1)L|\mathcal{S}||\mathcal{A}|2^{|\mathcal{S}|+1}}{\delta}\right)}{\max\{1,N_{k}^{\prime}(s,a)\}}}.$$

Next, given the switching policy π and the transition dynamics PD and P, we define the value function as

$$V_{t\mid P_{\mathcal{D}},P}^{\pi}(s,d)=\mathbb{E}\bigg[\sum_{\tau=t}^{L}c_{e}(s_{\tau},a_{\tau})+c_{c}(d_{\tau})+c_{x}(d_{\tau},d_{\tau-1})\,\Big|\,s_{t}=s,d_{t-1}=d\bigg],\tag{10}$$

where the expectation is taken over all the trajectories induced by the switching policy given the agents' policies. Then, for each episode k, we define the optimal value function v^k_t(s, d) as

$$v_{t}^{k}(s,d)=\min_{\pi}\,\min_{P_{\mathcal{D}}\in\mathcal{P}_{\mathcal{D}}^{k}(\delta)}\,\min_{P\in\mathcal{P}^{k}(\delta)}V_{t\mid P_{\mathcal{D}},P}^{\pi}(s,d).\tag{11}$$

Then, we are ready to use the following key theorem, which gives a solution to Eq. 7 (proven in Appendix A):

Theorem 1. *For any episode* k*, the optimal value function* v^k_t(s, d) *satisfies the following recursive equation:*

$$v_{t}^{k}(s,d)=\min_{d_{t}\in\mathcal{D}}\Big[c_{d_{t}}(s,d)+\min_{p_{d_{t}}\in\mathcal{P}^{k}_{\cdot\mid d_{t},s,t}}\sum_{a\in\mathcal{A}}p_{d_{t}}(a\mid s,t)\Big(c_{e}(s,a)+\min_{p\in\mathcal{P}^{k}_{\cdot\mid s,a,t}}\mathbb{E}_{s^{\prime}\sim p(\cdot\mid s,a,t)}\big[v_{t+1}^{k}(s^{\prime},d_{t})\big]\Big)\Big],\tag{12}$$

*with* v^k_{L+1}(s, d) = 0 *for all* s ∈ S *and* d ∈ D. *Moreover, if* d^∗_t *is the solution to the minimization problem on the RHS of the above recursive equation, then* π^k_t(s, d) = d^∗_t.

The above result readily implies that, just before each episode k starts, we can find the optimal switching policy π^k = (π^k_1, . . . , π^k_L) using dynamic programming, starting with v^k_{L+1}(s, d) = 0 for all s ∈ S and d ∈ D.

Moreover, similarly as in Strehl & Littman (2008), we can solve the inner minimization problems in Eq. 12 analytically using Lemma 7 in Appendix B. To this end, we first find the optimal p(· | s, a, t) for all a ∈ A and then we find the optimal pdt(· | s, t) for all dt ∈ D.

5This choice will result in a sequence of switching policies with desirable properties in terms of total regret.

ALGORITHM 1: UCRL2-MC
1: **Input:** Cost functions CD and Ce, confidence parameter δ
2: {Nk, N'k} ← InitializeCounts()
3: **for** k = 1*, . . . , K* **do**
4: {p̂^k_d}, p̂^k ← UpdateDistribution({Nk, N'k})
5: P^k_D, P^k ← UpdateConfidenceSets({p̂^k_d}, p̂^k, δ)
6: π^k ← GetOptimal(P^k_D, P^k, CD, Ce)
7: (s1, d0) ← InitializeConditions()
8: **for** t = 1*, . . . , L* **do**
9: dt ← π^k_t(st, dt−1)
10: at ∼ pdt(· | st)
11: st+1 ∼ P(· | st, at)
12: {Nk, N'k} ← UpdateCounts((st, dt, at, st+1), {Nk, N'k})
13: **end for**
14: **end for**
15: **Return** π^K
Algorithm 1 summarizes the whole procedure, which we refer to as UCRL2-MC.

Within the algorithm, the function GetOptimal(·) finds the optimal policy π^k using dynamic programming, as described above, and UpdateDistribution(·) computes Eqs. 8 and 9. Moreover, it is important to notice that, in lines 8–10, the switching policy π^k is actually deployed, the true agents take actions in the true environment and, as a result, action and state transition data from the true agents and the true environment is gathered.
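To make the planning step concrete, the following NumPy sketch outlines one possible implementation of GetOptimal(·): the backward recursion of Eq. 12, with each inner minimization over an L1 confidence ball solved analytically by shifting probability mass towards the most favorable outcome, in the spirit of Strehl & Littman (2008). The function names and array layouts are illustrative assumptions, not the exact implementation used in the paper.

```python
import numpy as np

def optimistic_expectation(p_hat, values, beta):
    """Minimize sum_x p(x) * values[x] over the L1 ball ||p - p_hat||_1 <= beta.

    Standard trick (in the spirit of Strehl & Littman, 2008): move up to beta/2
    probability mass onto the lowest-value outcome and remove the surplus from
    the highest-value outcomes.
    """
    p = np.asarray(p_hat, dtype=float).copy()
    values = np.asarray(values, dtype=float)
    best = int(np.argmin(values))
    p[best] = min(1.0, p[best] + beta / 2.0)
    for x in np.argsort(values)[::-1]:          # highest-value outcomes first
        if x == best:
            continue
        excess = p.sum() - 1.0
        if excess <= 0:
            break
        p[x] = max(0.0, p[x] - excess)
    return float(p @ values)


def get_optimal(p_hat_d, beta_d, p_hat, beta, c_d, c_e, L):
    """Backward recursion of Eq. 12 (Theorem 1); a sketch with illustrative layouts.

    p_hat_d[d][s]  : empirical policy of agent d at state s, shape (|A|,)
    beta_d[d][s]   : L1 radius of the confidence set for agent d at state s
    p_hat[s][a]    : empirical environment transition at (s, a), shape (|S|,)
    beta[s][a]     : L1 radius of the environment confidence set at (s, a)
    c_d[d_prev][d] : control plus switching cost c_c(d) + c_x(d, d_prev)
    c_e[s][a]      : environment cost
    Returns the switching policy pi[t, s, d_prev] and the value function v.
    """
    n_S, n_A, n_D = len(p_hat), len(c_e[0]), len(p_hat_d)
    v = np.zeros((L + 2, n_S, n_D))               # v[L + 1] = 0 (terminal)
    pi = np.zeros((L + 1, n_S, n_D), dtype=int)
    for t in range(L, 0, -1):
        for s in range(n_S):
            # q[d, a]: optimistic cost of the intermediate state (s, a) when agent d acts.
            q = np.array([[c_e[s][a] + optimistic_expectation(p_hat[s][a],
                                                              v[t + 1, :, d],
                                                              beta[s][a])
                           for a in range(n_A)] for d in range(n_D)])
            # w[d]: optimistic expected cost-to-go of handing control to agent d.
            w = np.array([optimistic_expectation(p_hat_d[d][s], q[d], beta_d[d][s])
                          for d in range(n_D)])
            for d_prev in range(n_D):
                costs = np.array([c_d[d_prev][d] + w[d] for d in range(n_D)])
                pi[t, s, d_prev] = int(np.argmin(costs))
                v[t, s, d_prev] = float(costs.min())
    return pi, v
```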

Next, the following theorem shows that the sequence of policies {π^k}^K_{k=1} found by Algorithm 1 achieves a total regret that is sublinear with respect to the number of steps, as defined in Eq. 6 (proven in Appendix A):

Theorem 2. *Assume we use Algorithm 1 to find the switching policies* π^k. *Then, with probability at least* 1 − δ*, it holds that*

$$R(T)\leq\rho_{1}L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|T\log\left(\frac{|\mathcal{S}||\mathcal{D}|T}{\delta}\right)}+\rho_{2}L|\mathcal{S}|\sqrt{|\mathcal{A}|T\log\left(\frac{|\mathcal{S}||\mathcal{A}|T}{\delta}\right)}\tag{13}$$
where ρ1, ρ2 > 0 *are constants.*
The above regret bound suggests that our algorithm may achieve higher regret than standard UCRL2 (Jaksch et al., 2010), one of the most popular problem-agnostic RL algorithms. More specifically, one can readily show that, if we use UCRL2 to find the switching policies π^k (refer to Appendix C), then, with probability at least 1 − δ, it holds that

$$R(T)\leq\rho L|\mathcal{S}|\sqrt{|\mathcal{D}|T\log\left(\frac{|\mathcal{S}||\mathcal{D}|T}{\delta}\right)}\tag{14}$$
where ρ is a constant. Then, if we omit constant and logarithmic factors and assume the size of the team of agents is smaller than the size of the state space, *i.e.*, |D| < |S|, we have that, for UCRL2, the regret bound is Õ(L|S|√(|D|T)) while, for UCRL2-MC, it is Õ(L|S|√(|A|T)).

That being said, in practice, we have found that our algorithm achieves comparable regret with respect to UCRL2, as shown in Figure 4. In addition, after applying our algorithm on a specific team of agents and environment, we can reuse the confidence intervals over the transition probability p(· | *s, a*) we have learned to find the optimal switching policy for a different team of agents operating in a similar environment. In contrast, after applying UCRL2, we would only have a confidence interval over the conditional probability defined by Eq. 3, which would be of little use to find the optimal switching policy for a different team of agents. In the following section, we will build on this insight by considering several independent teams of agents operating in similar environments. We will demonstrate that, whenever we aim to find multiple sequences of

![7_image_0.png](7_image_0.png)

Figure 2: Three examples of environment realizations with different initial traffic level γ0.
switching policies for these independent teams, a straightforward variation of UCRL2-MC greatly benefits from maintaining shared confidence bounds for the transition probabilities of the environments and enjoys a better regret bound than UCRL2.

Remarks. For ease of exposition, we have assumed that both the machine and human agents follow arbitrary Markov policies that do not change due to switching. However, our theoretical results still hold if we lift this assumption—we just need to define the agents' policies as pd(at|st, dt, dt−1) and construct separate confidence sets based on the switch values.

## 5 Learning To Switch Across Multiple Teams Of Agents

In this section, rather than finding a sequence of switching policies for a single team of agents, we aim to find multiple sequences of switching policies across several independent teams operating in similar environments.

We will analyze our algorithm in scenarios where it can maintain shared confidence bounds for the transition probabilities of the environments across these independent teams. For instance, when the learning algorithm is deployed in centralized settings, it is possible to collect data across independent teams to maintain shared confidence intervals on the common parameters (i.e., the environment's transition probabilities in our problem setting). This setting fits a variety of real applications; most prominently, think of a car manufacturer continuously collecting driving data from millions of human drivers and wishing to learn a different switching policy for each driver to implement a personalized semi-autonomous driving system. Similarly as in the previous section, we look at the problem from the perspective of episodic learning and proceed as follows.

Given N independent teams of agents {Di}^N_{i=1}, we consider K independent subsequent episodes of length L per team and denote the aggregate length of all of these episodes as T = KL. For each team of agents Di, every episode corresponds to a realization of a finite horizon 2-layer Markov decision process with state spaces *S × A* and *S × D*i, set of actions Di, true agent policies P∗Di, true environment transition probability P∗, and immediate costs CDi and Ce. Here, note that all the teams operate in a similar environment, *i.e.*, P∗ is shared across teams, and, without loss of generality, they share the same costs. Then, our goal is to find the switching policies π^k_i with desirable properties in terms of total regret R(T, N), which is given by:

$$R(T,N)=\sum_{i=1}^{N}\sum_{k=1}^{K}\left[\mathbb{E}_{\tau\sim\pi_{i}^{k},P_{\mathcal{D}_{i}}^{*},P^{*}}\left[c(\tau\mid s_{1},d_{0})\right]-\mathbb{E}_{\tau\sim\pi_{i}^{*},P_{\mathcal{D}_{i}}^{*},P^{*}}\left[c(\tau\mid s_{1},d_{0})\right]\right],\tag{15}$$

where π∗i is the optimal switching policy for team i, under the true agent policies and environment transition probability.

To achieve our goal, we just run N instances of UCRL2-MC (Algorithm 1), each with a different confidence set P^k_{Di}(δ) for the agents' policies, similarly as in the case of a single team of agents, but with a shared confidence set P^k(δ) for the environment transition probability. Then, we have the following key corollary, which readily follows from Theorem 2:

![8_image_0.png](8_image_0.png)

Figure 3: Trajectories induced by the switching policies found by Algorithm 1. The blue and orange segments indicate machine and human control, respectively. In both panels, we train Algorithm 1 within the same sequence of episodes, where the initial traffic level of each episode is sampled uniformly from
{no-car, light, heavy}, and show three episodes with different initial traffic levels. The results indicate that, in the latter episodes, the algorithm has learned to switch to the human driver in heavier traffic levels.
Corollary 3. *Assume we use* N *instances of Algorithm 1 to find the switching policies* π^k_i *using a shared confidence set for the environment transition probability. Then, with probability at least* 1 − δ*, it holds that*

$$R(T,N)\leq\rho_{1}NL\sqrt{|{\cal A}||{\cal S}||{\cal D}|T\log\left(\frac{|{\cal S}||{\cal D}|T}{\delta}\right)}+\rho_{2}L|{\cal S}|\sqrt{|{\cal A}|NT\log\left(\frac{|{\cal S}||{\cal A}|T}{\delta}\right)}\tag{16}$$

where ρ1, ρ2 > 0 *are constants.*
The above result suggests that our algorithm may achieve lower regret than UCRL2 in a scenario with multiple teams of agents operating in similar environments. This is because, under UCRL2, the confidence sets for the conditional probability defined by Eq. 3 cannot be shared across teams. More specifically, if we use N instances of UCRL2 to find the switching policies π^k_i, then, with probability at least 1 − δ, it holds that

$$R(T,N)\leq\rho N L|{\mathcal{S}}|{\sqrt{|{\mathcal{D}}|T\log\left({\frac{|{\mathcal{S}}||{\mathcal{D}}|T}{\delta}}\right)}}$$

where ρ is a constant. Then, if we omit constant and logarithmic factors and assume |Di| < |S| for all i ∈ [N], we have that, for UCRL2, the regret bound is Õ(NL|S|√(|D|T)) while, for UCRL2-MC, it is Õ(L|S|√(|A|TN) + NL√(|A||S||D|T)). Importantly, in practice, we have found that UCRL2-MC does achieve a significantly lower regret than UCRL2, as shown in Figure 5.
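As an illustration, the following minimal sketch (with illustrative names) shows the only change with respect to the single-team setting: the environment-transition counts that define the shared confidence set are pooled across teams, while the agent-policy counts remain per team.

```python
import numpy as np

n_S, n_A, n_D, N = 6, 2, 2, 10                                # toy sizes

agent_counts = [np.zeros((n_S, n_D, n_A)) for _ in range(N)]  # one per team, builds P^k_{D_i}(delta)
env_counts = np.zeros((n_S, n_A, n_S))                        # shared, builds P^k(delta)

def record_step(i, s, d, a, s_next):
    """Log one interaction step of team i."""
    agent_counts[i][s, d, a] += 1   # only team i learns about its own agents
    env_counts[s, a, s_next] += 1   # every team benefits from this environment sample
```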

## 6 Experiments

## 6.1 Obstacle Avoidance

We perform a variety of simulations in obstacle avoidance, where teams of agents (drivers) consist of one human agent (H) and one machine agent (M), i.e., D = {H, M}. We consider a lane driving environment with three lanes and infinite rows, where the type of each individual cell (*i.e.*, road, car, stone or grass) in row r is sampled independently at random with a probability that depends on the traffic level γr, which can take three discrete values, γr ∈ {no-car, light, heavy}. The traffic level of each row γr+1 is sampled at random with a probability that depends on the traffic level of the previous row γr. The probability of each cell type based on traffic level, as well as the conditional distribution of traffic levels can be found in Appendix D.

At any given time t, we assume that whoever is in control—be it the machine or the human—can take three different actions A = {left, straight, right}. Action left steers the car to the left of the current lane,

![9_image_1.png](9_image_1.png)

![9_image_0.png](9_image_0.png)

Figure 4: Total regret of the trajectories induced by the switching policies found by Algorithm 1 and those induced by a variant of UCRL2 in comparison with the trajectories induced by a machine driver and a human driver in a setting with a single team of agents. In all panels, we run K = 20,000 episodes. For Algorithm 1 and the variant of UCRL2, the regret is sublinear with respect to the number of time steps whereas, for the machine and the human drivers, the regret is linear.
action right steers it to the right and action straight leaves the car in the current lane. If the car is already on the leftmost (rightmost) lane when taking action left (right), then the lane remains unchanged. Irrespective of the action taken, the car always moves forward. The goal of the cyberphysical system is to drive the car from an initial state at time t = 1 until the end of the episode t = L with the minimum total cost.

In our experiments, we set L = 10. Figure 2 shows three examples of environment realizations.

State space. To evaluate the switching policies found by Algorithm 1, we experiment with a *sensor-based* state space, where the state values are the type of the current cell and the three cells the car can move into in the next time step, as well as the current traffic level—we assume the agents (be it a human or a machine)
can measure the traffic level. For example, assume at time t the traffic is light, the car is on a road cell and, if it moves forward left, it hits a stone, if it moves forward straight, it hits a car, and, if it moves forward right, it drives over grass, then its state value is st = (light, road, stone, car, grass). Moreover, if the car is on the leftmost (rightmost) lane, then we set the value of the third (fifth) dimension in st to ∅. Therefore, under this state representation, the resulting MDP has ∼3 × 5^4 states.
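For illustration, here is a minimal sketch of this sensor-based representation and one possible way to index it; the encoding itself is our assumption, while the state components and the ∼3 × 5^4 count come from the description above.

```python
TRAFFIC = ["no-car", "light", "heavy"]
CELLS = ["road", "car", "stone", "grass", "none"]   # "none" stands for the off-road marker ∅

def encode_state(traffic, current, left, straight, right):
    """Map a state tuple such as ("light", "road", "stone", "car", "grass")
    to a unique integer index in [0, 3 * 5**4)."""
    idx = TRAFFIC.index(traffic)
    for cell in (current, left, straight, right):
        idx = idx * len(CELLS) + CELLS.index(cell)
    return idx

assert encode_state("light", "road", "stone", "car", "grass") < 3 * 5 ** 4
```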

Cost and human/machine policies. We consider a state-dependent environment cost ce(st, at) = ce(st)
that depends on the type of the cell the car is on at state st, *i.e.*, ce(st) = 0 if the type of the current cell is road, ce(st) = 2 if it is grass, ce(st) = 4 if it is stone and ce(st) = 10 if it is car. Moreover, in all simulations, we use a machine policy that has been trained using a standard RL algorithm on environment realizations with γ0 = no-car. In other words, the machine policy is trained to perform well under a low traffic level.

Moreover, we consider that all the humans pick which action to take (left, straight or right) according to a noisy estimate of the environment cost of the three cells that the car can move into in the next time step. More specifically, each human model H computes a noisy estimate of the cost ĉe(s) = ce(s) + εs of each of the three cells the car can move into, where εs ∼ N(0, σH), and picks the action that moves the car to the cell with the lowest noisy estimate6. As a result, human drivers are generally more reliable than the machine under high traffic levels; however, the machine is more reliable than humans under low traffic levels, where its policy is near-optimal (see Appendix E for a comparison of the human and machine performance). Finally, we consider that only the car driven by our system moves in the environment.

## 6.1.1 Results

First, we focus on a single team of one machine M and one human model H, with σH = 2, and use Algorithm 1 to find a sequence of switching policies with sublinear regret. At the beginning of each episode, the initial traffic level γ0 is sampled uniformly at random.

6Note that, in our theoretical results, we make no assumptions about the human policy other than the Markov property.

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

Figure 5: Total regret of the trajectories induced by the switching policies found by N instances of Algorithm 1 and those induced by N instances of a variant of UCRL2 in a setting with N teams of agents. In both panels, each instance of Algorithm 1 shares the same confidence set for the environment transition probabilities and we run K = 5000 episodes. The sequences of policies found by Algorithm 1 outperform those found by the variant of UCRL2 in terms of total regret, in agreement with Corollary 3.
We look at the trajectories induced by the switching policies found by our algorithm across different episodes for different values of the switching cost cx and cost of human control cc(H)7. Figure 3 summarizes the results, which show that, in the latter episodes, the algorithm has learned to rely on the machine (blue segments)
whenever the traffic level is low and switches to the human driver when the traffic level increases. Moreover, whenever the amount of human control and number of switches is not penalized (*i.e.*, cx = cc(H) = 0), the algorithm switches to the human more frequently whenever the traffic level is high to reduce the environment cost. See Appendix F for a comparison of human control rate in environments with different initial traffic levels.

In addition, we compare the performance achieved by Algorithm 1 with three baselines: (i) a variant of UCRL2 (Jaksch et al., 2010) adapted to our finite horizon setting (see Appendix C), (ii) a human agent, and
(iii) a machine agent. As a measure of performance, we use the total regret, as defined in Eq. 6. Figure 4 summarizes the results for two different values of switching cost cx and cost of human control cc(H). The results show that both our algorithm and UCRL2 achieve sublinear regret with respect to the number of time steps and their performance is comparable in agreement with Theorem 2. In contrast, whenever the human or the machine drive on their own, they suffer linear regret, due to a lack of exploration.

Next, we consider N = 10 independent teams of agents, {Di}^N_{i=1}, operating in a similar lane driving environment. Each team Di is composed of a different human model Hi, with σHi sampled uniformly from (0, 4), and the same machine driver M. Then, to find a sequence of switching policies for each of the teams, we run N instances of Algorithm 1 with a shared confidence set for the environment transition probabilities.

We compare the performance of our algorithm against the same variant of UCRL2 used in the experiments with a single team of agents in terms of the total regret defined in Eq. 15. Here, note that the variant of UCRL2 does not maintain a shared confidence set for the environment transition probabilities across teams but instead creates a confidence set for the conditional probability defined by Eq. 3 for each team. Figure 5 summarizes the results for different values of the switching cost cx and cost of human control cc(H), which show that, in agreement with Corollary 3, our method outperforms UCRL2 significantly.

## 6.2 RiverSwim

In addition to the obstacle avoidance task, we consider the standard *RiverSwim* task (Strehl & Littman, 2008). The MDP states and transition probabilities are shown in Figure 6. The cost of taking an action in states s2 to s5 equals 1, while it equals 0.995 and 0 in states s1 and s6, respectively. Each episode ends after L = 20

7Here, we assume the cost of machine control cc(M) = 0.

![11_image_0.png](11_image_0.png)

Figure 6: RiverSwim. Continuous (dashed) arrows show the transitions after taking actions right (left).

The optimal policy is to always take action right.

![11_image_2.png](11_image_2.png)

![11_image_1.png](11_image_1.png)

Figure 7: (a) Ratio of the regret of UCRL2-MC to that of UCRL2 for different numbers of teams. (b) Total regret of the trajectories induced by the switching policies found by UCRL2-MC and those induced by UCRL2 in a setting with N = 100 teams of agents.
steps. We set the switching cost and cost of agent control to zero for all the simulations in this section, *i.e.*, cx(·, ·) = cc(·) = 0. The set D consists of agents that choose action right with some probability p, which may differ across agents. In the following, we investigate the effect of increasing the number of teams on the regret in the setting with multiple teams of agents. See Appendix G for additional simulations studying the impact of the action space size and the number of agents in each team on the total regret.

## 6.2.1 Results

We consider N independent teams of agents, each consisting of two agents that choose action right with probability p and 1 − p, respectively, where p is chosen uniformly at random for each team. We run the simulations for N ∈ {3, 4, . . . , 10} teams of agents. For each N, we run both UCRL2-MC and UCRL2 for 20,000 episodes and repeat each experiment 5 times. Figure 7 (a) summarizes our results, showing the advantage of the shared confidence bounds on the environment transition probabilities in our algorithm over its problem-agnostic counterpart. To better illustrate the performance of UCRL2-MC, we also run an experiment with N = 100 teams of agents for 10,000 episodes and compare the total regret of our algorithm to that of UCRL2. Figure 7 (b) shows that our algorithm significantly outperforms UCRL2.
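For reference, a minimal sketch of how one such team can be instantiated; the names and the 0/1 action encoding are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_riverswim_team():
    """Instantiate one team of the setup above: two agents that take action
    `right` with probability p and 1 - p, respectively, where p ~ Uniform(0, 1)
    is drawn once per team. Action encoding (0 = left, 1 = right) is ours."""
    p = rng.uniform()
    make_agent = lambda prob_right: (lambda state: int(rng.uniform() < prob_right))
    return [make_agent(p), make_agent(1.0 - p)]

team = make_riverswim_team()
a = team[0](state=2)  # agent 0 picks an action in state s_3 (0-indexed)
```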

## 7 Conclusions And Future Work

We have formally defined the problem of learning to switch control among agents in a team via a 2-layer Markov decision process and then developed UCRL2-MC, an online learning algorithm with desirable provable guarantees. Moreover, we have performed a variety of simulation experiments on the standard RiverSwim task and obstacle avoidance to illustrate our theoretical results and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms. Our work opens up many interesting avenues for future work. For example, we have assumed that the agents' policies are fixed. However, there are reasons to believe that simultaneously optimizing the agents' policies and the switching policy may lead to superior performance (De et al., 2020; 2021; Wilder et al., 2020; Wu et al., 2020). In our work, we have assumed that the state space is discrete and the horizon is finite. It would be very interesting to lift these assumptions and develop approximate value iteration methods to solve the learning to switch problem. Finally, it would be interesting to evaluate our algorithm using real human agents in a variety of tasks.

Acknowledgments. Gomez-Rodriguez acknowledges support from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 945719).

## References

Samuel Barrett and Peter Stone. An analysis framework for ad hoc teamwork tasks. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 357–364, 2012.

P. Bartlett and M. Wegkamp. Classification with a reject option using a hinge loss. *JMLR*, 2008.

K. Brookhuis, D. De Waard, and W. Janssen. Behavioural impacts of advanced driver assistance systems–an overview. *European Journal of Transport and Infrastructure Research*, 1(3), 2001.

Daniel S Brown and Scott Niekum. Machine teaching for inverse reinforcement learning: Algorithms and applications. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 7749–7758, 2019.

C. Cortes, G. DeSalvo, and M. Mohri. Learning with rejection. In *ALT*, 2016.

Mary Czerwinski, Edward Cutrell, and Eric Horvitz. Instant messaging and interruption: Influence of task type on performance. In *OZCHI 2000 conference proceedings*, volume 356, pp. 361–367, 2000.

Nathaniel D. Daw and Peter Dayan. The algorithmic anatomy of model-based evaluation. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1655):20130478, 2014.

A. De, P. Koley, N. Ganguly, and M. Gomez-Rodriguez. Regression under human assistance. In *AAAI*, 2020.

Abir De, Nastaran Okati, Ali Zarezade, and Manuel Gomez-Rodriguez. Classification under human assistance. In *AAAI*, 2021.

European Parliament. Regulation (EC) No 561/2006. *http://data.europa.eu/eli/reg/2006/561/2015-03-02*,
2006.

R. Everett and S. Roberts. Learning against non-stationary agents with opponent modelling and deep reinforcement learning. In *2018 AAAI Spring Symposium Series*, 2018.

Y. Geifman and R. El-Yaniv. Selectivenet: A deep neural network with an integrated reject option. *arXiv* preprint arXiv:1901.09192, 2019.

Y. Geifman, G. Uziel, and R. El-Yaniv. Bias-reduced uncertainty estimation for deep neural classifiers. In ICLR, 2018.

A. Ghosh, S. Tschiatschek, H. Mahdavi, and A. Singla. Towards deployment of robust cooperative ai agents:
An algorithmic framework for learning adaptive policies. In *AAMAS*, 2020.

Aditya Gopalan and Shie Mannor. Thompson sampling for learning parameterized markov decision processes. In *Conference on Learning Theory*, pp. 861–898, 2015.

A. Grover, M. Al-Shedivat, J. Gupta, Y. Burda, and H. Edwards. Learning policy representations in multiagent systems. In *ICML*, 2018.

D. Hadfield-Menell, S. Russell, P. Abbeel, and A. Dragan. Cooperative inverse reinforcement learning. In NIPS, 2016.

L. Haug, S. Tschiatschek, and A. Singla. Teaching inverse reinforcement learners via features and demonstrations. In *NeurIPS*, 2018.

Eric Horvitz and Johnson Apacible. Learning and reasoning about interruption. In *Proceedings of the 5th* international conference on Multimodal interfaces, pp. 20–27, 2003.

Shamsi T Iqbal and Brian P Bailey. Understanding and developing models for detecting and differentiating breakpoints during interactive tasks. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 697–706, 2007.

Alexis Jacq, Johan Ferret, Olivier Pietquin, and Matthieu Geist. Lazy-mdps: Towards interpretable reinforcement learning by learning when to act. In *AAMAS*, 2022.

T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. *Journal of* Machine Learning Research, 2010.

Christian P Janssen, Shamsi T Iqbal, Andrew L Kun, and Stella F Donker. Interrupted by my car? Implications of interruption and interleaving research for automated vehicles. *International Journal of Human-Computer Studies*, 130:221–233, 2019.

Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, and Adish Singla. Interactive teaching algorithms for inverse reinforcement learning. In *IJCAI*, 2019.

Kyle Kotowick and Julie Shah. Modality switching for mitigation of sensory adaptation and habituation in personal navigation systems. In *23rd International Conference on Intelligent User Interfaces*, pp. 115–127, 2018.

Z. Liu, Z. Wang, P. Liang, R. Salakhutdinov, L. Morency, and M. Ueda. Deep gamblers: Learning to abstain with portfolio theory. In *NeurIPS*, 2019.

C. Macadam. Understanding and modeling the human driver. *Vehicle system dynamics*, 40(1-3):101–134, 2003.

O. Macindoe, L. Kaelbling, and T. Lozano-Pérez. Pomcop: Belief space planning for sidekicks in cooperative games. In *AIIDE*, 2012.

Catharine L. R. McGhan, Ali Nasir, and Ella M. Atkins. Human intent prediction using markov decision processes. *Journal of Aerospace Information Systems*, 12(5):393–397, 2015.

V. Mnih et al. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529, 2015.

Salama A Mostafa, Mohd Sharifuddin Ahmad, and Aida Mustapha. Adjustable autonomy: a systematic literature review. *Artificial Intelligence Review*, 51(2):149–186, 2019.

Hussein Mozannar and David Sontag. Consistent estimators for learning to defer to an expert. In *ICML*,
2020.

S. Nikolaidis, R. Ramakrishnan, K. Gu, and J. Shah. Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In HRI, 2015.

S. Nikolaidis, J. Forlizzi, D. Hsu, J. Shah, and S. Srinivasa. Mathematical models of adaptation in human-robot collaboration. *arXiv preprint arXiv:1707.02586*, 2017.

Ian Osband and Benjamin Van Roy. Near-optimal reinforcement learning in factored mdps. In Advances in Neural Information Processing Systems, pp. 604–612, 2014.

Ian Osband, Daniel Russo, and Benjamin Van Roy. (more) efficient reinforcement learning via posterior sampling. In *Advances in Neural Information Processing Systems*, pp. 3003–3011, 2013.

Goran Radanovic, Rati Devidze, David C. Parkes, and Adish Singla. Learning to collaborate in markov decision processes. In *ICML*, 2019.

M. Raghu, K. Blumer, G. Corrado, J. Kleinberg, Z. Obermeyer, and S. Mullainathan. The algorithmic automation problem: Prediction, triage, and human effort. *arXiv preprint arXiv:1903.12220*, 2019a.

M. Raghu, K. Blumer, R. Sayres, Z. Obermeyer, B. Kleinberg, S. Mullainathan, and J. Kleinberg. Direct uncertainty prediction for medical second opinions. In *ICML*, 2019b.

H. Ramaswamy, A. Tewari, and S. Agarwal. Consistent algorithms for multiclass classification with an abstain option. *Electronic J. of Statistics*, 2018.

Siddharth Reddy, Anca D Dragan, and Sergey Levine. Shared autonomy via deep reinforcement learning. *arXiv preprint arXiv:1802.01744*, 2018.

Shubhanshu Shekhar, Mohammad Ghavamzadeh, and Tara Javidi. Active learning for classification with abstention. *IEEE Journal on Selected Areas in Information Theory*, 2(2):705–719, 2021.

D. Silver et al. Mastering the game of go with deep neural networks and tree search. *Nature*, 529(7587):484, 2016.

D. Silver et al. Mastering the game of go without human knowledge. *Nature*, 550(7676):354, 2017.

Peter Stone, Gal A Kaminka, Sarit Kraus, and Jeffrey S Rosenschein. Ad hoc autonomous agent teams:
Collaboration without pre-coordination. In *Twenty-Fourth AAAI Conference on Artificial Intelligence*,
2010.

A. Strehl and M. Littman. An analysis of model-based interval estimation for Markov decision processes. *Journal of Computer and System Sciences*, 74(8):1309–1331, 2008.

DJ Strouse, Kevin McKee, Matt Botvinick, Edward Hughes, and Richard Everett. Collaborating with humans without human data. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. *Artificial intelligence*, 112(1-2):181–211, 1999.

Matthew E Taylor, Halit Bener Suay, and Sonia Chernova. Integrating reinforcement learning with human demonstrations of varying ability. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 617–624. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof. Combating label noise in deep learning using abstention. *arXiv preprint arXiv:1905.10964*, 2019.

Lisa Torrey and Matthew Taylor. Teaching on a budget: Agents advising agents in reinforcement learning. In *Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems*, pp. 1053–1060, 2013.

James T. Townsend, Kam M. Silva, Jesse Spencer-Smith, and Michael J. Wenger. Exploring the relations between categorization and decision making with regard to realistic face stimuli. *Pragmatics & Cognition*, 8(1):83–105, 2000.

S. Tschiatschek, A. Ghosh, L. Haug, R. Devidze, and A. Singla. Learner-aware teaching: Inverse reinforcement learning with preferences and constraints. In *NeurIPS*, 2019.

O. Vinyals et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. *Nature*, pp. 1–5, 2019.

Thomas J Walsh, Daniel K Hewlett, and Clayton T Morrison. Blending autonomous exploration and apprenticeship learning. In *Advances in Neural Information Processing Systems*, pp. 2258–2266, 2011.

Bryan Wilder, Eric Horvitz, and Ece Kamar. Learning to complement humans. In *IJCAI*, 2020.

H. Wilson and P. Daugherty. Collaborative intelligence: Humans and AI are joining forces. *Harvard Business Review*, 2018.

Bohan Wu, Jayesh K Gupta, and Mykel Kochenderfer. Model primitives for hierarchical lifelong reinforcement learning. *Autonomous Agents and Multi-Agent Systems*, 34(1):1–38, 2020.

Y. Zheng, Z. Meng, J. Hao, Z. Zhang, T. Yang, and C. Fan. A deep bayesian policy reuse approach against non-stationary agents. In *NeurIPS*, 2018.

## A Proofs

## A.1 Proof Of Theorem 1

We first define $\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,t^+} := \times_{s\in\mathcal{S},\,d\in\mathcal{D},\,t'\in\{t,\ldots,L\}}\,\mathcal{P}^k_{\cdot\,|\,d,s,t'}$, $\mathcal{P}^k_{\cdot\,|\,\cdot,t^+} := \times_{s\in\mathcal{S},\,a\in\mathcal{A},\,t'\in\{t,\ldots,L\}}\,\mathcal{P}^k_{\cdot\,|\,s,a,t'}$ and $\pi_{t^+} = \{\pi_t,\ldots,\pi_L\}$. Next, we derive a lower bound on the optimistic value function $v^k_t(s,d)$ as follows:

$$
\begin{aligned}
v^k_t(s,d) &= \min_{\pi}\,\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\,\min_{P\in\mathcal{P}^k} V^\pi_{t\,|\,P_\mathcal{D},P}(s,d)
= \min_{\pi_{t^+}}\,\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\,\min_{P\in\mathcal{P}^k} V^\pi_{t\,|\,P_\mathcal{D},P}(s,d)\\
&\overset{(i)}{=} \min_{\pi_t(s,d)}\;\min_{p_{\pi_t(s,d)}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,\pi_t(s,d),s,t}}\;\min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^k_{\cdot\,|\,s,\cdot,t}}\;\min_{\pi_{(t+1)^+},\,P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+},\,P\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+}}\\
&\qquad\Big[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Big(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} V^\pi_{t+1\,|\,P_\mathcal{D},P}(s',\pi_t(s,d))\Big)\Big]\\
&\overset{(ii)}{\geq} \min_{\pi_t(s,d)}\;\min_{p_{\pi_t(s,d)}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,\pi_t(s,d),s,t}}\;\min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^k_{\cdot\,|\,s,\cdot,t}}\\
&\qquad\Bigg[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Bigg(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)}\Big[\min_{\pi_{(t+1)^+}}\,\min_{P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+}}\,\min_{P\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+}} V^\pi_{t+1\,|\,P_\mathcal{D},P}(s',\pi_t(s,d))\Big]\Bigg)\Bigg]\\
&= \min_{d_t}\Bigg[c_{d_t}(s,d) + \min_{p_{d_t}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,d_t,s,t}} \sum_{a\in\mathcal{A}} p_{d_t}(a|s,t)\cdot\Bigg(c_e(s,a) + \min_{p(\cdot|s,a,t)\in\mathcal{P}^k_{\cdot\,|\,s,a,t}} \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} v^k_{t+1}(s',d_t)\Bigg)\Bigg],
\end{aligned}
$$

where (i) follows from Lemma 8 and (ii) follows from the fact that $\min_a \mathbb{E}[X(a)] \geq \mathbb{E}[\min_a X(a)]$. Next, we provide an upper bound on the optimistic value function $v^k_t(s,d)$ as follows:

$$
\begin{aligned}
v^k_t(s,d) &= \min_{\pi}\,\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\,\min_{P\in\mathcal{P}^k} V^\pi_{t\,|\,P_\mathcal{D},P}(s,d)\\
&\overset{(i)}{=} \min_{\pi_t(s,d)}\;\min_{p_{\pi_t(s,d)}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,\pi_t(s,d),s,t}}\;\min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^k_{\cdot\,|\,s,\cdot,t}}\;\min_{\pi_{(t+1)^+},\,P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+},\,P\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+}}\\
&\qquad\Big[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Big(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} V^\pi_{t+1\,|\,P_\mathcal{D},P}(s',\pi_t(s,d))\Big)\Big]\\
&\overset{(ii)}{\leq} \min_{\pi_t(s,d)}\;\min_{p_{\pi_t(s,d)}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,\pi_t(s,d),s,t}}\;\min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^k_{\cdot\,|\,s,\cdot,t}}\Big[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Big(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} V^{\pi^*}_{t+1\,|\,P^*_\mathcal{D},P^*}(s',\pi_t(s,d))\Big)\Big]\\
&\overset{(iii)}{=} \min_{\pi_t(s,d)}\;\min_{p_{\pi_t(s,d)}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,\pi_t(s,d),s,t}}\;\min_{p(\cdot|s,\cdot,t)\in\mathcal{P}^k_{\cdot\,|\,s,\cdot,t}}\Big[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Big(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} v^k_{t+1}(s',\pi_t(s,d))\Big)\Big]\\
&= \min_{d_t}\Bigg[c_{d_t}(s,d) + \min_{p_{d_t}(\cdot|s,t)\in\mathcal{P}^k_{\cdot\,|\,d_t,s,t}} \sum_{a\in\mathcal{A}} p_{d_t}(a|s,t)\cdot\Bigg(c_e(s,a) + \min_{p(\cdot|s,a,t)\in\mathcal{P}^k_{\cdot\,|\,s,a,t}} \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} v^k_{t+1}(s',d_t)\Bigg)\Bigg].
\end{aligned}
$$

Here, (i) follows from Lemma 8, (ii) follows from the fact that

$$
\begin{aligned}
\min_{\pi_{(t+1)^+},\,P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+},\,P\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+}}&\Big[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Big(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} V^\pi_{t+1\,|\,P_\mathcal{D},P}(s',\pi_t(s,d))\Big)\Big]\\
&\leq \Big[c_{\pi_t(s,d)}(s,d) + \mathbb{E}_{a\sim p_{\pi_t(s,d)}(\cdot\,|\,s,t)}\Big(c_e(s,a) + \mathbb{E}_{s'\sim p(\cdot\,|\,s,a,t)} V^\pi_{t+1\,|\,P_\mathcal{D},P}(s',\pi_t(s,d))\Big)\Big]\\
&\qquad\forall\,\pi,\;P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+},\;P\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+},
\end{aligned}\tag{17}
$$

and if we set $\pi_{(t+1)^+} = \{\pi^*_{t+1},\ldots,\pi^*_L\}$, $P_\mathcal{D} = P^*_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+}$ and $P = P^*\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+}$, where

$$
\{\pi^*_{t+1},\ldots,\pi^*_L\},\,P^*_\mathcal{D},\,P^* = \operatorname*{argmin}_{\pi_{(t+1)^+},\,P_\mathcal{D}\in\mathcal{P}^k_{\mathcal{D}\,|\,\cdot,(t+1)^+},\,P\in\mathcal{P}^k_{\cdot\,|\,\cdot,(t+1)^+}} V^\pi_{t+1\,|\,P_\mathcal{D},P}(s',\pi_t(s,d)),\tag{18}
$$
then equality (iii) holds. Since the upper and lower bounds are the same, we can conclude that the optimistic value function satisfies Eq. 12, which completes the proof.
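The recursion above can be evaluated by backward induction over $t$, with each backup solving small inner minimizations over $\ell_1$ confidence balls. The following Python sketch is our own illustration of a single backup under assumed array shapes (the helper `min_over_l1_ball`, the variable names, and the flattened cost vector `c_switch` are all hypothetical), not the implementation used in the paper.

```python
import numpy as np

def min_over_l1_ball(p_hat, values, beta):
    """Minimize sum_i p[i] * values[i] over distributions p such that
    ||p - p_hat||_1 <= beta: move up to beta/2 extra mass onto the
    smallest-value outcome, then trim mass from the largest-value
    outcomes until p sums to one (cf. Lemma 7)."""
    p = np.asarray(p_hat, dtype=float).copy()
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)                     # ascending by value
    p[order[0]] = min(1.0, p[order[0]] + beta / 2.0)
    for i in order[1:][::-1]:                      # largest values first
        excess = p.sum() - 1.0
        if excess <= 0.0:
            break
        p[i] = max(0.0, p[i] - excess)
    return float(p @ values)

def optimistic_backup(c_switch, c_e, p_hat_D, beta_D, p_hat, beta, v_next):
    """One optimistic backup at a fixed state s and time t:
      v_t(s, d) = min_{d'} [ c_{d'}(s, d)
                   + min_{p_{d'} in ball} sum_a p_{d'}(a|s)
                     ( c_e(s, a) + min_{p in ball} E_{s'} v_{t+1}(s', d') ) ].
    Assumed (illustrative) shapes: c_switch[d'] = c_c(d') + c_x(d', d),
    c_e[a], p_hat_D[d', a], beta_D[d'], p_hat[a, s'], beta[a], v_next[s', d']."""
    n_d, n_a = p_hat_D.shape
    v_d = np.empty(n_d)
    for d_prime in range(n_d):
        # optimistic continuation value of each machine action a
        q = np.array([c_e[a] + min_over_l1_ball(p_hat[a], v_next[:, d_prime], beta[a])
                      for a in range(n_a)])
        # optimistic choice of the agent's action distribution p_{d'}(.|s)
        v_d[d_prime] = min_over_l1_ball(p_hat_D[d_prime], q, beta_D[d_prime])
    return float(np.min(c_switch + v_d))
```

In this sketch, one such backup would be performed for every state and time step, sweeping backwards from $t = L$ to $t = 1$.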

## A.2 Proof Of Theorem 2

In this proof, we assume that $c_e(s,a) + c_c(d) + c_x(d,d') < 1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$. Throughout the proof, we will omit the subscripts $P^*_\mathcal{D}, P^*$ in $V_{t\,|\,P^*_\mathcal{D},P^*}$ and write $V_t$ instead in the case of the true agent policies $P^*_\mathcal{D}$ and true transition probabilities $P^*$. Then, we define the following quantities:

$$
P^k_\mathcal{D} = \operatorname*{argmin}_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}(\delta)}\,\min_{P\in\mathcal{P}^k(\delta)} V^{\pi^k}_{1\,|\,P_\mathcal{D},P}(s_1,d_0),\tag{19}
$$
$$
P^k = \operatorname*{argmin}_{P\in\mathcal{P}^k(\delta)} V^{\pi^k}_{1\,|\,P^k_\mathcal{D},P}(s_1,d_0),\tag{20}
$$
$$
\Delta_k = V^{\pi^k}_1(s_1,d_0) - V^{\pi^*}_1(s_1,d_0),\tag{21}
$$

where, recall from Eq. 7, $\pi^k = \operatorname*{argmin}_{\pi}\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\min_{P\in\mathcal{P}^k} V^\pi_{1\,|\,P_\mathcal{D},P}(s_1,d_0)$, and $\Delta_k$ denotes the regret of episode $k$. Hence, we have

$$
R(T) = \sum_{k=1}^K\Delta_k = \sum_{k=1}^K\Delta_k\,\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k) + \sum_{k=1}^K\Delta_k\,\mathbb{I}(P^*_\mathcal{D}\not\in\mathcal{P}^k_\mathcal{D}\vee P^*\not\in\mathcal{P}^k).\tag{22}
$$

Next, we split the analysis into two parts. We first bound $\sum_{k=1}^K\Delta_k\,\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)$ and then bound $\sum_{k=1}^K\Delta_k\,\mathbb{I}(P^*_\mathcal{D}\not\in\mathcal{P}^k_\mathcal{D}\vee P^*\not\in\mathcal{P}^k)$.

- *Computing the bound on* $\sum_{k=1}^K\Delta_k\,\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k)$

First, we note that

$$
\Delta_k = V^{\pi^k}_1(s_1,d_0) - V^{\pi^*}_1(s_1,d_0) \leq V^{\pi^k}_1(s_1,d_0) - V^{\pi^k}_{1\,|\,P^k_\mathcal{D},P^k}(s_1,d_0).\tag{23}
$$

This is because

$$
V^{\pi^k}_{1\,|\,P^k_\mathcal{D},P^k}(s_1,d_0) \overset{(i)}{=} \min_{\pi}\,\min_{P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}}\,\min_{P\in\mathcal{P}^k} V^\pi_{1\,|\,P_\mathcal{D},P}(s_1,d_0) \overset{(ii)}{\leq} \min_{\pi} V^\pi_{1\,|\,P^*_\mathcal{D},P^*}(s_1,d_0) = V^{\pi^*}_1(s_1,d_0),\tag{24}
$$

where (i) follows from Eqs. 19 and 20, and (ii) holds because the true transition probabilities satisfy $P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}$ and $P^*\in\mathcal{P}^k$. Next, we use Lemma 4 (Appendix B) to bound $\sum_{k=1}^K\big(V^{\pi^k}_1(s_1,d_0) - V^{\pi^k}_{1\,|\,P^k_\mathcal{D},P^k}(s_1,d_0)\big)$:

$$
\sum_{k=1}^K\Big(V^{\pi^k}_1(s_1,d_0) - V^{\pi^k}_{1\,|\,P^k_\mathcal{D},P^k}(s_1,d_0)\Big) \leq \sum_{k=1}^K L\,\mathbb{E}\left[\sum_{t=1}^L\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\} + \sum_{t=1}^L\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\, s_1, d_0\right].\tag{25}
$$

Since, by assumption, $c_e(s,a) + c_c(d) + c_x(d,d') < 1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$, the worst-case regret is bounded by $T$. Therefore, we have that:

$$
\begin{aligned}
\sum_{k=1}^K\Delta_k\,\mathbb{I}(P^*_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}\wedge P^*\in\mathcal{P}^k) &\leq \min\left\{T,\;\sum_{k=1}^K L\,\mathbb{E}\left[\sum_{t=1}^L\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right] + \sum_{k=1}^K L\,\mathbb{E}\left[\sum_{t=1}^L\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\}\\
&\leq \min\left\{T,\;\sum_{k=1}^K L\,\mathbb{E}\left[\sum_{t=1}^L\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\} + \min\left\{T,\;\sum_{k=1}^K L\,\mathbb{E}\left[\sum_{t=1}^L\min\{1,\beta^k(s_t,a_t,\delta)\}\,\middle|\,s_1,d_0\right]\right\},
\end{aligned}\tag{26}
$$
where the last inequality follows from Lemma 9. Now, we aim to bound the first term on the RHS of the above inequality.

$$
\begin{aligned}
\sum_{k=1}^K L\,\mathbb{E}\left[\sum_{t=1}^L\min\{1,\beta^k_\mathcal{D}(s_t,d_t,\delta)\}\,\middle|\,s_1,d_0\right]
&\overset{(i)}{=} L\sum_{k=1}^K \mathbb{E}\left[\sum_{t=1}^L\min\left\{1,\sqrt{\frac{2\log\left(\frac{((k-1)L)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}{\max\{1,N_k(s_t,d_t)\}}}\right\}\,\middle|\,s_1,d_0\right]\\
&\overset{(ii)}{\leq} L\sum_{k=1}^K \mathbb{E}\left[\sum_{t=1}^L\min\left\{1,\sqrt{\frac{2\log\left(\frac{(KL)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}{\max\{1,N_k(s_t,d_t)\}}}\right\}\,\middle|\,s_1,d_0\right]\\
&\overset{(iii)}{\leq} 2\sqrt{2}\,L\sqrt{2\log\left(\frac{(KL)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}\sqrt{|\mathcal{S}||\mathcal{D}|KL} + 2L^2|\mathcal{S}||\mathcal{D}|
\end{aligned}\tag{27}
$$
$$
\leq 2\sqrt{2}\,L\sqrt{14|\mathcal{A}|\log\left(\frac{KL|\mathcal{S}||\mathcal{D}|}{\delta}\right)}\sqrt{|\mathcal{S}||\mathcal{D}|KL} + 2L^2|\mathcal{S}||\mathcal{D}|
= \sqrt{112}\,L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|KL\log\left(\frac{KL|\mathcal{S}||\mathcal{D}|}{\delta}\right)} + 2L^2|\mathcal{S}||\mathcal{D}|,\tag{28}
$$

where (i) follows by replacing $\beta^k_\mathcal{D}(s_t,d_t,\delta)$ with its definition, (ii) follows from the fact that $(k-1)L\leq KL$, and (iii) follows from Lemma 5, in which we set $\mathcal{W}:=\mathcal{S}\times\mathcal{D}$, $c:=\sqrt{2\log\left(\frac{(KL)^7|\mathcal{S}||\mathcal{D}|2^{|\mathcal{A}|+1}}{\delta}\right)}$ and $T_k=(w_{k,1},\ldots,w_{k,L}):=((s_1,d_1),\ldots,(s_L,d_L))$. Now, due to Eq. 28, we have the following.

$$\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}\,\middle|\,s_{1},d_{0}\right]\right\}\leq\min\left\{T,\sqrt{112}\,L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+2L^{2}|\mathcal{S}||\mathcal{D}|\right\}\tag{29}$$

Now, if $T \leq 2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)$, then

$$
T^2 \leq 2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right) \implies T \leq \sqrt{2}\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)},
$$

and if $T > 2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)$, then

$$
2L^2|\mathcal{S}||\mathcal{D}| < \frac{\sqrt{2L^2|\mathcal{S}||\mathcal{A}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}}{|\mathcal{A}|\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)} \leq \sqrt{2}\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}.\tag{30}
$$

Thus, the minimum in Eq. 29 is less than

$$
(\sqrt{2}+\sqrt{112})\,L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|\,T\log\left(\frac{|\mathcal{S}||\mathcal{D}|T}{\delta}\right)} < 12L\sqrt{|\mathcal{S}||\mathcal{A}||\mathcal{D}|\,T\log\left(\frac{|\mathcal{S}||\mathcal{D}|T}{\delta}\right)}.\tag{31}
$$
A similar analysis can be done for the second term of the RHS of Eq. 26, which would show that,

$$\min\left\{T,\sum_{k=1}^{K}L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\middle|\,s_{1},d_{0}\right]\right\}\leq12L|\mathcal{S}|\sqrt{|\mathcal{A}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}.\tag{32}$$
Combining Eqs. 26, 31 and 32, we can bound the first term of the total regret as follows:

$$\sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})\leq12L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}.\tag{33}$$

- *Computing the bound on* $\sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})$
Here, we use a similar approach to Jaksch et al. (2010). Note that

$$\sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})=\sum_{k=1}^{\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})+\sum_{k=\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor+1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k}).\tag{34}$$

Now, our goal is to show that the second term on the RHS of the above equation vanishes with high probability. If we succeed, then, with high probability, $\sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})$ equals the first term on the RHS, and we are then done because

$$\sum_{k=1}^{\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})\leq\sum_{k=1}^{\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor}\Delta_{k}\overset{(i)}{\leq}\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor L\leq\sqrt{KL},\tag{35}$$

where (i) follows from the fact that $\Delta_k\leq L$, since we assumed that the cost of each step satisfies $c_e(s,a)+c_c(d)+c_x(d,d')\leq1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$, and $d,d'\in\mathcal{D}$.

To prove that $\sum_{k=\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor+1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})=0$ with high probability, we proceed as follows. By applying Lemma 6 to $P_{\mathcal{D}}^{*}$ and $P^{*}$, we have

$$\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k})\leq\frac{\delta}{2t_{k}^{6}},\qquad\Pr(P^{*}\not\in\mathcal{P}^{k})\leq\frac{\delta}{2t_{k}^{6}}.\tag{36}$$

Thus,

$$\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})\leq\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k})+\Pr(P^{*}\not\in\mathcal{P}^{k})\leq\frac{\delta}{t_{k}^{6}},\tag{37}$$

where tk = (k − 1)L is the end time of episode k − 1. Therefore, it follows that

$$
\begin{aligned}
\Pr\left(\sum_{k=\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor+1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})=0\right)
&=\Pr\left(\forall k:\left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor+1\leq k\leq K;\;P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k}\right)\\
&=1-\Pr\left(\exists k:\left\lfloor\sqrt{\tfrac{K}{L}}\right\rfloor+1\leq k\leq K;\;P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k}\right)\\
&\overset{(i)}{\geq}1-\sum_{k=\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor+1}^{K}\Pr(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})\\
&\overset{(ii)}{\geq}1-\sum_{k=\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor+1}^{K}\frac{\delta}{t_{k}^{6}}
\overset{(iii)}{\geq}1-\sum_{t=\sqrt{KL}}^{KL}\frac{\delta}{t^{6}}\geq1-\int_{\sqrt{KL}}^{KL}\frac{\delta}{t^{6}}\,dt\geq1-\frac{\delta}{5(KL)^{5/4}},
\end{aligned}\tag{38}
$$

where (i) follows from a union bound, (ii) follows from Eq. 37, and (iii) holds using that $t_k=(k-1)L$. Hence, with probability at least $1-\frac{\delta}{5(KL)^{5/4}}$, we have that

$$\sum_{k=\left\lfloor\sqrt{\frac{K}{L}}\right\rfloor+1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})=0.\tag{39}$$

If we combine the above equation and Eq. 35, we can conclude that, with probability at least $1-\frac{\delta}{5T^{5/4}}$, we have that

$$\sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})\leq\sqrt{T},\tag{40}$$

where T = KL. Next, if we combine Eqs. 33 and 40, we have

$$
\begin{aligned}
R(T) &= \sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\in\mathcal{P}_{\mathcal{D}}^{k}\wedge P^{*}\in\mathcal{P}^{k})+\sum_{k=1}^{K}\Delta_{k}\,\mathbb{I}(P_{\mathcal{D}}^{*}\not\in\mathcal{P}_{\mathcal{D}}^{k}\vee P^{*}\not\in\mathcal{P}^{k})\\
&\leq 12L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}+\sqrt{T}\\
&\leq 13L\sqrt{|\mathcal{A}||\mathcal{S}||\mathcal{D}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{D}|}{\delta}\right)}+12L|\mathcal{S}|\sqrt{|\mathcal{A}|\,T\log\left(\frac{T|\mathcal{S}||\mathcal{A}|}{\delta}\right)}.
\end{aligned}\tag{41}
$$

Finally, since $\sum_{T=1}^{\infty}\frac{\delta}{5T^{5/4}}\leq\delta$, the above inequality holds with probability at least $1-\delta$. This concludes the proof.

## B Useful Lemmas

Lemma 4. *Suppose $P_\mathcal{D}$ and $P$ are the true transitions and $P_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}$, $P\in\mathcal{P}^k$ for episode $k$. Then, for an arbitrary policy $\pi^k$ and arbitrary $P^k_\mathcal{D}\in\mathcal{P}^k_\mathcal{D}$, $P^k\in\mathcal{P}^k$, it holds that*

$$V^{\pi^k}_{1\,|\,P_\mathcal{D},P}(s,d)-V^{\pi^k}_{1\,|\,P^k_\mathcal{D},P^k}(s,d)\leq L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\middle|\,s_{1}=s,\,d_{0}=d\right],\tag{42}$$

*where the expectation is taken over the MDP with policy $\pi^k$ under the true transitions $P_\mathcal{D}$ and $P$.*

Proof. For ease of notation, let $\bar{v}^k_t := V^{\pi^k}_{t\,|\,P_\mathcal{D},P}$, $\bar{v}^k_{t\,|\,k} := V^{\pi^k}_{t\,|\,P^k_\mathcal{D},P^k}$ and $c^\pi_t(s,d) = c_{\pi^k_t(s,d)}(s,d)$. We also define $d' = \pi^k_1(s,d)$. From Eq. 68, we have

$$\bar{v}^k_1(s,d)=c^\pi_1(s,d)+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\cdot\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\cdot\bar{v}^k_2(s',d')\right),\tag{43}$$

$$\bar{v}^k_{1\,|\,k}(s,d)=c^\pi_1(s,d)+\sum_{a\in\mathcal{A}}p^k_{\pi^k_1(s,d)}(a\,|\,s)\cdot\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'\,|\,s,a)\cdot\bar{v}^k_{2\,|\,k}(s',d')\right).\tag{44}$$
Now, using the above equations, we rewrite $\bar{v}^k_1(s,d)-\bar{v}^k_{1\,|\,k}(s,d)$ as

$$
\begin{aligned}
\bar{v}^k_1(s,d)-\bar{v}^k_{1\,|\,k}(s,d)
&=\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\,\bar{v}^k_2(s',d')\right)-\sum_{a\in\mathcal{A}}p^k_{\pi^k_1(s,d)}(a\,|\,s)\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'\,|\,s,a)\,\bar{v}^k_{2\,|\,k}(s',d')\right)\\
&\overset{(i)}{=}\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a\,|\,s)-p^k_{\pi^k_1(s,d)}(a\,|\,s)\right]\underbrace{\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'\,|\,s,a)\,\bar{v}^k_{2\,|\,k}(s',d')\right)}_{\leq L}\\
&\qquad+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\sum_{s'\in\mathcal{S}}\left[p(s'\,|\,s,a)\,\bar{v}^k_2(s',d')-p^k(s'\,|\,s,a)\,\bar{v}^k_{2\,|\,k}(s',d')\right]\\
&\overset{(ii)}{\leq}L\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a\,|\,s)-p^k_{\pi^k_1(s,d)}(a\,|\,s)\right]+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\sum_{s'\in\mathcal{S}}\left[p(s'\,|\,s,a)\,\bar{v}^k_2(s',d')-p^k(s'\,|\,s,a)\,\bar{v}^k_{2\,|\,k}(s',d')\right]\\
&\overset{(iii)}{=}L\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a\,|\,s)-p^k_{\pi^k_1(s,d)}(a\,|\,s)\right]+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\left[\bar{v}^k_2(s',d')-\bar{v}^k_{2\,|\,k}(s',d')\right]\\
&\qquad+\sum_{a\in\mathcal{A}}p_{\pi^k_1(s,d)}(a\,|\,s)\sum_{s'\in\mathcal{S}}\left[p(s'\,|\,s,a)-p^k(s'\,|\,s,a)\right]\underbrace{\bar{v}^k_{2\,|\,k}(s',d')}_{\leq L}\\
&\overset{(iv)}{\leq}\mathbb{E}_{a\sim p_{\pi^k_1(s,d)}(\cdot\,|\,s),\,s'\sim p(\cdot\,|\,s,a)}\left[\bar{v}^k_2(s',d')-\bar{v}^k_{2\,|\,k}(s',d')\right]+L\,\mathbb{E}_{a\sim p_{\pi^k_1(s,d)}(\cdot\,|\,s)}\left[\sum_{s'\in\mathcal{S}}\left(p(s'\,|\,s,a)-p^k(s'\,|\,s,a)\right)\right]\\
&\qquad+L\sum_{a\in\mathcal{A}}\left[p_{\pi^k_1(s,d)}(a\,|\,s)-p^k_{\pi^k_1(s,d)}(a\,|\,s)\right],
\end{aligned}\tag{45}
$$
where (i) follows by adding and subtracting the term $p_{\pi^k_1(s,d)}(a\,|\,s)\left(c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'\,|\,s,a)\cdot\bar{v}^k_{2\,|\,k}(s',d')\right)$, and (ii) follows from the fact that $c_e(s,a)+\sum_{s'\in\mathcal{S}}p^k(s'\,|\,s,a)\cdot\bar{v}^k_{2\,|\,k}(s',d')\leq L$, since, by assumption, $c_e(s,a)+c_c(d)+c_x(d,d')<1$ for all $s\in\mathcal{S}$, $a\in\mathcal{A}$ and $d,d'\in\mathcal{D}$. Similarly, (iii) follows by adding and subtracting $p(s'\,|\,s,a)\,\bar{v}^k_{2\,|\,k}(s',d')$, and (iv) follows from the fact that $\bar{v}^k_{2\,|\,k}\leq L$. By assumption, both $P_\mathcal{D}$ and $P^k_\mathcal{D}$ lie in the confidence set $\mathcal{P}^k_\mathcal{D}(\delta)$, so

$$\sum_{a\in\mathcal{A}}\left[p_{\pi_{1}^{k}(s,d)}(a\,|\,s)-p_{\pi_{1}^{k}(s,d)}^{k}(a\,|\,s)\right]\leq\min\{1,\beta_{\mathcal{D}}^{k}(s,d'=\pi_{1}^{k}(s,d),\delta)\}.\tag{46}$$

Similarly,

$$\sum_{s'\in\mathcal{S}}\left[p(s'\,|\,s,a)-p^{k}(s'\,|\,s,a)\right]\leq\min\{1,\beta^{k}(s,a,\delta)\}.\tag{47}$$

If we combine Eq. 46 and Eq. 47 in Eq. 45, for all $s\in\mathcal{S}$, it holds that

$$
\begin{aligned}
\bar{v}_{1}^{k}(s,d)-\bar{v}_{1\,|\,k}^{k}(s,d)\leq\;&\mathbb{E}_{a\sim p_{\pi_{1}^{k}(s,d)}(\cdot\,|\,s),\,s'\sim p(\cdot\,|\,s,a)}\left[\bar{v}_{2}^{k}(s',d')-\bar{v}_{2\,|\,k}^{k}(s',d')\right]\\
&+L\,\mathbb{E}_{a\sim p_{\pi_{1}^{k}(s,d)}(\cdot\,|\,s)}\left[\min\{1,\beta^{k}(s,a,\delta)\}\right]\\
&+L\left[\min\{1,\beta_{\mathcal{D}}^{k}(s,d'=\pi_{1}^{k}(s,d),\delta)\}\right].
\end{aligned}\tag{48}
$$
Similarly, for all $s\in\mathcal{S}$, $d\in\mathcal{D}$, we can show

$$
\begin{aligned}
\bar{v}_{2}^{k}(s,d)-\bar{v}_{2\,|\,k}^{k}(s,d)\leq\;&\mathbb{E}_{a\sim p_{\pi_{2}^{k}(s,d)}(\cdot\,|\,s),\,s'\sim p(\cdot\,|\,s,a)}\left[\bar{v}_{3}^{k}(s',\pi_{2}^{k}(s,d))-\bar{v}_{3\,|\,k}^{k}(s',\pi_{2}^{k}(s,d))\right]\\
&+L\,\mathbb{E}_{a\sim p_{\pi_{2}^{k}(s,d)}(\cdot\,|\,s)}\left[\min\{1,\beta^{k}(s,a,\delta)\}\right]\\
&+L\left[\min\{1,\beta_{\mathcal{D}}^{k}(s,\pi_{2}^{k}(s,d),\delta)\}\right].
\end{aligned}\tag{49}
$$

Hence, by induction, we have

$$\bar{v}_{1}^{k}(s,d)-\bar{v}_{1\,|\,k}^{k}(s,d)\leq L\,\mathbb{E}\left[\sum_{t=1}^{L}\min\{1,\beta_{\mathcal{D}}^{k}(s_{t},d_{t},\delta)\}+\sum_{t=1}^{L}\min\{1,\beta^{k}(s_{t},a_{t},\delta)\}\,\middle|\,s_{1}=s,\,d_{0}=d\right],\tag{50}$$

where the expectation is taken over the MDP with policy $\pi^k$ under the true transitions $P_\mathcal{D}$ and $P$. $\square$
Lemma 5. *Let $\mathcal{W}$ be a finite set and $c$ be a constant. For $k\in[K]$, suppose $T_k=(w_{k,1},w_{k,2},\ldots,w_{k,H})$ is a random variable with distribution $P(\cdot\,|\,w_{k,1})$, where $w_{k,i}\in\mathcal{W}$. Then,*

$$\sum_{k=1}^{K}\mathbb{E}_{T_{k}\sim P(\cdot|w_{k,1})}\left[\sum_{t=1}^{H}\min\left\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\right\}\right]\leq2H|\mathcal{W}|+2\sqrt{2}c\sqrt{|\mathcal{W}|KH},\tag{51}$$

*with $N_k(w):=\sum_{j=1}^{k-1}\sum_{t=1}^{H}\mathbb{I}(w_{j,t}=w)$.*

Proof. The proof is adapted from Osband et al. (2013). We first note that

$$
\begin{aligned}
\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\min\left\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\right\}\right]
=\;&\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})\leq H)\min\left\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\right\}\right]\\
&+\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\min\left\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\right\}\right]\\
\leq\;&\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})\leq H)\cdot1\right]+\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\cdot\frac{c}{\sqrt{N_{k}(w_{k,t})}}\right].
\end{aligned}\tag{52}
$$

Then, we bound the first term of the above equation:

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})\leq H)\right]=\mathbb{E}\left[\sum_{w\in\mathcal{W}}\#\{\text{times }w\text{ is observed while }N_{k}(w)\leq H\}\right]\leq|\mathcal{W}|\cdot2H=2H|\mathcal{W}|.\tag{53}$$

To bound the second term, we first define $n_\tau(w)$ as the number of times $w$ has been observed in the first $\tau$ steps, *i.e.*, if we are at the $t$-th index of trajectory $T_k$, then $\tau=t_k+t$, where $t_k=(k-1)H$, and note that

$$n_{t_{k}+t}(w)\leq N_{k}(w)+t,\tag{54}$$

because we will observe $w$ at most $t\in\{1,\ldots,H\}$ times within trajectory $T_k$. Now, if $N_k(w)>H$, we have that

$$n_{t_{k}+t}(w)+1\leq N_{k}(w)+t+1\leq N_{k}(w)+H+1\leq2N_{k}(w).\tag{55}$$
Hence, we have

$$\mathbb{I}(N_{k}(w_{k,t})>H)\,(n_{t_{k}+t}(w_{k,t})+1)\leq2N_{k}(w_{k,t})\implies\frac{\mathbb{I}(N_{k}(w_{k,t})>H)}{N_{k}(w_{k,t})}\leq\frac{2}{n_{t_{k}+t}(w_{k,t})+1}.\tag{56}$$

Then, using the above equation, we can bound the second term in Eq. 52:

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\frac{c}{\sqrt{N_{k}(w_{k,t})}}\right]=\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}c\sqrt{\frac{\mathbb{I}(N_{k}(w_{k,t})>H)}{N_{k}(w_{k,t})}}\right]\overset{(i)}{\leq}\sqrt{2}c\,\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_{k}+t}(w_{k,t})+1}}\right],\tag{57}$$

where (i) follows from Eq. 56.

Next, we can further bound $\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_{k}+t}(w_{k,t})+1}}\right]$ as follows:

$$
\begin{aligned}
\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\sqrt{\frac{1}{n_{t_{k}+t}(w_{k,t})+1}}\right]
&=\mathbb{E}\left[\sum_{\tau=1}^{KH}\sqrt{\frac{1}{n_{\tau}(w_{\tau})+1}}\right]
\overset{(i)}{=}\mathbb{E}\left[\sum_{w\in\mathcal{W}}\sum_{\nu=0}^{N_{K+1}(w)-1}\sqrt{\frac{1}{\nu+1}}\right]\\
&\leq\sum_{w\in\mathcal{W}}\mathbb{E}\left[\int_{0}^{N_{K+1}(w)}\sqrt{\frac{1}{x}}\,dx\right]
=\sum_{w\in\mathcal{W}}\mathbb{E}\left[2\sqrt{N_{K+1}(w)}\right]\\
&\overset{(ii)}{\leq}\mathbb{E}\left[2\sqrt{|\mathcal{W}|\sum_{w\in\mathcal{W}}N_{K+1}(w)}\right]
\overset{(iii)}{=}\mathbb{E}\left[2\sqrt{|\mathcal{W}|KH}\right]=2\sqrt{|\mathcal{W}|KH},
\end{aligned}\tag{58}
$$

where (i) follows from summing over the different $w\in\mathcal{W}$ instead of over time and from the fact that we observe each $w$ exactly $N_{K+1}(w)$ times after $K$ trajectories, (ii) follows from Jensen's inequality, and (iii) follows from the fact that $\sum_{w\in\mathcal{W}}N_{K+1}(w)=KH$. Next, we combine Eqs. 57 and 58 to obtain

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\mathbb{I}(N_{k}(w_{k,t})>H)\frac{c}{\sqrt{N_{k}(w_{k,t})}}\right]\leq\sqrt{2}c\times2\sqrt{|\mathcal{W}|KH}=2\sqrt{2}c\sqrt{|\mathcal{W}|KH}.\tag{59}$$

Further, plugging Eqs. 53 and 59 into Eq. 52, we obtain

$$\mathbb{E}\left[\sum_{k=1}^{K}\sum_{t=1}^{H}\min\left\{1,\frac{c}{\sqrt{\max\{1,N_{k}(w_{k,t})\}}}\right\}\right]\leq2H|\mathcal{W}|+2\sqrt{2}c\sqrt{|\mathcal{W}|KH}.\tag{60}$$

This concludes the proof.

Lemma 6. *Let $\mathcal{W}$ be a finite set and $\mathcal{P}_t(\delta):=\{p:\forall w\in\mathcal{W},\;\|p(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_1\leq\beta_t(w,\delta)\}$ be a $|\mathcal{W}|$-rectangular confidence set over probability distributions $p^*(\cdot\,|\,w)$ with $m$ outcomes, where $\hat{p}_t(\cdot\,|\,w)$ is the empirical estimate of $p^*(\cdot\,|\,w)$. Suppose that at each time $\tau$ we observe a state $w_\tau=w$ and a sample from $p^*(\cdot\,|\,w)$. If*

$$\beta_t(w,\delta)=\sqrt{\frac{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,N_t(w)\}}}\quad\text{with}\quad N_t(w)=\sum_{\tau=1}^{t}\mathbb{I}(w_\tau=w),$$

*then the true distributions $p^*$ lie in the confidence set $\mathcal{P}_t(\delta)$ with probability at least $1-\frac{\delta}{2t^{6}}$.*

Proof. We adapt the proof of Lemma 17 in Jaksch et al. (2010). We note that

$$
\begin{aligned}
\Pr(p^{*}\not\in\mathcal{P}_{t})
&\overset{(i)}{=}\Pr\left(\bigcup_{w\in\mathcal{W}}\left\{\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\beta_{t}(w,\delta)\right\}\right)\\
&\overset{(ii)}{\leq}\sum_{w\in\mathcal{W}}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,N_{t}(w)\}}}\right)\\
&\overset{(iii)}{\leq}\sum_{w\in\mathcal{W}}\sum_{n=0}^{t}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,n\}}}\right),
\end{aligned}
$$

where (i) follows from the definition of the confidence set, *i.e.*, the probability distributions do not lie in the confidence set if there is at least one state $w$ for which $\|p^{*}(\cdot\,|\,w)-\hat{p}_t(\cdot\,|\,w)\|_{1}\geq\beta_{t}(w,\delta)$, (ii) follows from the definition of $\beta_t(w,\delta)$ and a union bound over all $w\in\mathcal{W}$, and (iii) follows from a union bound over all possible values of $N_t(w)$. To continue, we split the sum into $n=0$ and $n>0$:

$$
\begin{aligned}
\sum_{w\in\mathcal{W}}\sum_{n=0}^{t}&\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}{\max\{1,n\}}}\right)\\
&\overset{(i)}{=}\underbrace{\sum_{w\in\mathcal{W}}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}\right)}_{=\,0\;(\text{the }n=0\text{ term})}+\sum_{w\in\mathcal{W}}\sum_{n=1}^{t}\Pr\left(\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\geq\sqrt{\frac{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}{n}}\right)\\
&\overset{(ii)}{\leq}t\,|\mathcal{W}|\,2^{m}\exp\left(-\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)\right)\leq\frac{\delta}{2t^{6}},
\end{aligned}
$$
where (i) follows from the fact that $\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}<\sqrt{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}$ in all non-trivial cases. More specifically,

$$\delta<1,\;t\geq2\implies\sqrt{2\log\left(\frac{t^{7}|\mathcal{W}|2^{m+1}}{\delta}\right)}>\sqrt{2\log(512)}>2,\qquad\|p^{*}(\cdot\,|\,w)-\hat{p}_{t}(\cdot\,|\,w)\|_{1}\leq\sum_{i\in[m]}\left(p^{*}(i\,|\,w)+\hat{p}_{t}(i\,|\,w)\right)\leq2,\tag{61}$$

and (ii) follows from the fact that, after observing $n$ samples, the $L^1$-deviation of the true distribution $p^{*}$ from the empirical one $\hat{p}$ over $m$ events is bounded by

$$\Pr\left(\|p^{*}(\cdot)-\hat{p}(\cdot)\|_{1}\geq\epsilon\right)\leq2^{m}\exp\left(-n\frac{\epsilon^{2}}{2}\right).\tag{62}$$

$\square$

Lemma 7. *Consider the following minimization problem:*

$$
\begin{array}{ll}
\underset{x}{\text{minimize}} & \sum_{i=1}^{m}x_{i}w_{i}\\
\text{subject to} & \sum_{i=1}^{m}|x_{i}-b_{i}|\leq d,\;\;\sum_{i}x_{i}=1,\\
& x_{i}\geq0\;\;\forall i\in\{1,\ldots,m\},
\end{array}\tag{63}
$$

*where $d\geq0$, $b_i\geq0\;\forall i\in\{1,\ldots,m\}$, $\sum_i b_i=1$ and $0\leq w_1\leq w_2\leq\ldots\leq w_m$. Then, the solution to the above minimization problem is given by:*

$$
x_{i}^{*}=\begin{cases}
\min\{1,b_{1}+\frac{d}{2}\} & \text{if }i=1,\\
b_{i} & \text{if }i>1\text{ and }\sum_{l=1}^{i}x_{l}^{*}\leq1,\\
0 & \text{otherwise.}
\end{cases}\tag{64}
$$

Proof. Suppose there is $\{x_{i}';\;\sum_{i}x_{i}'=1,\;x_{i}'\geq0\}$ such that $\sum_{i}x_{i}'w_{i}<\sum_{i}x_{i}^{*}w_{i}$. Let $j\in\{1,\ldots,m\}$ be the first index where $x_{j}'\neq x_{j}^{*}$; then it is clear that $x_{j}'>x_{j}^{*}$.

If j = 1:

$$\sum_{i=1}^{m}|x_{i}'-b_{i}|=|x_{1}'-b_{1}|+\sum_{i=2}^{m}|x_{i}'-b_{i}|>\frac{d}{2}+\sum_{i=2}^{m}(b_{i}-x_{i}')=\frac{d}{2}+x_{1}'-b_{1}>d.\tag{65}$$

If j > 1:

$$\sum_{i=1}^{m}|x_{i}'-b_{i}|=|x_{1}'-b_{1}|+\sum_{i=j}^{m}|x_{i}'-b_{i}|>\frac{d}{2}+\sum_{i=j+1}^{m}(b_{i}-x_{i}')>\frac{d}{2}+x_{1}'-b_{1}=d.\tag{66}$$

Both cases contradict the condition $\sum_{i=1}^{m}|x_{i}'-b_{i}|\leq d$, which completes the proof. $\square$

Lemma 8. *For the value function $V^{\pi}_{t\,|\,P_\mathcal{D},P}$ defined in Eq. 10, we have that:*

$$V^{\pi}_{t\,|\,P_{\mathcal{D}},P}(s,d)=c_{\pi_{t}(s,d)}(s,d)+\sum_{a\in\mathcal{A}}p_{\pi_{t}(s,d)}(a\,|\,s)\cdot\left(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\cdot V^{\pi}_{t+1\,|\,P_{\mathcal{D}},P}(s',\pi_{t}(s,d))\right).\tag{67}$$
Proof.

$$
\begin{aligned}
V_{t\,|\,P_{\mathcal{D}},P}^{\pi}(s,d)&\overset{(i)}{=}\bar{c}(s,d)+\sum_{s'\in\mathcal{S}}p\big(s',\pi_{t}(s,d)\,|\,(s,d)\big)\,V_{t+1\,|\,P_{\mathcal{D}},P}^{\pi}(s',\pi_{t}(s,d))\\
&\overset{(ii)}{=}\sum_{a\in\mathcal{A}}p_{\pi_{t}(s,d)}(a\,|\,s)\,c_{e}(s,a)+c_{c}(\pi_{t}(s,d))+c_{x}(\pi_{t}(s,d),d)+\sum_{s'\in\mathcal{S}}\sum_{a\in\mathcal{A}}p(s'\,|\,s,a)\,p_{\pi_{t}(s,d)}(a\,|\,s)\,V_{t+1\,|\,P_{\mathcal{D}},P}^{\pi}(s',\pi_{t}(s,d))\\
&\overset{(iii)}{=}c_{\pi_{t}(s,d)}(s,d)+\sum_{a\in\mathcal{A}}p_{\pi_{t}(s,d)}(a\,|\,s)\cdot\left(c_{e}(s,a)+\sum_{s'\in\mathcal{S}}p(s'\,|\,s,a)\cdot V_{t+1\,|\,P_{\mathcal{D}},P}^{\pi}(s',\pi_{t}(s,d))\right),
\end{aligned}\tag{68}
$$

where (i) is the standard Bellman equation for the standard MDP defined with dynamics 3 and costs 4, (ii) follows by replacing $\bar{c}$ and $p$ with Eqs. 3 and 4, and (iii) follows from $c_{d'}(s,d)=c_{c}(d')+c_{x}(d',d)$. $\square$

Lemma 9. *$\min\{T,a+b\}\leq\min\{T,a\}+\min\{T,b\}$ for $T,a,b\geq0$.*

Proof. Assume, without loss of generality, that $a\leq b\leq a+b$. Then,

$$\min\{T,a+b\}=\left\{\begin{array}{ll}T\leq a+b=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq b\leq T\leq a+b\\ T\leq a+T=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq T\leq b\leq a+b\\ T\leq2T=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ T\leq a\leq b\leq a+b\\ a+b=\min\{T,a\}+\min\{T,b\}&\mbox{if}\ \ a\leq b\leq a+b\leq T\end{array}\right.\tag{69}$$

## C Implementation Of Ucrl2 In Finite Horizon Setting

ALGORITHM 2: Modified UCRL2 algorithm for a finite horizon MDP $M = (\mathcal{S}, \mathcal{A}, P, C, L)$.

Require: Cost $C = [c(s, a)]$, confidence parameter $\delta \in (0, 1)$.

1: $(\{N_k(s, a)\}, \{N_k(s, a, s')\}) \leftarrow$ InitializeCounts()
2: **for** $k = 1, \ldots, K$ **do**
3: **for** $s, s' \in \mathcal{S}, a \in \mathcal{A}$ **do**
4: **if** $N_k(s, a) \neq 0$ **then** $\hat{p}_k(s'|s, a) \leftarrow \frac{N_k(s, a, s')}{N_k(s, a)}$ **else** $\hat{p}_k(s'|s, a) \leftarrow \frac{1}{|\mathcal{S}|}$
5: $\beta_k(s, a, \delta) \leftarrow \sqrt{\frac{14 |\mathcal{S}| \log\left(\frac{2(k-1)L|\mathcal{A}||\mathcal{S}|}{\delta}\right)}{\max\{1, N_k(s, a)\}}}$
6: **end for**
7: $\pi^k \leftarrow$ ExtendedValueIteration($\hat{p}_k, \beta_k, C$)
8: $s_0 \leftarrow$ InitialConditions()
9: **for** $t = 0, \ldots, L - 1$ **do**
10: Take action $a_t = \pi^k_t(s_t)$ and observe the next state $s_{t+1}$.
11: $N_k(s_t, a_t) \leftarrow N_k(s_t, a_t) + 1$
12: $N_k(s_t, a_t, s_{t+1}) \leftarrow N_k(s_t, a_t, s_{t+1}) + 1$
13: **end for**
14: **end for**
15: **Return** $\pi^K$
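For concreteness, the sketch below illustrates how lines 4–5 can be computed from visit counts; it is an assumption about the bookkeeping (the array names `N_sa`, `N_sas` and the guard at $k = 1$ are ours), not the exact code behind Algorithm 2.

```python
import numpy as np

def empirical_model(N_sa, N_sas, k, L, delta):
    """Empirical transitions and confidence widths at the start of episode k.
    N_sa[s, a]      : visits to (s, a) before episode k,
    N_sas[s, a, s'] : observed transitions (s, a) -> s'."""
    n_s, n_a = N_sa.shape
    # Line 4: empirical estimate, falling back to uniform for unvisited (s, a).
    p_hat = np.where(N_sa[..., None] > 0,
                     N_sas / np.maximum(N_sa[..., None], 1),
                     1.0 / n_s)
    # Line 5: L1 confidence widths; max(k - 1, 1) guards the log at k = 1 (our choice).
    log_term = np.log(2 * max(k - 1, 1) * L * n_a * n_s / delta)
    beta = np.sqrt(14 * n_s * log_term / np.maximum(N_sa, 1))
    return p_hat, beta
```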
ALGORITHM 3: ExtendedValueIteration, used in Algorithm 2.

Require: Empirical transition distribution $\hat{p}(\cdot|s, a)$, cost $c(s, a)$, and confidence interval $\beta(s, a, \delta)$.

1: $\pi \leftarrow$ InitializePolicy(), $v \leftarrow$ InitializeValueFunction()
2: $n \leftarrow |\mathcal{S}|$
3: **for** $t = L - 1, \ldots, 0$ **do**
4: **for** $s \in \mathcal{S}$ **do**
5: **for** $a \in \mathcal{A}$ **do**
6: $s'_1, \ldots, s'_n \leftarrow$ Sort($v_{t+1}$)  \# $v_{t+1}(s'_1) \leq \ldots \leq v_{t+1}(s'_n)$
7: $p(s'_1) \leftarrow \min\{1, \hat{p}(s'_1|s, a) + \frac{\beta(s, a, \delta)}{2}\}$
8: $p(s'_i) \leftarrow \hat{p}(s'_i|s, a)\;\;\forall\, 1 < i \leq n$
9: $l \leftarrow n$
10: **while** $\sum_{s'_i \in \mathcal{S}} p(s'_i) > 1$ **do**
11: $p(s'_l) \leftarrow \max\{0, 1 - \sum_{s'_i \neq s'_l} p(s'_i)\}$
12: $l \leftarrow l - 1$
13: **end while**
14: $q(s, a) \leftarrow c(s, a) + \mathbb{E}_{s' \sim p}[v_{t+1}(s')]$
15: **end for**
16: $v_t(s) \leftarrow \min_{a \in \mathcal{A}} q(s, a)$
17: $\pi_t(s) \leftarrow \arg\min_{a \in \mathcal{A}} q(s, a)$
18: **end for**
19: **end for**
20: **Return** $\pi$
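The inner loop of Algorithm 3 (lines 6–13) solves the same $\ell_1$-ball minimization as Lemma 7. The following Python sketch of the full backward pass is our own illustration under assumed array shapes, not the implementation used for the experiments.

```python
import numpy as np

def optimistic_distribution(p_hat_sa, beta_sa, v_next):
    """Lines 6-13: shift up to beta/2 extra mass onto the successor with the
    smallest value, then trim mass from the largest-value successors until
    the vector is a probability distribution again."""
    order = np.argsort(v_next)                  # ascending values
    p = p_hat_sa.astype(float).copy()
    p[order[0]] = min(1.0, p[order[0]] + beta_sa / 2.0)
    for i in order[1:][::-1]:                   # largest values first
        if p.sum() <= 1.0:
            break
        p[i] = max(0.0, 1.0 - (p.sum() - p[i]))
    return p

def extended_value_iteration(p_hat, beta, cost, L):
    """Backward induction of Algorithm 3 for a finite-horizon MDP.
    p_hat[s, a, s'] : empirical transitions, beta[s, a] : confidence widths,
    cost[s, a]      : immediate costs, L : horizon."""
    n_s, n_a = cost.shape
    v = np.zeros((L + 1, n_s))
    pi = np.zeros((L, n_s), dtype=int)
    for t in range(L - 1, -1, -1):
        for s in range(n_s):
            q = np.empty(n_a)
            for a in range(n_a):
                p = optimistic_distribution(p_hat[s, a], beta[s, a], v[t + 1])
                q[a] = cost[s, a] + p @ v[t + 1]        # line 14
            v[t, s] = q.min()                           # line 16
            pi[t, s] = int(q.argmin())                  # line 17
    return pi, v
```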

## D Distribution Of Cell Types And Traffic Levels In The Lane Driving Environment

| traffic level | road | grass | stone | car |
|---------------|------|-------|-------|-----|
| no-car        | 0.7  | 0.2   | 0.1   | 0   |
| light         | 0.6  | 0.2   | 0.1   | 0.1 |
| heavy         | 0.5  | 0.2   | 0.1   | 0.2 |

Table 1: Probability of cell types based on traffic level.

| previous row | no-car | light | heavy |
|--------------|--------|-------|-------|
| no-car       | 0.99   | 0.01  | 0     |
| light        | 0.01   | 0.98  | 0.01  |
| heavy        | 0      | 0.01  | 0.99  |

Table 2: Probability of traffic levels based on the previous row.
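To make the generative process explicit, the following Python sketch is an illustrative assumption about how the lane driving environment can be generated from these tables (the function name and interface are ours): the traffic level of each new row is drawn from Table 2 given the previous row, and each cell's type is then drawn from Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

TRAFFIC = ["no-car", "light", "heavy"]
CELLS = ["road", "grass", "stone", "car"]

# Table 1: P(cell type | traffic level of the row)
CELL_PROBS = {"no-car": [0.7, 0.2, 0.1, 0.0],
              "light":  [0.6, 0.2, 0.1, 0.1],
              "heavy":  [0.5, 0.2, 0.1, 0.2]}

# Table 2: P(traffic level of a row | traffic level of the previous row)
TRAFFIC_TRANS = {"no-car": [0.99, 0.01, 0.00],
                 "light":  [0.01, 0.98, 0.01],
                 "heavy":  [0.00, 0.01, 0.99]}

def sample_rows(n_rows, width, traffic="no-car"):
    """Generate n_rows rows of `width` cells, chaining the traffic level."""
    rows = []
    for _ in range(n_rows):
        traffic = rng.choice(TRAFFIC, p=TRAFFIC_TRANS[traffic])
        rows.append(list(rng.choice(CELLS, size=width, p=CELL_PROBS[traffic])))
    return rows

print(sample_rows(n_rows=3, width=3, traffic="light"))
```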

## E Performance of the human and machine agents in the obstacle avoidance task

![28_image_0.png](28_image_0.png)

Figure 8: Performance of the machine policy, a human policy with σH = 2, and the optimal policy in terms of total cost. In panel (a), the episodes start with an initial traffic level γ0 = no-car and, in panel (b), the episodes start with an initial traffic level γ0 ∈ {light, heavy}.

## F The amount of human control for different initial traffic levels

![28_image_1.png](28_image_1.png)

Figure 9: The human control rate under the UCRL2-MC switching algorithm for different initial traffic levels. For each traffic level, we sample 500 environments and average the human control rate over them. A higher traffic level results in more human control, as the human agent is more reliable in heavier traffic.

![29_image_0.png](29_image_0.png)

Figure 10: Ratio of UCRL2-MC regret to UCRL2 regret for (a) different action space sizes and (b) different numbers of agents. As the action space size increases, the performance of UCRL2-MC degrades but remains within the same scale. In addition, UCRL2-MC outperforms UCRL2 in environments with a larger number of agents.

## G Additional Experiments

In this section, we run additional experiments in the RiverSwim environment to investigate the effect of action space size and the number of agents in a team on the total regret.

## G.1 Action Space Size

To study the effect of the action space size on the total regret, we artificially increase the number of actions by planning m steps ahead. More concretely, we consider a new MDP in which each time step consists of m steps of the original RiverSwim MDP, and the switching policy decides on all m steps at once. The number of actions in the new MDP increases to $2^m$, while the state space remains unchanged. We consider a setting with a single team of two agents with p = 0 and p = 1, i.e., one agent always takes action *right* and the other always takes action *left*. We run the simulations for 20,000 episodes with m ∈ {1, 2, 3, 4}, i.e., with action space sizes of 2, 4, 8 and 16. Each experiment is repeated 5 times. We compare the performance of our algorithm against UCRL2 in terms of total regret. Figure 10 (a) summarizes our results: the performance of UCRL2-MC degrades as the number of actions increases, since the regret bound depends directly on the action space size (Theorem 2). However, the regret ratio still remains within the same scale even after doubling the number of actions. One reason is that our algorithm only needs to learn *the actions taken by the agents* to find the optimal switching policy. If the agents' policies cover only a small subset of actions, our algorithm maintains a small regret even in environments with a very large action space. Therefore, we believe a more careful analysis could improve our regret bound by making it a function of the agents' action space instead of the whole action space.
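As an illustration of this construction, the sketch below groups $m$ primitive actions into a single macro action; the `base_step` interface is a hypothetical stand-in for the original RiverSwim dynamics.

```python
from itertools import product

LEFT, RIGHT = 0, 1   # the two primitive RiverSwim actions

def macro_actions(m):
    """All 2^m sequences of m primitive actions, each used as one action."""
    return list(product([LEFT, RIGHT], repeat=m))

def macro_step(base_step, state, macro_action):
    """Execute one macro action as m primitive steps of the original MDP.
    `base_step(state, action) -> (next_state, cost)` is a hypothetical
    interface to the original RiverSwim environment."""
    total_cost = 0.0
    for a in macro_action:
        state, cost = base_step(state, a)
        total_cost += cost
    return state, total_cost

assert len(macro_actions(3)) == 8   # m = 3 gives an action space of size 2^3
```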

## G.2 Number Of Agents

Here, our goal is to examine the impact of the number of agents on the total regret achieved by our algorithm. To this end, we consider the original RiverSwim MDP (i.e., two actions) with a single team of n agents, where we run our simulations for n ∈ {3, 4, ..., 10} and 20,000 episodes for each n. We choose p, i.e., the probability of taking action *right*, for the n agents as $\{0, \frac{1}{n-1}, \cdots, \frac{n-2}{n-1}, 1\}$. As shown in Figure 10 (b),
, 1}. As shown in Figure 10 (b),
UCRL2-MC outperforms UCRL2 as the number of agents increases. This agrees with Theorem 2, as our derived regret bound mainly depends on the action space size |A|, while the UCRL2 regret bound depends on the number of agents |D|.
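For reference, the set of agent policies used in this experiment can be written down directly (an illustrative snippet; the agent simulator itself is omitted):

```python
def right_probabilities(n):
    """Probability of taking action `right` for each of the n agents:
    {0, 1/(n-1), ..., (n-2)/(n-1), 1}."""
    return [i / (n - 1) for i in range(n)]

print(right_probabilities(5))   # [0.0, 0.25, 0.5, 0.75, 1.0]
```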