RedTachyon committed on
Commit
ca7b471
1 Parent(s): fe0f242

Upload folder using huggingface_hub

krQIuCCQsW/18_image_0.png ADDED

Git LFS Details

  • SHA256: 5422e1ca6f5da94c1e449234bd1269da812f20b4874112da459a47574d63e680
  • Pointer size: 130 Bytes
  • Size of remote file: 38.7 kB
krQIuCCQsW/19_image_0.png ADDED

Git LFS Details

  • SHA256: e246093eb37d65338288adc931213e26ca41e68354dc764bfb449a4f930e1eb1
  • Pointer size: 130 Bytes
  • Size of remote file: 32.4 kB
krQIuCCQsW/20_image_0.png ADDED

Git LFS Details

  • SHA256: b83314c609c50deb8f9cd10a0b61fa46d8163d36010bf651fe69a64249136c7d
  • Pointer size: 130 Bytes
  • Size of remote file: 33.3 kB
krQIuCCQsW/3_image_0.png ADDED

Git LFS Details

  • SHA256: 304c73b7bfe29bb1924f4b5064d7aca58fbb20c42af80f1ab832f3ccf7ffcdb1
  • Pointer size: 130 Bytes
  • Size of remote file: 13.8 kB
krQIuCCQsW/6_image_0.png ADDED

Git LFS Details

  • SHA256: 8ad0acb723c142ddce8729ea6eb91ac524fc7bce300930a119f54f6ee071c078
  • Pointer size: 130 Bytes
  • Size of remote file: 44.1 kB
krQIuCCQsW/6_image_1.png ADDED

Git LFS Details

  • SHA256: 4cf975355ccab07705e2523b3c8119e4b39b5cfaeafa064ab6d39ca758e55c98
  • Pointer size: 130 Bytes
  • Size of remote file: 37.3 kB
krQIuCCQsW/6_image_2.png ADDED

Git LFS Details

  • SHA256: 8f7e9cca6417ed29ed4f9f82b9148093d59311a78c96cc0f62be52b80a3ec03d
  • Pointer size: 130 Bytes
  • Size of remote file: 14.4 kB
krQIuCCQsW/8_image_0.png ADDED

Git LFS Details

  • SHA256: 3f75a1acdb8ee824370a29719ccc2f764bb289d9623ef571de6733c9dcb12572
  • Pointer size: 130 Bytes
  • Size of remote file: 34.3 kB
krQIuCCQsW/9_image_0.png ADDED

Git LFS Details

  • SHA256: cca007b7df7131d2873104f1ded1adb573dff654922ce60a0001bba330b594c4
  • Pointer size: 130 Bytes
  • Size of remote file: 11.1 kB
krQIuCCQsW/krQIuCCQsW.md ADDED
@@ -0,0 +1,725 @@
1
+ # Test-Time Recalibration Of Conformal Predictors Under Distribution Shift Based On Unlabeled Examples
2
+
3
+ Anonymous authors Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
+ Modern image classifiers are very accurate, but the predictions come without uncertainty estimates. Conformal predictors provide uncertainty estimates by computing a set of classes containing the correct class with a user-specified probability based on the classifier's probability estimates. To provide such sets, conformal predictors often estimate a cutoff threshold for the probability estimates based on a calibration set. Conformal predictors guarantee reliability only when the calibration set is from the same distribution as the test set.
8
+
9
+ Therefore, conformal predictors need to be recalibrated for new distributions. However, in practice, labeled data from new distributions is rarely available, making calibration infeasible.
10
+
11
+ In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples. While it is impossible in general to guarantee reliability when calibrating based on unlabeled examples, we propose a method that provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of a distribution shift.
12
+
13
+ ## 1 Introduction
14
+
15
Consider a (black-box) image classifier, typically a deep neural network with a softmax layer at the end, that is trained to output probability estimates for $L$ classes given an input feature vector $\mathbf{x} \in \mathbb{R}^d$. Conformal predictors are wrapped around such a classifier and generate a set of classes that contains the correct label with a user-specified probability based on the classifier's probability estimates.
17
+
18
Let $\mathbf{x} \in \mathbb{R}^d$ be a feature vector with associated label $y \in \{1, \ldots, L\}$. We say that a set-valued function $\mathcal{C}$ generates valid prediction sets for the distribution $\mathcal{P}$ if

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y\in\mathcal{C}(\mathbf{x})\right]\geq 1-\alpha,\tag{1}$$

where $1-\alpha$ is the desired coverage level. Conformal predictors generate valid sets $\mathcal{C}$ for the distribution $\mathcal{P}$ by utilizing a calibration set consisting of labeled examples $\{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$. An important caveat of conformal predictors is that the examples in the calibration set must be drawn from the same distribution as the test data.
23
+
24
+ This assumption is difficult to satisfy in applications and potentially limits the applicability of conformal prediction methods in practice. In fact, in practice one usually expects a distribution shift between the calibration set and the examples at inference (or the test set), in which case the coverage guarantees provided by conformal prediction methods are void. For example, the new ImageNetV2 test set was created in the same way as the original ImageNet test sets, yet Recht et al. (2019) found a notable drop in classification accuracy for all classifiers considered.
25
+
26
+ Ideally, a conformal predictor is recalibrated on a distribution before testing, otherwise the coverage guarantees are not valid (Cauchois et al., 2020). However, in real-world applications, where distribution shifts are ubiquitous, labeled data from new distributions is scarce or non-existent.
27
+
28
+ We therefore consider the problem of recalibrating a conformal predictor only based on unlabeled data from the new domain. This is an ill-posed problem: it is in general impossible to calibrate a conformal predictor based on unlabeled data. Yet, we propose a simple calibration method that gives excellent performance for a variety of natural distribution shifts.
29
+
30
Organization and contributions. We start with concrete examples of how conformal predictors yield miscalibrated uncertainty estimates under natural distribution shifts. We next propose a simple recalibration method that only uses unlabeled examples from the target distribution. We show that our method correctly recalibrates a popular conformal predictor (Sadinle et al., 2019) on a theoretical toy model. We provide empirical results for various natural distribution shifts of ImageNet showing that recalibrating conformal predictors using our proposed method significantly reduces the performance gap. In certain cases, it even achieves near oracle-level coverage.

Related work. Several works have considered the robustness of conformal prediction to distribution shift (Tibshirani et al., 2019; Gibbs & Candes, 2021; Park et al., 2022; Barber et al., 2023; Prinster et al., 2022; 2023; Gibbs & Candès, 2023; Fannjiang et al., 2022). Gibbs & Candes (2021); Gibbs & Candès (2023) consider a setting where the distribution varies over time and propose an adaptive conformal prediction method to guarantee asymptotic and local coverage. Similarly, Barber et al. (2023) propose a weighted conformal prediction method that provably generalizes to the case where the distribution changes over time. On the other hand, Prinster et al. (2022; 2023) propose a weighted uncertainty quantification based on the jackknife+ method rather than the typical conformal prediction methods that we consider in this paper.
33
+
34
Of particular interest, Tibshirani et al. (2019) and Park et al. (2022) propose methods that assume a covariate shift and calibrate based on estimating the amount of covariate shift; we compare to those methods in Section 5.2. Podkopaev & Ramdas (2021) study the related but distinct setting of label shift between the source and target domains and propose a method that is more robust under label shift. In contrast, we focus on complex image datasets for which covariate shift is not well defined and label shift is not broadly relevant.
35
+
36
We are not aware of other works studying calibration of conformal predictors under distribution shift based on unlabeled examples. However, prior works propose to make conformal predictors robust to various distribution shifts from the source distribution of the calibration set (Cauchois et al., 2020; Gendler et al., 2022), by calibrating the conformal predictor to achieve a desired coverage in the worst-case scenario over the considered distribution shifts. Cauchois et al. (2020) considers covariate shifts and calibrates the conformal predictor to achieve coverage for the worst-case distribution within the f-divergence ball of the source distribution.
37
+
38
+ Gendler et al. (2022) considers adversarial perturbations as distribution shifts and calibrates a conformal predictor to achieve coverage for the worst-case distribution obtained through ℓ2-norm bounded adversarial noise.
39
+
40
+ While making the conformal predictor robust to a range of worst-case distributions at calibration time allows maintaining coverage under the worst-case distributions, these approaches have two shortcomings:
41
First, natural distribution shifts are difficult to capture mathematically, and models like covariate shift or adversarial perturbations do not seem to model natural distribution shifts (such as that from ImageNet to ImageNetV2) accurately. Second, calibrating for a worst-case scenario results in an overly conservative conformal predictor that tends to yield much higher coverage than desired for test distributions that correspond to a less severe shift from the source, which comes at the cost of reduced efficiency (i.e., larger set size, or larger confidence interval length). In contrast, our method does not compromise the efficiency of the conformal predictor on easier distributions, as we recalibrate the conformal predictor for any new dataset.
42
+
43
+ A related problem is to predict the accuracy of a classifier on new distributions from unlabeled data sampled from a new distribution (Deng & Zheng, 2021; Chen et al., 2021; Jiang et al., 2022; Deng et al., 2021; Guillory et al., 2021; Garg et al., 2022). In particular, Garg et al. (2022) proposed a simple method that achieves state-of-the-art performance in predicting classifier accuracy across a range of distributions. However, the calibration problem we consider is fundamentally different than estimating the accuracy of a classifier. While predicting the accuracy of the classifier would allow making informed decisions on whether to use the classifier for a new distribution, it doesn't provide a solution for recalibration.
44
+
45
+ ## 2 Background On Conformal Prediction
46
+
47
Consider a black-box classifier with input feature vector $\mathbf{x} \in \mathbb{R}^d$ that outputs a probability estimate $\pi_\ell(\mathbf{x}) \in [0,1]$ for each class $\ell = 1, \ldots, L$. Typically, the classifier is a neural network trained on some distribution, and the probability estimates are the softmax outputs. We denote the order statistics of the probability estimates by $\pi_{(1)}(\mathbf{x}) \geq \pi_{(2)}(\mathbf{x}) \geq \ldots \geq \pi_{(L)}(\mathbf{x})$.
49
+
50
Many conformal predictors use a calibration set $\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$ to find a cutoff threshold (Sadinle et al., 2019; Romano et al., 2020; Angelopoulos et al., 2020; Bates et al., 2021) that achieves the desired empirical coverage on this set. Here, the superscript $\mathcal{P}$ denotes the distribution from which the examples in the calibration set are sampled. Given a set-valued function $\mathcal{C}(\mathbf{x}, u, \tau) \subset \{1, \ldots, L\}$ containing the set of classes predicted by the conformal predictor, such conformal predictors compute the threshold parameter $\tau$ as
53
+
54
$$\tau^{*}=\inf\left\{\tau:\left|\left\{i:y_{i}\in\mathcal{C}(\mathbf{x}_{i},u_{i},\tau)\right\}\right|\geq(1-\alpha)(n+1)\right\},\tag{2}$$
55
+
56
where $u_i$ is added randomization to smooth the cardinality term, chosen independently and uniformly from the interval $[0, 1]$; see Vovk et al. (2005) on smoothed conformal predictors. Finally, the '+1' in the $(n+1)$ factor is a bias correction for the finite size of the calibration set.
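As a concrete illustration (a minimal NumPy sketch, not the authors' code; `softmax_cal` and `labels_cal` are hypothetical arrays of calibration softmax scores and labels), the calibration step of expression (2) can be computed as follows, specialized to the nonconformity score $s_i = 1-\pi_{y_i}(\mathbf{x}_i)$, for which $y_i \in \mathcal{C}(\mathbf{x}_i,\tau)$ reduces to $s_i \leq \tau$ and the randomization $u_i$ can be omitted.

```python
import numpy as np

def calibrate_threshold(softmax_cal, labels_cal, alpha):
    """Conformal calibration in the spirit of expression (2), using the
    nonconformity score s_i = 1 - pi_{y_i}(x_i), so that y_i is covered
    by the prediction set at level tau iff s_i <= tau."""
    n = len(labels_cal)
    scores = 1.0 - softmax_cal[np.arange(n), labels_cal]
    k = int(np.ceil((1.0 - alpha) * (n + 1)))
    if k > n:                      # calibration set too small for this alpha
        return 1.0
    return np.sort(scores)[k - 1]  # smallest tau covering >= (1-alpha)(n+1) points
```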
58
+
59
This conformal calibration procedure achieves distributional coverage as defined in expression (1) for any set-valued function $\mathcal{C}(\mathbf{x}, u, \tau)$ satisfying the nesting property $\mathcal{C}(\mathbf{x}, u, \tau_1) \subseteq \mathcal{C}(\mathbf{x}, u, \tau_2)$ for $\tau_1 < \tau_2$; see (Angelopoulos et al., 2020, Thm. 1).
61
+
62
+ In this paper, we primarily focus on the popular conformal predictors *Thresholded Prediction Sets*
63
+ (TPS) (Sadinle et al., 2019) and *Adaptive Prediction Sets* (APS) (Romano et al., 2020). The set generating functions of the two conformal predictors are
64
+
65
$$\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau)=\{\ell=1,\ldots,L\colon\pi_{\ell}(\mathbf{x})\geq1-\tau\},\tag{3}$$

$$\mathcal{C}^{\mathrm{APS}}(\mathbf{x},u,\tau)=\Big\{\ell=1,\ldots,L\colon\sum_{j=1}^{\ell-1}\pi_{(j)}(\mathbf{x})+u\cdot\pi_{(\ell)}(\mathbf{x})\leq\tau\Big\},\tag{4}$$
69
+
70
with $u \sim U(0,1)$ for smoothing. The set generating function of TPS does not require smoothing, since each softmax score is thresholded independently and therefore there are no discrete jumps. Computing the threshold $\tau$ through conformal calibration (2) requires a labeled calibration set from distribution $\mathcal{P}$. We therefore add a superscript to the threshold to designate the distribution from which the calibration set was sampled; for example, $\tau^{\mathcal{P}}$ indicates that the calibration set was sampled from the distribution $\mathcal{P}$. The prediction set functions $\mathcal{C}^{\mathrm{TPS}}$ for TPS and $\mathcal{C}^{\mathrm{APS}}$ for APS both satisfy the nesting property. Therefore, TPS and APS calibrated on a calibration set $\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}$ by computing the threshold in expression (2) are guaranteed to achieve coverage on the distribution $\mathcal{P}$. However, coverage is only guaranteed if the test distribution $\mathcal{Q}$ is the same as the calibration distribution $\mathcal{P}$.
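To make the set generating functions concrete, the following minimal sketch (assuming a single softmax row as a NumPy array; not taken from the paper) implements expressions (3) and (4).

```python
import numpy as np

def tps_set(softmax_row, tau):
    """TPS prediction set, expression (3): classes with score at least 1 - tau."""
    return np.where(softmax_row >= 1.0 - tau)[0]

def aps_set(softmax_row, tau, u=None):
    """APS prediction set, expression (4): classes (taken in decreasing-score
    order) whose randomized cumulative score does not exceed tau."""
    if u is None:
        u = np.random.default_rng().uniform()   # smoothing variable u ~ U(0, 1)
    order = np.argsort(-softmax_row)            # sort classes by decreasing score
    sorted_scores = softmax_row[order]
    cum_below = np.concatenate(([0.0], np.cumsum(sorted_scores)[:-1]))
    keep = cum_below + u * sorted_scores <= tau
    return order[keep]
```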
75
+
76
+ ## 3 Failures Under Distribution Shifts And Problem Statement
77
+
78
+ Often we are most interested in quantifying uncertainty with conformal prediction when we apply a classifier to new data that might come from a slightly different distribution than the distribution we calibrated on.
79
+
80
+ Yet, conformal predictors only provide coverage guarantees for data coming from the same distribution as the calibration set, and the coverage guarantees often fail even under slight distribution shifts. For example, our experiments (see Figure 3) show that APS calibrated on ImageNet-Val to yield 1 − α = 0.9 coverage only achieves a coverage of 0.64 on the ImageNet-Sketch dataset, which consists of sketches of the ImageNet-Val images and hence constitutes a distribution shift (Wang et al., 2019).
81
+
82
+ Different conformal predictors typically have different coverage gaps under the same distribution shift. More efficient conformal predictors (i.e., those that produce smaller prediction sets) tend to have a larger coverage gap under a distribution shift. For example, both TPS and RAPS (a generalization of APS proposed by Angelopoulos et al. (2020)) yield smaller confidence sets, but only achieve a coverage of 0.38 vs. 0.64 for APS on the ImageNet-Sketch distribution shift discussed above.
83
+
84
+ ![3_image_0.png](3_image_0.png)
85
+
86
Figure 1: **Left**: Vanilla conformal prediction. **Right**: QTC recalibration. QTC encapsulates the conformal calibration process to recalibrate the conformal predictor for each new distribution without altering the underlying set generating function. $\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}$ is the unlabeled test set and $\mathcal{D}^{\mathcal{P}}$ is the labeled training/calibration set. QTC finds a threshold on the scores of the model on the unlabeled samples and predicts the coverage level by utilizing how the distribution of the scores with respect to this threshold changes across test distributions.
89
+ Even under more subtle distribution shifts such as subpopulation shifts (Santurkar et al., 2021), the achieved coverage can drop significantly. For example, APS calibrated to yield 1 − α = 0.9 coverage on the source distribution of the Living-17 BREEDS dataset only achieves a coverage of 0.68 on the target distribution.
90
+
91
+ The source and target distributions contain images of exclusively different breeds of animals while the animals' species is shared as the label (Santurkar et al., 2021).
92
+
93
Problem statement. Our goal is to recalibrate a conformal predictor on a new distribution $\mathcal{Q}$ based on unlabeled data. Given an unlabeled dataset $\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ sampled from the target distribution $\mathcal{Q}$, our goal is to provide an accurate estimate $\hat{\tau}^{\mathcal{Q}}$ for the threshold $\tau^{\mathcal{Q}}$. Recall that the threshold $\tau^{\mathcal{Q}}$ is such that the conformal predictor with set function $\mathcal{C}(\mathbf{x}, u, \tau^{\mathcal{Q}})$ achieves the desired coverage of $1-\alpha$ on the target distribution $\mathcal{Q}$. In other words, our goal is to estimate a threshold $\hat{\tau}^{\mathcal{Q}}$ so that the set $\mathcal{C}(\mathbf{x}, u, \hat{\tau}^{\mathcal{Q}})$ achieves close to the desired coverage of $1-\alpha$ on the target distribution, based on the unlabeled dataset only. In general, it is impossible to guarantee coverage, since conformal prediction relies on exchangeability assumptions which cannot be guaranteed in practice for new datasets (Vovk et al., 2005; Romano et al., 2020; Angelopoulos et al., 2020; Cauchois et al., 2020; Bates et al., 2021). However, we will see that we can consistently estimate the threshold $\tau^{\mathcal{Q}}$ for a variety of natural distribution shifts.
101
+
102
+ We refer to the difference between the target coverage of 1 − α and the actual coverage achieved on a given distribution without any recalibration efforts as the *coverage gap*. We assess how effective a recalibration method is based on the reduction of the coverage gap after recalibration.
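As a small sketch (the helper names are hypothetical), the achieved coverage and the coverage gap on a labeled evaluation set can be computed as follows.

```python
import numpy as np

def empirical_coverage(prediction_sets, labels):
    """Fraction of examples whose prediction set contains the true label."""
    return float(np.mean([y in s for s, y in zip(prediction_sets, labels)]))

def coverage_gap(prediction_sets, labels, alpha):
    """Difference between the target coverage 1 - alpha and the achieved coverage."""
    return (1.0 - alpha) - empirical_coverage(prediction_sets, labels)
```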
103
+
104
+ ## 4 Methods
105
+
106
+ In this section we introduce our calibration method, termed Quantile Thresholded Confidence (QTC), along with baseline methods we consider in our experiments.
107
+
108
+ ## 4.1 Quantile Thresholded Confidence
109
+
110
Consider a conformal predictor with threshold $\tau^{\mathcal{P}}_{\alpha}$ calibrated so that the conformal predictor achieves coverage $1-\alpha$ on the source distribution $\mathcal{P}$. On a different distribution $\mathcal{Q}$ the coverage of the conformal predictor is off. But there is a value $\beta$ such that, if we calibrate the conformal predictor on the *source distribution* using the value $\beta$ instead of $\alpha$, it achieves $1-\alpha$ coverage on the *target distribution*, i.e., the corresponding thresholds obey $\tau^{\mathcal{P}}_{\beta} = \tau^{\mathcal{Q}}_{\alpha}$.

Our method first estimates the value $\beta$ based on unlabeled examples. From the estimate $\hat{\beta}$, we estimate $\tau^{\mathcal{Q}}_{\alpha}$ by computing the threshold $\tau^{\mathcal{P}}_{\hat{\beta}}$, i.e., by calibrating the conformal predictor on the source calibration set using $\hat{\beta}$. This yields a threshold close to the desired one, i.e., $\tau^{\mathcal{P}}_{\hat{\beta}} \approx \tau^{\mathcal{Q}}_{\alpha}$.
121
+
122
Step 1, estimation of $\beta$: We are given a labeled source dataset $\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}$ and an unlabeled target dataset $\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}$. Our estimate of $\beta$ relies on the quantile function
128
+
129
$$q(\mathcal{D},c)=\inf\left\{p\colon\frac{1}{|\mathcal{D}|}\sum_{\mathbf{x}\in\mathcal{D}}\mathbbm{1}_{\{s(\pi(\mathbf{x}))<p\}}\geq c\right\}.\tag{5}$$
130
+
131
The quantile function depends on the classifier's predictions through a score function $s(\pi(\mathbf{x})) = \max_{\ell} \pi_{\ell}(\mathbf{x})$, which we take as the largest softmax score of the classifier's predictions. Here, $\mathcal{D}$ is a set of unlabeled examples and $c \in [0, 1]$ is a scalar. Our method first identifies a threshold based on the unlabeled target dataset $\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}$ for a desired coverage level $\alpha$ in expression (5) by computing $q(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}, \alpha)$. Since this process is identical to finding the $\alpha$-th quantile of the scores on the dataset, we dub the method Quantile Thresholded Confidence (QTC). QTC estimates $\beta$ as
139
+
140
$$\beta_{\mathrm{QTC}}=\min(\beta_{\mathrm{QTC\text{-}T}},\beta_{\mathrm{QTC\text{-}S}}),\tag{6}$$

where the QTC-Target and QTC-Source estimates are

$$\beta_{\mathrm{QTC\text{-}T}}(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}})=\frac{1}{|\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}}\mathbbm{1}_{\{s(\pi(\mathbf{x}))<q(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}},\alpha)\}},\tag{7}$$

$$\beta_{\mathrm{QTC\text{-}S}}(\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}})=1-\frac{1}{|\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{Q}}_{\mathrm{tst}}}\mathbbm{1}_{\{s(\pi(\mathbf{x}))<q(\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}},1-\alpha)\}}.\tag{8}$$
149
+ We consider two estimates for β, and aggregate them to a single value by taking the minimum of the two.
150
+
151
+ This yields best performance, as demonstrated by studying the three versions of QTC, corresponding to the three estimates (6), (7), and (8).
152
+
153
The reason for having two estimates and aggregating them is as follows. DNNs have a tendency to be over-confident in their predictions (Guo et al., 2017). If the distribution of the softmax scores over the dataset is not sufficiently smooth in the lower-confidence regime, the QTC-T estimate might be inaccurate. QTC-S, which operates in the higher-confidence regime, then provides a better estimate. The minimum of the two provides a good estimate in both the high- and low-confidence regions.
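The sketch below (hypothetical variable names; NumPy arrays of softmax scores assumed, with ties among scores ignored) illustrates Step 1: the quantile threshold of expression (5) is computed on the unlabeled target scores and on the source scores, and the QTC-T, QTC-S, and aggregated estimates of expressions (7), (8), and (6) follow.

```python
import numpy as np

def qtc_quantile(max_scores, c):
    """q(D, c) of expression (5): smallest score value such that at least a
    c-fraction of the scores lies below it (ties ignored)."""
    s = np.sort(max_scores)
    k = min(int(np.ceil(c * len(s))), len(s) - 1)
    return s[k]

def qtc_estimate_beta(softmax_cal, softmax_tst, alpha):
    """QTC estimates of beta from expressions (6)-(8).

    softmax_cal: softmax scores on the labeled source calibration set D^P_cal.
    softmax_tst: softmax scores on the unlabeled target test set D^Q_tst."""
    s_cal = softmax_cal.max(axis=1)                  # score s(pi(x)) = max_l pi_l(x)
    s_tst = softmax_tst.max(axis=1)
    beta_t = np.mean(s_cal < qtc_quantile(s_tst, alpha))            # QTC-T, (7)
    beta_s = 1.0 - np.mean(s_tst < qtc_quantile(s_cal, 1 - alpha))  # QTC-S, (8)
    return min(beta_t, beta_s), beta_t, beta_s                      # QTC, (6)
```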
154
+
155
+ The motivation behind QTC is that we essentially map the quantile function conformal prediction uses, which relies on the labels, to the quantile function of QTC, which does not require labels. While this mapping is not guaranteed to be preserved under distribution shift, we have observed that it works very well in practice and provably works in the theoretical setting that we consider.
156
+
157
If there is no distribution shift between the source and target, QTC recovers the original $\alpha$. That is, both the QTC-T and QTC-S estimates of $\beta$ are asymptotically equal to $\alpha$.
158
+
159
+ To see this more clearly, note that we can insert the definition of q in (5) in the RHS of the equations (7), (8).
160
+
161
+ As n → ∞, the sums over the datasets converge to the expectations which are equal when no distribution shift is present.
162
+
163
Step 2, estimation of the threshold $\tau^{\mathcal{Q}}_{\alpha}$ **based on** $\beta$: QTC predicts the conformal threshold $\tau^{\mathcal{Q}}_{\alpha}$ by conformal calibration with target value $\beta_{\mathrm{QTC}}$. Specifically, we calibrate the conformal predictor on the dataset $\mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}$ as
167
+
168
+ $$\tau_{\rm QTC}=\inf\left\{\tau:|\{i:y_{i}\in{\cal C}({\bf x}_{i},u_{i},\tau)\}|\geq(1-\beta_{\rm QTC})(|{\cal D}_{\rm cal}^{p}|+1)\right\},\tag{9}$$
169
+
170
which yields the estimate $\tau_{\mathrm{QTC}}$ for $\tau^{\mathcal{Q}}_{\alpha}$. QTC is illustrated in Figure 1.
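A minimal sketch of Step 2 (specialized to TPS, with hypothetical array names, and reusing `qtc_estimate_beta` from the sketch above) recalibrates on the labeled source set at level $\beta_{\mathrm{QTC}}$ as in expression (9).

```python
import numpy as np

def qtc_recalibrate_tps(softmax_cal, labels_cal, beta_qtc):
    """Expression (9) specialized to TPS: calibrate on the labeled source set at
    level beta_QTC, yielding the estimate tau_QTC of the target threshold."""
    n = len(labels_cal)
    scores = 1.0 - softmax_cal[np.arange(n), labels_cal]
    k = int(np.ceil((1.0 - beta_qtc) * (n + 1)))
    return 1.0 if k > n else np.sort(scores)[k - 1]

# End-to-end usage with the earlier sketch (hypothetical arrays):
# beta_qtc, _, _ = qtc_estimate_beta(softmax_cal, softmax_tst, alpha=0.1)
# tau_qtc = qtc_recalibrate_tps(softmax_cal, labels_cal, beta_qtc)
# prediction_sets = [np.where(row >= 1.0 - tau_qtc)[0] for row in softmax_tst]
```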
172
+
173
QTC is inspired by a method for predicting a classifier's accuracy from Garg et al. (2022). Their method finds a threshold on the scores matching the accuracy of a classifier on the dataset and uses it to predict the accuracy on other datasets. In contrast, we predict the threshold of a conformal predictor, and our method is based on predicting an auxiliary parameter $\beta$ instead of a threshold directly.
174
+
175
+ ## 4.2 Baseline Methods
176
+
177
+ We consider regression-based methods as baselines. Regression-based methods have been used for predicting classification accuracy, assuming a correlation between the classification accuracy and a feature (e.g., average confidence) across different distributions (Deng et al., 2021; Deng & Zheng, 2021; Guillory et al., 2021). We consider regression-based methods as baselines for predicting the conformal threshold on a target distribution that would achieve 1 − α coverage. We train the regression-based methods on a dataset consisting of synthetically generated distributions given a source distribution (e.g. ImageNet-C from ImageNet) with the goal of predicting the conformal threshold for a test dataset sampled from a natural distribution.
178
+
179
Let $\phi_{\pi}(\mathcal{D})\colon \mathbb{R}^{L} \to \mathbb{R}^{d}$ be the feature extractor part of a neural network that maps the softmax scores of the classifier to the features for a given dataset $\mathcal{D}$. A simple example is the one-dimensional feature ($d = 1$) extracted by computing the average confidence of a given classifier across the examples of a given dataset.
183
+
184
+ We fit a regression function fθ parameterized by different feature extractors ϕπ by minimizing the mean squared error between the output and the calibrated threshold τ across the distributions as
185
+
186
$$\hat{\theta}=\arg\min_{\theta}\sum_{j}\left(f_{\theta}(\phi_{\pi}(\mathcal{D}_{j}))-\tau^{\mathcal{P}_{j}}\right)^{2}.\tag{10}$$
189
+ We consider the following choices for the feature extractor ϕπ (see App A.1 for details):
190
+ - *Average confidence regression (ACR)*: The average confidence of the classifier across the entire dataset.
191
+
192
+ - *Difference of confidence regression (DCR)* (Guillory et al., 2021): The average confidence of the classifier across the entire dataset offset by the average confidence on the source dataset. Prediction is also for the offset target τ − τ P . DCR performs better than ACR for predicting a classifier's accuracy (Guillory et al., 2021).
193
+
194
+ - *Confidence histogram-density regression (CHR)*: Normalized histogram density of the classifier confidence across the dataset, where the feature dimension is controlled by a hyperparameter that determines the number of histogram bins in the probability range [0, 1]. Neural networks tend to be overconfident in their prediction which heavily skews the histogram densities to the last bin. We also therefore consider a variant of CHR, *dubbed CHR-*, where we drop the last bin of the histogram as a feature.
195
+
196
+ - *Predicted class-wise average confidence regression (PCR)*: Class-wise (by predicted class) average confidence of the classifier across the samples.
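To make the regression baselines concrete, the sketch below fits the simplest variant, ACR, following expression (10). It is only an illustration: the array names are hypothetical, and a linear model from scikit-learn stands in for the 4-layer MLP regressor used in the experiments (see Section 5).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def average_confidence(softmax_scores):
    """ACR feature phi_pi(D): mean top softmax score over a dataset."""
    return softmax_scores.max(axis=1).mean()

def fit_acr_regressor(synthetic_score_sets, thresholds):
    """Fit f_theta of expression (10) on synthetically shifted distributions.

    synthetic_score_sets: list of softmax-score arrays, one per distribution D_j.
    thresholds:           conformally calibrated thresholds tau^{P_j}, one per D_j."""
    features = np.array([[average_confidence(s)] for s in synthetic_score_sets])
    return LinearRegression().fit(features, np.asarray(thresholds))

# Predicting the threshold for a new, unlabeled target dataset:
# regressor = fit_acr_regressor(imagenet_c_scores, imagenet_c_thresholds)
# tau_hat = regressor.predict([[average_confidence(softmax_tst)]])[0]
```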
197
+
198
+ ## 5 Experiments
199
+
200
We study the performance of QTC on natural distribution shifts and on an artificial covariate shift.
201
+
202
+ ## 5.1 Natural Distribution Shifts
203
+
204
+ We consider the following choices for the source distribution P and associated natural distribution shifts:
205
+ ImageNet (Deng et al., 2009) distribution shifts: In our ImageNet experiments, ImageNet is the source distribution P and the following natural distribution shifts are the target distributions Q:
206
+ - **ImageNetV2** (Recht et al., 2019) was constructed by following the same procedure as for constructing and labeling the original ImageNet dataset. However, all standard models perform significantly worse on ImageNetV2 relative to the original ImageNet test set.
207
+
208
+ - **ImageNet-Sketch** (Wang et al., 2019) contains sketch-like images of the objects in the original ImageNet, but otherwise matches the original categories and scales.
209
+
210
+ ![6_image_0.png](6_image_0.png)
211
+
212
+ Figure 2: Coverage obtained by TPS for a desired coverage of 1 − α = 0.9 on the target distribution Q after recalibration using the unlabeled samples from Q for various recalibration methods. The dotted line is the coverage without recalibration, and the dashed line is the target coverage 1 − α = 0.9. The figure shows that QTC-T and QTC-S almost fully close the coverage gap across ImageNet and BREEDS test distribution shifts, corresponding to varying severities.
213
+
214
+ ![6_image_1.png](6_image_1.png)
215
+
216
+ ![6_image_2.png](6_image_2.png)
217
+
218
+ Figure 3: Coverage obtained by TPS and APS on the target distribution Q as a function of the desired coverage (i.e., 1 − α) after recalibration with the respective prediction method. For regression methods, only the best performing method, CHR-, is shown. QTC significantly closes the coverage gap across the range of 1 − α, while CHR- yields inconsistent or insufficient performance improvements.
219
- **ImageNet-R** (Hendrycks et al., 2021) contains artwork images of the ImageNet class objects found on the web. ImageNet-R only contains images for a 200-class subset of the original ImageNet. We do not limit our experiments to this subset but instead consider the adverse setting of calibrating on all 1000 classes, since our main goal is to provide an end-to-end solution for recalibration of conformal predictors, and we are interested in how well our method performs under adverse conditions, such as dataset imbalance, that can be encountered in practice.
220
+
221
BREEDS (Santurkar et al., 2021) distribution shifts: The BREEDS datasets feature *sub-population* shifts from the training set to the test set. The BREEDS datasets were constructed using the existing ImageNet images, but with different classes. BREEDS utilizes the hierarchical WordNet structure of the classes to choose a parent class that makes the original ImageNet classes the leaves. For example, in the BREEDS Living-17 dataset, one of the classes is *domestic cat*. This is a parent class of several ImageNet classes, namely *tiger cat, Egyptian cat, Persian cat, and Siamese cat*. BREEDS induces a subpopulation shift from the source distribution to the target by assigning these leaf classes to either the source or the target. For example, the images in the source dataset of Living-17 under the *domestic cat* class are those of either *tiger cats* or *Egyptian cats*, whereas those in the target are of either *Persian cats* or *Siamese cats*. Therefore, despite having the same label (*domestic cat*), the source and target distributions semantically differ due to the differences between the breeds, which induces a subpopulation shift.
223
+
224
We consider three BREEDS datasets: Entity-13, Entity-30, and Living-17, which are named using the convention *theme/object type*–*#classes*.

Experimental procedure. For the ImageNet experiments we use a ResNet-50 and a DenseNet-121 pretrained on the ImageNet training set. For the BREEDS experiments, we train a ResNet-18 model from scratch. In both cases, the classifiers only see examples from the source distribution.
225
+
226
For all experiments, we first calibrate the conformal predictor on the source distribution $\mathcal{P}$ to find the cutoff threshold $\tau^{\mathcal{P}}$. For QTC and its variants, we find the threshold $q$ using expression (5). For the regression methods, we use the ImageNet-C dataset (Hendrycks & Dietterich, 2019) as the source of synthetic distributions, find the cutoff threshold $\tau$ for each of the distributions, and fit a regressor by minimizing the loss (10). For the regression function we use a 4-layer MLP with ReLU activations. ImageNet-C is obtained by synthetically perturbing the images of ImageNet-Val with 18 different types of perturbations at 5 levels of severity, resulting in 90 distinct distributions.

Recalibration experiments for a fixed target coverage. We first evaluate the recalibration methods for a fixed target coverage of $1-\alpha = 0.9$. The results in Figure 2 for recalibrating TPS show that QTC reduces the coverage gap much more than regression methods, and even closes it in some cases.
228
+
229
We also display QTC-T and QTC-S as ablation studies. Here it can be seen that sometimes QTC-T and sometimes QTC-S performs best, which is why combining them is necessary. The different performance of QTC-T and QTC-S can be attributed to the difference in the type of shift (e.g., semantic vs. subpopulation) between ImageNet and BREEDS. Note that QTC-T operates on the regime of samples with lower confidence, whereas QTC-S operates on the higher-confidence regime. Therefore, QTC-T may underperform relative to QTC-S for datasets consisting of fewer, more distinct classes like BREEDS, for which a well-trained classifier tends to assign high confidence to its predictions.
232
+
233
+ Recalibration experiments for different target coverage levels. The coverage gap (i.e., the difference of achieved coverage and targeted coverage) varies across the desired coverage level 1 − α. We therefore next evaluate the performance as a function of the desired coverage level.
234
+
235
+ Figure 3 shows the coverage obtained after recalibration with TPS and APS for different values of 1 − α for the natural distribution shifts from ImageNet. QTC closes the coverage gap significantly for all choices of 1 − α, whereas the best performing regression-based baseline method, CHR-, fails to significantly improve the coverage gap consistently across all choices of 1 − α.
236
+
237
+ ## 5.2 Comparison To Covariate Shift Based Methods
238
+
239
+ QTC does not require labeled data from the target distribution at training or inference time. Existing methods that aim to measure the amount of covariate shift based on unlabeled examples also improve the robustness of conformal prediction, but rely on labeled examples from the target domain (Tibshirani et al.,
240
+ 2019; Park et al., 2022). Here, we compare the performance of QTC to that of covariate shift based methods and show that QTC outperforms the state-of-the-art when labeled data is not available during training, and performs only marginally worse if labeled data is available.
241
+
242
+ ![8_image_0.png](8_image_0.png)
243
+
244
Figure 4: Coverage (**left**) and the average set size (**right**) obtained by TPS on the target $\mathcal{Q}$ = DomainNet-Infograph for various settings of $1-\alpha$. For the setting where all domains are available for the discriminator (**left**), WSCI closes the coverage gap while QTC considerably improves it; when only DomainNet-Real is available, QTC slightly outperforms WSCI. In both settings, PS-W fails by constructing uninformatively large confidence sets for the range $1-\alpha > 0.9$.
246
+ Under a covariate shift, the conditional distribution of the label y given the feature vector x is fixed but the marginal distribution of the feature vectors differ:
247
+
248
$$\text{source: }(\mathbf{x},y)\sim\mathcal{P}=p_{\mathcal{P}}(\mathbf{x})\times p(y|\mathbf{x}),\qquad\text{target: }(\mathbf{x},y)\sim\mathcal{Q}=p_{\mathcal{Q}}(\mathbf{x})\times p(y|\mathbf{x}),$$
249
where $p_{\mathcal{P}}(\mathbf{x})$ and $p_{\mathcal{Q}}(\mathbf{x})$ are the marginal PDFs of the features $\mathbf{x}$, and $p(y|\mathbf{x})$ is the conditional PDF of the label $y$. In order to account for a covariate shift, Tibshirani et al. (2019); Park et al. (2022) utilize an approach called weighted conformal calibration. Weighted conformal calibration uses the likelihood ratio of the covariate distributions, i.e., the importance weights $w(\mathbf{x}) = p_{\mathcal{Q}}(\mathbf{x})/p_{\mathcal{P}}(\mathbf{x})$, to weigh the scores used for the set generating function of the conformal predictor for each sample $(\mathbf{x}, y) \in \mathcal{D}^{\mathcal{P}}_{\mathrm{cal}}$. A conformal predictor calibrated on a source calibration set with the true importance weights for a target distribution is guaranteed to achieve the desired coverage on the target, see Tibshirani et al. (2019, Cor. 1). In practice, the importance weights are not known and are therefore estimated heuristically.
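For reference, a simplified sketch of weighted conformal calibration is given below. It assumes the importance weights have already been estimated (e.g., with a domain discriminator) and, for brevity, uses a single weighted quantile instead of the per-test-point weighting of Tibshirani et al. (2019); it is an illustration rather than a reference implementation.

```python
import numpy as np

def weighted_conformal_threshold(cal_scores, cal_weights, test_weight, alpha):
    """Simplified weighted conformal calibration: the threshold is the
    (1 - alpha)-quantile of the calibration nonconformity scores under the
    normalized importance weights w(x) = p_Q(x) / p_P(x), with the remaining
    mass test_weight placed at +infinity for the test point."""
    order = np.argsort(cal_scores)
    scores = np.asarray(cal_scores)[order]
    weights = np.asarray(cal_weights)[order]
    cdf = np.cumsum(weights) / (weights.sum() + test_weight)
    idx = np.searchsorted(cdf, 1.0 - alpha)
    return np.inf if idx >= len(scores) else scores[idx]
```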
251
+
252
Covariate shift is not well defined for complex tasks such as image classification. We therefore follow the experimental setup of Park et al. (2022) and consider a backbone ResNet-101 classifier, trained using unsupervised domain adaptation on training sets sampled from both the source and target distribution, as well as an auxiliary classifier (discriminator) $g$ that yields probability estimates of membership between the two for a given sample. For the *weighted split conformal inference* (WSCI) method of Tibshirani et al. (2019), we estimate the importance weights using this discriminator $g$, and for the PAC prediction sets method of Park et al. (2022) based on rejection sampling (PS-W), using histogram density estimation over the probability estimates. We use TPS as the conformal predictor.
254
+
255
+ We consider the DomainNet distribution shift problem (Peng et al., 2019) and choose *DomainNet-Infograph* as the target distribution since the coverage gap is insignificant for the others (see Park et al. (2022, Table 1)).
256
+
257
We consider two scenarios, for both of which all six DomainNet domains, i.e., *DomainNet-Sketch, DomainNet-Clipart, DomainNet-Painting, DomainNet-Quickdraw, DomainNet-Real, and DomainNet-Infograph*, are available during training. In the first scenario all domains are also available at inference, whereas in the second scenario, analogous to the ImageNet setup, we only have access to the examples from *DomainNet-Real* (source) and *DomainNet-Infograph* (target).
259
+
260
+ The results in Figure 4 show that when the source includes all the domains, WSCI outperforms other methods.
261
+
262
+ However, when only DomainNet-Real is available for the source at calibration time, QTC slightly outperforms WSCI. In both settings, PS-W fails if α is chosen such that 1 − α > 0.9, by constructing uninformatively large confidence sets that tend to contain all possible labels. On the other hand, QTC and WSCI tend to construct similarly sized confidence sets consistently across the range of 1 − α. Note that while QTC considerably closes the coverage gap in both setups, QTC-S fails to improve the coverage gap. This might be due to the fact that
263
+
264
+ ![9_image_0.png](9_image_0.png)
265
+
266
+ Figure 5: Example source and target distributions P and Q for the binary classification model, and a classifier with winv, wsp = 1. The decision boundary is shown with a faded dotted line. The correlation between the feature xsp and the label y is higher for the source than target (p P > pQ).
267
+ ResNet-101 trained with domain adaptation tends to yield very high confidence across all examples. While a separate discriminator that uses the representations of the ResNet-101 before the fully-connected linear layer is utilized for the covariate shift based methods, this is not the case for QTC and its variants. Therefore, the threshold found by QTC-S tends to be very close or even equal to 1.0, hindering the performance.
268
+
269
+ ## 6 Theoretical Results
270
+
271
+ We consider a simple binary classification distribution shift model from Nagarajan et al. (2021); Garg et al.
272
+
273
+ (2022), and adapt the analysis from Garg et al. (2022) to show that recalibrating provably succeeds within this model. Specifically, we show that the conformal predictor TPS with QTC-T yields the desired coverage of 1 − α on the target distribution based on unlabeled examples.
274
+
275
The distribution shift model from Nagarajan et al. (2021) is as follows. Consider a binary classification problem with response $y \in \{-1, 1\}$ and with two features $\mathbf{x} = [x_{\mathrm{inv}}, x_{\mathrm{sp}}] \in \mathbb{R}^{2}$, an invariant one and a spuriously correlated one. The source and target distributions $\mathcal{P}$ and $\mathcal{Q}$ over the feature vector and label are defined as follows. The label $y$ is uniformly distributed over $\{-1, 1\}$. The invariant fully-predictive feature $x_{\mathrm{inv}}$ is uniformly distributed in an interval determined by the constants $c > \gamma \geq 0$, with the interval being conditional on $y$:
277
+
278
$$x_{\mathrm{inv}}\,|\,y\sim\begin{cases}U\left[\gamma,c\right]&\text{if}\quad y=1\\ U\left[-c,-\gamma\right]&\text{if}\quad y=-1\end{cases}.\tag{11}$$
280
+
281
The spurious feature $x_{\mathrm{sp}}$ is correlated with the response $y$ such that $\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[x_{\mathrm{sp}}\cdot y>0\right]=p^{\mathcal{P}}$, where $p^{\mathcal{P}}\in(0.5,1.0)$ for some joint distribution $\mathcal{P}$. A distribution shift is modeled by simulating target data with a different degree of spurious correlation such that $\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[x_{\mathrm{sp}}\cdot y>0\right]=p^{\mathcal{Q}}$, where $p^{\mathcal{Q}}\in[0,1]$. There is a distribution shift from source to target when $p^{\mathcal{P}}\neq p^{\mathcal{Q}}$. Two example distributions $\mathcal{P}$ and $\mathcal{Q}$ are illustrated in Figure 5.
282
+
283
We consider a logistic regression classifier that predicts class probability estimates for the classes $y=-1$ and $y=1$ as

$$f(\mathbf{x})=\left[\frac{1}{1+e^{\mathbf{w}^{T}\mathbf{x}}},\frac{e^{\mathbf{w}^{T}\mathbf{x}}}{1+e^{\mathbf{w}^{T}\mathbf{x}}}\right],$$

where $\mathbf{w}=[w_{\mathrm{inv}},w_{\mathrm{sp}}]\in\mathbb{R}^{2}$. The classifier with $w_{\mathrm{inv}}>0$ and $w_{\mathrm{sp}}=0$ minimizes the misclassification error across all choices of distributions $\mathcal{P}$ and $\mathcal{Q}$ (i.e., across all choices of $p$).

However, a classifier learned by minimizing the empirical logistic loss via gradient descent depends on both the invariant feature $x_{\mathrm{inv}}$ and the spuriously correlated feature $x_{\mathrm{sp}}$, i.e., $w_{\mathrm{sp}}\neq 0$, due to the geometric skews on the finite data and statistical skews of the optimization with finite gradient descent steps (Nagarajan et al., 2021).
288
+
289
For the logistic regression classifier, TPS recalibrated with QTC-T provably succeeds:

Theorem 6.1 (Informal). *Consider the logistic regression classifier for the binary classification problem described above with $w_{\mathrm{inv}} > 0$, $w_{\mathrm{sp}} \neq 0$, let $n$ be the number of samples for the source and target datasets, and let $\alpha \in (0, \epsilon)$ be a user-defined value, where $\epsilon$ is the error rate of the classifier on the source. The coverage achieved on the target by recalibrating TPS on the source with the QTC estimate obtained in (7), by finding the QTC threshold on the target as in (5), converges to $1-\alpha$ as $n \to \infty$ with high probability.*
291
Regarding the assumption on $\alpha$: a value of $\alpha$ that is larger than the error rate of the classifier does not make sense, as it would result in empty confidence sets for a portion of the examples in the dataset.
292
+
293
+ In order to understand the intuition behind Theorem 6.1, we first explain how the coverage is off under a distribution shift in this model. Consider a classifier that depends positively on the spurious feature (i.e.,
294
+ wsp > 0). When the spurious correlation is decreased from the source distribution to the target, the error rate of the classifier increases. TPS calibrated on the source samples finds a threshold τ such that the prediction sets yield 1 − α coverage on the source dataset as n → ∞. In other words, the fraction of misclassified points for which the model confidence is larger than the threshold τ is equal to α on the source. As the spurious correlation decreases and the error rate increases from source to target, the fraction of misclassified points for which the model confidence is larger than the threshold τ surpasses α, leading to a gap in targeted and actual coverage.
295
+
296
+ Now, we remark on how QTC recalibrates and ensures the target coverage is met. Note that there exists an unknown coverage level 1 − β that can be used to calibrate TPS on the source distribution such that it yields 1 − α coverage on the target. Theorem 6.1 guarantees that QTC correctly estimates β and therefore recalibration of the conformal predictor using QTC yields the desired coverage level of 1 − α on the target.
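The self-contained simulation below illustrates this behavior on the toy model (it is not the paper's code; the magnitude distribution of $x_{\mathrm{sp}}$ and the classifier weights are assumptions made for the example): TPS is calibrated on a source with strong spurious correlation, and the target coverage is compared before and after QTC-T recalibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, p, gamma=0.05, c=1.0):
    # Toy model: y ~ Unif{-1, 1}; x_inv | y as in (11); x_sp agrees in sign with y
    # with probability p (its magnitude is taken U[0, c] purely for illustration).
    y = rng.choice([-1, 1], size=n)
    x_inv = y * rng.uniform(gamma, c, size=n)
    sign = np.where(rng.random(n) < p, y, -y)
    return np.stack([x_inv, sign * rng.uniform(0.0, c, size=n)], axis=1), y

def probs(X, w=np.array([1.0, 2.0])):
    # Logistic classifier f(x); column 1 is the estimate of P(y = +1 | x).
    p1 = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.stack([1.0 - p1, p1], axis=1)

def tps_threshold(true_class_probs, level):
    # Smallest tau achieving the requested empirical coverage, as in expression (2).
    s = np.sort(1.0 - true_class_probs)
    k = min(int(np.ceil(level * (len(s) + 1))), len(s))
    return s[k - 1]

alpha, n = 0.1, 100_000
Xs, ys = sample(n, p=0.8)                      # source: strong spurious correlation
Xt, yt = sample(n, p=0.5)                      # target: spurious feature uninformative
Ps, Pt = probs(Xs), probs(Xt)
py_s = Ps[np.arange(n), (ys + 1) // 2]         # pi_y(x) on the source
py_t = Pt[np.arange(n), (yt + 1) // 2]         # pi_y(x) on the target

tau = tps_threshold(py_s, 1 - alpha)                      # vanilla TPS calibration
q = np.sort(Pt.max(axis=1))[int(np.ceil(alpha * n))]      # q(D^Q_tst, alpha), expression (5)
beta = np.mean(Ps.max(axis=1) < q)                        # QTC-T estimate, expression (7)
tau_qtc = tps_threshold(py_s, 1 - beta)                   # recalibration, expression (9)

print("target coverage without recalibration:", np.mean(1.0 - py_t <= tau))
print("target coverage after QTC-T recalibration:", np.mean(1.0 - py_t <= tau_qtc))
```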
297
+
298
+ ## References
299
+
300
+ Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. *International Conference on Learning Representations (ICLR)*,
301
+ 2020.
302
+
303
+ Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability. *The Annals of Statistics*, 2023.
304
+
305
+ Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. Distribution-free, risk-controlling prediction sets. *Journal of the ACM*, 2021.
306
+
307
+ Maxime Cauchois, Suyash Gupta, Alnur Ali, and John C. Duchi. Robust validation: Confident predictions even when distributions shift. *arXiv:2008.04267 [cs, stat]*, 2020.
308
+
309
+ Mayee Chen, Karan Goel, and Nimit Sohoni. Mandoline: Model evaluation under distribution shift.
310
+
311
+ International Conference on Machine Learning (ICML), 2021.
312
+
313
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2009.
314
+
315
+ Weijian Deng and Liang Zheng. Are labels always necessary for classifier accuracy evaluation? Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
316
+
317
+ Weijian Deng, Stephen Gould, and Liang Zheng. What does rotation prediction tell us about classifier accuracy under varying testing environments? *International Conference on Machine Learning (ICML)*,
318
+ 2021.
319
+
320
+ Clara Fannjiang, Stephen Bates, Anastasios N. Angelopoulos, Jennifer Listgarten, and Michael I. Jordan.
321
+
322
+ Conformal prediction under feedback covariate shift for biomolecular design. *Proceedings of the National* Academy of Sciences, 2022.
323
+
324
+ Saurabh Garg, Sivaraman Balakrishnan, and Zachary C Lipton. Leveraging unlabeled data to predict out-of-distribution performance. *International Conference on Learning Representations (ICLR)*, 2022.
325
+
326
+ Asaf Gendler, Tsui-Wei Weng, Luca Daniel, and Yaniv Romano. Adversarially robust conformal prediction.
327
+
328
+ International Conference on Learning Representations (ICLR), 2022.
329
+
330
+ Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
331
+
332
+ Isaac Gibbs and Emmanuel Candès. Conformal inference for online prediction with arbitrary distribution shifts. *arXiv:2208.08401 [cs, stat]*, 2023.
333
+
334
+ Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, and Ludwig Schmidt. Predicting with confidence on unseen distributions. *IEEE International Conference on Computer Vision (ICCV)*, 2021.
335
+
336
+ Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On Calibration of Modern Neural Networks. In International Conference on Machine Learning. PMLR, 2017.
337
+
338
+ Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *International Conference on Learning Representations (ICLR)*, 2019.
339
+
340
+ Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. *IEEE International Conference on* Computer Vision (ICCV), 2021.
341
+
342
+ Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, and J. Zico Kolter. Assessing generalization of sgd via disagreement. *International Conference on Learning Representations (ICLR)*, 2022.
343
+
344
+ Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. Distribution-Free Predictive Inference For Regression. *Journal of the American Statistical Association (JASA)*, 2018.
345
+
346
+ Vaishnavh Nagarajan, Anders Andreassen, and Behnam Neyshabur. Understanding the failure modes of out-of-distribution generalization. *International Conference on Learning Representations (ICLR)*, 2021.
347
+
348
+ Sangdon Park, Edgar Dobriban, Insup Lee, and Osbert Bastani. Pac prediction sets under covariate shift. In International Conference on Learning Representations (ICLR), 2022.
349
+
350
+ Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In *IEEE International Conference on Computer Vision (ICCV)*, 2019.
351
+
352
+ Aleksandr Podkopaev and Aaditya Ramdas. Distribution-free uncertainty quantification for classification under label shift. *arXiv:2103.03323 [cs, stat]*, 2021.
353
+
354
+ Drew Prinster, Anqi Liu, and Suchi Saria. Jaws: Auditing predictive uncertainty under covariate shift.
355
+
356
+ Advances in Neural Information Processing Systems (NeurIPS), 2022.
357
+
358
+ Drew Prinster, Suchi Saria, and Anqi Liu. Jaws-x: Addressing efficiency bottlenecks of conformal prediction under standard and feedback covariate shift. In *International Conference on Machine Learning (ICML)*,
359
+ 2023.
360
+
361
+ Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to imagenet? *International Conference on Machine Learning (ICML)*, 2019.
362
+
363
+ Yaniv Romano, Matteo Sesia, and Emmanuel J. Candès. Classification with valid and adaptive coverage.
364
+
365
+ Advances in Neural Information Processing Systems (NeurIPS), 2020.
366
+
367
+ Mauricio Sadinle, Jing Lei, and Larry Wasserman. Least ambiguous set-valued classifiers with bounded error levels. *Journal of the American Statistical Association (JASA)*, 2019.
368
+
369
+ Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. Breeds: Benchmarks for subpopulation shift.
370
+
371
+ International Conference on Learning Representations (ICLR), 2021.
372
+
373
+ Ryan J. Tibshirani, Rina Foygel Barber, Emmanuel J. Candes, and Aaditya Ramdas. Conformal prediction under covariate shift. *Advances in Neural Information Processing Systems (NeurIPS)*, 2019.
374
+
375
+ Vladimir Vovk, A. Gammerman, and Glenn Shafer. *Algorithmic learning in a random world*. Springer, 2005.
376
+
377
+ Haohan Wang, Songwei Ge, Eric P Xing, and Zachary C Lipton. Learning robust global representations by penalizing local predictive power. *Advances in Neural Information Processing Systems (NeurIPS)*, 2019.
378
+
379
+ ## Appendix A Proof Of Theorem 6.1
380
+
381
+ In this section, we state and prove a formal version of Theorem 6.1. Our results rely on adapting the proof idea of Garg et al. (2022, Theorem 3) for predicting the classification accuracy of a model to our conformal prediction setup.
382
+
383
+ Recall that we consider a distribution shift model for a binary classification problem with an invariant predictive feature and a spuriously correlated feature, where a distribution shift is induced by the spurious feature of the target distribution being more or less correlated with the label than the source distribution (Nagarajan et al., 2021; Garg et al., 2022).
384
+
385
+ We consider a logistic regression classifier that outputs class probability estimates (softmax scores) for the two classes of y = −1 and y = +1 as
386
+
387
$$f(\mathbf{x})=\left[\frac{1}{1+e^{\mathbf{w}^{T}\mathbf{x}}},\frac{e^{\mathbf{w}^{T}\mathbf{x}}}{1+e^{\mathbf{w}^{T}\mathbf{x}}}\right],$$
388
+
389
where $\mathbf{w}=[w_{\mathrm{inv}},w_{\mathrm{sp}}]\in\mathbb{R}^{2}$. The classifier with $w_{\mathrm{inv}}>0$ and $w_{\mathrm{sp}}=0$ minimizes the misclassification error across all choices of distributions $\mathcal{P}$ and $\mathcal{Q}$ (i.e., across all choices of $p$). However, a classifier learned by minimizing the empirical logistic loss via gradient descent depends on both the invariant feature $x_{\mathrm{inv}}$ and the spuriously correlated feature $x_{\mathrm{sp}}$, i.e., $w_{\mathrm{sp}}\neq 0$, due to the geometric skews on the finite data and statistical skews of the optimization with finite gradient descent steps (Nagarajan et al., 2021).
391
+
392
+ In order to understand how geometric skews result in learning a classifier that depends on the spurious feature, suppose the probability that the spurious feature agrees with the label is high, i.e., p is close to 1.0. Note that in a finite-size training set drawn from this distribution, the fraction of samples for which the spurious feature disagrees with the label (i.e., xsp ̸= y) is small. Therefore, the margin on the invariant feature for these samples alone can be significantly larger than the actual margin γ of the underlying distribution. This implies that the max-margin classifier depends positively on the spurious feature, i.e., wsp > 0. Furthermore, we assume that winv > 0, which is required to obtain non-trivial performance (beating a random guess).
393
+
394
+ Conformal prediction in the distribution shift model. We consider the conformal prediction method TPS (Sadinle et al., 2019) applied to the linear classifier described above. While other conformal prediction methods such as APS and RAPS also work for this model, the smoothing induced by the randomization of the model scores used in those conformal predictors would introduce additional complexity to the analysis.
395
+
396
+ TPS also tends to be more efficient in that it yields smaller confidence sets compared to APS and RAPS at the same coverage level, see (Angelopoulos et al., 2020, Table 9). In the remaining part of this section, we establish Theorem 6.1, which states that TPS recalibrated on the source calibration set with QTC achieves the desired coverage of 1 − α on any target distribution that has a
397
+ (potentially) different correlation probability p for the spurious feature. We show this in two steps:
398
+
399
First, consider the oracle conformal predictor that is calibrated to achieve $\alpha$ miscoverage on the target distribution, i.e., the conformal predictor with threshold $\tau^{\mathcal{Q}}_{\alpha}$ chosen so that

$$\alpha=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[y\notin\mathcal{C}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})\right].\tag{12}$$

Define the miscoverage on the source distribution as

$$\beta=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y\notin\mathcal{C}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})\right].$$

From those two equations, it follows that a conformal predictor calibrated to achieve miscoverage $\beta$ on the source distribution $\mathcal{P}$ achieves the desired miscoverage of $\alpha$ on the target distribution, provided that the calibration sets are sufficiently large, which is assumed as we consider the case of $n \to \infty$.

Second, we provide a bound on the deviation of the QTC estimate from the true value of $\beta$. We show that in the infinite sample size case, the QTC estimate converges to the true value of $\beta$. These two steps prove Theorem 6.1.
412
+
413
+ Step 1: QTC relies on the fact that there exists an unknown β ∈ (0, 1) that can be used to calibrate TPS on the source distribution such that it yields 1 − α coverage on the target.
414
+
415
Here, we show that calibrating to achieve $1-\beta$ coverage on the source calibration set $\mathcal{D}^{\mathcal{P}}$ via computing the threshold (2) achieves $1-\alpha$ coverage on the target distribution $\mathcal{Q}$ as $n \to \infty$.
416
+
417
+ We utilize the following coverage guarantee of conformal predictors established by Vovk et al. (2005); Lei et al. (2018); Angelopoulos et al. (2020):
418
Lemma A.1. *(Lei et al., 2018, Thm. 2.2), (Angelopoulos et al., 2020, Thm. 1, Prop. 1)* Consider $(\mathbf{x}_i, y_i)$, $i=1,\ldots,n$, drawn i.i.d. from some distribution $\mathcal{P}$. Let $\mathcal{C}(\mathbf{x},\tau)$ be the conformal set generating function that satisfies the nesting property in $\tau$, i.e., $\mathcal{C}(\mathbf{x},\tau')\subseteq\mathcal{C}(\mathbf{x},\tau)$ if $\tau'\leq\tau$. Then, the conformal predictor calibrated by finding $\tau^{*}$ that achieves $1-\alpha$ coverage on the finite set $\{(\mathbf{x}_i,y_i)\}_{i=1}^{n}$ as in (2) achieves $1-\alpha$ coverage on distribution $\mathcal{P}$, i.e.,

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y\in\mathcal{C}(\mathbf{x},\tau^{*})\right]\geq1-\alpha.\tag{13}$$

Furthermore, assume that the variables $s_i=s(\mathbf{x}_i,y_i)=\inf\{\tau:y_i\in\mathcal{C}(\mathbf{x}_i,\tau)\}$ for $i=1,\ldots,n$ are distinct almost surely. Then, the coverage achieved by the calibrated conformal predictor with the set generating function $\mathcal{C}(\mathbf{x},\tau)=\{\ell\in\mathcal{Y}:s(\mathbf{x},\ell)\leq\tau\}$ is also accurate, in that it satisfies

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y\in\mathcal{C}(\mathbf{x},\tau^{*})\right]\leq1-\alpha+\frac{1}{n+1}.\tag{14}$$
431
+
432
Both the lower bound (13) and the upper bound (14) of Lemma A.1 apply to TPS in the context of the binary classification problem that we consider. To see this, we verify that TPS calibrated with the set generating function (19) satisfies both assumptions of Lemma A.1. First, note that TPS satisfies the nesting property, since we have $\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau')\subseteq\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau)$ for $\tau'\leq\tau$. Next, note that for TPS we have $s(\mathbf{x},y)=\pi_{y}(\mathbf{x})$. Further note that the linear logistic regression model we consider assigns a distinct score to each data point, and since the invariant feature $x_{\mathrm{inv}}$ is uniformly distributed in a continuous interval conditional on $y$, the variables $s_i$ are distinct almost surely.
435
+
436
Now, consider the oracle TPS threshold $\tau^{\mathcal{Q}}_{\alpha}$ that achieves $1-\alpha$ coverage, or equivalently $\alpha$ miscoverage, on the target distribution, i.e.,

$$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[y\notin\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})\right]=\alpha.\tag{15}$$
441
+
442
+ Next, note that y ∉ C^TPS(x, τ_α^Q) if and only if arg max_{j∈{0,1}} π_j(x) ≠ y and max_{j∈{0,1}} π_j(x) ≥ τ_α^Q. To see that, note that the confidence set returned by TPS is a singleton containing only the top prediction of the model when the confidence of this prediction is higher than the threshold τ_α^Q. Moreover, the confidence set returned by TPS for the binary classification problem above fails to contain the true label only when it is the singleton set of the model's top prediction and that prediction differs from the true label. Thus, equation (15) implies
446
+
447
+ $$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\tau_{\alpha}^{\mathcal{Q}}\right]=\alpha.\tag{16}$$
448
+
449
+ We define β as the miscoverage that the oracle TPS yields on the source distribution, i.e.,
450
+
451
+ $$\beta:=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\tau_{\alpha}^{\mathcal{Q}}\right].\tag{17}$$
453
+ We have β ≠ α if there is a distribution shift between the source and the target.
454
+
455
+ Consider the threshold τ̂_β^P found by calibrating TPS on the set D^P to achieve empirical coverage of 1 − β as in (2). TPS with the threshold τ̂_β^P achieves at least 1 − β coverage on the source distribution P as a result of Lemma A.1.
460
+
461
+ Moreover, combining (13) with (14) at n → ∞ yields exact coverage of 1 − β on the source distribution P.
462
+
463
+ Thus, we have
464
+
465
+ $$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\hat{\tau}_{\beta}^{\mathcal{P}}\right]=\beta.\tag{18}$$
469
+
470
+ Comparing equation (18) to the definition of β in equation (17) yields τ̂_β^P = τ_α^Q. Therefore, it follows that TPS calibrated to achieve 1 − β coverage on the source calibration set D^P as in (2) achieves exactly 1 − α coverage on the target distribution Q as n → ∞.
474
+
475
+ Step 2: In the second step, we show that QTC correctly estimates the value of β defined above. This is formalized by the lemma below.
476
+
477
+ Recall that the calibration of TPS entails identifying a cutoff threshold τ computed by the formula (2). The set generating function of TPS for the linear classification problem described above simplifies to
478
+
479
+ $$\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau)=\left\{j\in\{0,1\}\colon\pi_{j}(\mathbf{x})\geq1-\tau\right\},\tag{19}$$
481
+
482
+ where π_0(x) and π_1(x) are the first and second entries of π(x) as defined above.
483
+
484
+ We are only interested in the regime where the desired coverage level 1 − α is larger than the classifier's accuracy, or equivalently α < ϵ, with ϵ being the error rate of the classifier. This is because a trivial method that constructs confidence sets of size 1 for all samples (i.e., singleton sets containing only the predicted label) already achieves coverage of 1 − ϵ.
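+ 
+ For example, if the classifier has error rate ϵ = 0.05, singleton sets of the top prediction already achieve coverage 0.95, so the regime of interest is a target coverage 1 − α > 0.95, i.e., α < 0.05.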
485
+
486
+ Lemma A.2. Given the logistic regression classifier for the binary classification problem described above with any w_inv > 0, w_sp ≠ 0, assume that the threshold q for QTC is computed using a dataset D^Q consisting of n samples, sampled from some target distribution Q, such that
487
+
488
+ $$\frac{1}{|\mathcal{D}^{\mathcal{Q}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{Q}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}=\alpha.\tag{20}$$
490
+
491
+ Consider the oracle TPS conformal predictor with conformal threshold τ_α^Q, i.e., the predictor that achieves 1 − α coverage on the target distribution Q. Denote with 1 − β the coverage achieved on the source distribution P by this oracle TPS. Fix a δ > 0. The QTC estimate of the miscoverage β, denoted by
493
+
494
+ $$\beta_{\mathrm{QTC}}=\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}},\tag{21}$$
495
+
496
+ satisfies the following inequality with probability at least 1 − δ over a randomly drawn set of examples DQ
497
+
498
+ $$|\beta_{\mathrm{QTC}}-\beta|\leq\sqrt{\frac{2\log(16/\delta)}{n\cdot c_{\mathrm{sp}}}},\tag{22}$$
501
+
502
+ where c_sp = (1 − p^Q) · (1 − p^P)^2 if w_sp > 0 and c_sp = p^Q · (p^P)^2 otherwise.
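+ 
+ The quantities in (20) and (21) are straightforward to compute. The sketch below (Python; variable names are hypothetical and the snippet only illustrates the two definitions, it is not the paper's implementation) sets the QTC threshold q as the empirical α-quantile of the top-class confidences on the unlabeled target sample and then reads off β_QTC as the fraction of source examples whose top confidence falls below q.
+ 
+ ```python
+ import numpy as np
+ 
+ def qtc_threshold(target_probs, alpha):
+     """q such that a fraction alpha of the unlabeled target examples has
+     max-class confidence below q (cf. equation (20))."""
+     top_conf = target_probs.max(axis=1)          # max_j pi_j(x) per target example
+     return np.quantile(top_conf, alpha)
+ 
+ def qtc_estimate(source_probs, q):
+     """beta_QTC: fraction of source examples whose top confidence is below q
+     (cf. equation (21))."""
+     top_conf = source_probs.max(axis=1)
+     return float(np.mean(top_conf < q))
+ 
+ # Illustrative usage with random stand-in "probabilities"; in practice these
+ # come from the trained classifier evaluated on D^Q and D^P.
+ rng = np.random.default_rng(0)
+ target_probs = rng.dirichlet([1.0, 1.0], size=5000)
+ source_probs = rng.dirichlet([2.0, 1.0], size=5000)
+ q = qtc_threshold(target_probs, alpha=0.1)
+ beta_qtc = qtc_estimate(source_probs, q)
+ ```
+ 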
505
+ Proof. We adapt the proof idea of Garg et al. (2022, Theorem 3), which pertains to the problem of estimating the classification error of the classifier on the target, to estimating the source coverage of the oracle conformal predictor that achieves 1 − α coverage on the target distribution.
506
+
507
+ For notational convenience, we define the event that a sample (x, y) is not in the prediction set of the oracle TPS with conformal threshold τ_α^Q (i.e., y ∉ C^TPS(x, τ_α^Q)) as
510
+
511
+ $$\mathcal{E}_{mc}=\left\{y\notin\mathcal{C}^{\mathrm{TPS}}(\mathbf{x},\tau_{\alpha}^{\mathcal{Q}})\right\}=\left\{\arg\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\neq y\ \text{and}\ \max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)\geq\tau_{\alpha}^{\mathcal{Q}}\right\}.$$
512
+ The infinite sample size case (n → ∞). In this part we show that as n → ∞, the QTC estimate βQTC
513
+ found as in (21) converges to the source miscoverage β, to illustrate the proof idea. For n → ∞, the QTC
514
+ estimate βQTC in (21) becomes
515
+
516
+ $$\begin{aligned}\beta_{\mathrm{QTC}}&=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}\right]\\ &=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]\\ &=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathcal{E}_{mc}\right]\qquad(23)\\ &=\beta,\end{aligned}$$
518
+
519
+ where the last equality is the definition of β as given in equation (17). The critical step is equation (23),
520
+ which we establish in the remainder of this part of the proof.
521
+
522
+ First, we condition on the label y. Using the law of total probability, we get
523
+
524
+ $$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]&=\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=-1}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y=-1\right]+\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=+1}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[y=+1\right]\\ &\stackrel{(i)}{=}\frac{1}{2}\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=-1}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]+\frac{1}{2}\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y=+1}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\\ &\stackrel{(ii)}{=}\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right].\qquad(24)\end{aligned}$$
525
+ For equation (i), we used that y is uniformly distributed across {−1, 1}, and for equation (ii) that x is symmetrically distributed with respect to the label y. That is, we have x_inv ∼ U[−c, −γ] and P[x_sp = −1] = p if y = −1, and x_inv ∼ U[γ, c] and P[x_sp = +1] = p if y = +1, so the two probabilities in (i) are equal.
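+ 
+ For readers who want to reproduce this toy setting numerically, the following sketch (Python) samples from a source or target distribution of the form just described and evaluates the top-class confidence of a linear logistic model; c, γ, p and the logistic weights are free parameters here, not values prescribed by the paper.
+ 
+ ```python
+ import numpy as np
+ 
+ def sample_dataset(n, p, gamma, c, rng):
+     """Draw n samples: label y ~ Unif{-1, +1}, invariant feature x_inv uniform on
+     [gamma, c] with the sign of y, spurious feature x_sp = y with probability p."""
+     y = rng.choice([-1, 1], size=n)
+     x_inv = y * rng.uniform(gamma, c, size=n)
+     agree = rng.random(n) < p
+     x_sp = np.where(agree, y, -y)
+     return np.stack([x_inv, x_sp], axis=1), y
+ 
+ def top_confidence(X, w_inv, w_sp):
+     """max_j pi_j(x) for a linear logistic model with weights (w_inv, w_sp)."""
+     logits = w_inv * X[:, 0] + w_sp * X[:, 1]
+     p_pos = 1.0 / (1.0 + np.exp(-logits))        # pi_{+1}(x)
+     return np.maximum(p_pos, 1.0 - p_pos)
+ 
+ # Example: source with strong spurious correlation, target with a weaker one.
+ rng = np.random.default_rng(0)
+ X_src, y_src = sample_dataset(5000, p=0.9, gamma=0.1, c=1.0, rng=rng)
+ X_tgt, y_tgt = sample_dataset(5000, p=0.5, gamma=0.1, c=1.0, rng=rng)
+ conf_src = top_confidence(X_src, w_inv=1.0, w_sp=2.0)
+ conf_tgt = top_confidence(X_tgt, w_inv=1.0, w_sp=2.0)
+ ```
+ 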
526
+
527
+ We can further expand the expression for the probability P_{x∼P|y}[max_{j∈{0,1}} π_j(x) < q] by additionally conditioning on the spurious feature x_sp, which yields
529
+
530
+ $$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}=y\right]\\ &\quad+\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right].\qquad(25)\end{aligned}$$
531
+
532
+ In order to simplify the expression in the RHS of equation (25), we consider the cases of w_sp > 0 and w_sp < 0 separately. If w_sp > 0, we have max_{j∈{0,1}} π_j(x) > q if x_sp = y. Therefore, we have P_{x_inv∼P|y,x_sp=y}[max_{j∈{0,1}} π_j(x) < q] = 0 if w_sp > 0, and equation (25) simplifies to
534
+
535
+ $$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right]\\ &=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot(1-p^{\mathcal{P}}).\qquad(26)\end{aligned}$$
536
+
537
+ Similarly, if wsp < 0, we have maxj∈{0,1} πj (x) > q if xsp ̸= y, and equation (25) becomes
538
+
539
+ $$\begin{aligned}\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}=y\right]\\ &=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]\cdot p^{\mathcal{P}}.\qquad(27)\end{aligned}$$
540
+
541
+ We next follow the same steps that we carried out above for P_{(x,y)∼P}[max_{j∈{0,1}} π_j(x) < q] to rewrite the probability P_{(x,y)∼P}[E_mc]. If w_sp > 0, the classifier makes no errors if x_sp = y and only misclassifies a fraction of examples if x_sp ≠ y. Therefore, we have
546
+
547
+ $$\begin{aligned}\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\mathcal{E}_{mc}\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right]\\ &=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\cdot(1-p^{\mathcal{P}}).\qquad(28)\end{aligned}$$
549
+
550
+ Similarly, for wsp < 0, we have
551
+
552
+ $$\begin{aligned}\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[\mathcal{E}_{mc}\right]&=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\cdot\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}=y\right]\\ &=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\cdot p^{\mathcal{P}}.\qquad(29)\end{aligned}$$
558
+ Therefore, in order to establish equation (23), it suffices to show
559
+
560
+ $$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}>0,\ \text{and}\tag{30}$$
+ 
+ $$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{P}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}<0.\tag{31}$$
563
+ The feature xinv is identically distributed conditioned on y, i.e., uniformly distributed in the same interval, regardless of the underlying source or target distributions P and Q. Therefore, equations (30) and (31) are equivalent to
564
+
565
+ $$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}>0,\ \text{and}\tag{32}$$
+ 
+ $$\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}=y}\left[\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right]=\mathrm{P}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}=y}\left[\mathcal{E}_{mc}\right]\quad\text{for }w_{\mathrm{sp}}<0.\tag{33}$$
568
+ Equations (32) and (33) in turn follow from
569
+
570
+ $$\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[\max_{j\in\{0,1\}}\pi_{j}\left(\mathbf{x}\right)<q\right]=\mathrm{P}_{(\mathbf{x},y)\sim\mathcal{Q}}\left[\mathcal{E}_{mc}\right],\tag{34}$$
572
+
573
+ by carrying out the same steps that we carried out to expand the probabilities P_{x∼P|y}[max_{j∈{0,1}} π_j(x) < q] and P_{(x,y)∼P}[max_{j∈{0,1}} π_j(x) < q], starting from equation (24), to establish equations (30) and (31). Equation (34) in turn is a consequence of combining (16) with (20) at n → ∞. This establishes equation (23), as desired.
575
+
576
+ The finite sample case: We next show that the desired results approximately hold with high probability over a randomly drawn finite-sized set of examples DQ. We bound the difference between the LHS and RHS
577
+ of (32) and (33) with high probability.
578
+
579
+ First, consider the case of wsp > 0. Recall that for the case of wsp > 0 we are interested in the regime where xsp ̸= y. We denote the set of points in the target set DQ for which the spurious feature disagrees with the label as
580
+
581
+ $$\mathcal{X}_{D}=\{i=1,\ldots,n:x_{\mathrm{sp}}\neq y,\ (\mathbf{x}_{i},y_{i})\in\mathcal{D}^{\mathcal{Q}}\},$$
582
+ and denote the set of points for which the spurious feature agrees with the label as
583
+
584
+ $${\mathcal{X}}_{A}=\{i=1,\ldots,n:x_{\mathrm{sp}}=y,({\mathbf{x}}_{i},y_{i})\in{\mathcal{D}}^{\mathcal{Q}}\}.$$
585
+
586
+ Note that the QTC threshold q found on the entire set DQ as in (20) satisfies
587
+
588
+ $$\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x}_{i})<q\right\}}=\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\mathcal{E}_{mc}(\mathbf{x}_{i},y_{i})\right\}},\tag{35}$$
589
+
590
+ which follows from noting that the classifier only makes an error on the subset X_D if w_sp > 0, and therefore the only points for which the event E_mc is observed lie in the set X_D. Similarly, as established before in the infinite sample case, we have 1{max_{j∈{0,1}} π_j(x_i) < q} = 0 for all i ∈ X_A.
592
+
593
+ By the Dvoretzky-Kiefer-Wolfowitz-Massart (DKWM) inequality, for any q > 0 we have with probability at least 1 − δ/8
594
+
595
+ $$\left|\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x}_{i})<q\right\}}-\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]\right|\leq\sqrt{\frac{\log(16/\delta)}{2|\mathcal{X}_{D}|}}.\tag{36}$$
597
+ Plugging equation (35) into (36), we have with probability at least 1 − δ/8
598
+
599
+ $$\left|\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right|\leq\sqrt{\frac{\log(16/\delta)}{2|\mathcal{X}_{D}|}}.\tag{37}$$
600
+
601
+ We next bound the second term in the LHS of equation (37) from its expectation. Using Hoeffding's inequality, we have with probability at least 1 − δ/8
602
+
603
+ $$\left|\frac{1}{|\mathcal{X}_{D}|}\sum_{i\in\mathcal{X}_{D}}\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}-\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right]\right|\leq\sqrt{\frac{\log(16/\delta)}{2|\mathcal{X}_{D}|}}.\tag{38}$$
605
+
606
+ Combining equations (37) and (38) using the triangle inequality and union bound, we have with probability at least 1 − δ/4
607
+
608
+ $$\left|\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\mathbb{E}_{x_{\mathrm{inv}}\sim\mathcal{Q}|y,x_{\mathrm{sp}}\neq y}\left[\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right]\right|\leq\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}.\tag{39}$$
609
+
610
+ Recall that the invariant feature x_inv is uniformly distributed in the same interval conditioned on y regardless of the source or target distributions P and Q, and that P_{x_inv|y,x_sp=y}[max_{j∈{0,1}} π_j(x) < q] = P_{x_inv|y,x_sp=y}[arg max_{j∈{0,1}} π_j(x) ≠ y] = 0 for the case of w_sp > 0, as shown before. Therefore, by dividing both sides of (39) with 1/P_{x∼P|y}[x_sp ≠ y], we have with probability at least 1 − δ/4
614
+
615
+ $$\begin{aligned}\left|\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right]\right|&\leq\frac{1}{\mathrm{P}_{\mathbf{x}\sim\mathcal{P}|y}\left[x_{\mathrm{sp}}\neq y\right]}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}\\ &=\frac{1}{1-p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}.\qquad(40)\end{aligned}$$
619
+
620
+ For the case of w_sp < 0, we can show an analogous result by noting that the above results can be shown on the set X_A, where x_sp = y. Specifically, noting that (1/|X_A|) Σ_{i∈X_A} 1{max_{j∈{0,1}} π_j(x_i) < q} = (1/|X_A|) Σ_{i∈X_A} 1{E_mc} if w_sp < 0, and following exactly the same steps from equation (35) onward that lead to equation (40), we have with probability at least 1 − δ/4
625
+
626
+ $$\left|\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right]\right|\leq\frac{1}{p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{A}|}}.\tag{41}$$
627
+
628
+ Using Hoeffding's inequality we can further bound the RHS of (40) and (41). For the set XD, we have with probability at least 1 − δ/2
629
+
630
+ $$\left||\mathcal{X}_{D}|-n\cdot(1-p^{\mathcal{Q}})\right|\leq\sqrt{\frac{\log(4/\delta)}{2n}},\tag{42}$$
632
+
633
+ ![18_image_0.png](18_image_0.png)
634
+
635
638
+
639
+ Figure 6: Coverage obtained by RAPS on the target distribution Q for various settings of (1 − α) w/ and w/o recalibration using QTC.
640
+ and for the set XA, we have with probability at least 1 − δ/2
641
+
642
+ $$\left||\mathcal{X}_{A}|-n\cdot p^{\mathcal{Q}}\right|\leq\sqrt{\frac{\log(4/\delta)}{2n}}.\tag{43}$$
643
+
644
+ We next bound the deviation of the finite sample QTC estimate on the source from its expectation.
+ 
+ By the DKWM inequality, for any q > 0 we have with probability at least 1 − δ/4
647
+
648
+ $$\left|\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}\right]\right|\leq\sqrt{\frac{\log(8/\delta)}{2n}}.\tag{44}$$
650
+ We first show the result for the case wsp > 0. Combining equations (40) and (44) using the triangle inequality and union bound, we have with probability at least 1 − δ/2
651
+
652
+ $$\left|\frac{1}{|\mathcal{D}^{\mathcal{P}}|}\sum_{\mathbf{x}\in\mathcal{D}^{\mathcal{P}}}\mathbb{1}_{\left\{\max_{j\in\{0,1\}}\pi_{j}(\mathbf{x})<q\right\}}-\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{P}}\left[\mathbb{1}_{\left\{\mathcal{E}_{mc}\right\}}\right]\right|\leq\frac{1}{1-p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}}.\tag{45}$$
653
+
654
+ Plugging in the definitions of βQTC in (21) and β in (17) above, equivalently we get
655
+
656
+ $$|\beta_{\mathrm{QTC}}-\beta|\leq\frac{1}{1-p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{D}|}},\tag{46}$$
658
+
659
+ which holds with probability at least 1 − δ/2. Combining (46) with (42) proves equation (22) for wsp > 0, as desired.
660
+
661
+ Similarly, for the case wsp < 0, following the same steps by first combining equation (41) with (44), we have with probability at least 1 − δ/2
662
+
663
+ $$|\beta_{\mathrm{QTC}}-\beta|\leq\frac{1}{p^{\mathcal{P}}}\sqrt{\frac{2\log(16/\delta)}{|\mathcal{X}_{A}|}}.\tag{47}$$
664
+
665
+ Combining (47) with (43) yields equation (22), as desired, for the case wsp < 0, which concludes the proof.
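+ 
+ To get a sense of scale for the guarantee in (22), the short snippet below (Python; the parameter values are arbitrary illustrations, not numbers from the paper) evaluates the right-hand side of the bound. It also makes apparent that the bound is only informative once n · c_sp is large, i.e., when enough examples fall in the relevant spurious-feature group.
+ 
+ ```python
+ import numpy as np
+ 
+ def qtc_error_bound(n, delta, p_source, p_target, w_sp_positive=True):
+     """Right-hand side of (22): sqrt(2 log(16/delta) / (n * c_sp))."""
+     if w_sp_positive:
+         c_sp = (1.0 - p_target) * (1.0 - p_source) ** 2
+     else:
+         c_sp = p_target * p_source ** 2
+     return np.sqrt(2.0 * np.log(16.0 / delta) / (n * c_sp))
+ 
+ # Example: delta = 0.1, p^P = 0.9, p^Q = 0.5, w_sp > 0.
+ for n in (1_000, 10_000, 100_000):
+     print(n, qtc_error_bound(n, delta=0.1, p_source=0.9, p_target=0.5))
+ ```
+ 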
666
+
667
+ ## A.1 Details On The Baseline Regression Methods
668
+
669
+ In this section, we provide details on the regression-based baseline methods. Recall that we consider several regression-based methods as baselines by fitting a regression function f_θ parameterized by a feature extractor
670
+
671
672
+
673
+ ![19_image_0.png](19_image_0.png)
674
+
675
+ Figure 7: Coverage obtained by RAPS on the target distribution Q (ImageNetV2) for kreg = 2 and various settings of λ when the threshold τ is replaced with the predicted threshold τˆ with the respective prediction method. For regression methods, only the best performing method of CHR- is shown.
676
+ ϕπ by minimizing the mean squared error between the output and the calibrated threshold τ across the distributions as
677
+
678
+ $$\hat{\theta}=\arg\operatorname*{min}_{\theta}\sum_{j}(f_{\theta}(\phi_{\pi}({\mathcal{D}}_{j}))-\tau^{{\mathcal{P}}_{j}})^{2}.$$
679
+
680
+ We consider the following choices for the feature extractor ϕπ:
681
+ - *Average confidence regression (ACR)*: The one-dimensional (d = 1) average confidence of the classifier across the entire dataset, which is ϕ_π(D) = (1/|D|) Σ_{x∈D} max_ℓ π_ℓ(x).
682
+
683
+ - *Difference of confidence regression (DCR)* (Guillory et al., 2021): The one-dimensional (d = 1) average confidence of the classifier across the entire dataset offset by the average confidence on the source dataset, which is ϕ_π(D) = (1/|D|) Σ_{x∈D} max_ℓ π_ℓ(x) − (1/|D^P|) Σ_{x∈D^P} max_ℓ π_ℓ(x), where D^P is the source dataset. Prediction is also for the offset target τ − τ^P.
685
+
686
+ We consider DCR in addition to ACR, because DCR performs better for predicting the classifier accuracy (Guillory et al., 2021). Since the threshold τ found by conformal calibration depends on the distribution of the confidences beyond the average, we propose the below techniques for extracting more detailed information from the dataset.
687
+
688
+ - *Confidence histogram-density regression (CHR)*: Variable dimensional (d = p) features extracted as ϕ_π(D) = { (1/|D|) Σ_{x∈D} 1{max_ℓ π_ℓ(x) ∈ [(j−1)/p, j/p]} }_{j=1,...,p}. This corresponds to the normalized histogram density of the classifier confidence across the dataset, where p is a hyperparameter that determines the number of histogram bins in the probability range [0, 1]. Neural networks tend to be overconfident in their predictions, which heavily skews the histogram densities toward the last bin. We therefore also consider a variant of CHR, dubbed CHR-, where we have j = {1, . . . , p − 1} and hence d = p − 1, equivalent to dropping the last bin of the histogram as a feature.
694
+
695
+ - *Predicted class-wise average confidence regression (PCR)*: Features with dimensionality equal to the number of classes (d = L) extracted as ϕ_π(D) = { (Σ_{x∈D} π_j(x) · 1{j = arg max_ℓ π_ℓ(x)}) / (Σ_{x∈D} 1{j = arg max_ℓ π_ℓ(x)}) }_{j=1,...,L}. This corresponds to the average confidence of the classifier across the samples for each predicted class. A sketch of these feature extractors is given below.
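+ 
+ The following sketch (Python; an illustrative rendering of the feature extractors above with hypothetical function names, and a simple linear least-squares fit standing in for the regression function f_θ) shows how the features and the threshold regressor can be computed from classifier probabilities.
+ 
+ ```python
+ import numpy as np
+ 
+ def acr(probs):
+     """Average top-class confidence over a dataset; probs has shape (n, L)."""
+     return np.array([probs.max(axis=1).mean()])
+ 
+ def dcr(probs, source_probs):
+     """Average confidence offset by the average confidence on the source set."""
+     return acr(probs) - acr(source_probs)
+ 
+ def chr_features(probs, p_bins, drop_last=False):
+     """Normalized histogram of top-class confidences over [0, 1] with p_bins bins
+     (CHR); drop_last=True gives the CHR- variant."""
+     top_conf = probs.max(axis=1)
+     hist, _ = np.histogram(top_conf, bins=p_bins, range=(0.0, 1.0))
+     hist = hist / len(top_conf)
+     return hist[:-1] if drop_last else hist
+ 
+ def pcr_features(probs):
+     """Average confidence among the samples predicted as each class (PCR)."""
+     preds = probs.argmax(axis=1)
+     top_conf = probs.max(axis=1)
+     L = probs.shape[1]
+     return np.array([top_conf[preds == j].mean() if np.any(preds == j) else 0.0
+                      for j in range(L)])
+ 
+ def fit_threshold_regressor(features, thresholds):
+     """Least-squares fit of a linear map from features to calibrated thresholds,
+     one (feature vector, tau) pair per calibration distribution."""
+     X = np.column_stack([features, np.ones(len(features))])  # add a bias term
+     coef, *_ = np.linalg.lstsq(X, thresholds, rcond=None)
+     return coef
+ ```
+ 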
704
+
705
+ ## B RAPS Recalibration Experiments
706
+
707
+ APS is a powerful yet simple conformal predictor. However, other conformal predictors (Sadinle et al., 2019; Angelopoulos et al., 2020) are more efficient (in that they have on average smaller confidence sets for a given desired coverage 1 − α).
708
+
709
+ ![20_image_0.png](20_image_0.png)
710
+
711
+ Figure 8: Coverage obtained by RAPS on the target distribution Q for λ = 0.1 and various settings of
712
+ (1 − α) when the threshold τ is replaced with the predicted threshold τˆ with the respective prediction method.
713
+
714
+ For regression methods, only the best performing method of CHR- is shown.
715
+ In this section, we focus on the conformal predictor proposed by Angelopoulos et al. (2020), dubbed Regularized Adaptive Prediction Sets (RAPS). RAPS is an extension of APS that is obtained by adding a regularizing term to the classifier's probability estimates of the higher-order predictions (i.e., the predictions that follow the top-k predictions). RAPS is more efficient and tends to produce smaller confidence sets when calibrated on the same calibration set as APS, as it penalizes large sets. While TPS tends to achieve slightly better results in terms of efficiency compared to RAPS, see (Angelopoulos et al., 2020, Table 9), RAPS coverage tends to be more uniform across different instances (in terms of difficult vs. easy instances), and therefore RAPS still carries practical relevance. Recall that while efficiency can be improved by constructing confidence sets more aggressively, efficient models tend to be less robust, meaning the coverage gap is greater when there is distribution shift at test time. For example, when calibrated to yield 1 − α = 0.9 coverage on ImageNet-Val and tested on ImageNet-Sketch, the coverage of RAPS drops to 0.38, in contrast to that of APS, which only drops to 0.64 (see Section 3). It is therefore of great interest to understand how QTC performs for recalibration of RAPS under distribution shift.
+ 
+ RAPS is calibrated using exactly the same conformal calibration process as APS and only differs from APS in terms of the prediction set function C(x, u, τ). The prediction set function for RAPS is defined as
717
+
718
+ $$\mathcal{C}^{\mathrm{RAPS}}(\mathbf{x},u,\tau)=\left\{\ell\in\{1,\ldots,L\}\colon\sum_{j=1}^{\ell-1}\Big[\pi_{(j)}(\mathbf{x})+\underbrace{\mathbb{1}_{\{j-k_{\mathrm{reg}}>0\}}\cdot\lambda}_{\text{regularization}}\Big]+u\cdot\pi_{(\ell)}(\mathbf{x})\leq\tau\right\},\tag{48}$$
719
+
720
+ where u ∼ U(0, 1), similar to APS, and λ, k_reg are the hyperparameters of RAPS corresponding to the regularization amount and the number of top non-penalized predictions, respectively. Note that the cutoff threshold τ^P obtained by calibrating RAPS on some calibration set D^P_cal can be larger than one due to the added regularization. Therefore, in order to apply QTC-ST, we map τ^P back to the [0, 1] range by dividing by the total scores after added regularization. QTC and QTC-SC do not require such an additional step as the coverage level α ∈ [0, 1] by definition. We show the results of RAPS' performance under distribution shift with and without recalibration by QTC in Figure 6. The results show that while QTC is not able to completely mitigate the coverage gap, it significantly reduces it. Recall that RAPS utilizes a hyperparameter λ, which is the added penalty to the scores of the predictions following the top-k_reg predictions, that can significantly change the cutoff threshold τ^P when we calibrate on the calibration set D^P. The regularization amount λ also implicitly controls the change in the cutoff threshold τ^Q − τ^P when the conformal predictor is calibrated on different distributions P and Q. That is, the value of τ^Q − τ^P increases with increasing λ as long as the distributions P and Q are meaningfully different, as is the case for all the distribution shifts that we consider.
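+ 
+ For reference, the sketch below (Python; an illustrative reading of (48) with hypothetical names, not the reference RAPS implementation) constructs the RAPS prediction set for a single example and a fixed draw of u.
+ 
+ ```python
+ import numpy as np
+ 
+ def raps_prediction_set(probs, tau, lam, k_reg, u):
+     """RAPS set for one example, following the form of (48).
+ 
+     probs: (L,) array of class probabilities pi(x)
+     tau:   calibrated cutoff threshold
+     lam:   regularization amount lambda
+     k_reg: number of top predictions that are not penalized
+     u:     uniform random variable in [0, 1] used for randomized inclusion
+     """
+     order = np.argsort(-probs)                   # classes sorted by probability
+     sorted_p = probs[order]                      # pi_(1) >= pi_(2) >= ...
+     included = []
+     for rank, cls in enumerate(order, start=1):  # rank plays the role of ell in (48)
+         # sum over j < rank of 1{j - k_reg > 0} * lambda
+         penalty = lam * max(rank - 1 - k_reg, 0)
+         score = sorted_p[:rank - 1].sum() + penalty + u * sorted_p[rank - 1]
+         if score <= tau:
+             included.append(cls)
+     return np.array(included)
+ ```
+ 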
723
+
724
+ Therefore, a good recalibration method should be relatively immune to the choice of λ in order to successfully predict the threshold τ Q based only on unlabeled examples. In Figure 7, we show the performance of RAPS
725
+ under the ImageNetV2 distribution shift for various values of λ. While QTC is able to improve the coverage gap for various choices of λ, the best performing regression-based baseline method does not generalize well to natural distribution shifts when λ is relatively large. In contrast, as demonstrated in Figure 8, when the regularization amount λ is relatively small, the best performing regression-based method of CHR- does very well in reducing the coverage gap of RAPS under various distribution shifts.
krQIuCCQsW/krQIuCCQsW_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 22,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 22,
14
+ "code": 0,
15
+ "table": 0,
16
+ "equations": {
17
+ "successful_ocr": 86,
18
+ "unsuccessful_ocr": 3,
19
+ "equations": 89
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }