RedTachyon committed
Commit 712d850 • 1 Parent(s): da7e397
Upload folder using huggingface_hub
- KxBQPz7HKh/10_image_0.png +3 -0
- KxBQPz7HKh/11_image_0.png +3 -0
- KxBQPz7HKh/12_image_0.png +3 -0
- KxBQPz7HKh/15_image_0.png +3 -0
- KxBQPz7HKh/1_image_0.png +3 -0
- KxBQPz7HKh/20_image_0.png +3 -0
- KxBQPz7HKh/21_image_0.png +3 -0
- KxBQPz7HKh/22_image_0.png +3 -0
- KxBQPz7HKh/22_image_1.png +3 -0
- KxBQPz7HKh/23_image_0.png +3 -0
- KxBQPz7HKh/23_image_1.png +3 -0
- KxBQPz7HKh/24_image_0.png +3 -0
- KxBQPz7HKh/3_image_0.png +3 -0
- KxBQPz7HKh/KxBQPz7HKh.md +628 -0
- KxBQPz7HKh/KxBQPz7HKh_meta.json +25 -0
KxBQPz7HKh/KxBQPz7HKh.md
ADDED
@@ -0,0 +1,628 @@
# Multi-Dimensional Concept Discovery (MCD): A Unifying Framework With Completeness Guarantees

Johanna Vielhaben **johanna.vielhaben@hhi.fraunhofer.de**
Explainable Artificial Intelligence Group, Fraunhofer Heinrich-Hertz-Institute

Stefan Blücher **bluecher@tu-berlin.de**
Machine Learning Group, TU Berlin

Nils Strodthoff *nils.strodthoff@uol.de*
Division AI4Health, Oldenburg University

Reviewed on OpenReview: *https://openreview.net/forum?id=KxBQPz7HKh*
## Abstract

The completeness axiom renders the explanation of a post-hoc eXplainable AI (XAI) method only locally faithful to the model, i.e. for a single decision. For the trustworthy application of XAI, in particular for high-stake decisions, a more global model understanding is required. To this end, concept-based methods have been proposed, which are however not guaranteed to be bound to the actual model reasoning. To circumvent this problem, we propose Multi-dimensional Concept Discovery (MCD) as an extension of previous approaches that fulfills a completeness relation on the level of concepts. Our method starts from general linear subspaces as concepts and does neither require reinforcing concept interpretability nor re-training of model parts. We propose sparse subspace clustering to discover improved concepts and fully leverage the potential of multi-dimensional subspaces. MCD offers two complementary analysis tools for concepts in input space: (1) concept activation maps, that show where a concept is expressed within a sample, allowing for concept characterization through prototypical samples, and (2) concept relevance heatmaps, that decompose the model decision into concept contributions. Both tools together enable a detailed global understanding of the model reasoning, which is guaranteed to relate to the model via a completeness relation. Thus, MCD paves the way towards more trustworthy concept-based XAI. We empirically demonstrate the superiority of MCD against more constrained concept definitions.
## 1 Introduction

Explainable AI (XAI) allows to peek inside the black box of inherently complex deep learning models. *Local* interpretability methods are particularly valuable, as they measure attributions for an individual instance, which are easily comprehensible for any kind of end-user, see (Covert et al., 2021; Lundberg & Lee, 2017; Montavon et al., 2018; Samek et al., 2021) for reviews. For example, local methods make a prediction interpretable on the level of single images or individual bank customers for an image or credit risk classifier, respectively. Importantly, the commonly employed *completeness axiom* (attributions sum up to the model prediction) ensures a meaningful interpretation of attributions (Lundberg & Lee, 2017; Sundararajan et al., 2017). However, to actually comprehend the model reasoning we require a *global* model understanding, which reliably explains the model behavior across multiple instances (e.g. a group of female vs. male bank customers). We stress that it is not viable to require an end-user to aggregate local attributions into common model features (concepts). Such a procedure is prone to human confirmation bias and it is not clear how the imagined concepts align with the actual model reasoning. This urges for novel *local* and *concept-based* interpretability methods, which allow to understand shared model structures (used across multiple samples) for an individual instance. This idea was first formalized by Kim et al. (2018) and further developed by ACE (Ghorbani et al., 2019) and its successors (Yeh et al., 2020; Zhang et al., 2021). Crucially, our work re-introduces *completeness* within the context of concept-based explanations. Thereby, concepts obtained within our multi-dimensional concept discovery (MCD) scheme are *locally* and *globally* interpretable in terms of a well-defined completeness decomposition. We outline the benefits of MCD in the following paragraphs.

![1_image_0.png](1_image_0.png)

Figure 1: We strive for the most general decomposition of the hidden feature space, spanned by the neurons $c_1, c_2, c_3$, into linear structures that form the concepts $C^i$. The most constrained approach is to identify concepts with single neurons (D1), i.e. directions in feature space aligned with canonical basis vectors. If one allows for arbitrary rotations of the concept directions, one arrives at D2. Leaving aside the orthogonality constraint, D3 allows concepts to form arbitrary directions in feature space. Finally, allowing concepts to form multi-dimensional subspaces, we arrive at the most general approach D4. Previous concept-based methods are based on D1, D2 and D3. We choose the most general approach D4, to discover concepts that are *faithful*.
**Concepts as multi-dimensional subspaces** Indisputably, concept discovery in neural networks is inherently linked to structures in the activations of intermediate feature layers. In Figure 1, we illustrate different approaches to decompose the hidden feature space (union of all possible activations) into meaningful concepts, which are mathematically formalized as linear structures. As an illustrative example, we consider the activations of the last convolutional layer just before average pooling and a linear classification head. In this case, the model part remaining after the intermediate feature layer can only exploit linearly separable concepts, hence justifying the linearity assumption here. The most constrained definition (left-most panel, D1) is to directly identify concepts with canonical basis vectors of the feature space (Bau et al., 2017). In our example, D1 identifies each convolutional channel with a concept. A slightly more general definition is to allow concepts to lie on orthogonal directions other than the unit axes in feature space (D2). In our example, this means concepts are formed by orthogonal linear combinations of convolutional channels. Such a concept decomposition can be obtained via a principal component analysis (PCA) of the feature space (Zhang et al., 2021). Going one step further, we disregard the orthogonality constraint and allow arbitrary directions in feature space (D3) (Ghorbani et al., 2019; Kim et al., 2018; Yeh et al., 2020; Zhang et al., 2021). Thereby, we can characterize related concepts which are linearly independent but not orthogonal. This is sensible because in general, the model has no mechanism that enforces orthogonality of concepts (for example different parts of an animal). Allowing for arbitrary multi-dimensional subspaces unfolds the most general definition of a linear decomposition (D4). Coming back to the CNN example, D4 allows a concept to lie on a hyperplane spanned by multiple directions of the convolutional channels. We argue that this general approach enables the most *faithful* concepts among D1-D4, as it allows to capture any meaningful linear structure within the hidden feature layer (*benefit 1*).
**Multi-dimensionality ensures concise explanations** Concepts strive to organize the information about the global model reasoning in a concise manner. To this end, we want to cover the relevant feature space with only a few concepts and avoid fragmentation into a large number of low/one-dimensional subspaces. As a first step, we propose a concept completeness score, which measures the fraction of the model prediction jointly covered by all concepts. We find that it requires significantly fewer multi-dimensional MCD concepts to reach a specified level of completeness as compared to more constrained concept definitions (D1-D3), i.e., MCD provides more concise explanations (*benefit 2*).

**Re-establishing completeness for concepts** To define concept relevances, in Section 2.3, we uniquely decompose the hidden activations in conjunction with the model prediction into concept parts. To this end, we restrict to a high-level feature layer which is only succeeded by linear operations (e.g. a linear classification head with global pooling). Then, the concept relevances follow a completeness relation (*benefit 3*), i.e., summing all concept relevances equals the final prediction. Thus, we restore the often-desired completeness property mentioned above for concept explanations. We stress that our concept relevances follow directly from the decomposition into concept parts and do not invoke any additional XAI method nor retraining of model parts. Phrased differently, MCD can completely capture the model reasoning solely in terms of linear operations on concept subspaces.
In summary, MCD is a consistent framework to discover *faithful* concepts, which are guaranteed to rely on the actual model reasoning via the completeness relation. Our framework provides several possibilities to investigate the discovered reasoning structure in input space. Thus, we see the main utility of MCD in the domain of model understanding and certification. Concepts provide insights into model behavior that generalize across samples and are therefore a valuable tool for systematic investigations of spurious correlations (model biases) (Lapuschkin et al., 2019; Palatnik de Sousa et al., 2021; Weber et al., 2021), as well as for scientific discovery (Blücher et al., 2020; Hägele et al., 2020; McGrath et al., 2022; Šarčević et al., 2022), where the model serves as a proxy for the unknown relationships in the data.
## 2 Multi-Dimensional Concept Discovery (MCD)

We organize this methodological section into three parts: First, we introduce our novel concept definition in Section 2.1. Second, we describe practical concept discovery procedures that align with this definition in Section 2.2. Third, we introduce a concept decomposition and discuss how to construct local and global concept importances that fulfill a concept completeness relation in Section 2.3.

Fig. 2 presents a schematic summary of our MCD framework, which we shortly summarize here. During the training phase, the approach discovers multi-dimensional subspaces (concepts) in the hidden feature space of the model, which we mathematically capture as concept bases. These concept bases are obtained from (i) clustering feature vectors (using some particular algorithm) and (ii) characterizing clusters by their dominant principal directions (lower left panel). During the testing phase, we can compare new feature vectors (e.g. from a new test sample) with these concept bases and thereby analyze/characterize all concepts (right panel). Here, we provide two complementary tools: *concept activation maps*, which highlight strongly expressed regions of the concept in input space, and *concept relevance maps*, which show how indicative a particular concept is for the class predictions.
## 2.1 Concept Definition

Concepts are inherently tied to the hidden representations of intermediate feature layers. For our concept definition, we split the model $f$ into two parts, $f = g \circ h$, where $h$ is the mapping to a hidden feature layer, which is mapped to the prediction by $g$. Our definition then relies on hidden representations $h(\alpha) \in \mathbb{R}^{H \times W \times F}$ of input samples $\alpha$ (height $H$, width $W$ and number of features $F$, see upper left panel in Figure 2).

![3_image_0.png](3_image_0.png)

Figure 2: Schematic illustration of the MCD framework for concept discovery. The **upper left panel** illustrates how the model is split into a representation and prediction mapping. Feature vectors are extracted from the representation mapping of a sample. The **lower left panel** illustrates the concept discovery methodology of MCD (Section 2.2). First, randomly choose and cluster a set of feature vectors $\{\boldsymbol{\phi}\}$ from a selection of samples (using any clustering algorithm). Second, construct subspace bases for all clusters $C^l$ via PCA (intrinsic dimension $d^l$). The **upper right panel** corresponds to the construction of *concept activation maps* and the **lower right panel** shows the construction of *concept relevance heatmaps*, both laid out in Section 2.3.
We spatially deconstruct the feature maps $h(\alpha)$ and obtain a feature vector¹ $\boldsymbol{\phi}^{\alpha}_{xy} \in \mathbb{R}^F$ for each location $(x, y) \in \{1, \ldots, H\} \times \{1, \ldots, W\}$. We now strive to identify concepts as (linear) structures in this $F$-dimensional feature space and pose no additional restrictions (one-dimensionality and/or orthogonality) on the structure of the subspaces.
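The split $f = g \circ h$ and the spatial deconstruction can be made concrete with a short sketch. The snippet below is a minimal illustration and not part of the original method description; it assumes a torchvision ResNet50 with a forward hook on its last convolutional stage, and the weight identifier and random input are placeholders.

```python
import torch
from torchvision.models import resnet50

# Split f = g ∘ h for a ResNet-style CNN: h maps the input to the last convolutional
# feature map, g is global average pooling plus the linear classification head.
model = resnet50(weights="IMAGENET1K_V2").eval()

features = {}
def grab(module, inputs, output):
    features["phi"] = output.detach()        # shape (B, F, H, W)

model.layer4.register_forward_hook(grab)     # last convolutional stage

x = torch.randn(1, 3, 224, 224)              # placeholder for a preprocessed sample α
with torch.no_grad():
    logits = model(x)

phi = features["phi"][0]                     # (F, H, W)
num_features, height, width = phi.shape
# Spatially deconstruct h(α) into H·W feature vectors φ_xy ∈ R^F.
feature_vectors = phi.permute(1, 2, 0).reshape(height * width, num_features).numpy()
```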
**Definition 1.** *We define a concept $C^l$ as a $d^l$-dimensional linear subspace in the $F$-dimensional feature space, spanned by the basis vectors $\mathbf{c}^l_j$,*

$$C^{l}=\mathrm{span}\left(\{\mathbf{c}_{j}^{l}\,|\,j=1,\ldots,d^{l}\}\right)\,.\tag{1}$$

In particular, the dimensionality $d^l$ can vary among the concepts $l = 1, \ldots, n_c$. We denote the number of concepts as $n_c$ and present a constructive way to determine it in Section 2.3. We assume that the concept subspaces are pairwise disjoint. In Figure 1, we illustrate the linear structures that concepts could possibly form in hidden feature space: from single directions (D1-D3) to linear subspaces (D4). The MCD concept definition above corresponds to D4, which is the most general linear structure, i.e., arbitrarily orientated multi-dimensional linear subspaces. Note that exploring even more general, non-linear concept structures, such as sub-manifolds in feature space, is an interesting idea for layers where non-linear operations follow. For these, linear multi-dimensional subspaces represent an improvement over the previous, more constrained linear concept definitions in terms of faithfulness. However, for a last hidden layer that is only followed by a linear classification head, which we specialize to in Section 2.3, linear subspaces are the most general structure that can be separated, and thus form a faithful model concept definition.

¹ Vectors are denoted lower-case bold ($\boldsymbol{\phi} \in \mathbb{R}^F$).
## 2.2 Concept Discovery

Typically, concept discovery, i.e., obtaining concepts as defined by Equation (1), can be subdivided into two steps: (i) cluster a user-defined set of feature vectors $\{\boldsymbol{\phi}^{\alpha}_{x,y}\}$ into clusters $\mathcal{C}^1, \ldots, \mathcal{C}^{n_c}$ and (ii) identify a representative basis $C^l = \{\mathbf{c}^l_j \,|\, j = 1, \ldots, d^l\}$ for each concept cluster $\mathcal{C}^l$ (lower left panel in Figure 2).

**(i) Clustering feature vectors** In principle, any clustering method can be considered to discover concept clusters in feature space. This includes well-established baselines such as k-means clustering or PCA. Both have previously been proposed in (Zhang et al., 2021) to identify one-dimensional subspaces. However, k-means does not incorporate any information about the final objective to identify linear subspaces as opposed to general clusters, and PCA is restricted to orthogonal, one-dimensional subspaces. We therefore propose a dedicated approach for this particular purpose to discover multi-dimensional linear subspaces and draw on the rich body of literature on *sparse subspace clustering* (SSC) (You et al., 2016a; Soltanolkotabi & Candes, 2012; You et al., 2016b; Elhamifar & Vidal, 2013). SSC clusters datapoints that lie on a union of separate low-dimensional subspaces embedded in a high-dimensional space. It is based on the idea that a feature vector $\boldsymbol{\phi}_i$ can be expressed as a linear combination of other feature vectors from the same cluster/subspace $\mathcal{C}^l$.

As nicely laid out in (Elhamifar & Vidal, 2013), SSC is ideally suited to identify clusters of linear subspaces and provides a number of advantages over standard clustering algorithms, which are directly applied to the data: SSC does not rely on the spatial proximity of the data, it can be implemented robustly against noise and outliers and does not require specifying the cluster dimensionalities in advance.

We start out with a user-specified set of samples $S$ for which we aim to discover concepts. The sample selection $S$ is unrestricted: the user can decide on class-specific samples/concepts or use all training samples to obtain completely class-unspecific concepts. Then, the first step of SSC is to find a sparse self-representation matrix that expresses each $\boldsymbol{\phi}_i$ in terms of a minimal number of other feature vectors $\{\boldsymbol{\phi}^{\alpha}_{x,y}\}$. Second, spectral clustering is applied to the self-representation to obtain clusters $\mathcal{C}^l$. We provide technical details on the particular subspace algorithm in Appendix A.
**(ii) Constructing concept bases** We have now identified clusters $\mathcal{C}^1, \ldots, \mathcal{C}^{n_c}$, which contain all feature vectors $\boldsymbol{\phi}^{\alpha}_{x,y}$ from the training set. Next, we want to obtain general concepts and become independent from the original specific cluster members. To this end, we aim to identify a basis $C^l$ that robustly covers the cluster $\mathcal{C}^l$. By construction, all $\boldsymbol{\phi}_i \in \mathcal{C}^l$ lie within a linear subspace, hence a linear tool like principal component analysis (PCA) constructs an accurate basis $C^l$ for the cluster $\mathcal{C}^l$. We determine the intrinsic dimension $d^l$ of the subspace using a heuristic proposed by Fukunaga & Olsen (1971) and implemented by Bac et al. (2021). The PCA components up to the intrinsic dimension $d^l$ then serve as basis vectors $\mathbf{c}^l_j$ for the subspace $C^l$. Given two subspaces, we can quantify their relation in terms of their Grassmann distance, see Appendix B.
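As a rough illustration of the two-step discovery procedure, the following sketch uses k-means as a stand-in for the SSC algorithm detailed in Appendix A and an explained-variance cutoff as a stand-in for the Fukunaga & Olsen (1971) intrinsic-dimension heuristic; both substitutions are simplifying assumptions, and the function name is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def discover_concepts(feature_vectors, n_concepts, var_threshold=0.95):
    """Two-step discovery sketch: (i) cluster feature vectors, (ii) fit a PCA basis per
    cluster. k-means and an explained-variance cutoff stand in for the SSC algorithm
    (Appendix A) and the Fukunaga-Olsen intrinsic-dimension heuristic used in the paper."""
    labels = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(feature_vectors)
    bases = []
    for l in range(n_concepts):
        cluster = feature_vectors[labels == l]
        pca = PCA().fit(cluster)
        # intrinsic dimension d_l: smallest d whose components explain var_threshold of the variance
        d_l = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_threshold)) + 1
        bases.append(pca.components_[:d_l])   # (d_l, F): orthonormal rows spanning C^l
    return bases
```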
## 2.3 Concept Decomposition

Previously, we have laid out how to discover an expressive set of concepts $C^l$. Next, we discuss how new feature vectors $\{\boldsymbol{\phi}^{\beta}_{x,y}\}$ (obtained from a test set sample $\beta$) and the weights of the final linear classifier layer can be analyzed via a decomposition into concept contributions. To this end, we propose *concept activation maps*, *concept relevance heatmaps* and a *global concept relevance score*. These are complementary tools that form the final concept explanation, informing about the overall meaning of a concept (*concept activation maps*) and its impact on the classification of a specific sample (*concept relevance heatmaps*) or on a global level (*global concept relevance score*).

To ensure that the union of all concepts spans the entire feature space, we define $C^{\perp}$ to be the orthogonal complement of the subspace spanned by all concepts, i.e., $C^{\perp} = \mathrm{span}(C^1, \ldots, C^{n_c})^{\perp}$. To simplify the notation, we identify $C^{n_c+1} \equiv C^{\perp}$. In the following, we assume that the concept subspaces are pairwise disjoint.²

² This assumption was never violated in our experiments. If necessary, this could be enforced by removing the intersection between the subspaces from both and considering it as a separate concept.
**Concept activation maps** quantify the activation of a chosen concept at a certain spatial location in the input space of a sample $\beta$. For this purpose, we decompose the feature vectors $\{\boldsymbol{\phi}^{\beta}_{x,y}\}$ into their unique concept contributions. Since the union of all concepts (including the orthogonal complement) forms a basis for the entire feature space, we can uniquely decompose any feature vector $\boldsymbol{\phi}$ as

$$\boldsymbol{\phi}=\sum_{l=1}^{n_{c}+1}\sum_{i=1}^{d^{l}}\varphi_{i}^{l}\mathbf{c}_{i}^{l}\equiv\sum_{l=1}^{n_{c}+1}\boldsymbol{\phi}^{l}\,,\tag{2}$$

where $\varphi^l_i$ are the components of $\boldsymbol{\phi}$ in the given basis. Now, one can interpret $|\boldsymbol{\phi}^l|_2$ as a measure for the extent to which a certain concept is expressed in the given feature vector. Performing this step for every feature vector within a sample $\beta$, i.e., $\boldsymbol{\phi}^{\beta}_{x,y} = \sum_{l=1}^{n_c+1} \boldsymbol{\phi}^{\beta,l}_{x,y}$, leads to a *concept activation map* $|\boldsymbol{\phi}^{\beta,l}_{x,y}|_2$ whose spatial dimensions match those of the feature layer. For a fixed sample $\beta$, we normalize $\boldsymbol{\phi}$ such that the maximum length across all elements of the feature layer is 1, i.e., we divide the vectors elementwise by $\max_{x,y}|\boldsymbol{\phi}^{\beta}_{x,y}|_2$.

For CNNs, we follow the example of Selvaraju et al. (2020) and compute the corresponding concept activation map in input space by bilinear upsampling in the spatial dimensions. Our concept activation maps extend the concept visualization of Zhang et al. (2021) to multi-dimensional concepts (upper right panel in Figure 2). For the final explanation, we also use them to characterize a concept in terms of prototypical examples. To this end, for each concept $l$, we sort test set samples by the maximum activation $\max_{x,y}|\boldsymbol{\phi}^{\beta,l}_{x,y}|$ and choose the top-k samples as *concept prototypes*.

We stress that our methodology is applicable beyond CNNs. In particular, one can decompose feature representations of any model based on MCD. However, the prerequisite for showing concept (activation) maps in input space is the locality of the trained model, i.e., the ability to associate locations in feature and input space. Whereas this locality is built in as an inductive bias into convolutional architectures, it also emerges for vision transformer models during training, as manifested for example in localized attention maps (Caron et al., 2021). To substantiate these claims, we show the first concept-based explanations for a vision transformer model in Section 4. As a final remark, we emphasize that concept activation maps can be evaluated for any model, including self-supervised pretrained models before finetuning with a classification head, and any layer in the model. They reveal the learned structures in feature space and are only constrained by the restriction to linear subspaces instead of more general non-linear structures.
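A minimal sketch of the decomposition in Equation (2) is given below. It assumes the per-concept bases returned by the discovery step (orthonormal rows per concept, jointly linearly independent across concepts), appends a basis of the orthogonal complement, and computes the $|\boldsymbol{\phi}^{\beta,l}_{x,y}|_2$ maps; function and variable names are illustrative only.

```python
import numpy as np

def build_full_basis(bases):
    """Stack all concept bases and append an orthonormal basis of the orthogonal
    complement C^⊥, so that the union spans the full feature space (cf. Eq. 2).
    Assumes the concept basis vectors are jointly linearly independent."""
    C = np.concatenate(bases, axis=0)                        # (sum_l d_l, F)
    _, _, vt = np.linalg.svd(C, full_matrices=True)
    complement = vt[np.linalg.matrix_rank(C):]               # basis of C^⊥
    blocks = list(bases) + [complement]
    sizes = [b.shape[0] for b in blocks]
    return np.concatenate(blocks, axis=0), sizes             # (F, F) basis matrix, block sizes

def concept_activation_maps(feature_vectors, full_basis, sizes, height, width):
    """Decompose every φ_xy into concept parts φ^l_xy and return the |φ^l_xy|_2 maps."""
    # φ = Σ_i coeff_i c_i  ⇔  full_basis.T @ coeffs = φ
    coeffs = np.linalg.solve(full_basis.T, feature_vectors.T).T
    norm = np.linalg.norm(feature_vectors, axis=1).max()     # max_xy |φ_xy|_2 (per-sample normalization)
    maps, start = [], 0
    for d_l in sizes:
        part = coeffs[:, start:start + d_l] @ full_basis[start:start + d_l]   # φ^l_xy
        maps.append(np.linalg.norm(part, axis=1).reshape(height, width) / norm)
        start += d_l
    return np.stack(maps)                                    # (n_c + 1, H, W)
```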
**Concept relevance heatmaps and completeness relation** As a general requirement, any concept-based XAI method should quantify the *relevance* of a concept in terms of its impact on the classification decision. To this end, we specialize to the last hidden layer, which is only followed by linear operations (e.g., mean pooling and a linear classification head). We discuss the broad class of models to which this applies in the last paragraph of this section and empirically in Section 4.

For a given class, the weight vector $\mathbf{w} \in \mathbb{R}^F$ linearly connects the final $F$-dimensional feature space with the scalar class prediction. First, we consider the feature vector after pooling $\boldsymbol{\phi} \equiv \boldsymbol{\phi}^{\beta} = \frac{1}{WH}\sum_{x,y} \boldsymbol{\phi}^{\beta}_{xy}$ ($\boldsymbol{\phi}^{\beta} \in \mathbb{R}^F$) in this very layer (see Figure 2 lower right panel). Now, we are interested in a *local* (per-sample) concept relevance. For this, we can decompose the class logit under consideration, $\boldsymbol{\phi} \cdot \mathbf{w} + b$, up to the bias term $b$, as

$$\boldsymbol{\phi}\cdot\mathbf{w}=\sum_{l=1}^{n_{c}+1}\boldsymbol{\phi}^{l}\cdot\mathbf{w}\equiv\sum_{l=1}^{n_{c}+1}r^{l}\,.\tag{3}$$

The decomposition above defines a *local concept relevance* $r^l = \boldsymbol{\phi}^l \cdot \mathbf{w}$. Aggregating relevances $r^l$ from all concepts recovers the class logit prediction (up to the bias term), and thus, Equation (3) defines a completeness relation.³ ⁴
Second, we apply Equation (3) to the feature vectors $\boldsymbol{\phi}^{\beta}_{xy}$ before pooling. This leads to a relevance heatmap $r^l_{xy} = \boldsymbol{\phi}^{\beta,l}_{xy} \cdot \mathbf{w}$ that has the same spatial dimension as the feature layer. Importantly, $r^l_{xy}$ reduces to $r^l$ after spatial pooling. As for the concept activation maps, we use spatial upsampling to map $r^l_{xy}$ back to the input space and obtain *concept relevance heatmaps*. Since upsampling preserves the completeness relation, these decompose the *local relevance maps*, commonly referred to as class activation maps (CAMs) (Zhou et al., 2016), $r_{x,y} = \frac{1}{WH}\,\boldsymbol{\phi}^{\beta}_{xy} \cdot \mathbf{w}$, into concept contributions.

³ In the special case of one-dimensional concepts, $r^l$ reduces to the local concept relevance in (Zhang et al., 2021).
⁴ We briefly comment on the remaining commonly desired Shapley axioms (Lundberg & Lee, 2017). The local concept relevance trivially fulfills them since it is built on a linear additive model. Formally, the hidden activations $\boldsymbol{\phi}^{\beta}$ of a given sample $\beta$ are segmented into concept contributions/unique features $\boldsymbol{\phi}^l_{\beta}$. Thus, the value function corresponding to the underlying Shapley values is given by $v_{\beta}(S) = \sum_{l \in S} \boldsymbol{\phi}^l_{\beta} \cdot \mathbf{w}$ (linear in $\boldsymbol{\phi}^l$) for $S \subseteq \{1, \ldots, n_c + 1\}$.
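The completeness relation in Equation (3) can be checked numerically with a few lines, reusing the `full_basis` and `sizes` arrays from the activation-map sketch above; this is a hedged illustration, not the reference implementation.

```python
import numpy as np

def concept_relevances(phi_pooled, w, full_basis, sizes):
    """Local concept relevances r^l = φ^l · w (Eq. 3); their sum recovers the
    class logit up to the bias term, i.e. the completeness relation."""
    coeffs = np.linalg.solve(full_basis.T, phi_pooled)
    relevances, start = [], 0
    for d_l in sizes:
        phi_l = coeffs[start:start + d_l] @ full_basis[start:start + d_l]   # concept part φ^l
        relevances.append(float(phi_l @ w))
        start += d_l
    assert np.isclose(sum(relevances), float(phi_pooled @ w))               # completeness check
    return relevances
```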
**Global relevance and completeness score** Next, we establish a *global* (model-wide) concept relevance score, which measures the extent to which the concepts explain/cover the overall prediction strategy of the model. Recall that all $\mathbf{c}^l_j$ defined above represent a basis for the feature space $\mathbb{R}^F$. Hence, we can directly decompose the weight vector $\mathbf{w}$ into (analogously to Equation (2))

$$\mathbf{w}=\sum_{l=1}^{n_{c}+1}\sum_{i=1}^{d^{l}}w_{i}^{l}\mathbf{c}_{i}^{l}\equiv\sum_{l=1}^{n_{c}+1}\mathbf{w}^{l}\,,\tag{4}$$

where $\mathbf{w}^l = \sum_{i=1}^{d^l} w^l_i \mathbf{c}^l_i$ and by construction, $\mathbf{w}^l \cdot \mathbf{w}^{\perp} = 0$ for $l = 1, \ldots, n_c$. In this case, we have

$$|\mathbf{w}|^{2}=|\mathbf{w}^{\perp}|^{2}+\Big|\sum_{l=1}^{n_{c}}\mathbf{w}^{l}\Big|^{2}=\sum_{l=1}^{n_{c}+1}|\mathbf{w}^{l}|^{2}+\sum_{l,k=1,l\neq k}^{n_{c}}|\mathbf{w}^{l}||\mathbf{w}^{k}|\cos(\angle(\mathbf{w}^{l},\mathbf{w}^{k}))\,.\tag{5}$$

The first equality allows us to define

$$\eta(\{C^{l}\})=1-|\mathbf{w}^{\perp}|^{2}/|\mathbf{w}|^{2}\tag{6}$$

as a *completeness score* (fraction of $\mathbf{w}$ which is explained by all concepts $\{C^1, \ldots, C^{n_c}\}$) with respect to a given class. To the best of our knowledge, we are the first to introduce a concept completeness score directly based on model parameters. Previous work (Yeh et al., 2020) defined a related measure based on model accuracy. Later, we will fix $\eta$ to define the number of concepts $n_c$. Note that for an orthonormal basis (e.g., MCD-SSC-ortho, see below) the second term in Equation (5) (cosine) disappears. Then $|\mathbf{w}^l|/|\mathbf{w}|$ can be directly interpreted as (global) concept relevances, which sum up to the previous completeness score over all concepts. Further, the angles in Equation (5) are lower- and upper-bounded by the corresponding minimal or maximal principal angles⁵ between the two corresponding subspaces, i.e., $\theta^{kl}_{\min} \equiv \min_m \theta^{kl}_m \leq \angle(\mathbf{w}^k, \mathbf{w}^l) \leq \max_m \theta^{kl}_m \equiv \theta^{kl}_{\max}$. This means we can lower- and upper-bound $|\mathbf{w}|^2$ by

$$\sum_{l}^{n_{c}+1}|\mathbf{w}^{l}|^{2}+\sum_{l,k=1,l\neq k}^{n_{c}}|\mathbf{w}^{l}||\mathbf{w}^{k}|\cos(\theta_{\max}^{lk})\leq|\mathbf{w}|^{2}\leq\sum_{l}^{n_{c}+1}|\mathbf{w}^{l}|^{2}+\sum_{l,k=1,l\neq k}^{n_{c}}|\mathbf{w}^{l}||\mathbf{w}^{k}|\cos(\theta_{\min}^{lk})\,.\tag{7}$$

Obviously, the lower and upper bound coincide in the case of orthogonal subspaces. This implies that the $|\mathbf{w}^l|$ are also informative in the non-orthogonal case, provided the principal angles between the different subspaces are given. This highlights the intricate connection between (global) relevances and the geometry in feature space, i.e., the relative orientation of the concept spaces (specified via principal angles between pairs).

⁵ A formal definition of principal angles is given in Appendix B.
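Analogously, the completeness score of Equation (6) follows from decomposing the class weight vector; the sketch below again assumes the stacked `full_basis` with the orthogonal complement as the last block, as constructed in the earlier sketch.

```python
import numpy as np

def completeness_score(w, full_basis, sizes):
    """η({C^l}) = 1 - |w^⊥|² / |w|² (Eq. 6); the orthogonal complement is assumed
    to be the last block of `full_basis`."""
    coeffs = np.linalg.solve(full_basis.T, w)
    start_perp = sum(sizes[:-1])
    w_perp = coeffs[start_perp:] @ full_basis[start_perp:]   # component of w in C^⊥
    return 1.0 - float(w_perp @ w_perp) / float(w @ w)
```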
Finally, we briefly comment on the applicability of our approach for local and global concept relevances via Equation (3) and Equation (4). In the form described above it can be used for any model with a linear layer as final layer, potentially preceded by a global pooling layer, if one aims to spatially resolve the relevances instead of considering only pooled feature vectors. This latter category covers a broad range of modern CNN architectures such as ResNets, Inception-based models but also vision transformers that do not base their prediction on a CLS token, such as Swin transformers (Liu et al., 2021). We envision that our approach is even applicable, in approximate form, to other feature layers apart from the final hidden layer if one locally approximates the remainder of the model by a linear model, similarly as it is done by Ribeiro et al. (2016) or by Selvaraju et al. (2020) to generalize (Zhou et al., 2016).
## 2.4 Alternative MCD Variants

In Section 2.2, we describe our algorithmic choices for the two steps of concept discovery, namely SSC for clustering of feature vectors and PCA for basis construction. As a consequence of the modularity of the MCD framework, we can easily define alternative variants of MCD, which also serve for an ablation study later. To differentiate the original MCD flavor described in Section 2.2, we name it *MCD-SSC*. Replacing SSC with other clustering methods gives rise to the following two MCD variants:

- *MCD-kmeans* We consider k-means clustering directly applied to the features. Like SSC, it leads to multi-dimensional and in general non-orthogonal subspaces. However, the clustering algorithm does not include any information about the linear subspaces as desired clustering target.

- *ICE/MCD-PCA* We consider PCA applied to the feature vectors directly. This corresponds to the concept discovery algorithm considered by ICE (Zhang et al., 2021). Note that this approach already encompasses the basis identification step and directly leads to one-dimensional, orthogonal subspaces by construction.

MCD-SSC does not assume that two different subspaces $C^l$ and $C^m$ are orthogonal, as there is no mechanism that promotes this during model training. Still, concept orthogonality could be enforced through the use of dedicated orthogonal subspace clustering methods (Rahmani & Atia, 2017a), however, at the potential cost of slightly sub-optimal subspace clusters (Rahmani & Atia, 2017b). Alternatively, this could be implemented by sequentially rotating each identified subspace into the orthogonal complement of its predecessors. The latter leads to the last MCD flavor:

- *MCD-SSC-orth* We construct orthogonal subspaces from those discovered by MCD-SSC in an iterative fashion. Starting with an empty set, we explore the effect of adding one of the subspaces on the completeness defined in Equation (6) and choose the one that leads to the largest increase. Iteratively, we consider adding another subspace rotated into the orthogonal complement of the span of the subspaces in the set so far, again selecting the candidate that leads to the largest completeness increase (see the sketch below).

Later, we find evidence that orthogonal concepts are less faithful to the model than those that allow for arbitrary rotation.
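A possible greedy implementation of the MCD-SSC-orth construction referenced above is sketched here; it assumes orthonormal row bases per concept and uses the orthogonal-projection form of Equation (6), which is valid once the selected subspaces are mutually orthogonal. Names and details are illustrative, not the authors' implementation.

```python
import numpy as np

def completeness(blocks, w):
    """Completeness score (Eq. 6) for mutually orthonormal concept bases."""
    if not blocks:
        return 0.0
    Q = np.concatenate(blocks, axis=0)        # orthonormal rows spanning all selected concepts
    proj = Q.T @ (Q @ w)                      # orthogonal projection of w onto that span
    return float(proj @ proj / (w @ w))

def mcd_ssc_ortho(bases, w):
    """Greedy sketch: add subspaces one by one, each rotated into the orthogonal
    complement of the span of the already selected ones, always picking the
    candidate with the largest completeness gain."""
    selected, remaining = [], list(range(len(bases)))
    span = np.zeros((0, w.shape[0]))
    while remaining:
        scored = []
        for idx in remaining:
            residual = bases[idx] - (bases[idx] @ span.T) @ span     # remove overlap with span
            q, _ = np.linalg.qr(residual.T)                          # re-orthonormalize
            scored.append((completeness(selected + [q.T], w), idx, q.T))
        _, best_idx, best_basis = max(scored, key=lambda t: t[0])
        selected.append(best_basis)
        span = np.concatenate([span, best_basis], axis=0)
        remaining.remove(best_idx)
    return selected
```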
## 3 Related Work

ACE (Ghorbani et al., 2019) uses a superpixel segmentation algorithm and k-means clustering to identify class-specific concept candidates for TCAV (Kim et al., 2018). The concept discovery scheme of ACE has several shortcomings: The segmentation into candidate concept patches is model-independent and thus, segments are not necessarily meaningful as perceived by the model. To enable clustering of intermediate CNN activations, segments are resized and mean-padded to the original input shape. This leads to artificial, off-manifold samples with potentially distorted aspect ratios and discards the overall scale information. Finally, ACE relies on multiple heuristics to discard segments/clusters both before and after k-means clustering. In contrast, MCD is coherently based on hidden model representations without relying on additional pre- or post-processing. Similar limitations apply to methods that rely on ACE-discovered labeled concepts, like (Li et al., 2021), which uses Shapley values for concept importance, and (Wu et al., 2020), which occludes particular neurons for neuron-wise relevances and transforms them into concept importances via concept classification. Recently, Crabbé & van der Schaar (2022) proposed a generalization of TCAV by invoking the kernel trick, which generalizes the concept definition towards non-linear structures. However, unlike MCD, it does not allow quantifying the relevance of a concept towards the model prediction and can only verify predefined concepts instead of discovering them.

ICE (Zhang et al., 2021) defines concepts as directions in feature space. Technically, this is achieved via dimensionality reduction techniques applied to concatenated flattened feature maps. ICE measures the importance of its class-wise concepts using TCAV. Interestingly, ICE introduces the notion of a concept weight, which is analogous to our concept relevances on the logit layer. However, they do not consider spatially resolved concept relevance heatmaps and only address the special case of single-dimensional subspaces. Given these restrictions, ICE can be seen as a special realization of the MCD framework, which uses dimensionality reduction methods like PCA as clustering algorithms. Other methods learn concept vectors and a mapping to feature space either for all classes simultaneously (ConceptSHAP (Yeh et al., 2020)) or for each class separately (MACE (Kumar et al., 2021), PACE (Kamakshi et al., 2021)). ConceptSHAP, MACE and PACE all use additional regularizers to enforce concept dissimilarity. In contrast, MCD restricts the concept discovery process as little as possible. Importantly, each method above defines a custom measure for concept importance, which is based on approximations of the original model. In contrast, the local and global concept relevance within MCD is solely based on the original model parameters. Other approaches (Chormai et al., 2022) use a concept definition similar to ours but use information from external attribution methods as well as orthogonality constraints to restrict the discovered concepts, whereas MCD works without such restrictions.

There is a complementary line of work of frameworks that try to identify concepts associated with particular neurons in hidden CNN representations, in conjunction with (Bau et al., 2017) or without (Achtibat et al., 2022) special concept-annotated datasets. Network Dissection (Bau et al., 2017) investigates the alignment of human-understandable concepts and particular single hidden features (neurons). Net2Vec (Fong & Vedaldi, 2018) extended this by allowing concepts to be represented by combinations of neurons.

Lastly, there is a line of research that constructs inherently interpretable concept models by design with (Koh et al., 2020; Radenovic et al., 2022; Chen et al., 2020; Marconato et al., 2022; Zarlenga et al., 2022) or without relying on concept annotations (Chen et al., 2019). The objective of the former ante-hoc concept models is to discover concepts (Koh et al., 2020; Chen et al., 2020; Marconato et al., 2022; Zarlenga et al., 2022) in feature space in order to identify them with known factors from the underlying data-generating process. In contrast, the objective of MCD is to recover the true concepts learned by an arbitrarily trained model, which do not have to align with concepts underlying the data-generating process. Even though technically feasible (Chen & Feng, 2012), we do not equip MCD with a mechanism that enforces identifiability with known concepts, e.g., through weak concept supervision. There is a crucial difference in enforcing concept interpretability in the sense of identifiability with known concepts between ante-hoc and post-hoc approaches. Regularizing concept interpretability of post-hoc explanations might obfuscate the explanation and make the model appear more interpretable than it actually is. Concerning the latter category of concept models, our approach is best comparable with (Chen et al., 2019), as both can be reduced to a linear model operating on concepts that can be characterized via prototypes. We stress the essential difference that our approach does not require retraining (with special training objectives) but is an interpretable reformulation of the original model.
## 4 Results

We carry out our experiments on ImageNet (Deng et al., 2009). As model architectures, we consider ResNet models (He et al., 2016) using original weights as provided by *torchvision* and updated weights as provided by *timm* (Wightman, 2019) with an improved training procedure (Wightman et al., 2021). We also present results for a Swin vision transformer (Liu et al., 2021), again using weights provided by *timm* (SwinS3base224). In the following, we will refer to these models as ResNet50, ResNet50v2 and Swin-T, respectively. We base all our experiments on images from a diverse selection of ten ImageNet classes, which roughly align with the CIFAR10 classes.⁶

⁶ namely (airliner, beach wagon, hummingbird, siamese cat, ox, golden retriever, tailed frog, zebra, container ship, police van)
## 4.1 Completeness Arithmetic

First, we provide a concrete example for an MCD explanation and showcase its completeness relation introduced in Section 2.3 (*benefit 3*). To this end, Figure 3 shows an MCD-SSC explanation of a ResNet50v2 prediction for a sample of the police van class in ImageNet. The number of concepts was chosen such that the completeness measure in Equation (6) reaches $\eta = 0.5$. The three information components of the explanation all provide complementary information:
(1) *Concept relevance heatmaps* show the alignment of a feature vector component $\boldsymbol{\phi}^l$ associated with concept $C^l$ and the weight vector of a specific class. Roughly speaking, this alignment indicates how typical the network perceives the particular instantiation of the concept for the class under consideration. Applying mean pooling leads to a corresponding decomposition of the class logit under consideration (up to the bias term) into contributions corresponding to different concepts. This demonstrates the completeness relation on the level of concept relevance heatmaps as well as on the level of logits, which represents a unique feature of the MCD framework. Interestingly, for the explanation in Figure 3, only the orthogonal complement concept contributes negatively to the class logit. The contributions of the first two concepts clearly dominate the class logit.

(2) *Concept activation maps* have positive scores showing how much a particular feature vector aligns with a specific concept subspace. These maps identify input regions where the concept is highly expressed. We color-code concept activation maps as a transparent overlay over the image where transparent regions indicate high activation. To guide the eye, we also include a yellow contour line at a threshold value of 0.5 and a white one at a value of 0.4.

(3) *Concept prototypes* allow characterizing a concept subspace through examples. Here, we display the concept activation maps of three test set samples that show the highest activation with the given concept. In many cases, an intuitive meaning of a concept can be inferred most easily from these samples and numerous previous approaches present concepts in this way (Zhang et al., 2021; Achtibat et al., 2022; Yeh et al., 2020). In case of the explanation in Figure 3, this could be windows/livery, livery, blue lights, building, tires (and the orthogonal complement covering mainly the background). In addition, we also indicate the global concept relevances for the different concepts according to Equation (4).
At this point, we deem it worthwhile discussing the complementary nature of concept relevance maps and concept activation maps:

- Concept activation maps facilitate concept identification as they do not entangle concept activation and model prediction. More precisely, concept activation maps provide insight into the structures present in the feature space, including those that may not directly contribute to the prediction. This is particularly evident in the case of the orthogonal complement, which is slightly activated in Figure 3 but has no positive relevance.

- The positive parts of the concept relevance maps correlate with the activation maps: Among all test set samples, we find a mean Pearson correlation of 0.45 for the concepts of the CIFAR10 classes between the positive part of each concept relevance map and the corresponding concept activation map (for MCD-SSC and ResNet50v2). This result serves as a sanity check and confirms that concept relevance is high in sample areas where the respective concept is strongly activated. Importantly, the negative parts in the concept relevance maps represent additional information.

In summary, the sample in Figure 3 is classified as a police van mainly due to its windows/livery, which are perceived as typical for the class by the network and are also the most relevant concept for the class globally. Further, all other concepts are expressed in the sample and contribute positively, except for the orthogonal complement. Thus, we can confirm that MCD-SSC concepts indeed capture and focus on the relevant model reasoning structure.
## 4.2 Empirical Evaluation

We compare MCD with sparse subspace clustering (MCD-SSC), MCD with alternative clustering, and previous methods listed in Table 1 in terms of (1) faithfulness (*benefit 1*) and (2) conciseness (*benefit 2*) of the explanations.
## 4.2.1 Comparing Faithfulness Via Concept Flipping

In order to compare the methods in Table 1 in terms of faithfulness, we invoke the Smallest Destroying Concepts (SDC) benchmark as proposed in (Ghorbani et al., 2019) and (Wu et al., 2020). For concepts that reflect the model's actual reasoning structure in feature space and *faithful* concept relevance scores, SDC should show a sharp decline of the model accuracy with the number of flipped concepts.

![10_image_0.png](10_image_0.png)

Figure 3: Completeness relation for the police van class in ImageNet. Concepts are discovered via MCD-SSC for ResNet50v2. The number of concepts is chosen such that the completeness score reaches $\eta = 0.5$. We distinguish between local (sample-specific) and global properties (characterizing a set of samples). Locally, we consider *concept relevance maps*, which quantify the spatially resolved contribution of a concept to the prediction. These satisfy a *completeness relation*, as explicitly shown in the first line. *Concept activation maps* provide complementary information and indicate how much a concept is activated depending on the spatial location in the sample. Globally, the overall relevance of a particular concept is quantified by the *global relevance* scores. Finally, we also present concept prototypes (concept activation maps of the most strongly activated samples) to characterize a particular concept.

Table 1: Summary of concept discovery methods considered in this work.

| Method                           | Multi-dim. | Arbitrary orientation |
|----------------------------------|------------|-----------------------|
| MCD-SSC                          | ✓          | ✓                     |
| MCD-SSC-ortho                    | ✓          | ✗                     |
| MCD-kmeans                       | ✓          | ✓                     |
| ICE/MCD-PCA (Zhang et al., 2021) | ✗          | ✗                     |
| ACE (Ghorbani et al., 2019)      | ✗          | ✓                     |
To evaluate SDC, we subsequently remove concepts, as represented by concept masks in input space, in order of their sample-wise (local) relevance starting from high to low. To inpaint the removed segments, we use a classical imputation algorithm (Bertalmio et al., 2001), which leads to comparably realistic imputed images. Thus, the model is evaluated on-manifold, in contrast to imputing with gray patches as often done in the literature (Samek et al., 2017). For similar reasons, we avoid the Smallest Sufficient Concepts (SSC) benchmark, which would require high-quality imputation algorithms to avoid evaluating the model far from the data manifold. We obtain concept masks, i.e., hard concept assignments, in input space by taking the argmax of the corresponding concept activation maps over all concepts including the orthogonal complement. After the argmax operation, we disregard (do not remove) the orthogonal complement during the SDC experiments. For each concept mask we obtain local relevance scores by pooling the corresponding concept relevance heatmaps over the respective regions. This provides concept masks in input space which are ordered according to their importance. ACE does not provide a measure of per-sample concept relevance. Therefore, we revert to the order of their (global) TCAV scores after discarding concepts where statistical testing in comparison to random input samples fails to stay below $p = 0.05$. In contrast to previous studies (Ghorbani et al., 2019; Wu et al., 2020), we report the model performance depending on the fraction of occluded pixels, which is essential for comparability since the segment size varies between different approaches. In order to show a meaningful average of the samples across all classes we flip only as many concepts as are present for the class with the minimum number of concepts $n_c$ for each method. We base our evaluation on the CIFAR10 classes described above and work with the ResNet50v2 model, for which we extract concepts from the last hidden layer. For all methods within the MCD framework, we fix the number of concepts in a class-dependent way such that we reach a completeness score of $\eta = 0.5$.

![11_image_0.png](11_image_0.png)

Figure 4: Left: Concepts are flipped one at a time in descending order of local concept importance/TCAV score, respectively. We measure the decline in model accuracy and show the mean accuracy across CIFAR10 classes against the fraction of deleted pixels. Meaningful concept discovery and quantification methods are supposed to show a sharp decline in this figure, but the decline should not happen after flipping only a single concept (i.e. the whole object). Right: Qualitative comparison between hard concept assignments.
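The SDC evaluation loop can be summarized in a short sketch. It assumes per-sample binary concept masks (from the argmax described above), their pooled local relevances, and a `model_predict` callable; OpenCV's Navier–Stokes inpainting is used here only as a stand-in for the imputation algorithm of Bertalmio et al. (2001).

```python
import cv2
import numpy as np

def smallest_destroying_concepts(image, concept_masks, local_relevances, model_predict, true_label):
    """SDC sketch: flip concepts in descending order of local relevance, inpaint the
    removed regions, and record whether the prediction is still correct after each flip."""
    order = np.argsort(local_relevances)[::-1]          # most relevant concept first
    img = image.copy()                                   # H x W x 3, uint8
    still_correct = []
    for l in order:
        mask = concept_masks[l].astype(np.uint8)         # binary concept mask in input space
        img = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)  # Navier-Stokes inpainting
        still_correct.append(model_predict(img) == true_label)
    return still_correct
```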
In the left panel of Figure 4, we show the results of the SDC experiment. As mentioned above, a meaningful concept discovery and quantification method should show a sharp decline in Figure 4. However, in principle, the sharpest decline is achievable via assigning the complete object to a single concept. Such a concept would simply highlight the entire relevant region, i.e., this would not provide any insights beyond those that can be inferred by standard (non-concept-based) attribution methods such as LRP (Bach et al., 2015), PredDiff (Blücher et al., 2022) or Shapley values (Lundberg & Lee, 2017). The SDC results show that ICE/MCD-PCA and MCD-SSC-ortho consistently detect only a single relevant concept, as the accuracy curve stagnates after flipping the first concept. Thus, ICE/MCD-PCA and MCD-SSC-ortho counteract the benefits of concept-based explanations. Since both approaches rely on orthogonal concepts, this constraint seems unfitted for a fine-grained analysis of related but distinct characteristics in feature space. Among the remaining algorithms, MCD-SSC shows the strongest decline as compared to MCD-kmeans and ACE, hence its discovered concepts are the most faithful.

To provide a qualitative impression of the concept relevance heatmaps across methods, we show them together with concept activation maps for a selected sample of the golden retriever class (one of the CIFAR10 classes) in Figure 5, and more equivalent results for other classes in Figures 7 and 8.⁷ In Figure 4 (right panel), we also show hard concept assignments for an example image of the golden retriever class, which form the basis of the concept flipping experiment described above. These visually support the findings of the concept flipping experiment. Most approaches only discover a single concept for the dog (apart from a potential genuine background concept). In particular, consider the concept assignments in Figure 4 on the right: Here, both orthogonal approaches do not distinguish between the dog head and fur parts on the image. In contrast, the unconstrained MCD-SSC approach can successfully distinguish these two correlated regions in feature space (incorporate two similar concepts) and shows the most fine-grained decomposition.

![12_image_0.png](12_image_0.png)

Figure 5: Concept heatmaps and activation maps for ResNet50v2 and a randomly chosen sample from the golden retriever class in ImageNet. The number of concepts is chosen such that the completeness score reaches $\eta = 0.5$. Concepts are ordered from left to right according to global concept relevance. Concept heatmaps are titled by the pooled local concept relevance that sums to the prediction logit minus the bias. For ICE, we only show the first six out of 142 and for ACE the first six out of 25 concepts. For ACE, no complement exists. For MCD-kmeans, relevance is distributed over all three concepts and also four of the five MCD-SSC concepts have notable relevance. In contrast, among the MCD flavors with orthogonal concepts (MCD-SSC-ortho and ICE), only one concept notably contributes to the prediction.

⁷ Corresponding prototypes can be found in Figures 9 to 11.

To summarize the results of the concept flipping experiment, our general MCD definition leads to the most faithful concepts, as the two unconstrained MCD flavors (MCD-SSC and MCD-kmeans) show the steepest descent among all methods without reverting to the non-informative solution of a single relevant concept.
|
374 |
+
|
375 |
+
## 4.2.2 Conciseness Of Explanations
|
376 |
+
|
377 |
+
For an accessible explanation, it is desirable, to explain the model reasoning with as few meaningful concepts as completely as possible, i.e. to deliver concise concept explanations. To compare the conciseness of concept explanations we measure the number of concepts required to reach a certain completeness score η**, i.e. how**
|
378 |
+
many concepts are necessary to cover the whole relevant feature space. Again, there is a trivial solution, namely leveraging more and more dimensions to cover the majority of feature space via a single concept.
|
379 |
+
|
380 |
+
Therefore, we additionally evaluate the average subspace dimension d^l and the mean (scaled) Grassmann distance ∆^kl_c, as defined in Equation (10), between all concept pairs (k, l) within one class c to quantify how dissimilar two concepts are.⁸ In summary, we argue that concepts should be concise (small nc), but dissect the feature space into meaningful building blocks of model reasoning. While the latter is difficult to quantify, we argue that there is a trade-off between (1) covering feature space with very few concepts of high dimensionality and potentially small distance vs. (2) dissecting it into a high number of concepts with small dimensionality (extreme case: one-dimensional). To support this reasoning, we also inspect the visual impression of concepts for a selection of classes. Again, we base our evaluation on the CIFAR10 classes and concepts for the last hidden layer of the ResNet50v2 model. For all methods within the MCD framework, we fix the number of concepts in a class-dependent way such that we reach a completeness score of η = 0.5.
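A minimal sketch of how this class-dependent number of concepts can be fixed is given below; `discover_concepts` and `completeness_score` are hypothetical placeholders for the chosen MCD flavor and for the completeness evaluation, respectively.

```python
def minimal_num_concepts(features, labels, discover_concepts, completeness_score,
                         eta=0.5, nc_max=30):
    """Smallest number of concepts whose subspaces reach completeness >= eta."""
    for nc in range(1, nc_max + 1):
        subspaces = discover_concepts(features, n_concepts=nc)   # e.g. MCD-SSC
        if completeness_score(subspaces, features, labels) >= eta:
            return nc
    return None  # target completeness not reached within nc_max concepts
```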
|
385 |
+
|
386 |
+
We list the number of concepts nc that is required to reach a completeness score of η = 0.5 and d^l in Section 4.2.1. To provide a visual comparison of the concepts discovered by these methods, we show concept activation maps of prototypes for the basketball, golden retriever and airliner classes in ImageNet in Figures 9 to 11 and judge how broad they appear in input space. MCD-kmeans discovers the smallest number of concepts with the highest mean concept dimensionality of 74.7 and the smallest inter-concept distance (mean(∆^kl_c) = 0.83) among all methods. This is reflected in the visual appearance of the concept prototypes, which are visually broad and difficult to distinguish. MCD-SSC discovers on average 4.8 concepts with a 41% smaller mean concept dimensionality of 44.2. Visually, its concepts are medium broad and easier to distinguish in input space than those of MCD-kmeans, which is reflected in a higher inter-concept distance of 1.19.
|
390 |
+
|
391 |
+
When requiring orthogonality, the Grassmann angle is fixed to mean(∆^kl_c) = π/2 = 1.57 (MCD-SSC-ortho and ICE). For orthogonal concepts, one concept is medium broad in input space while all others are barely activated. Most likely, the orthogonality constraint hinders the concepts from reflecting a natural similarity between certain concepts. This aligns with the conclusions drawn from the SDC benchmark. Most notably, to achieve a comparable model faithfulness (completeness score of η = 0.5), 30 times more one-dimensional ICE concepts than multi-dimensional MCD concepts are required, meaning this method delivers concept explanations that are not concise. Intuitively, a single concept is split up into several concepts, which is also reflected in their weak activation on test set samples. Lastly, the visual impression of ACE concepts is fixed by the choice of the superpixel algorithm. While ACE concepts are all one-dimensional, they do not provide a mechanism to quantify how complete they are, thus we cannot quantify the nc required to reach a completeness of 50%. As an overall summary, MCD-SSC is superior in dissecting the feature space into enclosed and meaningful concepts.
|
396 |
+
|
397 |
+
## 4.3 Use Case: Mcd Concepts Reveal Differences In Classification Strategies Between Model Architectures And Training Procedures
|
398 |
+
|
399 |
+
Finally, we showcase how MCD can unravel different classification strategies depending on the model architecture (ResNet50 vs. Swin-T) and the training procedure (ResNet50 vs. ResNet50v2), most notably the fact that ResNet50v2 was trained using a multilabel loss. The test accuracies for the subset of CIFAR10 classes are 0.80 (ResNet50), 0.84 (ResNet50v2) and 0.86 (Swin-T). Here, we focus on MCD-SSC and, as before, restrict ourselves to concepts in the last hidden feature layer.

⁸ We use a scaled version of the original Grassmann distance that aggregates the principal angles (in radian) between two subspaces, for which 0 ≤ ∆^kl_c ≤ π. Two special cases are ∆^kl_c = 0, meaning that the subspace basis vectors are perfectly aligned, and ∆^kl_c = π/2, meaning that they are orthogonal.
|
403 |
+
Table 2: Summary of concept discovery methods considered in this work in comparison to prior work from Zhang et al. (2021) (ICE/MCD-PCA) and Ghorbani et al. (2019) (ACE). We measure the average subspace dimension d^l and the number of concepts nc that is required to reach a completeness score of η = 0.5 for ResNet50v2 on the CIFAR10 classes. A small number of relevant concepts nc is desirable since this summarizes the complete model into an accessible and meaningful format. Here, multi-dimensional concepts have an advantage. Additionally, we evaluate the mean (scaled) Grassmann distance ∆^kl_c, see Equation (10), between all concept pairs (k, l) within one class c to quantify the distinctness between concepts. The visual inspection is based on prototypes of the basketball, golden retriever and airliner class concepts in Figures 9 to 11. Medium broad and distinct concepts are the most informative.
|
408 |
+
|
409 |
+
| Method        | mean(d^l) | mean(nc) | mean(∆^kl_c) | Visual inspection            |
|---------------|-----------|----------|--------------|------------------------------|
| MCD-SSC       | 44.2      | 4.8      | 1.19         | medium broad                 |
| MCD-SSC-ortho | 44.2      | 4.8      | 1.57         | only one broad (rest narrow) |
| MCD-kmeans    | 74.7      | 2.7      | 0.83         | very broad                   |
| ICE/MCD-PCA   | 1         | 146.7    | 1.57         | only one broad (rest narrow) |
| ACE           | 1         | n.a.     | n.a.         | medium broad                 |
|
419 |
+
|
420 |
+
First, we compare the discovered concepts between the models by the activation maps of concept prototypes for the beach wagon class of ImageNet in Figure 6. We fix the number of concepts to nc = 5. For Swin-T, we only apply a spatial upsampling of the concept activation maps from the feature space to the input space to 14×14 in order to account for the 16×16 patch tokenization. We find that ResNet50 concepts, which could roughly be identified as (car body, windows, car roof, wheels, street), are narrower than those of Swin-T and ResNet50v2. The latter are related to broader views of the car, such as concepts (1, 2, 4) for ResNet50v2 and concepts (1, 3) for Swin-T. Interestingly, ResNet50v2 concepts reach a much lower completeness score of η = 0.49 than ResNet50 (η = 0.89) and Swin-T (η = 0.84) for fixed nc = 5. In Figure 6 we show the relation between the total concept space dimensionality, the number of concepts nc and the completeness score η across the CIFAR10 classes. Even for nc = 30, the ResNet50v2 concepts have a lower η than those of the ResNet50 for nc = 3, although the former already cover a much larger part of the concept space. These observations support the statement that the feature space of ResNet50v2 exhibits a comparably richer structure than that of ResNet50. Thus, MCD-SSC concepts can reveal interesting differences in the character of the feature space as a consequence of two different training procedures for the same architecture. Interestingly, the dependence of η on nc for the concepts of two models with different architectures, ResNet50 and Swin-T, is quite similar. This also aligns with the visual appearance of the concepts.
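The spatial upsampling applied to the Swin-T activation maps above can be sketched as a simple bilinear interpolation of the token-grid activations to the input resolution; this is only an illustrative PyTorch snippet, and the exact rendering used for our figures may differ.

```python
import torch
import torch.nn.functional as F

def upsample_activation_map(cam: torch.Tensor, input_size=(224, 224)) -> torch.Tensor:
    """Bilinearly upsample a low-resolution concept activation map, e.g. a
    14x14 grid of patch-token activations, to the input image resolution."""
    cam = cam[None, None]  # (H, W) -> (1, 1, H, W), as required by interpolate
    upsampled = F.interpolate(cam, size=input_size, mode="bilinear",
                              align_corners=False)
    return upsampled[0, 0]  # back to (H_in, W_in)
```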
|
428 |
+
|
429 |
+
To summarize, Swin-T and ResNet50 build on broader and more versatile concepts. In comparison, ResNet50v2 builds on narrower and thus more specific concepts for its classification strategy. These broad concepts are not unexpected for a transformer architecture like Swin-T with coarse self-attention windows, but they are a rather surprising finding for ResNet50 in comparison to ResNet50v2.
|
430 |
+
|
431 |
+
## 5 Summary And Discussion
|
432 |
+
|
433 |
+
In this work, we put forward MCD, a general framework for concept discovery based on the hidden representation of a trained deep neural network. Unlike prior work in the field, we propose a general concept definition (incorporating previous approaches) as multi-dimensional linear subspaces without restricting to single directions or enforcing orthogonality between subspaces. We use concept activation maps to visualize concepts in input space. Considering the final hidden layer representation, we can reformulate the original model as a linear classifier acting on linear concept subspaces without the need to retrain with a special objective. This leads to a completeness relation, i.e., a natural decomposition of class logits into contributions corresponding to specific concepts, and allows us to resolve their spatial importance in terms of concept relevance heatmaps. As a particularly suited realization of our framework, we put forward MCD-SSC, which
|
434 |
+
|
435 |
+
![15_image_0.png](15_image_0.png)
|
436 |
+
|
437 |
+
Figure 6: Left: Mean concept space completeness score ν for the CIFAR10 classes across architectures against the dimensionality of the union of all concept subspaces Σ_l d^l. The number of concepts can be inferred from the points on the line, where the first point on each line corresponds to nc = 3 and the last one to nc = 30. ResNet50v2 shows a much lower completeness score at roughly the same nc and Σ_l d^l as ResNet50. The feature space dimensionality is F = 2048 for ResNet50(v2) and F = 768 for Swin-T. Right: We show MCD-SSC concept activation maps for concept prototypes for ResNet50, ResNet50v2 and Swin-T and the beach wagon class in ImageNet. We fixed the number of concepts to nc = 5. In this way, ResNet50v2 reaches η = 0.49, ResNet50 η = 0.89 and Swin-T η = 0.84. Each row shows a single concept and is titled by its global concept importance. The last row shows the orthogonal complement of the concept space.
|
445 |
+
relies on sparse subspace clustering for concept discovery. Based on qualitative and quantitative insights, we show the superiority of MCD-SSC over other MCD flavors that build on traditional clustering algorithms.
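To make the completeness relation mentioned above concrete, the following numerical sketch decomposes a class logit into per-concept contributions. It is a simplified illustration under the assumption that the concept subspaces together with the orthogonal complement span the full feature space; the function and variable names are illustrative, and a plain linear solve stands in for the projection used in our implementation.

```python
import numpy as np

def decompose_logit(phi, w, bias, concept_bases, complement_basis):
    """Split the class logit w @ phi + bias into per-concept contributions.

    phi:              (F,) last hidden layer feature vector
    w:                (F,) weight vector of the class logit
    concept_bases:    list of (F, d_k) bases of the concept subspaces
    complement_basis: (F, d_rest) basis of the orthogonal complement
    """
    blocks = concept_bases + [complement_basis]
    B = np.concatenate(blocks, axis=1)      # (F, F) full change-of-basis matrix
    coords = np.linalg.solve(B, phi)        # coordinates such that phi = B @ coords
    contributions, start = [], 0
    for block in blocks:
        d = block.shape[1]
        contributions.append(float(w @ block @ coords[start:start + d]))
        start += d
    # the contributions sum to the logit minus the bias (up to numerical error)
    assert np.isclose(sum(contributions), w @ phi)
    return contributions, float(w @ phi + bias)
```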
|
446 |
+
|
447 |
+
We showcase the ability of MCD by discriminating between hidden representations obtained from different model architectures and training strategies. This paves the way towards further novel use cases for MCD concepts, such as gaining insights in the natural sciences, e.g., identifying sub-classes of cancerous cells in histopathology, or summarizing model behavior beyond single examples and thereby systematically discovering model biases. MCD prioritizes faithfulness of concepts over identifiability with known human concepts by not including any interpretability-enforcing regularizers that could obfuscate the original structures learned by the model. In this way, we can guarantee to cover the original model reasoning structure, which is crucial for auditing models or scientific discovery use cases. However, this might confuse non-expert human users at first sight, since deep learning models will most likely not rely entirely on human-like features.
|
449 |
+
|
450 |
+
Code to reproduce our experiments is publicly available at **https://github.com/jvielhaben/MCD-XAI**.
|
451 |
+
|
452 |
+
## Acknowledgments
|
453 |
+
|
454 |
+
This work was supported by the German Ministry for Education and Research (BMBF) through BIFOLD
|
455 |
+
(refs. 01IS18025A and 01IS18037A).
|
456 |
+
|
457 |
+
## References
|
458 |
+
|
459 |
+
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, Sebastian Bosse, Thomas Wiegand, Wojciech Samek, and Sebastian Lapuschkin. From "where" to "what": Towards human-understandable explanations through concept relevance propagation. *arXiv preprint 2206.03208***, 2022.**
|
460 |
+
Jonathan Bac, Evgeny M. Mirkes, Alexander N. Gorban, Ivan Tyukin, and Andrei Zinovyev. Scikit-dimension: a python package for intrinsic dimension estimation. *arXiv preprint 2109.02596***, 2021.**
|
461 |
+
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PLOS ONE***, 10(7):e0130140, 2015.**
|
462 |
+
David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In *IEEE Conference on Computer Vision and Pattern* Recognition**, pp. 6541–6549, 2017.**
|
463 |
+
Marcelo Bertalmio, Andrea L Bertozzi, and Guillermo Sapiro. Navier-stokes, fluid dynamics, and image and video inpainting. In *IEEE Conference on Computer Vision and Pattern Recognition***. IEEE, 2001.**
|
464 |
+
Stefan Blücher, Lukas Kades, Jan M Pawlowski, Nils Strodthoff, and Julian M Urban. Towards novel insights in lattice field theory with explainable machine learning. *Physical Review D***, 101(9):094507, 2020.**
|
465 |
+
Stefan Blücher, Johanna Vielhaben, and Nils Strodthoff. Preddiff: Explanations and interactions from conditional expectations. *Artificial Intelligence***, 312:103774, 2022. ISSN 0004-3702.**
|
466 |
+
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *International Conference on Computer Vision***, pp. 9630–9640, 2021.**
|
467 |
+
Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that:
|
468 |
+
deep learning for interpretable image recognition. *Advances in Neural Information Processing Systems*,
|
469 |
+
32, 2019.
|
470 |
+
|
471 |
+
Weifu Chen and Guocan Feng. Spectral clustering: A semi-supervised approach. *Neurocomputing***, 77(1):**
|
472 |
+
229–242, 2012.
|
473 |
+
|
474 |
+
Zhi Chen, Yijie Bei, and Cynthia Rudin. Concept whitening for interpretable image recognition. *Nature* Machine Intelligence**, 2(12):772–782, Dec 2020.**
|
475 |
+
Pattarawat Chormai, Jan Herrmann, Klaus-Robert Müller, and Grégoire Montavon. Disentangled explanations of neural network predictions by finding relevant subspaces. *arXiv preprint 2212.14855***, 2022.**
|
476 |
+
Ian Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: A unified framework for model explanation. *Journal of Machine Learning Research***, 22(209):1–90, 2021.**
|
477 |
+
Jonathan Crabbé and Mihaela van der Schaar. Concept activation regions: A generalized framework for concept-based explanations. *Advances in Neural Information Processing Systems***, 35, 2022.**
|
478 |
+
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *IEEE Conference on Computer Vision and Pattern Recognition***, pp. 248–255, 2009.**
|
479 |
+
Ehsan Elhamifar and René Vidal. Sparse subspace clustering: Algorithm, theory, and applications. In *IEEE*
|
480 |
+
Transactions on Pattern Analysis and Machine Intelligence**, pp. 2765–2781. IEEE, 2013.**
|
481 |
+
Ruth Fong and Andrea Vedaldi. Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks. In *IEEE Conference on Computer Vision and Pattern Recognition***, pp. 8730–8738,**
|
482 |
+
2018.
|
483 |
+
|
484 |
+
Keinosuke Fukunaga and David R Olsen. An algorithm for finding intrinsic dimensionality of data. *IEEE*
|
485 |
+
Transactions on Computers**, 100(2):176–183, 1971.**
|
486 |
+
Amirata Ghorbani, James Wexler, James Y Zou, and Been Kim. Towards automatic concept-based explanations. *Advances in Neural Information Processing Systems***, 32, 2019.**
|
487 |
+
Miriam Hägele, Philipp Seegerer, Sebastian Lapuschkin, Michael Bockmayr, Wojciech Samek, Frederick Klauschen, Klaus-Robert Müller, and Alexander Binder. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. *Scientific Reports***, 10:6423, 2020.**
|
488 |
+
Jihun Hamm. *Subspace-based learning with Grassmann kernels***. PhD thesis, University of Pennsylvania,**
|
489 |
+
2008.
|
490 |
+
|
491 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition**, pp. 770–778, 2016.**
|
492 |
+
Camille Jordan. Essai sur la géométrie à n dimensions. *Bulletin de la Société Mathématique de France***, 3:**
|
493 |
+
103–174, 1875.
|
494 |
+
|
495 |
+
Vidhya Kamakshi, Uday Gupta, and Narayanan C Krishnan. Pace: Posthoc architecture-agnostic concept extractor for explaining cnns. In *International Joint Conference on Neural Networks***, 2021.**
|
496 |
+
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In International Conference on Machine Learning**, pp. 2668–2677. PMLR, 2018.**
|
497 |
+
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. Concept bottleneck models. In Hal Daumé III and Aarti Singh (eds.), *International Conference* on Machine Learning**, volume 119, pp. 5338–5348, 2020.**
|
498 |
+
Ashish Kumar, Karan Sehgal, Prerna Garg, Vidhya Kamakshi, and Narayanan Chatapuramkrishnan. Mace:
|
499 |
+
Model agnostic concept extractor for explaining image classification networks. *IEEE Transactions on* Artificial Intelligence**, 2(6):574–583, 2021.**
|
500 |
+
Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. *Nature* communications**, 10:1096, 2019.**
|
501 |
+
Jiahui Li, Kun Kuang, Lin Li, Long Chen, Songyang Zhang, Jian Shao, and Jun Xiao. Instance-wise or class-wise? a tale of neighbor shapley for concept-based explanation. In **ACM International Conference** on Multimedia**, pp. 3664–3672, 2021.**
|
502 |
+
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In **IEEE International Conference on**
|
503 |
+
Computer Vision**, 2021.**
|
504 |
+
Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. *Advances in Neural* Information Processing Systems**, 30, 2017.**
|
505 |
+
Emanuele Marconato, Andrea Passerini, and Stefano Teso. GlanceNets: Interpretable, leak-proof concept-based models. In *UAI 2022 Workshop on Causal Representation Learning***, 2022.**
|
506 |
+
Thomas McGrath, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Martin Wattenberg, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. Acquisition of chess knowledge in alphazero. *Proceedings* of the National Academy of Sciences**, 119(47):e2206625119, 2022.**
|
507 |
+
Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. *Digital Signal Processing***, 73:1–15, 2018.**
|
508 |
+
Iam Palatnik de Sousa, Marley MBR Vellasco, and Eduardo Costa da Silva. Explainable artificial intelligence for bias detection in covid ct-scan classifiers. *Sensors***, 21(16):5657, 2021.**
|
509 |
+
Filip Radenovic, Abhimanyu Dubey, and Dhruv Mahajan. Neural basis models for interpretability. **Advances**
|
510 |
+
in Neural Information Processing Systems**, 35, 2022.**
|
511 |
+
Mostafa Rahmani and George Atia. Innovation pursuit: A new approach to the subspace clustering problem.
|
512 |
+
|
513 |
+
In *International Conference on Machine Learning***, volume 70, pp. 2874–2882, 2017a.**
|
514 |
+
Mostafa Rahmani and George K. Atia. Subspace clustering via optimal direction search. **IEEE Signal**
|
515 |
+
Processing Letters**, 24(12):1793–1797, 2017b.**
|
516 |
+
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should I trust you?": Explaining the predictions of any classifier. In *International Conference Knowledge Discovery and Data Mining***, pp. 1135–1144,**
|
517 |
+
2016.
|
518 |
+
|
519 |
+
Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller.
|
520 |
+
|
521 |
+
Evaluating the visualization of what a deep neural network has learned. *IEEE Transactions on Neural Networks and Learning Systems***, 28(11):2660–2673, 2017. doi: 10.1109/TNNLS.2016.2599820.**
|
522 |
+
Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. **Proceedings**
|
523 |
+
of the IEEE**, 109(3):247–278, 2021.**
|
524 |
+
Ana Šarčević, Damir Pintar, Mihaela Vranić, and Agneza Krajna. Cybersecurity knowledge extraction using xai. *Applied Sciences***, 12(17):8669, 2022.**
|
525 |
+
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. *International Journal of Computer Vision***, 128(2):336–359, 2020.**
|
526 |
+
Mahdi Soltanolkotabi and Emmanuel J Candes. A geometric analysis of subspace clustering with outliers.
|
527 |
+
|
528 |
+
The Annals of Statistics**, 40(4):2195–2238, 2012.**
|
529 |
+
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In *International* Conference on Machine Learning**, volume 70, pp. 3319–3328. JMLR, 2017.**
|
530 |
+
Ulrike Von Luxburg. A tutorial on spectral clustering. *Statistics and computing***, 17(4):395–416, 2007.**
|
531 |
+
Manuel Weber, David Kersting, Lale Umutlu, Michael Schäfers, Christoph Rischpler, Wolfgang P Fendler, Irène Buvat, Ken Herrmann, and Robert Seifert. Just another "clever hans"? neural networks and fdg pet-ct to predict the outcome of patients with breast cancer. *European journal of nuclear medicine and* molecular imaging**, 48(10):3141–3150, 2021.**
|
532 |
+
Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models**, 2019.**
|
533 |
+
Ross Wightman, Hugo Touvron, and Hervé Jégou. Resnet strikes back: An improved training procedure in timm. *arXiv preprint 2110.00476***, 2021.**
|
534 |
+
Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, and Yu-Wing Tai. Towards global explanations of convolutional neural networks with concept attribution. In **IEEE Conference on**
|
535 |
+
Computer Vision and Pattern Recognition**, pp. 8649–8658, 2020.**
|
536 |
+
Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. *Advances in Neural Information* Processing Systems**, 33, 2020.**
|
537 |
+
Chong You, Chun-Guang Li, Daniel P Robinson, and René Vidal. Oracle based active set algorithm for scalable elastic net subspace clustering. In *IEEE Conference on Computer Vision and Pattern Recognition*,
|
538 |
+
pp. 3928–3937, 2016a.
|
539 |
+
|
540 |
+
Chong You, Daniel Robinson, and René Vidal. Scalable sparse subspace clustering by orthogonal matching pursuit. In *IEEE Conference on Computer Vision and Pattern Recognition***, pp. 3918–3927, 2016b.**
|
541 |
+
Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, and Mateja Jamnik. Concept embedding models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems***, 2022.**
|
542 |
+
Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A Ehinger, and Benjamin IP Rubinstein. Invertible concept-based explanations for cnn models with non-negative concept activation vectors. In **AAAI**
|
543 |
+
Conference on Artificial Intelligence**, pp. 11682–11690, 2021.**
|
544 |
+
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In *IEEE Conference on Computer Vision and Pattern Recognition***, pp.**
|
545 |
+
2921–2929, 2016.
|
546 |
+
|
547 |
+
## A Ssc Algorithmic Details
|
548 |
+
|
549 |
+
**Concept-determining self-representation** We compute sparse self-representations R for a random subcollection of n ≤ N · H · W feature vectors {φ^α_xy} sampled from S. Here, the term self-representation refers to a coefficient matrix that expresses each sample as a linear combination of all other samples. More specifically, using the notation from (Elhamifar & Vidal, 2013), given the feature vectors Φ = [φ_1, . . . , φ_n] ∈ R^{F×n}, we identify a sparse coefficient matrix R = [r_1, . . . , r_n] ∈ R^{n×n} such that

$$\phi_{j}=\Phi r_{j}\quad\text{where}\quad r_{jj}=0.\tag{8}$$
|
558 |
+
The particular kind of sparsity constraints that are imposed on Equation (8) and how it is optimized depends on the chosen SSC algorithm. Here, we use elastic net subspace clustering (You et al., 2016a), which is robust against noise and scales well for large sample sizes. In all our experiments, we fix the hyperparameter γ, which balances sparsity vs. robustness, to γ = 10. We confirmed that the results are not sensitive to variation of this parameter over a range of values from 5 to 50. As the computation time for SSC depends on this parameter, we chose γ such that the computation time is minimized.
|
560 |
+
We remove outliers based on the ℓ1-norm as in (Soltanolkotabi & Candes, 2012), where we empirically fix the percentile threshold to 0.75 and re-fit the sparse self-representation for the remaining elements.
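The following sketch conveys the structure of this first step. It is a simplified stand-in, not the elastic net subspace clustering solver of You et al. (2016a) that we actually use: each column is regressed on all remaining columns with a generic scikit-learn `ElasticNet` (the `alpha`/`l1_ratio` values are arbitrary placeholders and do not correspond to our γ), and the outlier criterion follows the intuition of Soltanolkotabi & Candes (2012) that outliers require an unusually large ℓ1-norm; the exact criterion in our implementation may differ.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def self_representation(Phi, alpha=0.01, l1_ratio=0.9):
    """Sparse self-representation R with zero diagonal: Phi[:, j] ~ Phi @ R[:, j].

    Phi: (F, n) matrix whose columns are the sampled feature vectors
    """
    _, n = Phi.shape
    R = np.zeros((n, n))
    for j in range(n):
        others = np.delete(np.arange(n), j)              # enforce r_jj = 0
        reg = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
        reg.fit(Phi[:, others], Phi[:, j])               # regress column j on the rest
        R[others, j] = reg.coef_
    return R

def inlier_mask(R, percentile=75):
    """Keep samples whose self-representation has a small l1-norm; samples that
    need an unusually large l1-norm cannot be represented sparsely by the rest
    and are treated as outliers (Soltanolkotabi & Candes, 2012)."""
    l1_norms = np.abs(R).sum(axis=0)
    return l1_norms <= np.percentile(l1_norms, percentile)
```

The self-representation would then be re-fit on the remaining columns before the clustering step below.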
|
562 |
+
|
563 |
+
Another scalable alternative to the elastic net clustering is orthogonal matching pursuit (OMP) (You et al.,
|
564 |
+
2016b), which is, however, not robust against noise and does not allow for outlier removal via thresholding.
|
565 |
+
|
566 |
+
Finally, the original sparse subspace clustering method from (Elhamifar & Vidal, 2013) is robust against noise and outliers but does not scale to large datasets. These robustness and scalability properties make elastic net subspace clustering (with thresholding) an ideal choice for the first step of our concept discovery method.
|
567 |
+
|
568 |
+
**Spectral clustering** In a second step, we perform spectral clustering with the affinity matrix W = |R| + |R^T|, which encodes the similarity of two feature vectors according to their self-representations. We determine the number of clusters nc either via the largest gap in the spectrum of the Laplacian (Von Luxburg, 2007) or use a predetermined value. This step assigns every input feature φ_i to a particular cluster C_1, . . . , C_{nc} or to the set of outliers.
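A minimal sketch of this second step, using scikit-learn and a simple eigengap heuristic for the number of clusters, could look as follows; the eigengap variant shown is one of several possible readings of the criterion described by Von Luxburg (2007).

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import SpectralClustering

def cluster_concepts(R, n_clusters=None, nc_max=10, random_state=0):
    """Cluster feature vectors into concepts from a self-representation R."""
    W = np.abs(R) + np.abs(R.T)                    # symmetric affinity matrix
    if n_clusters is None:                         # eigengap heuristic
        L = laplacian(W, normed=True)
        eigenvalues = np.sort(np.linalg.eigvalsh(L))[: nc_max + 1]
        n_clusters = int(np.argmax(np.diff(eigenvalues))) + 1
    clustering = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                    random_state=random_state)
    return clustering.fit_predict(W), n_clusters
```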
|
570 |
+
|
571 |
+
## B Characterizing Relations Between Subspaces By Principal Angles
|
572 |
+
|
573 |
+
In this section, we briefly review the definition of principal angles, which can be used to characterize the relation between two linear subspaces. The principal angles θ^AB_i (Jordan, 1875) (i = 1, . . . , min(dim A, dim B)) between two linear subspaces A, B are defined recursively via

$$\cos\theta_{i}^{AB}=\max_{\mathbf{a}\in A,\,\mathbf{b}\in B}\frac{\mathbf{a}^{T}\mathbf{b}}{|\mathbf{a}||\mathbf{b}|}=:\frac{\mathbf{a}_{i}^{T}\mathbf{b}_{i}}{|\mathbf{a}_{i}||\mathbf{b}_{i}|}\,,\tag{9}$$

where the maximum is taken subject to the orthogonality constraints a^T a_j = 0 and b^T b_j = 0 for j = 1, . . . , i − 1.
|
582 |
+
|
583 |
+
To quantify the similarity between two subspaces A and B, we use a scaled version of their Grassmann distance (Hamm, 2008), which is defined as

$$\Delta^{AB}=\frac{1}{\sqrt{\min(\dim A,\dim B)}}\sqrt{(\theta_{1}^{AB})^{2}+\ldots+(\theta_{\min(\dim A,\dim B)}^{AB})^{2}}\,.\tag{10}$$

This allows comparing the similarity of concepts within a given class or across classes regardless of the concept subspaces' dimensionality.
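As a small illustration, the principal angles can be computed from the singular values of the product of orthonormal bases, and Equation (10) then reduces to a root mean square of these angles; the snippet below is a minimal numpy sketch.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (in radians) between the column spans of A and B."""
    Qa, _ = np.linalg.qr(A)                 # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)                 # orthonormal basis of span(B)
    singular_values = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(singular_values, -1.0, 1.0))

def scaled_grassmann_distance(A, B):
    """Scaled Grassmann distance of Equation (10): the root mean square of the
    principal angles, 0 for aligned and pi/2 for orthogonal subspaces."""
    angles = principal_angles(A, B)
    return float(np.sqrt(np.mean(angles ** 2)))
```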
|
589 |
+
|
590 |
+
|
593 |
+
|
594 |
+
## C Qualitative Results
|
595 |
+
|
596 |
+
For a qualitative comparison of the concept activation maps and relevance heatmaps between the methods in Section 4.2, we provide results for selected samples in Figures 5, 7 and 8. In Figures 9 to 11, we show concept activation maps for concept prototypes.
|
597 |
+
|
598 |
+
![20_image_0.png](20_image_0.png)
|
599 |
+
|
600 |
+
|
601 |
+
|
602 |
+
Figure 7: Concept heatmaps and activation maps for ResNet50v2 and a randomly chosen sample from the basketball class in ImageNet. The number of concepts is chosen such that the completeness score reaches η = 0.5. Concepts are ordered from left to right according to global concept relevance. Concept heatmaps
|
603 |
+
are titled by the pooled local concept relevance that sums to the prediction logit minus the bias. For ICE,
|
604 |
+
we only show the first six out of 105 and for ACE the first six out of 25 concepts.
|
605 |
+
|
606 |
+
![21_image_0.png](21_image_0.png)
|
607 |
+
|
608 |
+
|
609 |
+
|
610 |
+
Figure 8: Concept heatmaps and activation maps for ResNet50v2 and a randomly chosen sample from the airliner class in ImageNet. The number of concepts is chosen such that the completeness score reaches η = 0.5. Concepts are ordered from left to right according to global concept relevance. Concept heatmaps
|
611 |
+
are titled by the pooled local concept relevance that sums to the prediction logit minus the bias. For ICE,
|
612 |
+
we only show the first seven out of 141 and for ACE the first seven out of 25 concepts.
|
613 |
+
|
614 |
+
![22_image_0.png](22_image_0.png)
|
615 |
+
|
616 |
+
![22_image_1.png](22_image_1.png)
|
617 |
+
|
618 |
+
Figure 9: Concept activation maps for concept prototypes for the basketball class of ImageNet. The last row shows the prototype for the complement, except for ACE, where no complement exists. For ICE, we only show the first six out of 105 and for ACE the first six out of 25 concepts.
|
619 |
+
|
620 |
+
![23_image_0.png](23_image_0.png)
|
621 |
+
|
622 |
+
![23_image_1.png](23_image_1.png)
|
623 |
+
|
624 |
+
Figure 10: Concept activation maps for concept prototypes for the golden retriever class of ImageNet. The last row shows the prototype for the complement, except for ACE, where no complement exists. For ICE, we only show the first six out of 142 and for ACE the first six out of 25 concepts.
|
625 |
+
|
626 |
+
![24_image_0.png](24_image_0.png)
|
627 |
+
|
628 |
+
Figure 11: Concept activation maps for concept prototypes for the airliner class of ImageNet. The last row shows the prototype for the complement, except for ACE, where no complement exists. For ICE, we only show the first seven out of 141 and for ACE the first seven out of 25 concepts.
|
KxBQPz7HKh/KxBQPz7HKh_meta.json
ADDED
@@ -0,0 +1,25 @@
1 |
+
{
|
2 |
+
"languages": null,
|
3 |
+
"filetype": "pdf",
|
4 |
+
"toc": [],
|
5 |
+
"pages": 25,
|
6 |
+
"ocr_stats": {
|
7 |
+
"ocr_pages": 0,
|
8 |
+
"ocr_failed": 0,
|
9 |
+
"ocr_success": 0,
|
10 |
+
"ocr_engine": "none"
|
11 |
+
},
|
12 |
+
"block_stats": {
|
13 |
+
"header_footer": 25,
|
14 |
+
"code": 0,
|
15 |
+
"table": 2,
|
16 |
+
"equations": {
|
17 |
+
"successful_ocr": 18,
|
18 |
+
"unsuccessful_ocr": 2,
|
19 |
+
"equations": 20
|
20 |
+
}
|
21 |
+
},
|
22 |
+
"postprocess_stats": {
|
23 |
+
"edit": {}
|
24 |
+
}
|
25 |
+
}
|