FUNDAMENTALS
model needs to be able to communicate its own uncertainty to the user. This is
the focus of Chapter 3.
Definition 4 [Robustness]. Robustness is the ability of a system to maintain
its intended function despite a wide range of disturbances, with minimal
degradation of performance [395]. Such disturbances can take the form of
adversarial attacks, distributional shifts, or other types of noise. In the ML
context, this encompasses all evaluation settings that violate the i.i.d.
assumption, including adversarial and label-noise robustness,
out-of-distribution detection, domain generalization, and extrapolation.
Robustness is more concerned with the application scope in which a model can
perform well, assuming that the model maintains some degree of its predictive
capacity on non-i.i.d. data that might be unknown at training time. Detecting
when the model is operating outside its intended scope is an important part
of robustness, as it prevents failure propagation to downstream systems.
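As a minimal sketch of such out-of-scope detection, consider the maximum-softmax-probability baseline: inputs on which the classifier's top-class confidence falls below a threshold are flagged for deferral rather than passed downstream. The threshold value and function names here are illustrative assumptions, not part of the thesis.

```python
import numpy as np

def max_softmax_confidence(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability, a simple in-scope score per input."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def flag_out_of_scope(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Flag inputs whose confidence falls below the (assumed) threshold."""
    return max_softmax_confidence(logits) < threshold

# A peaked logit vector (in-scope) vs. near-uniform logits (possibly OOD).
in_dist = np.array([[6.0, 0.5, 0.2]])
ood = np.array([[1.1, 1.0, 0.9]])
print(flag_out_of_scope(in_dist))  # [False]
print(flag_out_of_scope(ood))      # [ True]
```

Flagged inputs would then be routed to a fallback (e.g., human review) instead of propagating an unreliable prediction to downstream systems.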
Resilience is another component of the R3 (reliability, robustness, resilience)
concept in systems engineering, yet it is not a focus of this thesis, nor is it
a relevant qualifier of the ML model in isolation, as it is more related to the
system as a whole. Resilient systems are able to recover from disturbances, even
those caused by model misspecification, e.g., by adapting to new environments
and unexpected inputs from unknown distributions or by self-healing.
2.2.1 Generalization and Adaptation
To complete the R3 picture, we cannot overlook the generalization-adaptation
spectrum, which has been less explored in our works, yet is an important part
of current practice in ML.
Definition 5 [Generalization-adaptation]. Generalization is the ability of
a system to perform its intended function in a wide range of environments,
including those not known at design time [395]. Each environment is defined by
a data distribution over a domain and a task, and generalization is the ability
of a model to perform well on new data drawn from the same distribution.
Adaptation is the ability of a system to perform its intended function in a specific,
known environment, despite changes in the system itself or its environment
[395]. This entails the ability of a model to perform well on new data drawn
from a different distribution, which is known at design time.
Different settings of generalization-adaptation are: in-distribution (same
domain and task), domain generalization (same task, different domain), task
generalization (same domain, different task), out-of-distribution (different