Fundamentals

Contents
2.1 Statistical Learning - basics
    2.1.1 Neural Networks
    2.1.2 Probabilistic Evaluation
    2.1.3 Architectures
2.2 Reliability and Robustness
    2.2.1 Generalization and Adaptation
    2.2.2 Confidence Estimation
    2.2.3 Evaluation Metrics
    2.2.4 Calibration
    2.2.5 Predictive Uncertainty Quantification
    2.2.6 Failure Prediction
2.3 Document Understanding
    2.3.1 Task Definitions
    2.3.2 Datasets
    2.3.3 Models
    2.3.4 Challenges in Document Understanding
2.4 Intelligent Automation
2.1 Statistical Learning
Two popular definitions of Machine Learning (ML) are given below.

"Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed." [406]

"A computer program is said to learn from experience E with respect to some class of tasks T, and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." [317]
Following these, different types of learning problems [472] can be discerned, of which the most common (and the one used throughout our works) is supervised learning. It defines experience E as a set of input-output pairs, for which the task T is to learn a mapping f from inputs X ∈ 𝒳 to outputs Y ∈ 𝒴, and the performance measure P is the risk or expected loss (Equation (2.1)), given a (0-1) loss function ℓ : 𝒴 × 𝒴 → ℝ+.

R(f) = E_{(X,Y)∼P}[ℓ(Y, f(X))]    (2.1)
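Since the data distribution P is unknown in practice, the risk in Equation (2.1) is approximated by an average over a finite sample. A minimal sketch of this empirical estimate under a 0-1 loss, using a hypothetical threshold classifier and toy data (both are illustrative assumptions, not from the text):

```python
def zero_one_loss(y_true, y_pred):
    # ℓ : 𝒴 × 𝒴 → ℝ+, here 0 if the prediction is correct, 1 otherwise
    return 0 if y_true == y_pred else 1

def f(x, theta=0.5):
    # hypothetical classifier f(·; θ) with a single threshold parameter θ
    return 1 if x >= theta else 0

# toy sample of input-output pairs playing the role of experience E
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.45, 1)]

# empirical risk: the sample average of the loss, approximating E[ℓ(Y, f(X))]
risk = sum(zero_one_loss(y, f(x)) for x, y in data) / len(data)
print(risk)  # one of five points is misclassified -> 0.2
```

Under a 0-1 loss, the empirical risk is simply the error rate of f on the sample; supervised learning then amounts to choosing θ (or f itself) to make this quantity small.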
The mapping f(·; θ) : 𝒳 → 𝒴 is typically parameterized by a set of parameters θ (omitted whenever it is fixed) and a hypothesis class F, which is a set of