2 FUNDAMENTALS

Contents

2.1  Statistical Learning - basics
     2.1.1  Neural Networks
     2.1.2  Probabilistic Evaluation
     2.1.3  Architectures
2.2  Reliability and Robustness
     2.2.1  Generalization and Adaptation
     2.2.2  Confidence Estimation
     2.2.3  Evaluation Metrics
     2.2.4  Calibration
     2.2.5  Predictive Uncertainty Quantification
     2.2.6  Failure Prediction
2.3  Document Understanding
     2.3.1  Task Definitions
     2.3.2  Datasets
     2.3.3  Models
     2.3.4  Challenges in Document Understanding
2.4  Intelligent Automation

2.1 Statistical Learning

Two popular definitions of Machine Learning (ML) are given below.

"Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed." [406]

"A computer program is said to learn from experience E with respect to some class of tasks T, and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." [317]

Following these, different types of learning problems [472] can be discerned, of which the most common (and the one used throughout our works) is supervised learning. It defines experience E as a set of input-output pairs, for which the task T is to learn a mapping f from inputs X ∈ 𝒳 to outputs Y ∈ 𝒴, and the performance measure P is the risk or expected loss (Equation (2.1)), given a loss function ℓ : 𝒴 × 𝒴 → ℝ₊ (e.g., the 0-1 loss).

    R(f) = 𝔼_{(X,Y)∼P}[ℓ(Y, f(X))]    (2.1)

The mapping f(·; θ) : 𝒳 → 𝒴 is typically parameterized by a set of parameters θ (omitted whenever it is fixed) and a hypothesis class ℱ, which is a set of
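To make Equation (2.1) concrete, the short sketch below (an illustration, not part of the thesis) estimates the risk of a fixed classifier under the 0-1 loss on a finite i.i.d. sample, i.e., it computes the empirical counterpart of R(f). The synthetic data, the threshold classifier f, and the helper names zero_one_loss and empirical_risk are assumptions made for this example.

    import numpy as np

    def zero_one_loss(y_true, y_pred):
        # 0-1 loss: 1 where the prediction is wrong, 0 where it is correct.
        return (y_true != y_pred).astype(float)

    def empirical_risk(f, X, Y, loss=zero_one_loss):
        # Monte Carlo estimate of R(f) = E_{(X,Y)~P}[loss(Y, f(X))] from i.i.d. samples.
        return float(np.mean(loss(Y, f(X))))

    # Hypothetical usage: Gaussian inputs, noisy binary labels, and a fixed
    # (parameter-free) threshold classifier f : X -> Y.
    rng = np.random.default_rng(0)
    X = rng.normal(size=1000)
    Y = (X + 0.5 * rng.normal(size=1000) > 0).astype(int)
    f = lambda x: (x > 0).astype(int)
    print(f"empirical 0-1 risk of f: {empirical_risk(f, X, Y):.3f}")

Minimizing such an empirical estimate over a hypothesis class is the usual practical surrogate for minimizing the unobservable expected risk in Equation (2.1).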