|
# Bayesian Optimization With Derivatives Acceleration |
|
|
|
Guillaume Perrin *guillaume.perrin@univ-eiffel.fr*
Université Gustave Eiffel, COSYS, 5 Bd Descartes, 77454 Marne-La-Vallée, France

Rodolphe Le Riche *leriche@emse.fr*
LIMOS (CNRS, Mines Saint-Etienne, UCA), Saint-Etienne, France

Reviewed on OpenReview: *https://openreview.net/forum?id=XXXX*
|
|
|
## Abstract |
|
|
|
Bayesian optimization algorithms form an important class of methods to minimize functions that are costly to evaluate, which is a very common situation. These algorithms iteratively infer Gaussian processes from past observations of the function and decide where new observations should be made through the maximization of an acquisition criterion. Often, the objective function is defined on a compact set such as a hyper-rectangle of the $d$-dimensional real space, and the bounds are chosen wide enough so that the optimum is inside the search domain. In this situation, this work provides a way to integrate in the acquisition criterion the *a priori* information that these functions, once modeled as GP trajectories, should be evaluated at their minima, and not at any point as usual acquisition criteria do. We propose an adaptation of the widely used Expected Improvement acquisition criterion that accounts only for GP trajectories where the first order partial derivatives are zero and the Hessian matrix is positive definite. The new acquisition criterion keeps an analytical, computationally efficient, expression. This new acquisition criterion is found to improve Bayesian optimization on a test bed of functions made of Gaussian process trajectories in low-dimensional problems. The addition of first and second order derivative information is particularly useful for multimodal functions.
|
|
|
## 1 Introduction |
|
|
|
Over the last 20 years, Bayesian optimization (BO) methods have established themselves as one of the references for approximating the point(s) minimizing an expensive-to-evaluate black-box function, from as few calls to this function as possible. This is reflected in the existence of many reviews and tutorials on BO in the literature (see for instance Jones (2001); Sobester et al. (2008); Shahriari et al. (2015); Gramacy (2020); Garnett (2023); Frazier (2018)), as well as many industrial applications of BO, such as in aeronautics Forrester et al. (2007); Lam et al. (2018) or agriculture Picheny et al. (2017). Today, the machine learning community is a key contributor to BO advances, motivated by the need to optimize hyperparameters Bergstra et al. (2011); Snoek et al. (2012); Klein et al. (2017); Wu et al. (2019); Turner et al. (2021) or exploration strategies in reinforcement learning Wang et al. (2023). More specifically, BO is concerned with minimization problems that can be written in the following form:
|
|
|
$$x^{*}\in\underset{\mathbf{x}\in\mathbb{X}}{\operatorname{arg\,min}}\,y(\mathbf{x}),\tag{1}$$
|
where $y$ is a pointwise observable function defined over the compact set $\mathbb{X} \subset \mathbb{R}^d$, $d \geq 1$. BO assumes that $y$ can be usefully represented as a Gaussian process (GP), whose mean and covariance functions have been identified from a limited number of calls to the function $y$. It then sequentially adds new observations of $y$ at points maximizing an *acquisition criterion* whose objective, in the search for the global minimum, is to make a judicious trade-off between the exploration of $\mathbb{X}$ and the exploitation of past observations. In the theory of decision under uncertainty, acquisition criteria are the expectation of a *utility* of the possible function observations according to the stochastic model of the objective function Žilinskas & Calvin (2019). Several acquisition criteria have been proposed. The earliest, one-dimensional, version of BO Kushner (1962) involved the probability of improvement and an upper confidence bound. The upper confidence bound was later theoretically studied in many dimensions in Srinivas et al. (2010). Another early BO acquisition criterion was described in Močkus (1972) which is, since Frazier & Powell (2007), called the knowledge gradient. It is a one-step-ahead expected progress in GP mean. The Expected Improvement beyond the current best observation (EI) is the most classical acquisition criterion. The EI has a simple interpretation and an analytical expression deprived of parameters to tune, two features which have contributed to its popularity. It was first proposed in Saltenis (1971) and popularized in Schonlau (1997); Jones et al. (1998); Močkus (2012). More recently, acquisition criteria based on information theory have been suggested which target entropy reductions in the GP model extrema Hernández-Lobato et al. (2014) or locations of extrema Villemonteix et al. (2009); Hennig & Schuler (2012).
|
BO is particularly efficient when the dimension of the search space remains limited ($d \leq 5$ to 10) and when the function is multimodal with some structure Le Riche & Picheny (2021). Several adaptations of this formalism have been proposed to extend the efficiency of these approaches to larger input spaces, by playing directly on the acquisition criterion Siivola et al. (2018), on the identification of latent spaces of reduced dimensions Bouhlel et al. (2016); Gaudrie et al. (2020), on the introduction of trust regions Diouane et al. (2022), or by replacing the GP by a Bayesian neural network Kim et al. (2022).

It sometimes happens that the derivatives of the true function at the observed points are available (e.g., through automatic differentiation, or adjoint codes in partial differential equation solvers). It is then possible to add these derivatives as part of the observations of a vectorized Gaussian process composed of the function prediction and its derivatives Laurent et al. (2019). All the above acquisition criteria could then be calculated with such a gradient-enriched underlying Gaussian process. This has been done with the gradient-knowledge criterion in Wu et al. (2017). It is nevertheless interesting to note that all of these methods only exploit a limited part of the information conveyed by the GP (once conditioned by the observations of the true function and potentially by the observations of its gradient). In particular, they do not take into account the information that the GP derivatives could bring, whether the function $y$ is convex or not, and even if the derivatives of $y$ are not observed. Indeed, when $y$ is twice differentiable, it is well known that its first derivatives become zero and that its Hessian matrix is positive definite at its minimum (unless the minimum lies at an edge of the domain). It is reasonable to believe that the minimization strategy can only benefit from this supplementary knowledge on derivatives. Figure 1 provides a graphical illustration to support this intuition. It shows three plots with GP trajectories conditioned by three observations (also known as a kriging model). The two bottom plots further condition the trajectories on their derivatives so that they have local minima at the wrong (left) or right location. Because it is easier to force local minima where the true function really has an optimum, the information on local minima helps better discriminate the optimal from the non-optimal region. Note also that in Figure 1, while the first and second order derivatives of the GP trajectories are constrained, no information about the derivatives of the true function is used. This is a key difference with other work on BO with gradient knowledge such as Wu et al. (2017).

With this in mind, the main contribution of this paper is to propose an adaptation of the famous EI criterion so that it integrates the information of zero derivative and positive definite Hessian matrix of the GP trajectories. In other terms, this new criterion only accounts for possible minima of the GP trajectories, as opposed to the traditional EI that can confer a utility to any part of a trajectory. We emphasize that the proposed criterion does not imply that derivatives of the true function $y(\mathbf{x})$ be calculated. The derivatives only concern the GP.

The new criterion is meaningful if the minimum is located inside the search domain, which is a reasonable assumption in most applications where, precisely, the bounds are chosen as extremes that should not be reached. A complementary idea for cases where the bounds might be active is nevertheless given as a perspective: a method is proposed to estimate the likelihood that the minimum of $y$ is on the edge of the domain.
|
|
|
![2_image_0.png](2_image_0.png) |
|
|
|
Figure 1: An illustration of the motivation for conditioning kriging trajectories with derivatives. Top: kriging trajectories in red, true function as a black solid line. Bottom: kriging trajectories forcing, by Gaussian conditioning, a zero derivative and a positive second derivative at the vertical dotted bar, i.e. at the global maximum of the true function in the figure on the left, and at the global minimum in the figure on the right. The difference between trajectories at the maximum and minimum of the true function is more apparent when forcing local minima at the right location. |
|
Empirically investigating the effect of a new idea - here adding derivatives acceleration - on an optimization algorithm is difficult because the performance of an algorithm depends on both the function it is applied to and the tuning of its hyperparameters. The empirical tests we provide are designed to exclusively show the effects of the derivatives acceleration while avoiding such experimental side effects. This is achieved firstly by testing on Gaussian processes whose hyperparameters are known, therefore guaranteeing the compatibility of the model and the test function. Secondly, the maximization of the acquisition criteria is done with extreme care, which is feasible up to dimension 5 and needs to be relaxed beyond.

The outline of this paper is as follows. Section 2 recalls the theoretical bases of Gaussian process regression (GPR) and its use for the minimization of black-box functions. Section 3 introduces the acquisition criterion we propose for taking into account information on the derivatives of $y$. Section 4 then illustrates the benefits of this new acquisition criterion on simulated test functions that can be modeled as realizations of Gaussian processes. Section 5 describes how optima on the bounds can be handled and concludes the paper.
|
|
|
## 2 The Bo General Framework |
|
|
|
For $d \geq 1$, let $\mathbb{X}$ be a compact subset of $\mathbb{R}^d$. In this work, we are interested in finding the solution(s) $\mathbf{x}^\star$ of the optimization problem defined by Eq. (1) using as few pointwise observations of $y$ as possible. Anticipating the developments of the following sections exploiting the gradient of $y$, we assume that $y$ is an element of $\mathcal{C}^2(\mathbb{X}, \mathbb{R})$, the set of real-valued twice continuously differentiable functions defined on $\mathbb{X}$. In addition, we treat $\mathbb{X}$ as explicit, which means that the function $y$ cannot be evaluated outside the search region (it is defined as a product of intervals in the applications).
|
|
|
To solve this problem, we consider Bayesian Optimization guided by the Expected Improvement (EI) acquisition criterion. Such methods are sometimes called Efficient Global Optimization algorithms in reference to Jones et al. (1998), although implementations (of the GP and of the EI maximization) vary. The choice of the EI acquisition criterion is guided by simplicity: it is the most standard criterion and, most importantly, it does not require GP simulations to be evaluated. Other criteria could benefit from derivatives acceleration, as discussed in the perspectives of this article (Section 5.2). BO relies on the evaluation of the objective function at a sequence of well-chosen points, as summarized hereunder and in Algorithm 1.
|
|
|
## Initialization |
|
|
|
To begin, the function $y$ is evaluated at $N_0$ points uniformly chosen in $\mathbb{X}$ (typically according to a space-filling design of experiments (DoE) Fang et al. (2006); Perrin & Cannamela (2017)). We note $(\mathbf{x}^{(n)}, y_n := y(\mathbf{x}^{(n)}))_{n=1}^{N_0}$ the obtained pairs. Given this available data, a GP-based surrogate model is trained for $y$. To obtain convergence results, a common theoretical assumption is that $y$ is a particular realization of a Gaussian process $Y \sim \mathrm{GP}(\mu, C)$, whose prior mean and prior covariance functions are noted $\mu$ and $C$ respectively (see Santner et al. (2003); Rasmussen (2003) for more details about Gaussian process regression). In practice, it is only required that $y$ can be observed at a finite number of points, and the assumption that $y$ is a sample of $Y$ may not hold. The algorithm then conditions $Y$ to interpolate the $N_0$ available input-output pairs, resulting in a new $Y_{N_0} \sim \mathrm{GP}(\mu_{N_0}, C_{N_0})$, where:
|
|
|
$$\mu_{N_{0}}(\mathbf{x})=\mu(\mathbf{x})+C(\mathbf{x},\mathbf{X})C(\mathbf{X},\mathbf{X})^{-1}(y(\mathbf{X})-\mu(\mathbf{X})),\quad\mathbf{x}\in\mathbb{X},\tag{2}$$

$$C_{N_{0}}(\mathbf{x},\mathbf{x}^{\prime})=C(\mathbf{x},\mathbf{x}^{\prime})-C(\mathbf{x},\mathbf{X})C(\mathbf{X},\mathbf{X})^{-1}C(\mathbf{X},\mathbf{x}^{\prime}),\quad\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{X}.\tag{3}$$
|
In the former expressions, $\mathbf{X} := [\mathbf{x}^{(1)} \cdots \mathbf{x}^{(N_0)}]^T$ is the $(N_0 \times d)$-dimensional matrix that gathers the available input points, and for any functions $f$ and $g$ defined on $\mathbb{X}$ and $\mathbb{X} \times \mathbb{X}$ respectively, the following notation is adopted:
|
|
|
$$(f(\mathbf{X}))_{n}=f(\mathbf{x}^{(n)}),\quad(g(\mathbf{X},\mathbf{X}))_{nm}=g(\mathbf{x}^{(n)},\mathbf{x}^{(m)}),\quad1\leqslant n,m\leqslant N_{0}.\tag{4}$$
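As an illustration, the conditioning formulas (2)-(3) can be implemented in a few lines of NumPy. The tensorized Matérn-5/2 kernel below follows Eqs. (8)-(9); the constant mean, the fixed hyperparameter values and the jitter term are illustrative assumptions of this sketch, not the exact implementation used in the paper.

```python
import numpy as np

def matern52(x, xp, ell=0.3, sigma2=1.0):
    """Tensorized Matern-5/2 kernel of Eqs. (8)-(9); x and xp have shapes (n, d) and (m, d)."""
    u = np.abs(x[:, None, :] - xp[None, :, :]) / ell              # (n, m, d)
    k = (1 + np.sqrt(5) * u + 5.0 / 3.0 * u**2) * np.exp(-np.sqrt(5) * u)
    return sigma2 * np.prod(k, axis=-1)                           # (n, m)

def gp_posterior(x_new, X, y, beta=0.0, kernel=matern52, jitter=1e-10):
    """Posterior mean and covariance of Eqs. (2)-(3) at the points x_new,
    given noise-free observations y at the design X, with constant prior mean beta."""
    K_XX = kernel(X, X) + jitter * np.eye(len(X))                 # regularized C(X, X)
    K_xX = kernel(x_new, X)                                       # C(x, X)
    alpha = np.linalg.solve(K_XX, y - beta)                       # C(X, X)^{-1} (y(X) - mu(X))
    mean = beta + K_xX @ alpha                                    # Eq. (2)
    cov = kernel(x_new, x_new) - K_xX @ np.linalg.solve(K_XX, K_xX.T)  # Eq. (3)
    return mean, cov

# Example: condition on 3 observations of a 1D function.
X = np.array([[0.1], [0.5], [0.9]])
y = np.sin(6 * X[:, 0])
x_grid = np.linspace(0, 1, 101)[:, None]
mu_N, C_N = gp_posterior(x_grid, X, y)
```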
|
|
|
## Iteration |
|
|
|
Given $N \geq N_0$ evaluations of $y$, an acquisition criterion is introduced to choose at which point to carry out the $(N+1)$-th evaluation of $y$. In the noise-free setting, the classical acquisition criterion is the Expected Improvement (EI). It is the expectation of a utility at $\mathbf{x}$ defined as the progress below the current best observation:
|
|
|
$$\mathrm{EI}_{N}(\mathbf{x}):=\mathbb{E}\left[\max(0,y_{\min}-Y_{N}(\mathbf{x}))\right]=\int_{\mathbb{R}}\max(0,y_{\min}-y)f_{Y_{N}(\mathbf{x})}(y)\,dy=\sigma_{N}(\mathbf{x})\left(U_{N}(\mathbf{x})\Phi\left(U_{N}(\mathbf{x})\right)+\phi\left(U_{N}(\mathbf{x})\right)\right).\tag{5}$$
|
|
|
Here, $U_N(\mathbf{x}) := (y_{\min} - \mu_N(\mathbf{x}))/\sigma_N(\mathbf{x})$, $\sigma_N(\mathbf{x}) := \sqrt{C_N(\mathbf{x},\mathbf{x})}$, $y_{\min}$ is the current minimum of the $N$ observations of $y$, noted $y(\mathbf{x}^{(1)}), \ldots, y(\mathbf{x}^{(N)})$, $\Phi$ and $\phi$ denote the cumulative distribution function (CDF) and probability density function (PDF) of the standard Gaussian distribution, and $f_{Y_N(\mathbf{x})}(y) = \phi((y - \mu_N(\mathbf{x}))/\sigma_N(\mathbf{x}))/\sigma_N(\mathbf{x})$ is the PDF of the Gaussian random variable $Y_N(\mathbf{x}) \sim \mathcal{N}(\mu_N(\mathbf{x}), \sigma_N(\mathbf{x})^2)$, where

$$Y_{N}:=Y\mid Y(\mathbf{x}^{(1)})=y(\mathbf{x}^{(1)}),\ldots,Y(\mathbf{x}^{(N)})=y(\mathbf{x}^{(N)}).\tag{6}$$
|
By construction, this acquisition criterion seeks a compromise between exploitation (first term) and exploration (second term) for the global search of the minimum, and the next evaluation point is chosen such that |
|
|
|
$$\mathbf{x}^{(N+1)}\in\arg\max_{\mathbf{x}\in\mathbb{X}}\,\mathrm{EI}_{N}(\mathbf{x}).\tag{7}$$
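For reference, the closed-form expression (5) translates directly into a few lines of NumPy/SciPy; the small floor on $\sigma_N$ is an implementation precaution of this sketch, not part of the formula.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu_N, sigma_N, y_min):
    """Closed-form EI of Eq. (5): sigma_N(x) * (U_N Phi(U_N) + phi(U_N)),
    with U_N = (y_min - mu_N) / sigma_N. Inputs are arrays over candidate points x."""
    sigma_N = np.maximum(sigma_N, 1e-12)       # avoid division by zero at observed points
    U = (y_min - mu_N) / sigma_N
    return sigma_N * (U * norm.cdf(U) + norm.pdf(U))

# Usage with a posterior mean and covariance (e.g., from the conditioning sketch above):
# ei = expected_improvement(mu_N, np.sqrt(np.maximum(np.diag(C_N), 0.0)), y.min())
```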
|
|
|
## Stopping Criterion |
|
|
|
For most existing implementations of BO, the stopping criterion is a maximum number of evaluations of $y$. Indeed, unlike gradient-based approaches for minimizing convex functions, once a local minimum of $y$ has been found, there is no theoretical guarantee that it corresponds to the global minimum of $y$. While it may be tempting, stopping the search when the expected improvement drops below a lower bound is unstable in practice, as the EI changes a lot with the GP length scales.
|
|
|
## Degrees Of Freedom |
|
|
|
The performance of the BO method depends on several degrees of freedom that vary between implementations. The choice of $\mu$ and $C$, the way the parameters on which $\mu$ and $C$ depend are optimized, the ratio $N_0/\text{budget}$, the way the initial DoE is constructed, and the way the acquisition criterion is maximized are all important (see Le Riche & Picheny (2021) for an investigation of the influence of these choices).

However, as this paper studies an adaptation of the acquisition criterion, it is clearer to fix these parameters to standard values from the literature. To this end, the function $\mu$ is taken as a constant, and the function $C$ is chosen in the class of tensorized Matérn kernels with smoothing parameter $\nu = 5/2$ (see Santner et al. (2003) for alternative classes of functions):
|
|
|
$$\mu(\mathbf{x}):=\beta,\quad C(\mathbf{x},\mathbf{x}^{\prime}):=\sigma^{2}\prod_{i=1}^{d}\kappa\left(\frac{|x_{i}-x_{i}^{\prime}|}{\ell_{i}}\right),\quad\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{X},\tag{8}$$
|
|
|
**Algorithm 1:** Standard BO algorithm.

Choose $N_0$, budget, $Y \sim \mathrm{GP}(\mu, C)$ ; → **Initialization**
Draw at random $N_0$ points $\mathbf{x}^{(1)}, \ldots, \mathbf{x}^{(N_0)}$ in $\mathbb{X}$ ;
Compute $y(\mathbf{x}^{(n)})$, $1 \leq n \leq N_0$, estimate the parameters on which $\mu$ and $C$ depend ;
Define $Y_{N_0} := Y \mid Y(\mathbf{x}^{(1)}) = y(\mathbf{x}^{(1)}), \ldots, Y(\mathbf{x}^{(N_0)}) = y(\mathbf{x}^{(N_0)})$ ;
Set $k = 0$ ; → **Iteration**
**while** $k <$ budget **do**
  Search for $\mathbf{x}^{(N_0+k+1)} := \arg\max_{\mathbf{x} \in \mathbb{X}} \mathrm{EI}_{N_0+k}(\mathbf{x})$ ;
  Evaluate $y$ at $\mathbf{x}^{(N_0+k+1)}$ (and potentially adjust the expressions of $\mu$ and $C$) ;
  Define $Y_{N_0+k+1} := Y \mid Y(\mathbf{x}^{(1)}) = y(\mathbf{x}^{(1)}), \ldots, Y(\mathbf{x}^{(N_0+k+1)}) = y(\mathbf{x}^{(N_0+k+1)})$ ;
  Update $k \leftarrow k + 1$
**end**
Return $\min_{1 \leq i \leq N_0+\text{budget}} y(\mathbf{x}^{(i)})$.
|
|
|
$$\kappa(u):=\left(1+\sqrt{5}\,u+\frac{5}{3}u^{2}\right)\exp\left(-\sqrt{5}\,u\right),\quad u\geqslant0.\tag{9}$$
|
The Matérn 5/2 kernel is selected here because it is well-performing and common in the literature (Le Riche & Picheny (2021)). Furthermore, we will soon introduce an acquisition criterion that needs the fourth order derivatives of the kernel (to have information about the curvatures of the trajectories). The Matérn 5/2 kernel is precisely four times differentiable, yielding trajectories that are twice continuously differentiable. The hyperparameter vector $\theta := (\beta, \sigma, \ell_1, \ldots, \ell_d)$ will either be considered known (via the definition of test functions to be minimized in the form of a particular realization of a Gaussian process of chosen parameters), or estimated by its maximum likelihood estimator (see Williams & Rasmussen (2006) for further details). As we focus on costly functions, we will set the maximal budget between 10 and 20 times the dimension $d$ of the problem, while $N_0$ will be chosen small (most of the time we will have $N_0 = 3$). The initial DoE will always be a random space-filling Latin Hypercube Sample (LHS) Damblin et al. (2013); Perrin & Cannamela (2017). For objective numerical comparisons, the maximization of the acquisition criteria, whether it is the EI in Equation (7) or one of the new criteria of Section 3, is always carried out in the same way. At each iteration, the acquisition criterion is first evaluated at a very large number of points randomly chosen in $\mathbb{X}$ (typically of the order of $10^{d+1}$). The Nelder-Mead algorithm Nelder & Mead (1965) then maximizes the acquisition criterion starting from the 10 most promising points among the random points.
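As an illustration of this two-stage maximization, the sketch below screens random candidates and refines the best ones with SciPy's Nelder-Mead implementation; the clipping to $[0,1]^d$ and the default counts are assumptions of the sketch, and `acquisition` stands for any criterion (EI or the new criteria of Section 3).

```python
import numpy as np
from scipy.optimize import minimize

def maximize_acquisition(acquisition, d, n_random=None, n_starts=10, rng=None):
    """Two-stage maximization: screen random points, then run Nelder-Mead
    from the most promising ones. `acquisition` maps an array of shape (n, d)
    to n criterion values. The search space is assumed to be [0, 1]^d."""
    rng = np.random.default_rng(rng)
    n_random = n_random or 10 ** (d + 1)
    candidates = rng.random((n_random, d))
    values = acquisition(candidates)
    starts = candidates[np.argsort(values)[-n_starts:]]        # the best random points
    best_x, best_val = None, -np.inf
    for x0 in starts:
        res = minimize(lambda x: -acquisition(np.clip(x, 0, 1)[None, :])[0],
                       x0, method="Nelder-Mead")
        if -res.fun > best_val:
            best_x, best_val = np.clip(res.x, 0, 1), -res.fun
    return best_x

# Example with a toy criterion peaked at the center of [0, 1]^2:
x_next = maximize_acquisition(lambda X: -np.sum((X - 0.5) ** 2, axis=1), d=2)
```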
|
|
|
## 3 Extending The Expected Improvement With Derivatives |
|
|
|
We now show how to extend the Expected Improvement acquisition criterion so that it accounts for gradient and Hessian information. The principles underlying the calculations are that GP derivatives are GPs, and that local optima away from the bounds coincide with vanishing derivatives and positive definite Hessians. These principles have already been used in the context of BO in Hernández-Lobato et al. (2014) for approximating the entropy of local optima. An independent and differing version, adapted to EI, is described hereafter.
|
|
|
## 3.1 Reminders On Gaussian Process Derivation |
|
|
|
The acquisition criteria reviewed in the Introduction, in particular the EI, are only based on the distribution of $Y_N(\mathbf{x})$ and do not include information related to higher derivatives. Yet, when the functions $\mathbf{x} \mapsto \mu(\mathbf{x})$ and $(\mathbf{x}, \mathbf{x}') \mapsto C(\mathbf{x}, \mathbf{x}')$ are sufficiently regular, the statistical properties of the derivatives of $Y$ can be deduced by simple derivations of $\mu$ and $C$. Indeed, as the Gaussian distribution is stable by linear operations, for any linear operator $\mathcal{L}$ such that $\mathcal{L}y$ is a function from $\mathbb{R}^d$ to $\mathbb{R}^{d_{\mathcal{L}}}$, $\mathcal{L}Y$ is also a Gaussian process, with:
|
|
|
$$\mathbb{E}\left[\mathcal{L}Y(\mathbf{x})\right]=\mathcal{L}\mu(\mathbf{x}),\quad\operatorname{Cov}(\mathcal{L}Y(\mathbf{x}),\mathcal{L}Y(\mathbf{x}^{\prime}))=\mathcal{L}C(\mathbf{x},\mathbf{x}^{\prime})\mathcal{L}^{T}.\tag{10}$$

Here, the notations $\mathcal{L}C(\mathbf{x},\mathbf{x}')$ and $C(\mathbf{x},\mathbf{x}')\mathcal{L}^T$ indicate that the operator $\mathcal{L}$ is applied as a function of $\mathbf{x}$ and $\mathbf{x}'$ respectively, so that $\operatorname{Cov}(\mathcal{L}Y(\mathbf{x}),\mathcal{L}Y(\mathbf{x}'))$ is a $(d_{\mathcal{L}} \times d_{\mathcal{L}})$-dimensional matrix. In particular, for $d_{\mathcal{L}} = 1 + d(d+3)/2$, if we choose
|
|
|
$${\mathcal{L}}:Y\;\mapsto{\mathcal{L}}Y:=\left(Y,{\frac{\partial Y}{\partial x_{1}}},\ldots,{\frac{\partial Y}{\partial x_{d}}},{\frac{\partial^{2}Y}{\partial x_{1}^{2}}},\ldots,{\frac{\partial^{2}Y}{\partial x_{1}\partial x_{2}}},\ldots,{\frac{\partial^{2}Y}{\partial x_{d}^{2}}}\right),$$ |
|
|
|
we obtain the joint distribution of $Y$ and its first and second order derivatives. For each twice-differentiable function $z$, we introduce the following notations,
|
|
|
$$\partial z:=\left[\begin{array}{c}\frac{\partial z}{\partial x_{1}}\\ \vdots\\ \frac{\partial z}{\partial x_{d}}\end{array}\right],\quad\partial^{2}z:=\left[\begin{array}{ccc}\frac{\partial^{2}z}{\partial x_{1}^{2}}&\cdots&\frac{\partial^{2}z}{\partial x_{1}\partial x_{d}}\\ \vdots&\ddots&\vdots\\ \frac{\partial^{2}z}{\partial x_{1}\partial x_{d}}&\cdots&\frac{\partial^{2}z}{\partial x_{d}^{2}}\end{array}\right],\quad D^{2}z:=\operatorname{diag}(\partial^{2}z)=\left[\begin{array}{c}\frac{\partial^{2}z}{\partial x_{1}^{2}}\\ \vdots\\ \frac{\partial^{2}z}{\partial x_{d}^{2}}\end{array}\right],\tag{11}$$
|
|
|
and we denote by $\mathcal{M}^+(d)$ the set of $(d \times d)$-dimensional positive definite matrices.
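To make Eq. (10) concrete in dimension $d = 1$, the sketch below assembles the joint covariance of $(Y(x_i))_i$ and $(Y'(x_j))_j$ for a centred GP with the Matérn-5/2 kernel of Eq. (9). The kernel derivatives are approximated by central finite differences for brevity (analytic derivatives would be used in an actual implementation), so this is only an illustration of the principle.

```python
import numpy as np

def matern52_1d(h, ell=0.3):
    """Matern-5/2 correlation of Eq. (9) as a function of h = x - x'."""
    u = np.abs(h) / ell
    return (1 + np.sqrt(5) * u + 5.0 / 3.0 * u**2) * np.exp(-np.sqrt(5) * u)

def joint_cov_value_derivative(x, ell=0.3, eps=1e-4):
    """Covariance of the Gaussian vector (Y(x_1),...,Y(x_n), Y'(x_1),...,Y'(x_n)),
    obtained from Eq. (10) with L = (identity, d/dx); kernel derivatives by finite differences."""
    x = np.asarray(x, dtype=float)
    h = x[:, None] - x[None, :]
    K = matern52_1d(h, ell)                                                        # Cov(Y, Y)
    dK = (matern52_1d(h - eps, ell) - matern52_1d(h + eps, ell)) / (2 * eps)       # d k / d x'
    ddK = -(matern52_1d(h + eps, ell) - 2 * K + matern52_1d(h - eps, ell)) / eps**2  # d^2 k / dx dx'
    return np.block([[K, dK], [dK.T, ddK]])

# The resulting matrix is symmetric positive semi-definite (up to finite-difference error):
C = joint_cov_value_derivative(np.linspace(0.05, 0.95, 5))
eigs = np.linalg.eigvalsh((C + C.T) / 2)
```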
|
|
|
## 3.2 An Acquisition Criterion Accounting For The Derivatives |
|
|
|
For any $\mathbf{x}$ in $\mathbb{X}$, it is well known that if $\partial z(\mathbf{x}) = 0$ and $\partial^2 z(\mathbf{x}) \in \mathcal{M}^+(d)$, then $\mathbf{x}$ is a local minimum of $z$. As the input space $\mathbb{X}$ is bounded, the reciprocal is however not true, since a local minimum can be found at the boundary of $\mathbb{X}$ with a non-zero gradient and/or $\partial^2 z(\mathbf{x}) \notin \mathcal{M}^+(d)$. The case when the optima are on the bounds will be discussed in Section 5.1. For now we focus on the interior of $\mathbb{X}$: to integrate as prior knowledge that the gradient will be zero and the matrix of curvatures positive definite at the local minima of $y$, the EI criterion defined by Eq. (5) can be replaced by:
|
|
|
$$\operatorname{deriv-EI}_{N}(\mathbf{x}):=\mathbb{E}\left[1_{\mathbf{R}_{N}(\mathbf{x})\partial Y_{N}(\mathbf{x})\in\mathcal{B}(\varepsilon),\,\partial^{2}Y_{N}(\mathbf{x})\in\mathcal{M}^{+}(d)}\max(0,y_{\min}-Y_{N}(\mathbf{x}))\right],\tag{12}$$
|
|
|
where for any $\varepsilon > 0$, $\mathcal{B}(\varepsilon) := \{\dot{\mathbf{y}} \in \mathbb{R}^d,\ \|\dot{\mathbf{y}}\| \leq \varepsilon\}$ is the $d$-dimensional hypersphere of radius $\varepsilon$, and $\mathbf{R}_N(\mathbf{x})$ is a matrix such that
|
|
|
$$\mathbf{R}_{N}(\mathbf{x})\operatorname{Cov}(\partial Y_{N}(\mathbf{x}))\mathbf{R}_{N}(\mathbf{x})^{T}=\mathbf{I}_{d},\tag{13}$$

and for any event $a$, $1_a$ is equal to 1 if $a$ is true and to 0 otherwise. $\partial Y_N(\mathbf{x})$ has a covariance matrix (made of the second derivatives of $C_N(\mathbf{x},\mathbf{x}')$) with correlations, and its density has ellipsoidal level sets. The normalized vector $\mathbf{R}_N(\mathbf{x})\partial Y_N(\mathbf{x})$ is uncorrelated, its covariance is the identity $\mathbf{I}_d$, and its density has spherical level sets that can be compared to the sphere $\mathcal{B}(\varepsilon)$. The dependency of this scaling on $\mathbf{x}$ disappears for $\varepsilon$ small (see Appendix A). Equivalently, we can write the criterion deriv-EI$_N$ as
|
|
|
$$\operatorname{deriv-EI}_{N}(\mathbf{x}):=\mathbb{E}\left[1_{\partial Y_{N}(\mathbf{x})\in\mathcal{E}(\mathbf{x},\varepsilon),\,\partial^{2}Y_{N}(\mathbf{x})\in\mathcal{M}^{+}(d)}\max(0,y_{\min}-Y_{N}(\mathbf{x}))\right],\tag{14}$$
|
|
|
with $\mathcal{E}(\mathbf{x}, \varepsilon) := \{\dot{\mathbf{y}} \in \mathbb{R}^d,\ \dot{\mathbf{y}}^T \mathbf{R}_N(\mathbf{x})^T \mathbf{R}_N(\mathbf{x}) \dot{\mathbf{y}} \leq \varepsilon^2\}$ a $d$-dimensional ellipsoid. In connection with the theory of decision under uncertainty Žilinskas & Calvin (2019), deriv-EI$_N(\mathbf{x})$ is the expectation of a utility of the function model (the GP trajectories) at $\mathbf{x}$. Here, the utility is defined as the progress of the stochastic model below the best observation knowing that the function model has a minimum at $\mathbf{x}$, i.e., it has null first order derivatives and positive curvatures. The key idea of deriv-EI is to account only for minima of the possible functions. On the contrary, EI accounts for any value of the possible functions which is below the best observation, which is less consistent with the goal of minimization. Because it characterizes the behavior of the minima of the GP realizations, deriv-EI can be seen as a criterion between EI and information theoretic criteria based on the expected reduction in entropy of the optima Hernández-Lobato et al. (2014).
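Although this is not how the criterion is evaluated in practice (an analytical relaxation is derived below), the definition (12) can be checked at a fixed $\mathbf{x}$ by brute-force Monte Carlo over the joint Gaussian distribution of $(Y_N(\mathbf{x}), \partial Y_N(\mathbf{x}), \partial^2 Y_N(\mathbf{x}))$. The sketch below assumes this joint mean and covariance are provided, with a hypothetical ordering of the Hessian entries; it is only meant as a sanity check, since for small $\varepsilon$ the gradient event becomes rare, which is precisely why the closed-form approximation of the next paragraphs is useful.

```python
import numpy as np

def deriv_ei_monte_carlo(mean, cov, y_min, d, eps=0.5, n_samples=200_000, rng=None):
    """Monte Carlo estimate of deriv-EI at one point x (Eq. (12)).
    `mean` and `cov` are the joint moments of the Gaussian vector
    (Y_N(x), dY_N(x)[1..d], upper-triangular entries of d^2 Y_N(x), row-wise)."""
    rng = np.random.default_rng(rng)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    y = samples[:, 0]
    grad = samples[:, 1:1 + d]
    hess_upper = samples[:, 1 + d:]

    # Whitening matrix R_N(x) such that R Cov(dY) R^T = I (Eq. (13)).
    grad_cov = cov[1:1 + d, 1:1 + d]
    R = np.linalg.inv(np.linalg.cholesky(grad_cov))
    small_gradient = np.linalg.norm(grad @ R.T, axis=1) <= eps

    # Rebuild symmetric Hessians and test positive definiteness.
    iu = np.triu_indices(d)
    H = np.zeros((n_samples, d, d))
    H[:, iu[0], iu[1]] = hess_upper
    H[:, iu[1], iu[0]] = hess_upper
    pos_def = np.all(np.linalg.eigvalsh(H) > 0, axis=-1)

    improvement = np.maximum(0.0, y_min - y)
    return np.mean(improvement * small_gradient * pos_def)
```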
|
|
|
By considering deriv-EI$_N$ rather than EI$_N$ as the acquisition criterion in Algorithm 1, we expect to improve its exploitation capabilities without degrading its exploration capabilities too much. Like EI$_N$, deriv-EI$_N$ needs only evaluations of the true function, $y(\mathbf{x}^{(1)}), \ldots, y(\mathbf{x}^{(N)})$, through $Y_N$, $\partial Y_N$ and $\partial^2 Y_N$. It does not need derivatives of the true function $y$: only the GP is differentiated. However, this acquisition criterion can no longer be calculated simply, and in the general case it would require sampling techniques for its evaluation, which may complicate its use. Nevertheless, if we choose $\varepsilon$ small, if we neglect the off-diagonal terms of the Hessian (as was already proposed in Hernández-Lobato et al. (2014)), and if we assume a well-chosen conditional independence of its diagonal terms, we obtain the following relaxed acquisition criterion (see Appendix A for a detailed derivation):
|
|
|
$$\operatorname{deriv-EI}_{N}(\mathbf{x})\approx\operatorname{LikelyMin}_{N}(\mathbf{x})\times\operatorname{cond-EI}_{N}(\mathbf{x}),\tag{15}$$

$$\operatorname{LikelyMin}_{N}(\mathbf{x}):=v\times\varepsilon^{d}\times\exp\left(-\frac{\dot{\mathbf{m}}^{T}\dot{\mathbf{S}}^{-1}\dot{\mathbf{m}}}{2}\right)\times\prod_{i=1}^{d}\Phi\left(\frac{\bar{r}_{i}}{\sqrt{1-r_{i}^{2}}}\right),\tag{16}$$

$$\operatorname{cond-EI}_{N}(\mathbf{x}):=s\left((z_{\min}-a)\Phi(z_{\min})+\phi(z_{\min})\right),\tag{17}$$

where $v$ is a constant that depends neither on $\mathbf{x}$ nor on $\varepsilon$, and where the following notations have been introduced to simplify the expressions:

$$\partial Y_{N}(\mathbf{x})\sim\mathcal{N}\left(\dot{\mathbf{m}},\dot{\mathbf{S}}\right),\quad D^{2}Y_{N}:=\left((\partial^{2}Y_{N})_{1,1},\ldots,(\partial^{2}Y_{N})_{d,d}\right),\tag{18}$$

$$\left(Y_{N}(\mathbf{x}),D^{2}Y_{N}(\mathbf{x})\right)\mid\partial Y_{N}(\mathbf{x})=\mathbf{0}\ \sim\ \mathcal{N}\left(\left(\begin{array}{c}m\\ \tilde{m}_{1}\\ \vdots\\ \tilde{m}_{d}\end{array}\right),\left[\begin{array}{cccc}s^{2}&\rho_{1,1}&\cdots&\rho_{1,d}\\ \rho_{1,1}&\tilde{s}_{1}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\rho_{d-1,d}\\ \rho_{d,1}&\cdots&\rho_{d,d-1}&\tilde{s}_{d}\end{array}\right]\right),\tag{19}$$

$$z_{\min}:=\frac{y_{\min}-m}{s},\quad r_{i}:=\frac{\rho_{1,i}}{s\,\tilde{s}_{i}},\quad\bar{r}_{i}:=\frac{\tilde{m}_{i}}{\tilde{s}_{i}},\quad a=\sum_{i=1}^{d}\frac{r_{i}}{\sqrt{1-r_{i}^{2}}}\,\Phi\left(\frac{\bar{r}_{i}}{\sqrt{1-r_{i}^{2}}}\right).\tag{20}$$

The precise choice of $\varepsilon$ thus has no impact.
|
|
|
## 3.3 Comments On The Proposed Acquisition Criterion |
|
|
|
## Analysis Of The Terms In deriv-EI

Comparing the criteria EI$_N(\mathbf{x})$ and deriv-EI$_N(\mathbf{x})$, we first notice the presence of the function $\mathbf{x} \mapsto \operatorname{LikelyMin}_N(\mathbf{x})$, whose role is to concentrate the search of the new point to be evaluated around the points $\mathbf{x}$ that are likely to lead to a zero gradient of $y$ (small values of $\dot{\mathbf{m}}^T\dot{\mathbf{S}}^{-1}\dot{\mathbf{m}}$), while favouring the areas of positive second derivatives (high values of $\Phi(\bar{r}_i/\sqrt{1-r_i^2})$ for all $i$). The second function, $\mathbf{x} \mapsto \operatorname{cond-EI}_N(\mathbf{x})$, estimates the expected improvement assuming that the function has a minimum at $\mathbf{x}$, and looks particularly like the expression given by Eq. (5). The more the second derivatives of $Y_N$ are positive in probability, which translates into large values of $\bar{r}_i$, the more pronounced this similarity is because, in this case, the constant $a$ tends towards 0. In addition, as the statistical properties of $Y_N(\mathbf{x})$, $\partial Y_N(\mathbf{x})$ and $D^2Y_N(\mathbf{x})$ are known explicitly, it is important to notice that the evaluation cost of deriv-EI$_N(\mathbf{x})$ is of the same order of magnitude as that of the classical EI$_N(\mathbf{x})$. Importantly, there is no need to use sampling methods to estimate it.
|
|
|
In addition, if $\mu$ is chosen to be constant and $C$ is a stationary covariance kernel (which remains the most common configuration in BO), then $\partial Y(\mathbf{x})$ is statistically independent of $Y(\mathbf{x})$ and $D^2Y(\mathbf{x})$ for any $\mathbf{x}$ in $\mathbb{X}$. In particular, if we focus on the first iteration of the BO procedure ($N = 0$) and put aside the constraint on the Hessian, it can be noted that for any $\varepsilon > 0$ and any $\mathbf{x} \in \mathbb{X}$,
|
|
|
$$\mathbb{E}\left[1_{\mathbf{R}_{0}(\mathbf{x})\partial Y(\mathbf{x})\in\mathcal{B}(\varepsilon)}\max(0,y_{\min}-Y(\mathbf{x}))\right]=p_{\varepsilon}\times\operatorname{EI}_{0}(\mathbf{x}),\tag{21}$$
|
|
|
where $p_\varepsilon := \mathbb{P}(\mathbf{R}_0\partial Y(\mathbf{x}) \in \mathcal{B}(\varepsilon))$ is a constant independent of $\mathbf{x}$, as the statistical properties of $\partial Y(\mathbf{x})$ do not depend on $\mathbf{x}$ (stationarity). In that case, EI$_0$ is very close to deriv-EI$_0$ (up to the influence of the second derivatives), and maximizing either of these criteria should give close results. Then, the more the process $Y$ is conditioned by observations of $y$, the more $Y$, $\partial Y$ and $\partial^2 Y$ are correlated, and the more chances there are for deriv-EI$_N(\mathbf{x})$ and EI$_N(\mathbf{x})$ to propose different points. After many observations, it is anticipated that the interesting areas from the EI point of view will have low gradients, so that the two criteria should again propose close new evaluation points. The *a priori* interest of the deriv-EI$_N(\mathbf{x})$ criterion thus lies in intermediate values of $N$, in the exploration of the various local minima of $y$, and in the search for the smallest zone of $\mathbb{X}$ likely to contain the global minimum of $y$.
|
|
|
At last, when the dimension $d$ increases, one of the classical difficulties of BO based on EI$_N$ is that it favors exploration over exploitation by placing a very large number of points on the edges of the domain, which effectively represent the majority of the volume of $\mathbb{X}$ when $d$ is large Siivola et al. (2018). This effect should be limited by substituting deriv-EI$_N$ for EI$_N$, i.e., by requiring that each partial derivative of $Y_N$ be close to 0 and that each main curvature be positive through the factor LikelyMin$(\mathbf{x})$, which becomes more restrictive as $d$ increases.
|
|
|
## A More Exploratory deriv-EI
|
|
|
In return, by trying to quickly visit potential high-performance local minima, it is possible that the deriv-EI$_N$ criterion explores fewer regions of $\mathbb{X}$ than EI$_N$, which could be penalizing for the minimization of functions with multiple local minima. If this were the case (this kind of phenomenon was not observed on the test cases studied in Section 4), several techniques could be proposed to rebalance the exploration/exploitation ratio. For instance, the control of the exploration-exploitation balance by changing target values has been studied in Jones (2001) for the probability of improvement and a likelihood criterion. Such a shift in target around $y_{\min}$ was included in the EI criterion in Berk et al. (2019); Lizotte (2008). Another way of reinforcing exploration with respect to exploitation consists in maximizing the expected improvement at a certain power $p \geq 1$. Indeed, by taking $p$ greater than 1, we further encourage low-probability high improvements compared to more probable small improvements. This idea was pursued in Schonlau et al. (1998), where expressions for the generalized EI$_N$ criterion with $p \geq 2$ can be found. The new EI with derivatives can also benefit from elevating the improvement to a given power. It becomes

$$\operatorname{deriv-EI}_{N}^{(p)}:=\mathbb{E}\left[1_{\partial Y_{N}(\mathbf{x})\in\mathcal{E}(\mathbf{x},\varepsilon),\,\partial^{2}Y_{N}(\mathbf{x})\in\mathcal{M}^{+}(d)}\max(0,y_{\min}-Y_{N}(\mathbf{x}))^{p}\right].$$
|
|
|
For p " **2 (see Appendix A for more details) and using the same notations as in Section 3.2, such a criterion** |
|
can again be approximated under an analytical form close to the one of Eq. (15): |
|
|
|
$$\operatorname{deriv-EI}_{N}^{(2)}(\mathbf{x})\approx\operatorname{LikelyMin}(\mathbf{x})\times\operatorname{cond-EI}^{(2)}(\mathbf{x}),\tag{22}$$

$$\operatorname{cond-EI}^{(2)}(\mathbf{x}):=s^{2}\left((1+z_{\min}^{2}-2az_{\min})\Phi(z_{\min})+(z_{\min}-2a)\phi(z_{\min})\right).\tag{23}$$
|
|
|
## Adaptation To Noisy Outputs |
|
|
|
The criteria EI$_N$ and deriv-EI$_N$ are introduced in a context where the observations of $Y$ are assumed noise-free. If it turns out that these outputs are in fact noisy, and if this noise can be modeled as a centered Gaussian vector with covariance matrix $\Sigma$ (which can be assumed to be diagonal or not), only a few adjustments are needed to calculate these two criteria (and these adjustments are the same in both cases). First, in order to integrate the noisy nature of the observations in the conditioning formulas, the distribution of $Y_N$ for any value of $N$ is obtained by replacing $C(\mathbf{X},\mathbf{X})^{-1}$ by $(C(\mathbf{X},\mathbf{X}) + \Sigma)^{-1}$ in equations (2) and (3). Then, as the observations are noisy, the notion of a current minimum value no longer makes sense, and we need to adapt the value of $y_{\min}$ in the proposed expressions accordingly. This value can, for example, be chosen as the value minimizing over $\mathbb{X}$ the predictive mean $\mathbf{x} \mapsto \mathbb{E}[Y_N(\mathbf{x})]$, as is done in the knowledge gradient approaches Frazier & Powell (2007). To limit the computational cost associated with the choice of $y_{\min}$, it is also common practice to minimize the predictive mean only at the observed input points. Once these two adjustments have been made, the criteria EI$_N$ and deriv-EI$_N$ can be applied to noisy observations.
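In code, this amounts to adding the noise covariance to the Gram matrix before solving; a minimal sketch of the modified posterior mean of Eq. (2) reads as follows (the kernel function and the constant mean are assumptions of the sketch).

```python
import numpy as np

def noisy_posterior_mean(x_new, X, y, Sigma, kernel, beta=0.0):
    """Posterior mean of Eq. (2) with noisy observations: C(X, X)^{-1} is replaced
    by (C(X, X) + Sigma)^{-1}, Sigma being the (possibly non-diagonal) noise covariance."""
    K = kernel(X, X) + Sigma
    return beta + kernel(x_new, X) @ np.linalg.solve(K, y - beta)
```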
|
|
|
## 4 Numerical Experiments |
|
|
|
In this section, we first illustrate the way the proposed criterion works and the differences it implies with respect to the classical EI criterion. In particular, it will be seen that iterates stemming from the maximization of deriv-EI$_N$ are more concentrated inside the search domain than EI$_N$ iterates. Then, by considering functions with minima inside the search domain, we show that deriv-EI$_N$ allows faster average convergence than EI$_N$ does. This is particularly visible with highly multimodal functions. In the experiments, deriv-EI$_N$ is calculated through the approximation of Eq. (15).
|
|
|
## 4.1 Analysis Of The deriv-EI$_N$ Criterion In Dimensions 1 And 2

## Test Functions And Experimental Protocol
|
|
|
We analyze the behavior of the deriv-EI$_N$ criterion through the study of an oscillating function in dimension $d = 1$, noted $y^{1D}$, and of a modified Branin function in dimension $d = 2$, noted $y^{2D}$ (see Figure 2 for a graphical representation of these functions, and Appendix B for their definitions). In order to focus exclusively on the effects of the acquisition criterion, we fix the hyperparameters (length scales, variance, trend parameters) of the Gaussian predictor to their maximum likelihood estimates obtained from a large number of points. It has been observed that a good optimization of the acquisition criterion is a condition for BO to be efficient Le Riche & Picheny (2021). For this reason, the maximization of the acquisition criteria is performed by an exhaustive search on a fine grid, which is possible in such low dimensions. In higher dimension, a careful choice of the initial points of the acquisition function maximization is required Zhao et al. (2024).
|
|
|
## Visualizing The Terms Making The New Acquisition Criterion |
|
|
|
We illustrate the roles of the LikelyMin$_N$ and cond-EI$_N$ functions (which are defined in Section 3.2) by evaluating $y^{1D}$ at $N_0 = 5$ points and $y^{2D}$ at $N_0 = 12$ points randomly chosen in $\mathbb{X}$. The evolutions of LikelyMin$_N$ and cond-EI$_N$ associated with these evaluations are given in Figures 2-a and b. As expected, the function LikelyMin$_N$ is large at the points most likely to correspond to local minima, while the function cond-EI$_N$ highlights the areas most likely to lead to GP trajectories that have a null gradient while having values lower than the current minimum. For these particular examples, the product of the two functions, which yields the deriv-EI$_N$ criterion, favors new points inside the input domain, whereas the EI$_N$ criterion encourages evaluating $y^{1D}$ (resp. $y^{2D}$) on an edge of $\mathbb{X}$. We also notice that by concentrating the search on areas of low gradient for $y^{1D}$ or $y^{2D}$, we limit the significant values of deriv-EI$_N$ to sub-regions of $\mathbb{X}$ that are smaller than what EI$_N$ would give.
|
|
|
## Performance Of deriv-EI$_N$ Over One Step

The performance of the deriv-EI$_N$ criterion is now analyzed in terms of minimization of $y^{1D}$ and $y^{2D}$.
|
|
|
![10_image_0.png](10_image_0.png) |
|
|
|
(a) LikelyMin$_N$ and cond-EI$_N$ for $y^{1D}$

(b) LikelyMin$_N$ and cond-EI$_N$ for $y^{2D}$
|
|
|
![10_image_1.png](10_image_1.png) |
|
|
|
Figure 2: The function $y^{1D}$ is shown as a black thick solid line in plots (a) and (c), where the grey areas correspond to 95% confidence intervals of the Gaussian predictor. Identically, the black solid lines in plots (b) and (d) are the contours of the function $y^{2D}$. In each plot, the global minimum is indicated by a cyan triangle, while the green dots show the points where the function has been evaluated. Plots (a) and (b) show, for the two considered functions, the evolution of $\mathbf{x} \mapsto \operatorname{LikelyMin}_N(\mathbf{x})$ in orange solid line, and of $\mathbf{x} \mapsto \operatorname{cond-EI}_N(\mathbf{x})$ in magenta dotted line. Plots (c) and (d) compare the evolution of $\mathbf{x} \mapsto \operatorname{EI}_N(\mathbf{x})$ (in blue solid line) to that of $\mathbf{x} \mapsto \operatorname{deriv-EI}_N(\mathbf{x})$ (in red dotted line). For ease of reading, the functions LikelyMin$_N$, cond-EI$_N$, EI$_N$, and deriv-EI$_N$ are normalized in such a way that their maximum value is fixed to 1.
|
We start with a single step. For each $j \in \{1,2\}$ and each $k \geq 2d-1$, we generate 500 space-filling LHS made of $k$ points in $\mathbb{X}$ Perrin & Cannamela (2017), which are written $\{\mathcal{X}^{(j)}_{k,i}\}_{i=1}^{500}$. For each $1 \leq i \leq 500$, we then construct a Gaussian predictor of $y^{jD}$ based on its evaluations at each point in $\mathcal{X}^{(j)}_{k,i}$, and we denote by $\mathbf{x}^{(j),\text{deriv-EI}}_{k,i}$ and $\mathbf{x}^{(j),\text{EI}}_{k,i}$ the points of $\mathbb{X}$ maximizing the criteria deriv-EI$_k$ and EI$_k$, respectively. Let $\widehat{y}^{(j),\text{EI}}_{k,i}$ and $\widehat{y}^{(j),\text{deriv-EI}}_{k,i}$ be the smallest values of $y^{jD}$ that we obtain:
|
|
|
$$\widehat{y}^{(j),\text{EI}}_{k,i}:=\min_{\mathbf{x}\in\mathcal{X}^{(j)}_{k,i}\cup\left\{\mathbf{x}^{(j),\text{EI}}_{k,i}\right\}}y^{jD}(\mathbf{x}),\quad\widehat{y}^{(j),\text{deriv-EI}}_{k,i}:=\min_{\mathbf{x}\in\mathcal{X}^{(j)}_{k,i}\cup\left\{\mathbf{x}^{(j),\text{deriv-EI}}_{k,i}\right\}}y^{jD}(\mathbf{x}).\tag{24}$$
|
|
|
By construction, the lower these values are, the better the acquisition criteria should be. In this prospect, for $j \in \{1,2\}$, Figure 3 compares the evolution of the 25%, 50% and 75% empirical quantiles of $\widehat{y}^{(j),\text{EI}}_{k,i}$ and $\widehat{y}^{(j),\text{deriv-EI}}_{k,i}$ as a function of $k$. As announced in Section 3.3, the interest of the proposed criterion lies in the intermediate (about $[5, 18] \times d$) values of $k$. For too low values, as the Gaussian predictor and its first-order derivatives are not very correlated, the criteria deriv-EI$_N$ and EI$_N$ are very close, and lead to similar results in terms of minimization of the objective function. For $k$ large, the Gaussian predictor approaches the objective function with little uncertainty, and the criteria deriv-EI$_N$ and EI$_N$ are equally capable of identifying the global minimum. For intermediate values of $k$, this phenomenon can be clearly seen in the evolution of the values of $y^{2D}$ (right plot). Because a one-dimensional space is rapidly explored, the advantage of deriv-EI$_N$ over EI$_N$ is less clear in the evolution of the values of $y^{1D}$ (left plot). There, the two criteria give almost the same results, with deriv-EI$_N$ allowing only slight improvements.
|
|
|
## Performance Of deriv-EI$_N$ Over Many Steps
|
|
|
In the above numerical experiments, one step was studied and the new evaluation points were independent of each other. Getting closer to a BO algorithm, we now quantify the effect of the acquisition criteria when defining a sequence of points where $y^{jD}$ is evaluated. To this end, for $j \in \{1,2\}$, we generate 500 new space-filling LHS in $\mathbb{X}$ composed of 3 points each, which are written $\{\widetilde{\mathcal{X}}^{(j)}_{3,i}\}_{i=1}^{500}$. For each $j \in \{1,2\}$ and each repetition of the experiment $1 \leq i \leq 500$, the function $y^{jD}$ is evaluated at each point of $\widetilde{\mathcal{X}}^{(j)}_{3,i}$, and Algorithm 1 presented in Section 2 is run twice, taking as acquisition criterion deriv-EI first, then the classical criterion EI. At each iteration $k \geq 1$ of the algorithm, we note $y^{(j),\text{deriv-EI}}_{k,i}$ and $y^{(j),\text{EI}}_{k,i}$ the obtained current minima of $y^{jD}$. The empirical estimates of the median and the mean of these current minima are shown in Figure 4. The interest of the deriv-EI acquisition criterion is again underlined by these results. Indeed, for all iterations $k$, the median and the mean of the current minima associated with the deriv-EI criterion are lower than those of the current minima associated with the EI criterion. Again, the advantage of deriv-EI over EI takes place in the middle of the iterations $k$. Note that the median is well below the mean for the minimization of $y^{1D}$. It comes from the fact that, for both EI and deriv-EI, some of the runs have taken a significant number of iterations to identify the area of the global minimum.
|
|
|
## 4.2 Performance Analysis In Dimensions 2, 3 And 5

## Test Functions Construction
|
|
|
The EI and deriv-EI acquisition criteria are now compared on a larger set of test functions. To define this set of functions, we elaborate on the idea of using GPs Hennig & Schuler (2012), which are by construction compatible with the working assumptions. We start by noting $Z^{(d)}_\theta$ the Gaussian process defined on $\mathbb{X} = [0,1]^d$ such that for any $\mathbf{x}, \mathbf{x}' \in \mathbb{X}$ and any $\theta > 0$,
|
$$\mathbb{E}\left[Z_{\theta}^{(d)}(\mathbf{x})\right]=0,\quad\operatorname{Cov}(Z_{\theta}^{(d)}(\mathbf{x}),Z_{\theta}^{(d)}(\mathbf{x}^{\prime}))=\prod_{i=1}^{d}\kappa\left(\sqrt{\frac{2}{d}}\,\frac{|x_{i}-x_{i}^{\prime}|}{\theta}\right),\tag{25}$$
|
|
|
![12_image_0.png](12_image_0.png) |
|
|
|
Figure 3: Influence of the acquisition criteria deriv-EI$_N$ and EI$_N$ when minimizing $y^{1D}$ and $y^{2D}$. For $k \geq 2d-1$, the lower and upper parts of the black rectangles correspond to the 25% and 75% quantiles of $\widehat{y}^{(j),\text{EI}}_{k,i}$, while the black circles show the median value. Similarly, the lower and upper parts of the red rectangles correspond to the 25% and 75% quantiles of $\widehat{y}^{(j),\text{deriv-EI}}_{k,i}$, while the red triangles show the median value.
|
|
|
|
![12_image_1.png](12_image_1.png) |
|
|
|
Figure 4: Influence of the acquisition criteria deriv-EI$_N$ and EI$_N$ when minimizing $y^{1D}$ and $y^{2D}$. For $k \geq 1$, the filled black circles (resp. the filled red triangles) represent the empirical median of $\{y^{(j),\text{EI}}_{k,i}\}_{i=1}^{500}$ (resp. of $\{y^{(j),\text{deriv-EI}}_{k,i}\}_{i=1}^{500}$), and the empty black circles (resp. empty red triangles) indicate the empirical means.
|
where $\kappa$ is the Matérn-5/2 covariance function of Equation (9), which is such that $Z^{(d)}_\theta$ is twice differentiable in the mean-square sense. Notice the normalization of the length scales in $\sqrt{d/2}$, allowing to define Gaussian processes in any dimension $d$ with close dependence structures. This normalization can also be understood by seeing that distances (between the two farthest points, or the expected distance between two points randomly drawn in $\mathbb{X}$) grow in $\sqrt{d}$, therefore the length scales have to grow in $\sqrt{d}$. We consider as test function class the set $\mathcal{F}^{(d)}_\theta$ of realizations of $Z^{(d)}_\theta$ that admit a global minimum strictly inside $\mathbb{X}$ (i.e., at a point of zero partial derivatives). The following numerical tests then focus on two particular values of $\theta$: $\theta = 0.2$ will characterize strongly oscillating functions admitting a large number of local minima, while $\theta = 0.5$ will refer to more regular functions presenting a smaller number of local minima. For $\theta \in \{0.2, 0.5\}$ and $d \in \{2, 3, 5\}$, we generate 100 functions from $\mathcal{F}^{(d)}_\theta$ in a random and independent way. These functions are noted $\{y^{(d)}_{i,\theta}\}_{i=1}^{100}$ (see Appendix C for a detailed description of their construction). We finally subtract from each function its minimum value so that
|
|
|
$$\min_{\mathbf{x}\in\mathbb{X}}y_{i,\theta}^{(d)}(\mathbf{x})=0,\tag{26}$$
|
and we proceed to the same shifting on the $Y$ process. Figure 5 shows four examples of such functions belonging to $\mathcal{F}^{(2)}_{0.2}$ and $\mathcal{F}^{(2)}_{0.5}$ in the case $d = 2$.
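For illustration, a realization of $Z^{(d)}_\theta$ can be drawn on a finite grid as sketched below and shifted so that its minimum over the grid is zero, as in Eq. (26). The grid discretization and the eigendecomposition-based sampling are practical simplifications of this sketch (the exact construction is described in Appendix C), and the selection of realizations whose global minimum is strictly inside $\mathbb{X}$ is not enforced here.

```python
import numpy as np

def matern52_corr(u):
    """Matern-5/2 correlation of Eq. (9)."""
    return (1 + np.sqrt(5) * u + 5.0 / 3.0 * u**2) * np.exp(-np.sqrt(5) * u)

def sample_test_function(theta=0.2, d=2, n_grid=40, rng=None):
    """Draw one realization of Z_theta^(d) (Eq. (25)) on a regular grid of [0, 1]^d
    and shift it so that its minimum over the grid is 0 (Eq. (26))."""
    rng = np.random.default_rng(rng)
    axes = [np.linspace(0.0, 1.0, n_grid)] * d
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, d)
    # Tensorized covariance with the sqrt(2/d) normalization of Eq. (25).
    diff = np.abs(grid[:, None, :] - grid[None, :, :])
    cov = np.prod(matern52_corr(np.sqrt(2.0 / d) * diff / theta), axis=-1)
    # Eigen-decomposition sampling (more robust than Cholesky on a dense grid).
    w, V = np.linalg.eigh(cov)
    z = V @ (np.sqrt(np.clip(w, 0.0, None)) * rng.standard_normal(len(grid)))
    return grid, z - z.min()

grid, y_test = sample_test_function(theta=0.5, d=2)
```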
|
|
|
## Experimental Protocol |
|
|
|
The global minimum of these functions is then searched twice with Algorithm 1 by, first, taking deriv-EI and, then, EI as the acquisition criterion. The total number of calls to the objective function of each optimization run is equal to budget $= 100$. The two types of searches are initialized with the evaluation of $y^{(d)}_{i,\theta}$ at the same space-filling LHS of size $N_0 = 3$ (a different design is generated for each function minimization). The size of the design is small and does not depend on $d$. As observed in Le Riche & Picheny (2021); Hutter et al. (2013), small random designs at the beginning of BO searches are more efficient. Moreover, the effect of the acquisition criterion is more visible for small initial random designs. The growth of the length scales in $\sqrt{d}$ (Equation 25) guarantees that the correlation between the $N_0$ points is the same, independently of $d$. In order to investigate the influence of the acquisition criterion only on the optimization but not on the learning of the GP, the properties of the Gaussian process $Y$ used to guide the search are chosen equal to those of $Z^{(d)}_\theta$. The maximization of the acquisition criteria is performed in two steps: each acquisition criterion is first evaluated at $10^5$ points randomly chosen in $\mathbb{X}$, and 10 Nelder-Mead algorithms starting from the 10 most promising points among the random points are then launched in parallel to identify the new point at which to evaluate the objective function.
|
|
|
Two quantities of interest are then extracted from these Bayesian optimizations. For each $1 \leq k \leq$ budget, each $d \in \{2,3,5\}$, and each $\theta \in \{0.2, 0.5\}$, we first note $\widehat{y}^{(d),\text{deriv-EI}}_\theta(k)$ (resp. $\widehat{y}^{(d),\text{EI}}_\theta(k)$) the empirical mean of the current minimum (mean best-so-far performance) obtained at the $k$-th iteration over all the tested functions when taking deriv-EI (resp. EI) as the acquisition criterion. Second, we define $\widehat{k}^{(d),\text{deriv-EI}}(\theta, s)$ (resp. $\widehat{k}^{(d),\text{EI}}(\theta, s)$), the mean time-to-target, that is the average number of iterations necessary for the best-so-far observation to be lower than a threshold $s > 0$ when using deriv-EI (resp. EI). Note that for both quantities of interest, the average is done over the different test functions, which have the same kind of variations and the same minimum equal to 0, which makes them comparable although potentially very different.
|
|
|
## Optimization Results |
|
|
|
The evolution of these quantities of interest is shown in Figure 6 for $\theta = 0.2$ and Figure 7 for $\theta = 0.5$. In all of these figures, a substantial gain is brought by the deriv-EI criterion with respect to the EI criterion. The gain is visible both in terms of the mean best-so-far objective function (plots (a) to (c)) and the mean time-to-target (plots (d) to (f)). We notice, as we had hoped in Section 3.3, that the observed improvements brought by the deriv-EI criterion are greater as the dimension $d$ of the input space increases.
|
|
|
![14_image_0.png](14_image_0.png) |
|
Figure 5: Representation of 4 particular elements of $\mathcal{F}^{(2)}_{0.2}$ and $\mathcal{F}^{(2)}_{0.5}$, the sets of test functions to be minimized. In each function instance, the global minimum location is indicated with a cyan triangle.
|
As expected, we also observe that choosing deriv-EI rather than EI is of more interest for more multimodal functions, i.e., when the length scale $\theta$ is small. Indeed, it is in these configurations with a large number of local minima that adding information about null first order derivatives and positive definite Hessian matrices is useful.

**Remark** For all the test functions studied in this section, the condition numbers of the covariance matrices of the observation points were all between $10^3$ and $2 \times 10^6$.
|
|
|
## 5 Extensions And Conclusions

## 5.1 Management Of Minima On Bounds
|
|
|
The article has assumed until now that the minimum is inside the search space $\mathbb{X}$. If this is not the case, orienting the search towards areas with a zero gradient can actually be counterproductive, as the global minimum will typically be associated with a nonzero gradient. Denoting by $\partial\mathbb{X}$ the boundary of $\mathbb{X}$, a first possibility to circumvent this problem is to penalize the objective function so that optima on the boundary are pushed inside the domain but arbitrarily close to the boundary, and are therefore associated with a null gradient and positive definite Hessian. This is the idea of the barrier functions of interior point methods Wright & Nocedal (2006). With barrier functions, the objective $\mathbf{x} \mapsto y(\mathbf{x})$ is replaced by $\mathbf{x} \mapsto y(\mathbf{x}) + \lambda c(\mathbf{x})$, where $\lambda$ is a positive constant and $c$ is a twice continuously differentiable positive function such that $c(\mathbf{x})$ is close to zero when $\mathbf{x}$ is far from the boundaries of $\mathbb{X}$, and takes potentially infinite values when $\mathbf{x} \in \partial\mathbb{X}$. For instance, if $\mathbb{X} = [0,1]$, the function $c$ can be chosen as:
|
|
|
$$c(x)=\frac{1}{\min(x,1-x)}\quad\text{or}\quad c(x)=-\log(\min(x,1-x)),\quad 0\leqslant x\leqslant1.\tag{27}$$
|
|
|
The larger $\lambda$ is, the further from $\partial\mathbb{X}$ the global minimum of $y + \lambda c$ is, whether the global minimum of $y$ is on $\partial\mathbb{X}$ or not. And by making $\lambda$ progressively tend towards 0, we make this global minimum, which is then associated with a zero gradient of $y + \lambda c$, tend to the global minimum of $y$.
|
![15_image_0.png](15_image_0.png) |
|
|
|
|
|
|
Figure 6: Plots (a), (b), and (c) show the mean best performance, $k \mapsto \widehat{y}^{(d),\text{EI}}_\theta(k)$ (in black solid line) and $k \mapsto \widehat{y}^{(d),\text{deriv-EI}}_\theta(k)$ (in red dotted line), for strongly multimodal functions ($\theta = 0.2$) and $d \in \{2,3,5\}$. Plots (d), (e), and (f) give the mean time-to-target, $s \mapsto \widehat{k}^{(d),\text{EI}}(\theta, s)$ (in black solid line) and $s \mapsto \widehat{k}^{(d),\text{deriv-EI}}(\theta, s)$ (in red dotted line).
|
|
|
![16_image_0.png](16_image_0.png) |
|
|
|
Figure 7: Plots (a), (b), and (c) compare the mean best-so-far performances for the two acquisition criteria, $k \mapsto \widehat{y}^{(d),\text{EI}}_\theta(k)$ (in black solid line) and $k \mapsto \widehat{y}^{(d),\text{deriv-EI}}_\theta(k)$ (in red dotted line), for moderately multimodal functions ($\theta = 0.5$) and $d \in \{2,3,5\}$. Plots (d), (e), and (f) compare the mean time-to-target, $s \mapsto \widehat{k}^{(d),\text{EI}}(\theta, s)$ (in black solid line) and $s \mapsto \widehat{k}^{(d),\text{deriv-EI}}(\theta, s)$ (in red dotted line).
|
|
|
|
|
|
Such an approach is well-studied for convex optimization problems, with bounds linking the choice of $\lambda$ to the degradation in the optimal value of the objective function Wright & Nocedal (2006). In general however, the effect of the $\lambda$ decay law is difficult to understand, and the penalty adds a steep function increase at the edge of $\mathbb{X}$ that GPR-based metamodels will have difficulty learning.

Alternatively, we propose to evaluate *a priori* the likelihood, noted $\ell_N$, that the minimum of $y$ lies on the boundary of $\mathbb{X}$.
|
|
|
The $\ell_N$ likelihood can be evaluated through a three-step procedure. Starting from the GP-based surrogate model of $y$, noted $Y_N$, the first step would be to look for positions that may correspond to local minima of $y$, by running in parallel $M \gg 1$ regularized Newton descent algorithms on the trajectories of $Y_N$. These minimizations would in that case be initialized at randomly chosen points $\mathbf{x}^m_0 \in \mathbb{X}$, $1 \leq m \leq M$, and we would denote by $\mathbf{x}^m_\star$ the obtained minimum when starting from $\mathbf{x}^m_0$. Then, we generate $Q \gg 1$ random samples of the Gaussian random vector $(Y_N(\mathbf{x}^1_\star), \ldots, Y_N(\mathbf{x}^M_\star))$, which we denote by
|
|
|
$$(Y_{N}^{\omega_{q}}(\mathbf{x}_{\star}^{1}),\ldots,Y_{N}^{\omega_{q}}(\mathbf{x}_{\star}^{M})),\quad1\leqslant q\leqslant Q.\tag{28}$$
|
The indicator $\ell_N$ is finally estimated by counting how often the minima of the draws are on the bounds,
|
|
|
$$\ell_{N}:=\frac{1}{Q}\sum_{q=1}^{Q}1_{\mathbf{x}_{\star}^{q\star}\in\partial\mathbb{X}},\quad\text{where}\quad\mathbf{x}_{\star}^{q\star}\in\arg\min_{\mathbf{x}\in\{\mathbf{x}_{\star}^{1},\ldots,\mathbf{x}_{\star}^{M}\}}Y_{N}^{\omega_{q}}(\mathbf{x}).\tag{29}$$
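The counting step of Eq. (29) can be sketched as follows, assuming that the candidate minima $\mathbf{x}^1_\star, \ldots, \mathbf{x}^M_\star$ and the joint posterior mean and covariance of $Y_N$ at these points have already been obtained (the Newton-descent step is not reproduced), and using a boundary tolerance that is an assumption of this sketch.

```python
import numpy as np

def boundary_likelihood(candidates, mean, cov, Q=10_000, tol=1e-6, rng=None):
    """Estimate ell_N of Eq. (29): the fraction of posterior draws of
    (Y_N(x_*^1), ..., Y_N(x_*^M)) whose minimizer lies on the boundary of [0, 1]^d.
    `candidates` has shape (M, d); `mean` and `cov` are the joint posterior moments."""
    rng = np.random.default_rng(rng)
    draws = rng.multivariate_normal(mean, cov, size=Q)          # shape (Q, M)
    argmins = np.argmin(draws, axis=1)                          # minimizer index per draw
    on_boundary = np.any((candidates <= tol) | (candidates >= 1 - tol), axis=1)  # shape (M,)
    return float(np.mean(on_boundary[argmins]))

# Example with three hypothetical candidate minima, one of them on a bound:
candidates = np.array([[0.0, 0.4], [0.3, 0.7], [0.8, 0.2]])
mean = np.array([0.1, 0.0, 0.05])
cov = 0.01 * np.eye(3)
ell_N = boundary_likelihood(candidates, mean, cov)
```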
|
|
|
Depending on the value of $\ell_N$, the method described in the rest of the article can be complemented in one of the two following fashions. If $\ell_N$ is too large, the traditional EI$_N$ acquisition criterion should be used instead of deriv-EI$_N$. Alternatively, $\ell_N$ can be calculated specifically for each bound and, if it is likely that some specific bounds are hit, the corresponding variables can be set to these bounds, the BO iteration being carried out with the deriv-EI$_N$ criterion in the lower dimensional space.

Nonetheless, the objective of this work was to come up with an acquisition criterion applicable when the minimum of $y$ is not on the boundary of $\mathbb{X}$. We leave the continuation of the above analysis, based on the likelihood of having the optimum at a bound, as a perspective to this work.
|
|
|
## 5.2 Summary And Further Perspectives |
|
|
|
In the context of Bayesian optimization, this work proposes a novel acquisition criterion allowing to integrate as additional *a priori* knowledge the fact that interior minima are associated with zero first order derivatives and positive definite Hessians. With this addition, a classical acquisition criterion such as the expected improvement takes on a feature of information theoretic criteria by characterizing the distribution of potential optima, whereas the plain expected improvement accounts for all improving values of the function model. The new expected improvement with derivatives, called deriv-EI, does not need the derivatives of the true function. A computationally efficient approximation to deriv-EI is proposed in the article. It has been observed through several test cases that the new criterion allows significant gains in terms of function minimization at intermediate budgets of function evaluations. This benefit is larger when the function to minimize presents several local minima or the dimension is high, since in these cases the classical expected improvement is too exploratory, in particular in areas near the bounds Siivola et al. (2018).
|
|
|
All the consequences of the proposed acquisition criterion could not be investigated in this paper. In order to simplify the interpretation of the results, all the comparisons between the classical EI and the deriv-EI criteria have been carried out in *ideal* configurations, in the sense that the test functions are realizations of the Gaussian process guiding the minimization. The hyperparameters of the GP characterizing its statistical properties are always known by construction of the test functions. It will therefore be interesting to study the sensitivity of Bayesian optimization with deriv-EI to the iterative estimation of the GP hyperparameters, as it happens in practice. In the same manner, only problems in moderate dimensions are treated ($d \leq 5$), as it seems important to restrict ourselves to cases for which the maximization of the EI and deriv-EI criteria can be sufficiently well solved. During the analysis of configurations in higher dimensions ($d \geq 10$), the maximization of these criteria becomes a problem in itself, and the performances of the EI and deriv-EI criteria turned out to be too dependent on our ability to correctly maximize them. Working on the definition of efficient methods to maximize the deriv-EI criterion would therefore be an appropriate continuation of this work.
|
|
|
Finally, this work has focused exclusively on the EI acquisition criterion because it is a standard in BO, but other acquisition criteria should also benefit from the predictor's derivatives. For example, the EI criterion of Equation (12) can be adapted to the related probability of improvement Kushner (1964): instead of maximizing $\boldsymbol{x}\mapsto\mathbb{P}(Y_{N}(\boldsymbol{x})<y_{\min})=\mathbb{E}\left[1_{Y_{N}(\boldsymbol{x})<y_{\min}}\right]$, we could maximize

$$\boldsymbol{x}\mapsto\mathbb{E}\left[1_{Y_{N}(\boldsymbol{x})<y_{\min}}\times 1_{\partial Y_{N}(\boldsymbol{x})\in\mathcal{B}(\varepsilon),\ \partial^{2}Y_{N}(\boldsymbol{x})\in\mathcal{M}^{+}(d)}\right].\tag{30}$$
|
In the same manner, the well-known upper confidence bound acquisition criterion Srinivas et al. (2009) could be adapted by adding a penalty term on the derivatives, which would make it possible to act explicitly on the exploitation vs. exploration tradeoff in the same way as the standard deviation. The new point to be evaluated could for instance be sought as a solution to the following problem:

$$\boldsymbol{x}^{(N+1)}\in\arg\min_{\boldsymbol{x}\in\mathbb{X}}\ \mu_{N}(\boldsymbol{x})-\lambda_{1}\sigma_{N}(\boldsymbol{x})+\lambda_{2}\left\|\mathbb{E}\left[\partial Y_{N}(\boldsymbol{x})\right]\right\|,\tag{31}$$
|
with $\lambda_{1}$ and $\lambda_{2}$ two positive constants.
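
As an illustration of Equation (31), the following sketch assumes that the posterior mean $\mu_N$, standard deviation $\sigma_N$ and mean gradient $\mathbb{E}[\partial Y_N]$ are available as callables (the names `mu_N`, `sigma_N`, `grad_mu_N` are placeholders); the minimization over $\mathbb{X}$ is done here by plain random search for brevity.

```python
import numpy as np

def deriv_penalized_ucb(x, mu_N, sigma_N, grad_mu_N, lam1=2.0, lam2=1.0):
    # Eq. (31): mu_N(x) - lam1 * sigma_N(x) + lam2 * ||E[dY_N(x)]||,
    # where E[dY_N(x)] is the gradient of the posterior mean at x.
    return mu_N(x) - lam1 * sigma_N(x) + lam2 * np.linalg.norm(grad_mu_N(x))

def next_point(mu_N, sigma_N, grad_mu_N, lower, upper, n_candidates=2048, rng=None):
    # Crude minimization of the criterion over the hyper-rectangle X by uniform sampling.
    rng = np.random.default_rng() if rng is None else rng
    d = len(lower)
    cand = rng.uniform(lower, upper, size=(n_candidates, d))
    values = [deriv_penalized_ucb(x, mu_N, sigma_N, grad_mu_N) for x in cand]
    return cand[int(np.argmin(values))]
```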
|
|
|
## Acknowledgments |
|
|
|
This action benefited from the support of the Chair Stress Test, Risk Management and Financial Steering, led by the French Ecole polytechnique and its **foundation and sponsored by BNP** |
|
Paribas. This research also benefited from the consortium in **Applied Mathematics CIROQUO** |
|
(https://doi.org/10.5281/zenodo.6581217), gathering technological and academic partners in the
|
development of advanced methods for Computer Experiments. |
|
|
|
## References |
|
|
|
James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. *Advances in neural information processing systems***, 24, 2011.** |
|
Julian Berk, Vu Nguyen, Sunil Gupta, Santu Rana, and Svetha Venkatesh. Exploration enhanced expected improvement for Bayesian optimization. In *Machine Learning and Knowledge Discovery in Databases:* |
|
European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part II 18**, pp. 621–637. Springer, 2019.** |
|
Mohamed Amine Bouhlel, Nathalie Bartoli, Abdelkader Otsmane, and Joseph Morlier. Improving kriging surrogates of high-dimensional design models by partial least squares dimension reduction. *Structural* and Multidisciplinary Optimization**, 53:935–952, 2016.** |
|
G. Damblin, M. Couplet, and B. Iooss. Numerical studies of space filling designs: optimization of latin hypercube samples and subprojection properties. *Journal of Simulation***, 7:276–289, 2013.** |
|
Youssef Diouane, Victor Picheny, Rodolphe Le Riche, and Alexandre Scotto Di Perrotolo. TREGO: a trust-region framework for efficient global optimization. *Journal of Global Optimization***, pp. 1–23, 2022.** |
|
K.T. Fang, R. Li, and A. Sudjianto. *Design and modeling for computer experiments***. Chapman & Hall,** |
|
Computer Science and Data Analysis Series, London, 2006. |
|
|
|
Alexander IJ Forrester, András Sóbester, and Andy J Keane. Multi-fidelity optimization via surrogate modelling. Proceedings of the royal society a: mathematical, physical *and engineering sciences***, 463** |
|
(2088):3251–3269, 2007. |
|
|
|
Peter Frazier and Warren Powell. The knowledge gradient policy for offline learning with independent normal rewards. In *2007 IEEE International Symposium on Approximate Dynamic Programming and* Reinforcement Learning**, pp. 143–150. IEEE, 2007.** |
|
Peter I Frazier. A tutorial on Bayesian optimization. *arXiv preprint arXiv:1807.02811***, 2018.** |
|
Roman Garnett. *Bayesian optimization***. Cambridge University Press, 2023.** |
|
David Gaudrie, Rodolphe Le Riche, Victor Picheny, Benoit Enaux, and Vincent Herbert. Modeling and optimization with Gaussian processes in reduced eigenbases. *Structural and Multidisciplinary Optimization***, 61:2343–2361, 2020.** |
|
Robert B Gramacy. *Surrogates: Gaussian process modeling, design, and optimization for the applied* sciences**. Chapman and Hall/CRC, 2020.** |
|
Philipp Hennig and Christian J Schuler. Entropy search for information-efficient global optimization. |
|
|
|
Journal of Machine Learning Research**, 13(6), 2012.** |
|
José Miguel Hernández-Lobato, Matthew W Hoffman, and Zoubin **Ghahramani. Predictive entropy search** |
|
for efficient global optimization of black-box functions. *Advances in neural information processing systems***, 27, 2014.** |
|
Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. An evaluation of sequential model-based optimization for expensive blackbox functions. In *Proceedings of the 15th annual conference companion on* Genetic and evolutionary computation**, pp. 1209–1216, 2013.** |
|
Donald Jones, Matthias Schonlau, and William Welch. Efficient global optimization of expensive black-box functions. *Journal of Global Optimization***, 13:455–492, 12 1998. doi: 10.1023/A:1008306431147.** |
|
Donald R Jones. A taxonomy of global optimization methods based on response surfaces. **Journal of** |
|
Global Optimization**, 21:345–383, 2001.** |
|
Samuel Kim, Peter Y Lu, Charlotte Loh, Jamie Smith, Jasper Snoek, and Marin Soljacic. Deep learning for bayesian optimization of scientific problems with high-dimensional structure. **Transactions on Machine** Learning Research**, 2022. ISSN 2835-8856. URL** https://openreview.net/forum?id=tPMQ6Je2rB. |
|
|
|
Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. In *Artificial intelligence and statistics***, pp. 528–** |
|
536. PMLR, 2017. |
|
|
|
Harold J Kushner. A versatile stochastic model of a function of unknown and time varying form. *Journal* of Mathematical Analysis and Applications**, 5(1):150–167, 1962.** |
|
Harold J Kushner. A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise. *J. Basic Engineering*, 86:97–106, 1964.
|
Rémi Lam, Matthias Poloczek, Peter Frazier, and Karen E Willcox. Advances in Bayesian optimization with applications in aerospace engineering. In *2018 AIAA Non-Deterministic Approaches Conference*, |
|
pp. 1656, 2018. |
|
|
|
Luc Laurent, Rodolphe Le Riche, Bruno Soulier, and Pierre-Alain Boucard. An overview of gradientenhanced metamodels with applications. *Archives of Computational Methods in Engineering***, 26(1):** |
|
61–106, 2019. |
|
|
|
Rodolphe Le Riche and Victor Picheny. Revisiting Bayesian optimization in the light of the COCO |
|
benchmark. *Structural and Multidisciplinary Optimization***, 64(5):3063–3087, 2021.** |
|
Daniel James Lizotte. *Practical Bayesian optimization***. PhD thesis, University of Alberta, 2008.** |
|
Jonas Močkus. On Bayesian methods of search for extremum. *Automatics and Computers***, 3:53–62, 1972.** |
|
Jonas Močkus. *Bayesian approach to global optimization: theory and applications***. Springer Science &** |
|
Business Media, 2012. |
|
|
|
John A Nelder and Roger Mead. A simplex method for function minimization. *The computer journal*, 7 |
|
(4):308–313, 1965. |
|
|
|
G. Perrin and C. Cannamela. A repulsion-based method for the **definition and the enrichment of optimized** |
|
space filling designs in constrained input spaces. *Journal de la Société Française de Statistique***, 158(1):** |
|
37–67, 2017. |
|
|
|
Victor Picheny, Pierre Casadebaig, Ronan Trépos, Robert Faivre, David Da Silva, Patrick Vincourt, and Evelyne Costes. Using numerical plant models and phenotypic correlation space to design achievable ideotypes. *Plant, Cell & Environment***, 40(9):1926–1939, 2017.** |
|
C. E. Rasmussen. Gaussian processes to speed up hybrid Monte **Carlo for expensive Bayesian integrals.** |
|
Bayesian Statistics**, 7:651–659, 2003.** |
|
V. R. Saltenis. One method of multiextremum optimization. *Avtomatika i Vychislitel'naya Tekhnika* |
|
(Automatic Control and Computer Sciences)**, 5(3):33–38, 1971.** |
|
T. J. Santner, B.J. Williams, and W.I. Notz. *The design and analysis of computer experiments***. Springer,** |
|
New York, 2003. |
|
|
|
Matthias Schonlau. *Computer experiments and global optimization***. PhD thesis, University of Waterloo,** |
|
1997. |
|
|
|
Matthias Schonlau, William J Welch, and Donald R Jones. Global versus local search in constrained optimization of computer models. *Lecture notes-monograph series***, pp. 11–25, 1998.** |
|
Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and **Nando De Freitas. Taking the human** |
|
out of the loop: A review of Bayesian optimization. *Proceedings of the IEEE***, 104(1):148–175, 2015.** |
|
Eero Siivola, Aki Vehtari, Jarno Vanhatalo, Javier González, and Michael Riis Andersen. Correcting boundary over-exploration deficiencies in Bayesian optimization with virtual derivative sign observations. |
|
|
|
In *2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)***, pp.** |
|
1–6. IEEE, 2018. |
|
|
|
J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian **optimization of machine learning algorithms.** |
|
In *Advances in neural information processing systems***, pp. 2951–2959, 2012.** |
|
András Sobester, Alexander Forrester, and Andy Keane. **Engineering design via surrogate modelling: a** |
|
practical guide**. John Wiley & Sons, 2008.** |
|
Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. *arXiv preprint arXiv:0912.3995***, 2009.** |
|
Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In *ICML***, 2010. URL** |
|
https://icml.cc/Conferences/2010/papers/422.pdf. |
|
|
|
Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: |
|
Analysis of the black-box optimization challenge 2020. In *NeurIPS 2020 Competition and Demonstration* Track**, pp. 3–26. PMLR, 2021.** |
|
Julien Villemonteix, Emmanuel Vazquez, and Eric Walter. An **informational approach to the global** |
|
optimization of expensive-to-evaluate functions. *Journal of Global Optimization***, 44:509–534, 2009.** |
|
Yijia Wang, Matthias Poloczek, and Daniel R. Jiang. Dynamic **subgoal-based exploration via** |
|
bayesian optimization. *Transactions on Machine Learning Research***, 2023. ISSN 2835-8856. URL** |
|
https://openreview.net/forum?id=ThJl4d5JRg. |
|
|
|
C.K. Williams and C.E. Rasmussen. *Gaussian processes for machine learning***, volume 2 (3). MIT Press,** |
|
Boston, 2006. |
|
|
|
Stephen J Wright and Jorge Nocedal. *Numerical optimization***. Springer, 2006.** |
|
Jia Wu, Xiu-Yun Chen, Hao Zhang, Li-Dong Xiong, Hang Lei, and Si-Hao Deng. Hyperparameter optimization for machine learning models based on Bayesian optimization. **Journal of Electronic Science** |
|
and Technology**, 17(1):26–40, 2019. ISSN 1674-862X. doi: https://doi.org/10.11989/JEST.1674-862X.** |
|
80904120. URL https://www.sciencedirect.com/science/article/pii/S1674862X19300047. |
|
|
|
Jian Wu, Matthias Poloczek, Andrew G Wilson, and Peter Frazier. Bayesian optimization with gradients. |
|
|
|
In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, **S. Vishwanathan, and R. Garnett (eds.),** |
|
Advances in Neural Information Processing Systems**, volume 30. Curran Associates, Inc., 2017. URL** |
|
https://proceedings.neurips.cc/paper_files/paper/2017/file/64a08e5f1e6c39faeb90108c430eb120-Paper.pdf. |
|
|
|
Jiayu Zhao, Renyu Yang, Shenghao Qiu, and Zheng Wang. Unleashing the potential of acquisition functions in high-dimensional bayesian optimization. *Transactions on Machine Learning Research***, 2024. ISSN** |
|
2835-8856. URL https://openreview.net/forum?id=0CM7Hfsy61. |
|
|
|
Antanas Žilinskas and James Calvin. Bi-objective decision making in global optimization based on statistical models. *Journal of Global Optimization***, 74:599–609, 2019.** |
|
|
|
## A Approximation Of The EI With Derivatives Information
|
|
|
We start by recalling the notations used in Equations (19) and (20). For each $\boldsymbol{x}\in\mathbb{X}$, the means and covariance matrices of the random vectors $\partial Y_{N}(\boldsymbol{x})$ and $(Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x}))\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}$ are written

$$\partial Y_{N}(\boldsymbol{x})\ \sim\ \mathcal{N}\left(\dot{\boldsymbol{m}},\dot{\boldsymbol{S}}\right),$$

$$(Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x}))\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}\ \sim\ \mathcal{N}\left(\begin{pmatrix}m\\ \ddot{m}_{1}\\ \vdots\\ \ddot{m}_{d}\end{pmatrix},\begin{bmatrix}s^{2}&\rho_{1,1}&\cdots&\rho_{1,d}\\ \rho_{1,1}&\ddot{s}_{1}^{2}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\rho_{d-1,d}\\ \rho_{d,1}&\cdots&\rho_{d,d-1}&\ddot{s}_{d}^{2}\end{bmatrix}\right),$$

where the notation $D^{2}Y_{N}:=\left((\partial^{2}Y_{N})_{1,1},\ldots,(\partial^{2}Y_{N})_{d,d}\right)$ refers to the diagonal terms of the matrix $\partial^{2}Y_{N}$.
|
|
|
Using the notations and the general expressions introduced in Section 3.1, the expressions of $m$ and $s^{2}$ come from the conditioning of the $(d+1)$-dimensional Gaussian vector $(Y_{N}(\boldsymbol{x}),\partial Y_{N}(\boldsymbol{x}))$, whose mean is $(\mu_{N}(\boldsymbol{x}),\partial\mu_{N}(\boldsymbol{x}))$, and whose covariance matrix is

$$\begin{bmatrix}C_{N}(\boldsymbol{x},\boldsymbol{x})&\partial C_{N}(\boldsymbol{x},\boldsymbol{x})^{T}\\ \partial C_{N}(\boldsymbol{x},\boldsymbol{x})&\partial^{2}C_{N}(\boldsymbol{x},\boldsymbol{x})\end{bmatrix}.$$

Applying the conditioning formula yields

$$m:=\mathbb{E}\left[Y_{N}(\boldsymbol{x})\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}\right]=\mu_{N}(\boldsymbol{x})+\partial C_{N}(\boldsymbol{x},\boldsymbol{x})^{T}\,\partial^{2}C_{N}(\boldsymbol{x},\boldsymbol{x})^{-1}\left(\mathbf{0}-\partial\mu_{N}(\boldsymbol{x})\right),$$

$$s^{2}:=\mathrm{Var}\left(Y_{N}(\boldsymbol{x})\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}\right)=C_{N}(\boldsymbol{x},\boldsymbol{x})-\partial C_{N}(\boldsymbol{x},\boldsymbol{x})^{T}\,\partial^{2}C_{N}(\boldsymbol{x},\boldsymbol{x})^{-1}\,\partial C_{N}(\boldsymbol{x},\boldsymbol{x}).$$
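
Numerically, this is the usual Gaussian conditioning formula applied to the blocks above; a minimal sketch, assuming the kernel-derivative blocks $\partial C_N(\boldsymbol{x},\boldsymbol{x})$ and $\partial^2 C_N(\boldsymbol{x},\boldsymbol{x})$ have already been assembled as arrays, could be:

```python
import numpy as np

def condition_on_zero_gradient(mu_x, grad_mu_x, c_xx, dc_x, d2c_x):
    """Mean m and variance s^2 of Y_N(x) | dY_N(x) = 0.

    mu_x      : float, posterior mean mu_N(x)
    grad_mu_x : (d,) gradient of the posterior mean at x
    c_xx      : float, posterior variance C_N(x, x)
    dc_x      : (d,) cross-covariance dC_N(x, x) between dY_N(x) and Y_N(x)
    d2c_x     : (d, d) covariance d^2 C_N(x, x) of the gradient dY_N(x)
    """
    # Solve the two linear systems instead of forming the inverse of d2c_x.
    sol_mu = np.linalg.solve(d2c_x, -grad_mu_x)   # d2c_x^{-1} (0 - grad mu_N(x))
    sol_dc = np.linalg.solve(d2c_x, dc_x)         # d2c_x^{-1} dC_N(x, x)
    m = mu_x + dc_x @ sol_mu
    s2 = c_xx - dc_x @ sol_dc
    return m, s2
```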
|
|
|
The expressions for the mean and covariance matrix of the Gaussian vector $Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x})\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}$, i.e., the symbols $\ddot{m}_{i}$, $\ddot{s}_{i}$ and $\rho_{i,j}$ in Equation (19), are obtained following the same conditioning principle, but applied to the Gaussian vector $(Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x}),\partial Y_{N}(\boldsymbol{x}))$, whose mean is $(\mu_{N}(\boldsymbol{x}),D^{2}\mu_{N}(\boldsymbol{x}),\partial\mu_{N}(\boldsymbol{x}))$, and whose covariance matrix is made of properly ordered terms such as $\mathrm{Cov}\left(\frac{\partial^{2}Y_{N}}{\partial x_{i}^{2}}(\boldsymbol{x}),\frac{\partial Y_{N}}{\partial x_{j}}(\boldsymbol{x})\right)=\frac{\partial^{3}C_{N}}{\partial x_{i}^{2}\partial x_{j}}(\boldsymbol{x},\boldsymbol{x})$, $\mathrm{Cov}\left(\frac{\partial^{2}Y_{N}}{\partial x_{i}^{2}}(\boldsymbol{x}),\frac{\partial^{2}Y_{N}}{\partial x_{j}^{2}}(\boldsymbol{x})\right)=\frac{\partial^{4}C_{N}}{\partial x_{i}^{2}\partial x_{j}^{2}}(\boldsymbol{x},\boldsymbol{x})$, etc.
|
|
|
The conditioning can be applied one more time to $(Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x})\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0})$ to account for an observation of $Y_{N}(\boldsymbol{x})$, leading to

$$(D^{2}Y_{N}(\boldsymbol{x}))_{i}\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0},\,Y_{N}(\boldsymbol{x})=y\ \sim\ \mathcal{N}\left(\ddot{m}_{i}+\rho_{1i}(y-m)/s^{2},\ \ddot{s}_{i}^{2}-\rho_{1i}^{2}/s^{2}\right).\tag{32}$$
|
|
|
For $p\in\{1,2\}$, let us first assume that the off-diagonal terms of $\partial^{2}Y_{N}$ can be neglected. In that case, ensuring that $\partial^{2}Y_{N}$ is in $\mathcal{M}^{+}(d)$ comes down to ensuring that its diagonal terms are positive, i.e. ensuring that $D^{2}Y_{N}$ is in $[0,+\infty[^{d}$, and we can write (using the former notations):

$$\text{deriv-EI}_{N}(\boldsymbol{x}):=\int_{y=-\infty}^{y_{\min}}\int_{\dot{\boldsymbol{y}}\in\mathcal{E}(\boldsymbol{x},\varepsilon)}\int_{\ddot{\boldsymbol{Y}}\in\mathcal{M}^{+}(d)}(y_{\min}-y)^{p}\,d\mathbb{P}(y,\dot{\boldsymbol{y}},\ddot{\boldsymbol{Y}})$$

$$\approx\int_{y=-\infty}^{y_{\min}}\int_{\dot{\boldsymbol{y}}\in\mathcal{E}(\boldsymbol{x},\varepsilon)}\int_{\ddot{\boldsymbol{y}}\in[0,+\infty[^{d}}(y_{\min}-y)^{p}\,f_{\partial Y_{N}(\boldsymbol{x})}(\dot{\boldsymbol{y}})\,f_{Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x})|\partial Y_{N}(\boldsymbol{x})=\dot{\boldsymbol{y}}}(y,\ddot{\boldsymbol{y}})\,dy\,d\dot{\boldsymbol{y}}\,d\ddot{\boldsymbol{y}}.\tag{33}$$
|
As it is necessary for a matrix to have positive terms on its diagonal to be positive definite, this approximation overestimates the number of trajectories whose Hessian is actually positive definite. In addition, if $\varepsilon$, the size of the ellipsoid centered at $\mathbf{0}$ to which $\partial Y_{N}(\boldsymbol{x})$ belongs, is sufficiently small, it is possible to approximate $f_{\partial Y_{N}(\boldsymbol{x})}(\dot{\boldsymbol{y}})$ by $f_{\partial Y_{N}(\boldsymbol{x})}(\mathbf{0})$ for any $\dot{\boldsymbol{y}}$ in $\mathcal{E}(\boldsymbol{x},\varepsilon)$, which leads to:

$$\text{deriv-EI}_{N}(\boldsymbol{x})\approx\mathrm{Vol}(\mathcal{E}(\boldsymbol{x},\varepsilon))\,f_{\partial Y_{N}(\boldsymbol{x})}(\mathbf{0})\int_{y=-\infty}^{y_{\min}}\int_{\ddot{\boldsymbol{y}}\in[0,+\infty[^{d}}(y_{\min}-y)^{p}\,f_{Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x})|\partial Y_{N}(\boldsymbol{x})=\mathbf{0}}(y,\ddot{\boldsymbol{y}})\,dy\,d\ddot{\boldsymbol{y}},\tag{34}$$

where $\mathrm{Vol}(\mathcal{E}(\boldsymbol{x},\varepsilon))$ is the volume of $\mathcal{E}(\boldsymbol{x},\varepsilon)$.
|
|
|
We further assume that, for any $y\in\mathbb{R}$, the components of $D^{2}Y_{N}(\boldsymbol{x})$ conditioned by the event $(\partial Y_{N}(\boldsymbol{x})=\mathbf{0},\,Y_{N}(\boldsymbol{x})=y)$ are statistically independent. In other words, a trajectory which passes through $(\boldsymbol{x},y)$ and which is flat there is assumed to have independent curvatures. In this case, the density of $D^{2}Y_{N}(\boldsymbol{x})\,|\,(\partial Y_{N}(\boldsymbol{x})=\mathbf{0},\,Y_{N}(\boldsymbol{x})=y)$ is a product of univariate densities. This leads to

$$\int_{y=-\infty}^{y_{\min}}\int_{\ddot{\boldsymbol{y}}\in[0,+\infty[^{d}}(y_{\min}-y)^{p}\,f_{Y_{N}(\boldsymbol{x}),D^{2}Y_{N}(\boldsymbol{x})|\partial Y_{N}(\boldsymbol{x})=\mathbf{0}}(y,\ddot{\boldsymbol{y}})\,dy\,d\ddot{\boldsymbol{y}}$$

$$=\int_{y=-\infty}^{y_{\min}}\int_{\ddot{\boldsymbol{y}}\in[0,+\infty[^{d}}(y_{\min}-y)^{p}\,f_{Y_{N}(\boldsymbol{x})|\partial Y_{N}(\boldsymbol{x})=\mathbf{0}}(y)\,f_{D^{2}Y_{N}(\boldsymbol{x})|\partial Y_{N}(\boldsymbol{x})=\mathbf{0},Y_{N}(\boldsymbol{x})=y}(\ddot{\boldsymbol{y}})\,dy\,d\ddot{\boldsymbol{y}}$$

$$=\int_{y=-\infty}^{y_{\min}}\frac{(y_{\min}-y)^{p}}{(2\pi)^{\frac{d+1}{2}}\prod_{i=1}^{d}\ddot{s}_{i}\sqrt{1-\frac{\rho_{1i}^{2}}{s^{2}\ddot{s}_{i}^{2}}}}\exp\left(-\frac{(y-m)^{2}}{2s^{2}}\right)\prod_{i=1}^{d}\int_{\ddot{y}_{i}=0}^{+\infty}\exp\left(-\frac{\left(\ddot{y}_{i}-\ddot{m}_{i}-\rho_{1i}(y-m)/s^{2}\right)^{2}}{2\ddot{s}_{i}^{2}\left(1-\rho_{1i}^{2}/(s^{2}\ddot{s}_{i}^{2})\right)}\right)d\ddot{y}_{i}\ \frac{dy}{s}.\tag{35}$$
|
|
|
If we now perform the changes of variables $z:=(y-m)/s$, $z_{\min}:=(y_{\min}-m)/s$, $r_{i}:=\rho_{1i}/(s\ddot{s}_{i})$ and $\ddot{\tau}_{i}:=\ddot{m}_{i}/\ddot{s}_{i}$, we obtain

$$\int_{y=-\infty}^{y_{\min}}\frac{(y_{\min}-y)^{p}}{(2\pi)^{\frac{d+1}{2}}\prod_{i=1}^{d}\ddot{s}_{i}\sqrt{1-r_{i}^{2}}}\exp\left(-\frac{(y-m)^{2}}{2s^{2}}\right)\prod_{i=1}^{d}\int_{\ddot{y}_{i}=0}^{+\infty}\exp\left(-\frac{\left(\ddot{y}_{i}-\ddot{m}_{i}-\rho_{1i}(y-m)/s^{2}\right)^{2}}{2\ddot{s}_{i}^{2}(1-r_{i}^{2})}\right)d\ddot{y}_{i}\ \frac{dy}{s}$$

$$=s^{p}\int_{z=-\infty}^{z_{\min}}\frac{(z_{\min}-z)^{p}}{(2\pi)^{\frac{d+1}{2}}}\exp\left(-\frac{z^{2}}{2}\right)\prod_{i=1}^{d}\left(\int_{\ddot{z}_{i}=-\ddot{\tau}_{i}}^{+\infty}\exp\left(-\frac{(\ddot{z}_{i}-r_{i}z)^{2}}{2(1-r_{i}^{2})}\right)\frac{d\ddot{z}_{i}}{\sqrt{1-r_{i}^{2}}}\right)dz$$

$$=s^{p}\int_{z=-\infty}^{z_{\min}}\frac{(z_{\min}-z)^{p}}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)\prod_{i=1}^{d}\Phi\left(\frac{\ddot{\tau}_{i}+r_{i}z}{\sqrt{1-r_{i}^{2}}}\right)dz.\tag{36}$$
|
|
|
Recalling that $\Phi$ and $\phi$ are respectively the CDF and the PDF of the standard Gaussian distribution, the former expression can be further simplified by introducing the following first order Taylor expansion of the function $\Phi$,

$$\Phi\left(\frac{\ddot{\tau}_{i}+r_{i}z}{\sqrt{1-r_{i}^{2}}}\right)\approx\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)+\frac{r_{i}z}{\sqrt{1-r_{i}^{2}}}\,\phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right),\tag{37}$$
|
|
|
and by truncating to the first polynomial orders, so that

$$\int_{z=-\infty}^{z_{\min}}\frac{(z_{\min}-z)^{p}}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)\prod_{i=1}^{d}\Phi\left(\frac{\ddot{\tau}_{i}+r_{i}z}{\sqrt{1-r_{i}^{2}}}\right)dz$$

$$\approx\int_{z=-\infty}^{z_{\min}}\frac{(z_{\min}-z)^{p}}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)\prod_{i=1}^{d}\left(\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)+\frac{r_{i}z}{\sqrt{1-r_{i}^{2}}}\,\phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)\right)dz$$

$$\approx\left(\prod_{i=1}^{d}\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)\right)\int_{z=-\infty}^{z_{\min}}\frac{(z_{\min}-z)^{p}(1+za)}{\sqrt{2\pi}}\exp\left(-\frac{z^{2}}{2}\right)dz,\tag{38}$$
|
where

$$a=\sum_{i=1}^{d}\frac{r_{i}}{\sqrt{1-r_{i}^{2}}}\,\frac{\phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)}{\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)}.\tag{39}$$
|
|
|
We finally obtain

$$\text{deriv-EI}_{N}(\boldsymbol{x})\approx\text{LikelyMin}(\boldsymbol{x})\times\text{cond-EI}^{(p)}(\boldsymbol{x}),\tag{40}$$

with

$$\text{LikelyMin}(\boldsymbol{x}):=\mathrm{Vol}(\mathcal{E}(\boldsymbol{x},\varepsilon))\,f_{\partial Y_{N}(\boldsymbol{x})}(\mathbf{0})\prod_{i=1}^{d}\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)=v\,\varepsilon^{d}\times\exp\left(-\frac{\dot{\boldsymbol{m}}^{T}\dot{\boldsymbol{S}}^{-1}\dot{\boldsymbol{m}}}{2}\right)\times\prod_{i=1}^{d}\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right),\tag{41}$$

where $v$ is a constant independent of $\boldsymbol{x}$ and $\varepsilon$, and with

$$\text{cond-EI}^{(p)}(\boldsymbol{x}):=\begin{cases}s\left((z_{\min}-a)\Phi(z_{\min})+\phi(z_{\min})\right)&\text{if }p=1,\\ s^{2}\left((1+z_{\min}^{2}-2az_{\min})\Phi(z_{\min})+(z_{\min}-2a)\phi(z_{\min})\right)&\text{if }p=2.\end{cases}\tag{42}$$
|
|
|
The expression when the constraint on the second order derivatives is not considered can be recovered by letting $\ddot{\tau}_{i}$ tend to infinity. In this case, $a$ tends to 0 and $\prod_{i=1}^{d}\Phi\left(\frac{\ddot{\tau}_{i}}{\sqrt{1-r_{i}^{2}}}\right)$ tends to 1.
|
In summary, the approximation provided by Eq. (40) is based on the following four assumptions:

- the off-diagonal terms of $\partial^{2}Y_{N}$ can be neglected,
- the value of $\varepsilon$ is sufficiently small for $f_{\partial Y_{N}(\boldsymbol{x})}(\dot{\boldsymbol{y}})$ to be approximated by $f_{\partial Y_{N}(\boldsymbol{x})}(\mathbf{0})$ for any $\dot{\boldsymbol{y}}$ in $\mathcal{E}(\boldsymbol{x},\varepsilon)$,
- the components of $D^{2}Y_{N}(\boldsymbol{x})$ conditioned by the event $(\partial Y_{N}(\boldsymbol{x})=\mathbf{0},\,Y_{N}(\boldsymbol{x})=y)$ are statistically independent,
- the first order Taylor expansion of the function $\Phi$ provided in Eq. (37) holds.
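
For readers who prefer code, the closed form (40)–(42) can be evaluated as in the following sketch, which takes the conditioned quantities $m$, $s$, $\ddot{m}_i$, $\ddot{s}_i$ and $\rho_{1i}$ as inputs (their computation from the kernel derivatives is not repeated here) and treats the factor $v\varepsilon^{d}$ of Eq. (41) as a user-supplied constant.

```python
import numpy as np
from scipy.stats import norm

def fast_deriv_ei(y_min, m, s, m_dd, s_dd, rho_1, grad_mean, grad_cov, vol_eps=1.0, p=1):
    """Approximation (40): deriv-EI(x) ~ LikelyMin(x) * cond-EI^(p)(x).

    m, s       : mean and standard deviation of Y_N(x) | dY_N(x) = 0
    m_dd, s_dd : (d,) conditional means / stds of the diagonal second derivatives
    rho_1      : (d,) covariances rho_{1i} between Y_N(x) and (D^2 Y_N(x))_i given dY_N(x) = 0
    grad_mean  : (d,) mean of dY_N(x); grad_cov : (d, d) its covariance
    vol_eps    : the constant factor v * eps^d of Eq. (41)
    """
    r = rho_1 / (s * s_dd)                                     # r_i
    tau = m_dd / s_dd                                          # tau_i (double dot)
    t = tau / np.sqrt(1.0 - r**2)
    z_min = (y_min - m) / s
    a = np.sum(r / np.sqrt(1.0 - r**2) * norm.pdf(t) / norm.cdf(t))   # Eq. (39)
    # Eq. (41): likelihood that x is an interior local minimum of the trajectories.
    likely_min = vol_eps * np.exp(-0.5 * grad_mean @ np.linalg.solve(grad_cov, grad_mean)) \
                 * np.prod(norm.cdf(t))
    # Eq. (42): conditional expected improvement.
    if p == 1:
        cond_ei = s * ((z_min - a) * norm.cdf(z_min) + norm.pdf(z_min))
    else:
        cond_ei = s**2 * ((1 + z_min**2 - 2 * a * z_min) * norm.cdf(z_min)
                          + (z_min - 2 * a) * norm.pdf(z_min))
    return likely_min * cond_ei
```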
|
|
|
## Numerical Assessment Of The Proposed Approximation |
|
|
|
In order to numerically evaluate the quality of the approximation of deriv-EI$_N$, we implement a second, Monte-Carlo based, approximation:

$$\text{deriv-EI}_{N}(\boldsymbol{x})\approx\widehat{d}(\boldsymbol{x}):=\mathrm{Vol}(\mathcal{E}(\boldsymbol{x},\varepsilon))\,f_{\partial Y_{N}(\boldsymbol{x})}(\mathbf{0})\,\frac{1}{M}\sum_{m=1}^{M}1_{Y_{m}\leqslant y_{\min}}\,1_{\ddot{\boldsymbol{Y}}_{m}\in\mathcal{M}^{+}(d)}\,(y_{\min}-Y_{m}),\tag{43}$$
|
|
|
where $\{Y_{m},\ddot{\boldsymbol{Y}}_{m}\}_{m=1}^{M}$ gathers $M$ independent realizations of $(Y_{N}(\boldsymbol{x}),\partial^{2}Y_{N}(\boldsymbol{x}))\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}$. The term $\mathrm{Vol}(\mathcal{E}(\boldsymbol{x},\varepsilon))\,f_{\partial Y_{N}(\boldsymbol{x})}(\mathbf{0})$ is explicitly calculated in the same way in both approximations.
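
A possible implementation of the Monte-Carlo estimator (43), assuming the mean and covariance of the Gaussian vector $(Y_{N}(\boldsymbol{x}),\partial^{2}Y_{N}(\boldsymbol{x}))\,|\,\partial Y_{N}(\boldsymbol{x})=\mathbf{0}$ have been assembled with the Hessian entries flattened, could look as follows; the names are illustrative.

```python
import numpy as np

def mc_deriv_ei(y_min, mean, cov, d, front_factor=1.0, M=10_000, rng=None):
    """Monte-Carlo estimate (43) of deriv-EI_N(x).

    mean, cov    : mean vector / covariance matrix of (Y_N(x), vec(d^2 Y_N(x))) | dY_N(x) = 0,
                   with the function value first and the d*d Hessian entries flattened row-wise.
    front_factor : the common term Vol(E(x, eps)) * f_{dY_N(x)}(0).
    """
    rng = np.random.default_rng() if rng is None else rng
    draws = rng.multivariate_normal(mean, cov, size=M)
    y = draws[:, 0]
    hessians = draws[:, 1:].reshape(M, d, d)
    hessians = 0.5 * (hessians + np.transpose(hessians, (0, 2, 1)))   # symmetrize
    # Positive definiteness is checked via the smallest eigenvalue of each sampled Hessian.
    min_eig = np.linalg.eigvalsh(hessians)[:, 0]
    keep = (y <= y_min) & (min_eig > 0.0)
    return front_factor * np.mean(keep * (y_min - y))
```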
|
Focusing on the test functions listed in Section 4.2 in dimensions $d\in\{2,3,5\}$, and limiting ourselves to $N\in\{2d,5d,10d\}$, we uniformly sample 1000 points $\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{1000}$ in $[0,1]^{d}$. We compute the coefficient of determination $R^{2}$ between the values of $\{\text{LikelyMin}(\boldsymbol{x}_{j})\times\text{cond-EI}^{(p)}(\boldsymbol{x}_{j})\}_{j=1}^{1000}$ (the proposed, fast approximation of deriv-EI$_N$) and the values of $\{\widehat{d}(\boldsymbol{x}_{j})\}_{j=1}^{1000}$ (the time consuming Monte-Carlo approximation of deriv-EI$_N$). The mean value and standard deviation of these coefficients of determination obtained when repeating the whole process 10 times are finally summarized in Table 1. In each case, we observe high coefficients of determination, which justifies the use of the fast approximation.
|
|
|
## B Expression Of The Analytical Test Functions |
|
|
|
The functions $y^{\mathrm{1D}}$ and $y^{\mathrm{2D}}$ considered in Section 4.1 have the following expressions (notice the offset, chosen such that the minimum of each function is 0):
|
|
|
$$y_{0}^{\mathrm{1D}}:\left\{\begin{array}{c c c}{{[0,1]}}&{{\rightarrow}}&{{\mathbb{R}}}\\ {{x}}&{{\mapsto}}&{{\cos(6\pi x+0.4)+(x-0.5)^{2}}}\end{array}\right.$$ $$y^{\mathrm{1D}}(x)=y_{0}^{\mathrm{1D}}(x)-\operatorname*{min}_{z\in[0,1]}y_{0}^{\mathrm{1D}}(z),\ x\in[0,1],$$ |
|
$$y_{0}^{\mathrm{2D}}:\left\{\begin{array}{rcl}[0,1]^{2}&\rightarrow&\mathbb{R}\\ (x_{1},x_{2})&\mapsto&10+x_{1}+\left(15x_{2}-\dfrac{5(15x_{1}-5)^{2}}{(4\pi)^{2}}+\dfrac{5(15x_{1}-5)}{\pi}-6\right)^{2}+10\cos(15x_{1}-5)\left(1-\dfrac{1}{5\pi}\right)^{2}\end{array}\right.$$
|
|
|
| d | θ | N | mean of R2 | standard deviation of R2 | |
|
|-----|-----|-----|--------------|----------------------------| |
|
| 2 | 0.2 | 4 | 0.94 | 0.04 | |
|
| 2 | 0.5 | 4 | 0.96 | 0.03 | |
|
| 2 | 0.2 | 10 | 0.94 | 0.02 | |
|
| 2 | 0.5 | 10 | 0.95 | 0.02 | |
|
| 2 | 0.2 | 20 | 0.95 | 0.02 | |
|
| 2 | 0.5 | 20 | 0.98 | 0.02 | |
|
| 3 | 0.2 | 6 | 0.96 | 0.02 | |
|
| 3 | 0.5 | 6 | 0.96 | 0.06 | |
|
| 3 | 0.2 | 15 | 0.95 | 0.01 | |
|
| 3 | 0.5 | 15 | 0.98 | 0.02 | |
|
| 3 | 0.2 | 30 | 0.96 | 0.02 | |
|
| 3 | 0.5 | 30 | 0.98 | 0.01 | |
|
| 5 | 0.2 | 10 | 0.93 | 0.04 | |
|
| 5 | 0.5 | 10 | 0.97 | 0.03 | |
|
| 5 | 0.2 | 25 | 0.92 | 0.02 | |
|
| 5 | 0.5 | 25 | 0.96 | 0.03 | |
|
| 5 | 0.2 | 50 | 0.94 | 0.01 | |
|
| 5 | 0.5 | 50 | 0.95 | 0.06 | |
|
|
|
Table 1: Coefficients of determination $R^{2}$ between the proposed, fast approximation of deriv-EI$_N$ and a Monte-Carlo approximation.
|
|
|
$$y^{\mathrm{2D}}(\mathbf{x})=y_{0}^{\mathrm{2D}}(\mathbf{x})-\operatorname*{min}_{\mathbf{z}\in[0,1]^{2}}y_{0}^{\mathrm{2D}}(\mathbf{z}),\ \mathbf{x}\in[0,1]^{2}.$$ |
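
For reproducibility, the two functions can be coded as follows; the offsets are approximated here on dense grids rather than with the exact minima used in the paper, and the 2D expression is transcribed directly from the formula above.

```python
import numpy as np

def y0_1d(x):
    # Base 1D test function of Appendix B.
    return np.cos(6 * np.pi * x + 0.4) + (x - 0.5) ** 2

def y0_2d(x1, x2):
    # Base 2D test function of Appendix B, transcribed from the printed expression.
    u = 15 * x1 - 5
    return (10 + x1
            + (15 * x2 - 5 * u**2 / (4 * np.pi) ** 2 + 5 * u / np.pi - 6) ** 2
            + 10 * np.cos(u) * (1 - 1 / (5 * np.pi)) ** 2)

# Offsets so that the minimum of each function over the unit domain is approximately 0
# (estimated on dense grids; the paper subtracts the exact minimum).
_x = np.linspace(0.0, 1.0, 10_001)
_OFFSET_1D = y0_1d(_x).min()
_g1, _g2 = np.meshgrid(np.linspace(0, 1, 1001), np.linspace(0, 1, 1001))
_OFFSET_2D = y0_2d(_g1, _g2).min()

def y_1d(x):
    return y0_1d(x) - _OFFSET_1D

def y_2d(x1, x2):
    return y0_2d(x1, x2) - _OFFSET_2D
```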
|
|
|
## C Generation Of Test Functions As Realizations Of A Gaussian Process
|
|
|
We describe here how deterministic functions are defined as particular realizations of a centered Gaussian process $Z$ indexed by $\boldsymbol{x}\in\mathbb{X}$, where $\mathbb{X}$ is a compact subset of $\mathbb{R}^{d}$ and the covariance function of $Z$ is denoted by $C$. The construction consists of the following steps.
|
1. We start by generating a design of experiments $\boldsymbol{X}:=\{\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N}\}$ of large size $N$, covering $\mathbb{X}$ as much as possible.
|
|
|
2. We then project $Z$ on this design of experiments, and we denote by $\boldsymbol{z}=(z_{1},\ldots,z_{N})$ the vector containing the values of $Z$ realized at $\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N}$.
|
|
|
3. By Gaussian conditioning, the function $\boldsymbol{x}\mapsto\boldsymbol{r}(\boldsymbol{x})^{T}\boldsymbol{R}^{-1}\boldsymbol{z}$ can thus be seen as the continuous extension of a realization of $Z$ whose projection on $\boldsymbol{X}$ is equal to $\boldsymbol{z}$, where for all $\boldsymbol{x}\in\mathbb{X}$,

$$\boldsymbol{r}(\boldsymbol{x})=\begin{pmatrix}C(\boldsymbol{x},\boldsymbol{x}_{1})\\ \vdots\\ C(\boldsymbol{x},\boldsymbol{x}_{N})\end{pmatrix},\quad\boldsymbol{R}:=\begin{bmatrix}C(\boldsymbol{x}_{1},\boldsymbol{x}_{1})&\cdots&C(\boldsymbol{x}_{1},\boldsymbol{x}_{N})\\ \vdots&\ddots&\vdots\\ C(\boldsymbol{x}_{N},\boldsymbol{x}_{1})&\cdots&C(\boldsymbol{x}_{N},\boldsymbol{x}_{N})\end{bmatrix}.$$
|
|
|
It is clear that this type of construction strongly depends on the design $\boldsymbol{X}$ and on the dimension of $\mathbb{X}$. Indeed, the larger the size of $\boldsymbol{X}$, the more the constructed function looks like a particular realization of $Z$. In addition, the larger the input space dimension $d$, the more points are needed in $\boldsymbol{X}$ for the continuous extension to be relevant. For the examples treated in Section 4.2, $\boldsymbol{X}$ is defined as the concatenation of a two-level factorial design (in order to cover all the vertices of $\mathbb{X}=[0,1]^{d}$) and a space filling LHS design of size $100\times d$.
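
The three steps above can be summarized by the following sketch, where a squared exponential covariance and a plain uniform filling design are assumptions standing in for the covariance $C$ and the optimized LHS used in the paper.

```python
import numpy as np

def squared_exponential(A, B, theta=0.2):
    # Assumed covariance; the paper's C may differ (e.g. other kernels or lengthscales).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / theta**2)

def make_test_function(d, n_fill_per_dim=100, theta=0.2, nugget=1e-10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: two-level factorial design (the vertices of [0,1]^d) plus a crude space-filling
    # design (plain uniform sampling here, whereas the paper uses an optimized LHS).
    vertices = np.array(np.meshgrid(*([[0.0, 1.0]] * d))).reshape(d, -1).T
    filling = rng.uniform(size=(n_fill_per_dim * d, d))
    X = np.vstack([vertices, filling])
    # Step 2: draw z, one realization of the centered GP at the design points.
    R = squared_exponential(X, X, theta) + nugget * np.eye(len(X))
    z = np.linalg.cholesky(R) @ rng.standard_normal(len(X))
    # Step 3: continuous extension x -> r(x)^T R^{-1} z by Gaussian conditioning.
    alpha = np.linalg.solve(R, z)
    def y(x):
        r = squared_exponential(np.atleast_2d(x), X, theta)   # shape (1, N)
        return float(r @ alpha)
    return y
```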