Abstract
Efficient global optimization (EGO) is the canonical form of Bayesian optimization and has been successfully applied to the global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension and offers limited theoretical guarantees. In this work, a trust-region framework for EGO (TREGO) is proposed and analyzed. TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical trust-region scheme based on a sufficient decrease condition, the proposed algorithm enjoys global convergence properties while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments on the well-known COCO bound-constrained problems, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art black-box optimization methods.
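The global/local alternation described in the abstract can be sketched as a simple loop. This is an illustrative toy, not the authors' implementation: the expected-improvement maximization over a Gaussian-process surrogate is replaced by plain random candidate sampling, and the function `trego_sketch` together with its parameters (`d0`, `beta`, `kappa`, the candidate batch size) are hypothetical choices made for the sketch.

```python
import numpy as np

def trego_sketch(f, bounds, n_iter=60, d0=0.5, beta=2.0, kappa=1e-4, seed=0):
    """Illustrative TREGO-style loop (toy sketch, not the paper's code).

    Alternates a global step over the full domain with a local step
    restricted to a trust region of radius d around the current best
    point.  A trial point is accepted only under a sufficient-decrease
    condition  f(best) - f(trial) >= kappa * d**2,  after which the
    radius d is expanded (success) or contracted (failure).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    d = d0
    global_phase = True
    for _ in range(n_iter):
        if global_phase:
            # Global (EGO-like) step: candidates over the whole box.
            cand = rng.uniform(lo, hi, size=(64, lo.size))
        else:
            # Local step: candidates only within the trust region.
            cand = np.clip(
                x_best + rng.uniform(-d, d, size=(64, lo.size)), lo, hi)
        vals = np.apply_along_axis(f, 1, cand)
        i = int(np.argmin(vals))
        # Sufficient decrease (not mere decrease) is required, with a
        # classical forcing function rho(d) = kappa * d**2.
        if f_best - vals[i] >= kappa * d ** 2:
            x_best, f_best = cand[i], vals[i]
            d = min(beta * d, float(np.max(hi - lo)))  # success: expand
            global_phase = True                        # resume global steps
        else:
            d = d / beta                               # failure: contract
            global_phase = False                       # switch to local steps
    return x_best, f_best
```

For instance, `trego_sketch(lambda z: float(np.sum(z ** 2)), ([-5.0, -5.0], [5.0, 5.0]))` drives the sphere function toward its minimizer at the origin; a real implementation would replace the random candidate batches with surrogate-guided acquisition optimization.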




Data availability
The authors confirm that all data generated or analysed during this study are included in the paper.
Notes
Importantly, TURBO uses a simple decrease rule on the objective function, which turns out to be insufficient to ensure convergence to a stationary point when GP models are used.
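The distinction the note draws can be made concrete with a toy acceptance test. The numbers below are illustrative, and the forcing function `kappa * d**2` is one classical choice of sufficient-decrease threshold:

```python
# Illustrative comparison of the two acceptance rules (toy numbers).
f_best = 1.0
f_trial = 1.0 - 1e-12        # a vanishingly small improvement
d, kappa = 0.5, 1e-4         # trust-region radius and forcing constant

# Simple decrease: any improvement is accepted, however tiny ...
accept_simple = f_trial < f_best
# ... whereas sufficient decrease demands an improvement of at least
# kappa * d**2 before the step counts as successful.
accept_sufficient = (f_best - f_trial) >= kappa * d ** 2

print(accept_simple, accept_sufficient)  # True False
```

Rejecting such negligible improvements is what lets the trust-region machinery force the radius to shrink and yields the convergence guarantees.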
Appendices
A pseudo-code of the TREGO algorithm

B Functions of the BBOB noiseless testbed
C Complementary experimental results
Cite this article
Diouane, Y., Picheny, V., Le Riche, R. et al. TREGO: a trust-region framework for efficient global optimization. J. Glob. Optim. 86, 1–23 (2023). https://doi.org/10.1007/s10898-022-01245-w