
TREGO: a trust-region framework for efficient global optimization

Journal of Global Optimization

Abstract

Efficient global optimization (EGO) is the canonical form of Bayesian optimization and has been applied successfully to the global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension and offers limited theoretical guarantees. In this work, a trust-region framework for EGO (TREGO) is proposed and analyzed. TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical scheme for the trust region (based on a sufficient decrease condition), the proposed algorithm enjoys global convergence properties, while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments based on the well-known COCO bound-constrained problems, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art black-box optimization methods.
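As background on the EGO step the abstract refers to: EGO (Jones et al. [34]) fits a Gaussian process to the evaluations gathered so far and selects the next point by maximizing the expected improvement (EI). Writing \(\mu(x)\) and \(s(x)\) for the GP posterior mean and standard deviation and \(f_{\min}\) for the best value observed so far, EI has the standard closed form (for \(s(x) > 0\))

\[
\mathrm{EI}(x) = \big(f_{\min} - \mu(x)\big)\,\Phi(z) + s(x)\,\phi(z),
\qquad z = \frac{f_{\min} - \mu(x)}{s(x)},
\]

where \(\Phi\) and \(\phi\) denote the standard normal distribution and density functions. Per the abstract, TREGO keeps this acquisition step but restricts its search domain during local phases.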



Data availability

The authors confirm that all data generated or analysed during this study are included in the paper.

Notes

  1. Importantly, TURBO uses a simple decrease rule for the objective function, which turns out to be insufficient to ensure convergence to a stationary point with GP models (the two decrease rules are contrasted after these notes).

  2. https://cran.r-project.org/package=DiceOptim

  3. https://secondmind-labs.github.io/trieste/
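To make the distinction in Note 1 concrete, here is the generic form of the two acceptance rules from the derivative-free optimization literature (e.g. [19, 36]); the exact forcing function used by TREGO is specified in the paper body. A simple decrease rule accepts any trial point \(x^{+}\) with \(f(x^{+}) < f(x^{k})\), whereas a sufficient decrease rule ties acceptance to the trust-region parameter \(\sigma_k\) through a forcing function \(\rho\):

\[
f(x^{+}) \;\le\; f(x^{k}) - \rho(\sigma_k),
\qquad \text{e.g. } \rho(\sigma) = c\,\sigma^{2} \text{ with } c > 0,
\]

where \(\rho : \mathbb{R}_{+} \to \mathbb{R}_{+}\) is a nondecreasing function satisfying \(\rho(\sigma)/\sigma \to 0\) as \(\sigma \to 0\).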

References

  1. Anagnostidis, S.-K., Lucchi, A., Diouane, Y.: Direct-search for a class of stochastic min-max problems. In: International Conference on Artificial Intelligence and Statistics, pp. 3772–3780 (2021)

  2. Audet, C., Dennis, J.E., Jr.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17, 188–217 (2006)

  3. Audet, C., Dennis, J.E., Jr.: A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim. 20, 445–472 (2009)

  4. Audet, C., Dzahini, K.J., Kokkolaras, M., Le Digabel, S.: Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates. Comput. Optim. Appl. 19, 1–34 (2021)

  5. Audet, C., Hare, W.: Derivative-Free and Blackbox Optimization. Springer, Cham (2017)

  6. Audet, C., Le Digabel, S., Rochon Montplaisir, V., Tribes, C.: Algorithm 1027: NOMAD version 4: nonlinear optimization with the MADS algorithm. ACM Trans. Math. Softw. 48, 1–22 (2022)

  7. Audet, C., Dennis, J.E., Jr.: Analysis of generalized pattern searches. SIAM J. Optim. 13, 889–903 (2002)

  8. Auger, A., Finck, S., Hansen, N., Ros, R.: BBOB 2009: Comparison tables of all algorithms on all noiseless functions. Technical Report RT-0383, INRIA (2010)

  9. Bajer, L., Pitra, Z., Repický, J., Holena, M.: Gaussian process surrogate models for the CMA evolution strategy. Evol. Comput. 27, 665–697 (2019)

  10. Bergou, E., Diouane, Y., Kungurtsev, V., Royer, C.W.: A stochastic Levenberg-Marquardt method using random models with complexity results. SIAM/ASA J. Uncertain. Quantif. 10, 507–536 (2022)

  11. Blanchet, J., Cartis, C., Menickelly, M., Scheinberg, K.: Convergence rate analysis of a stochastic trust region method via supermartingales. INFORMS J. Optim. 1, 92–119 (2019)

  12. Booker, A.J., Dennis, J.E., Jr., Frank, P.D., Serafini, D.B., Torczon, V., Trosset, M.W.: A rigorous framework for optimization of expensive functions by surrogates. Struct. Multidiscip. Optim. 17, 1–13 (1998)

  13. Bouhlel, M.A., Bartoli, N., Regis, R.G., Otsmane, A., Morlier, J.: Efficient global optimization for high-dimensional constrained problems by using the kriging models combined with the partial least squares method. Eng. Optim. 50, 2038–2053 (2018)

  14. Brochu, E., Cora, V.M., De Freitas, N.: A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599 (2010)

  15. Brockhoff, D.: Online description of the BBOB functions. https://coco.gforge.inria.fr/ (2006)

  16. Bull, A.D.: Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res. 12, 2879–2904 (2011)

  17. Chen, R., Menickelly, M., Scheinberg, K.: Stochastic optimization using a trust-region method and random models. Math. Program. 169, 447–487 (2018)

  18. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983). Reissued by SIAM, Philadelphia (1990)

  19. Conn, A.R., Scheinberg, K., Vicente, L.N.: Introduction to Derivative-Free Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2009)

  20. Diouane, Y.: A merit function approach for evolution strategies. EURO J. Comput. Optim. 9, 100001 (2021)

  21. Diouane, Y., Gratton, S., Vicente, L.N.: Globally convergent evolution strategies. Math. Program. 152, 467–490 (2015)

  22. Diouane, Y., Gratton, S., Vicente, L.N.: Globally convergent evolution strategies for constrained optimization. Comput. Optim. Appl. 62, 323–346 (2015)

  23. Diouane, Y., Lucchi, A., Patil, V.: A globally convergent evolutionary strategy for stochastic constrained optimization with applications to reinforcement learning. In: International Conference on Artificial Intelligence and Statistics, pp. 3772–3780 (2022)

  24. Eriksson, D., Pearce, M., Gardner, J., Turner, R.D., Poloczek, M.: Scalable global optimization via local Bayesian optimization. In: Advances in Neural Information Processing Systems (2019)

  25. Fang, K.-T., Li, R., Sudjianto, A.: Design and Modeling for Computer Experiments. CRC Press, London (2005)

  26. Forrester, A.I.J., Sóbester, A., Keane, A.J.: Multi-fidelity optimization via surrogate modelling. Philos. Trans. A Math. Phys. Eng. Sci. 463, 3251–3269 (2007)

  27. Frazier, P.I.: A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811 (2018)

  28. Gratton, S., Vicente, L.N.: A merit function approach for direct search. SIAM J. Optim. 24, 1980–1998 (2014)

  29. Hansen, N., Auger, A., Ros, R., Finck, S., Pošík, P.: Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In: Annual Conference Companion on Genetic and Evolutionary Computation, pp. 1689–1696 (2010)

  30. Hansen, N., Auger, A., Ros, R., Mersmann, O., Tušar, T., Brockhoff, D.: COCO: a platform for comparing continuous optimizers in a black-box setting. Optim. Methods Softw. 36, 114–144 (2021)

  31. Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration. In: International Conference on Learning and Intelligent Optimization, pp. 507–523 (2011)

  32. Huyer, W., Neumaier, A.: Global optimization by multilevel coordinate search. J. Global Optim. 14, 331–355 (1999)

  33. Jahn, J.: Introduction to the Theory of Nonlinear Optimization. Springer, Berlin (1996)

  34. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black-box functions. J. Global Optim. 13, 455–492 (1998)

  35. Kandasamy, K., Schneider, J., Póczos, B.: High dimensional Bayesian optimisation and bandits via additive models. In: International Conference on Machine Learning, pp. 295–304 (2015)

  36. Kolda, T.G., Lewis, R.M., Torczon, V.: Optimization by direct search: new perspectives on some classical and modern methods. SIAM Rev. 45, 385–482 (2003)

  37. Le Digabel, S.: Algorithm 909: NOMAD: nonlinear optimization with the MADS algorithm. ACM Trans. Math. Softw. 37, 44 (2011)

  38. Le Digabel, S., Wild, S.M.: A taxonomy of constraints in simulation-based optimization. Technical Report G-2015-57, Les cahiers du GERAD (2015)

  39. McLeod, M., Roberts, S., Osborne, M.A.: Optimization, fast and slow: optimally switching between local and Bayesian optimization. In: International Conference on Machine Learning, pp. 3443–3452 (2018)

  40. Mockus, J.: Bayesian Approach to Global Optimization: Theory and Applications. Springer Science & Business Media, Berlin (2012)

  41. Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, Berlin (2006)

  42. Oh, Ch.Y., Gavves, E., Welling, M.: BOCK: Bayesian optimization with cylindrical kernels. In: International Conference on Machine Learning, pp. 3868–3877 (2018)

  43. Picheny, V., Casadebaig, P., Trépos, R., Faivre, R., Da Silva, D., Vincourt, P., Costes, E.: Using numerical plant models and phenotypic correlation space to design achievable ideotypes. Plant Cell Environ. 40, 1926–1939 (2017)

  44. Picheny, V., Ginsbourger, D.: Noisy kriging-based optimization methods: a unified implementation within the DiceOptim package. Comput. Stat. Data Anal. 71, 1035–1053 (2014)

  45. Picheny, V., Gramacy, R.B., Wild, S., Le Digabel, S.: Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian. In: Advances in Neural Information Processing Systems, pp. 1435–1443 (2016)

  46. Picheny, V., Wagner, T., Ginsbourger, D.: A benchmark of kriging-based infill criteria for noisy optimization. Struct. Multidiscip. Optim. 48, 607–626 (2013)

  47. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)

  48. Regis, R.G.: Trust regions in kriging-based optimization with expected improvement. Eng. Optim. 48, 1037–1059 (2016)

  49. Rios, L., Sahinidis, N.: Derivative-free optimization: a review of algorithms and comparison of software implementations. J. Global Optim. 56, 1247–1293 (2013)

  50. Roustant, O., Ginsbourger, D., Deville, Y.: DiceKriging, DiceOptim: two R packages for the analysis of computer experiments by kriging-based metamodeling and optimization. J. Stat. Softw. 51 (2012)

  51. Schonlau, M., Welch, W.J., Jones, D.R.: Global versus local search in constrained optimization of computer models. Lecture Notes-Monograph Series, pp. 11–25 (1998)

  52. Shahriari, B., Swersky, K., Wang, Z., Adams, R.P., De Freitas, N.: Taking the human out of the loop: a review of Bayesian optimization. Proc. IEEE 104, 148–175 (2015)

  53. Siivola, E., Vehtari, A., Vanhatalo, J., González, J., Andersen, M.R.: Correcting boundary over-exploration deficiencies in Bayesian optimization with virtual derivative sign observations. In: IEEE International Workshop on Machine Learning for Signal Processing, pp. 1–6 (2018)

  54. Snoek, J., Larochelle, H., Adams, R.P.: Practical Bayesian optimization of machine learning algorithms. In: Advances in Neural Information Processing Systems, pp. 2951–2959 (2012)

  55. Srinivas, N., Krause, A., Kakade, S., Seeger, M.: Gaussian process optimization in the bandit setting: no regret and experimental design. In: International Conference on Machine Learning (2010)

  56. Stein, M.L.: Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, Berlin (2012)

  57. Vaz, A.I.F., Vicente, L.N.: A particle swarm pattern search method for bound constrained global optimization. J. Global Optim. 39, 197–219 (2007)

  58. Vazquez, E., Bect, J.: Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. J. Stat. Plan. Inference 140, 3088–3095 (2010)

  59. Vicente, L.N., Custódio, A.L.: Analysis of direct searches for discontinuous functions. Math. Program. 133, 299–325 (2012)

  60. Wang, Z., Hutter, F., Zoghi, M., Matheson, D., de Freitas, N.: Bayesian optimization in a billion dimensions via random embeddings. J. Artif. Intell. Res. 55, 361–387 (2016)


Author information


Corresponding author

Correspondence to Youssef Diouane.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A pseudo-code of the TREGO algorithm

[Figure: pseudo-code of the TREGO algorithm]
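The pseudo-code figure itself is not rendered in this version. As a stand-in, the following minimal Python sketch reconstructs the control flow described in the abstract: alternate between a regular (global) EGO step over the full domain and a local step restricted to a trust region around the current best point, enlarging or shrinking the region according to a sufficient decrease test. This is an illustrative reconstruction, not the authors' implementation; the parameter values, the scikit-learn GP fit, and the crude random-candidate EI maximizer are all assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(gp, X_cand, f_best):
    # Closed-form EI for minimization, using the GP posterior.
    mu, s = gp.predict(X_cand, return_std=True)
    s = np.maximum(s, 1e-12)  # guard against zero predictive variance
    z = (f_best - mu) / s
    return (f_best - mu) * norm.cdf(z) + s * norm.pdf(z)


def maximize_ei(gp, lo, hi, f_best, n_cand=2000, seed=0):
    # Crude EI maximizer by random candidate search; a placeholder for
    # the multi-start maximizers used in practice.
    rng = np.random.default_rng(seed)
    cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
    return cand[int(np.argmax(expected_improvement(gp, cand, f_best)))]


def trego(f, lo, hi, X0, n_iter=50, d0=0.2, beta_minus=0.5,
          beta_plus=2.0, c=1e-4):
    # X0: initial design, array of shape (n0, dim); lo, hi: domain bounds.
    X = [np.asarray(x, dtype=float) for x in X0]
    y = [float(f(x)) for x in X]
    i = int(np.argmin(y))
    x_best, f_best = X[i], y[i]
    d = d0               # trust-region size, relative to the domain width
    global_phase = True  # alternate global EGO steps and local steps
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.array(X), np.array(y))
        if global_phase:
            # Regular EGO step: maximize EI over the full domain.
            x_new = maximize_ei(gp, lo, hi, f_best)
        else:
            # Local step: maximize EI inside a box trust region
            # centered at the current best point.
            t_lo = np.maximum(lo, x_best - d * (hi - lo))
            t_hi = np.minimum(hi, x_best + d * (hi - lo))
            x_new = maximize_ei(gp, t_lo, t_hi, f_best)
        f_new = float(f(x_new))
        X.append(x_new)
        y.append(f_new)
        # Sufficient decrease test with forcing function rho(d) = c * d**2.
        if f_new <= f_best - c * d ** 2:
            x_best, f_best = x_new, f_new
            d = min(beta_plus * d, 1.0)  # success: enlarge the region
        else:
            d = beta_minus * d           # failure: shrink the region
        global_phase = not global_phase
    return x_best, f_best
```

For instance, `trego(lambda x: float(np.sum(x**2)), np.zeros(2), np.ones(2), X0=np.random.rand(6, 2))` minimizes a quadratic on the unit square under this sketch.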

B Functions of the BBOB noiseless testbed

Table 2 Functions of the BBOB noiseless testbed, divided into groups

C Complementary experimental results

Fig. 5

Effect of changing parameters of the TREGO algorithm, averaged by function groups for \(n=5\). Run length is \(30\times n\)

Fig. 6

Comparison of TREGO with state-of-the-art optimization algorithms on separable (left) and unimodal with high conditioning functions (right), for \(n=5\) (top) and \(n=10\) (bottom). Run length = \(50\times n\)

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Diouane, Y., Picheny, V., Le Riche, R. et al. TREGO: a trust-region framework for efficient global optimization. J Glob Optim 86, 1–23 (2023). https://doi.org/10.1007/s10898-022-01245-w


  • DOI: https://doi.org/10.1007/s10898-022-01245-w
