Abstract
We present an evolutionary optimizer that incorporates knowledge transfer through forward and inverse surrogate models for solving multiobjective problems under a stringent computational budget. Forward knowledge transfer fully exploits solution-evaluation datasets from related tasks by building Bayesian forward multitask surrogate models that map points from decision space to objective space. Inverse knowledge transfer via Bayesian inverse multitask models enables the construction of high-quality solution populations in decision space by mapping back from preferred points in objective space. In contrast to prior work, the proposed method can improve overall convergence toward multiple Pareto sets by fully exploiting the information available across diverse multiobjective problems. Empirical studies on benchmark and real-world multitask multiobjective optimization problems demonstrate the faster convergence rate and enhanced inverse modeling accuracy of our algorithm compared to state-of-the-art algorithms.
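The forward/inverse pairing described above can be illustrated with a minimal sketch: a forward Gaussian-process surrogate mapping decision vectors to objective vectors, and an inverse surrogate trained on the same data with inputs and outputs swapped, used to map a preferred objective-space point back to a candidate solution. This is a toy single-task illustration with a plain RBF-kernel GP, not the paper's Bayesian multitask (MTGP) formulation; the test problem and all names are assumptions for demonstration only.

```python
import numpy as np

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior_mean(X_train, Y_train, X_query, jitter=1e-6):
    # Standard GP posterior mean: K(q,t) (K(t,t) + jitter*I)^-1 Y.
    K = rbf(X_train, X_train) + jitter * np.eye(len(X_train))
    return rbf(X_query, X_train) @ np.linalg.solve(K, Y_train)

# Toy biobjective task on [0, 1]: f1(x) = x, f2(x) = 1 - sqrt(x),
# whose Pareto front is the curve f2 = 1 - sqrt(f1).
rng = np.random.default_rng(0)
X = rng.uniform(0.05, 1.0, size=(30, 1))          # evaluated decision vectors
F = np.hstack([X, 1.0 - np.sqrt(X)])              # their objective vectors

# Forward surrogate: decision space -> objective space.
f_pred = gp_posterior_mean(X, F, np.array([[0.49]]))

# Inverse surrogate: objective space -> decision space, used to turn a
# preferred objective-space point into a candidate solution.
prefs = np.array([[0.25, 0.5]])                   # preferred point on the front
x_candidates = gp_posterior_mean(F, X, prefs)
print(f_pred, x_candidates)
```

In the full method, both directions are multitask models, so data from related problems informs the surrogates of each individual task; here a single task keeps the mechanics visible.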
Notes
1. The training process of the MTGP model can be found in [19].
References
Bali, K.K., Gupta, A., Ong, Y.S., Tan, P.S.: Cognizant multitasking in multiobjective multifactorial evolution: MO-MFEA-II. IEEE Trans. Cybern. 51(4), 1784–1796 (2021)
Bali, K.K., Ong, Y.S., Gupta, A., Tan, P.S.: Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II. IEEE Trans. Evol. Comput. 24(1), 69–83 (2019)
Bonilla, E.V., Chai, K., Williams, C.: Multi-task Gaussian process prediction. Adv. Neural Inf. Process. Syst. 20 (2007)
Branke, J., Deb, K., Miettinen, K., Słowiński, R. (eds.): Multiobjective Optimization: Interactive and Evolutionary Approaches. Springer, Berlin, Heidelberg (2008)
Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. KDD ’16, Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2939672.2939785
Cheng, F.Y., Li, X.S.: Generalized center method for multiobjective engineering optimization. Eng. Optim. 31(5), 641–661 (1999). https://doi.org/10.1080/03052159908941390
Cheng, R., Jin, Y., Narukawa, K., Sendhoff, B.: A multiobjective evolutionary algorithm using Gaussian process-based inverse modeling. IEEE Trans. Evol. Comput. 19(6), 838–856 (2015). https://doi.org/10.1109/TEVC.2015.2395073
Choong, H.X., Ong, Y.S., Gupta, A., Chen, C., Lim, R.: Jack and masters of all trades: one-pass learning sets of model sets from large pre-trained models. IEEE Comput. Intell. Mag. 18(3), 29–40 (2023). https://doi.org/10.1109/MCI.2023.3277769
Coello, C.A.C.: Evolutionary Algorithms for Solving Multi-objective Problems. Springer, Cham (2007). https://doi.org/10.1007/978-0-387-36797-2
Da, B., Gupta, A., Ong, Y.S.: Curbing negative influences online for seamless transfer evolutionary optimization. IEEE Trans. Cybern. 49(12), 4365–4378 (2018)
Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable multi-objective optimization test problems. In: Proceedings of the 2002 Congress on Evolutionary Computation, CEC’02, Cat. No.02TH8600, vol. 1, pp. 825–830 (2002). https://doi.org/10.1109/CEC.2002.1007032
Deb, K., Jain, H.: An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: solving problems with box constraints. IEEE Trans. Evol. Comput. 18(4), 577–601 (2014). https://doi.org/10.1109/TEVC.2013.2281535
Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)
Dellnitz, M., Schütze, O., Hestermeyer, T.: Covering Pareto sets by multilevel subdivision techniques. J. Optim. Theory Appl. 124, 113–136 (2005)
Emmerich, M., Giannakoglou, K., Naujoks, B.: Single and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Trans. Evol. Comput. 10(4), 421–439 (2006). https://doi.org/10.1109/TEVC.2005.859463
Feng, L., Gupta, A., Tan, K.C., Ong, Y.S.: Evolutionary Multi-Task Optimization: Foundations and Methodologies. Springer, Singapore (2023). https://doi.org/10.1007/978-981-19-5650-8
Feng, L., Zhou, L., Zhong, J., Gupta, A., Ong, Y.S., Tan, K.C.: Evolutionary multitasking via explicit autoencoding. IEEE Trans. Cybern. 49(9), 3457–3470 (2018)
Feurer, M., Hutter, F.: Hyperparameter optimization. In: Hutter, F., Kotthoff, L., Vanschoren, J. (eds.) Automated Machine Learning: Methods, Systems, Challenges, pp. 3–33. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-05318-5_1
Gardner, J., Pleiss, G., Weinberger, K.Q., Bindel, D., Wilson, A.G.: GPyTorch: blackbox matrix-matrix Gaussian process inference with GPU acceleration. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 31. Curran Associates, Inc. (2018)
Giagkiozis, I., Fleming, P.J.: Pareto front estimation for decision making. Evol. Comput. 22(4), 651–678 (2014)
Gupta, A., Ong, Y.S., Feng, L.: Multifactorial evolution: toward evolutionary multitasking. IEEE Trans. Evol. Comput. 20(3), 343–357 (2016)
Gupta, A., Ong, Y.S., Feng, L.: Insights on transfer optimization: because experience is the best teacher. IEEE Trans. Emerg. Topics Comput. Intell. 2(1), 51–64 (2017)
Gupta, A., Ong, Y.S., Feng, L., Tan, K.C.: Multiobjective multifactorial optimization in evolutionary multitasking. IEEE Trans. Cybern. 47(7), 1652–1665 (2016)
Gupta, A., Ong, Y.S., Shakeri, M., Chi, X., NengSheng, A.Z.: The blessing of dimensionality in many-objective search: an inverse machine learning insight. In: 2019 IEEE International Conference on Big Data (Big Data), pp. 3896–3902 (2019). https://doi.org/10.1109/BigData47090.2019.9005525
Gupta, A., Zhou, L., Ong, Y.S., Chen, Z., Hou, Y.: Half a dozen real-world applications of evolutionary multitasking, and more. IEEE Comput. Intell. Mag. 17(2), 49–66 (2022). https://doi.org/10.1109/MCI.2022.3155332
Ishibuchi, H., Masuda, H., Tanigaki, Y., Nojima, Y.: Modified distance calculation in generational distance and inverted generational distance. In: Gaspar-Cunha, A., Henggeler Antunes, C., Coello, C.C. (eds.) Evolutionary Multi-Criterion Optimization, pp. 110–125. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-15892-1_8
Kim, Y., Pan, Z., Hauser, K.: MO-BBO: multi-objective bilevel Bayesian optimization for robot and behavior co-design. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 9877–9883 (2021). https://doi.org/10.1109/ICRA48506.2021.9561846
Knowles, J.: ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Trans. Evol. Comput. 10(1), 50–66 (2006). https://doi.org/10.1109/TEVC.2005.851274
Lai, G., Liao, M., Li, K.: Empirical studies on the role of the decision maker in interactive evolutionary multi-objective optimization. In: 2021 IEEE Congress on Evolutionary Computation (CEC), pp. 185–192 (2021). https://doi.org/10.1109/CEC45853.2021.9504980
Lin, X., Yang, Z., Zhang, X., Zhang, Q.: Pareto set learning for expensive multi-objective optimization. In: Advances in Neural Information Processing Systems. vol. 35, pp. 19231–19247. Curran Associates, Inc. (2022)
Lin, X., Zhen, H.L., Li, Z., Zhang, Q.F., Kwong, S.: Pareto multi-task learning. Adv. Neural Inf. Process. Syst. 32 (2019)
Liu, J., Gupta, A., Ong, Y.S.: Inverse transfer multiobjective optimization. arXiv preprint arXiv:2312.14713 (2023)
Ma, J., Zhao, Z., Yi, X., Chen, J., Hong, L., Chi, E.H.: Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1930–1939. KDD ’18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3219819.3220007
Ma, P., Du, T., Matusik, W.: Efficient continuous pareto exploration in multi-task learning. In: III, H.D., Singh, A. (eds.) Proceedings of the 37th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 119, pp. 6522–6531. PMLR (2020)
Paria, B., Kandasamy, K., Póczos, B.: A flexible framework for multi-objective Bayesian optimization using random scalarizations. In: Adams, R.P., Gogate, V. (eds.) Proceedings of The 35th Uncertainty in Artificial Intelligence Conference. Proceedings of Machine Learning Research, vol. 115, pp. 766–776. PMLR (2020). https://proceedings.mlr.press/v115/paria20a.html
Pfisterer, F., Schneider, L., Moosbauer, J., Binder, M., Bischl, B.: YAHPO gym - an efficient multi-objective multi-fidelity benchmark for hyperparameter optimization. In: Proceedings of the First International Conference on Automated Machine Learning. Proceedings of Machine Learning Research, vol. 188, pp. 31–39. PMLR (2022)
Ponweiser, W., Wagner, T., Biermann, D., Vincze, M.: Multiobjective optimization on a limited budget of evaluations using model-assisted S-metric selection. In: Rudolph, G., Jansen, T., Beume, N., Lucas, S., Poloni, C. (eds.) Parallel Problem Solving from Nature – PPSN X. PPSN 2008. Lecture Notes in Computer Science, vol. 5199. Springer, Berlin, Heidelberg (2008). https://doi.org/10.1007/978-3-540-87700-4_78
Seeger, M.: Gaussian processes for machine learning. Int. J. Neural Syst. 14(02), 69–106 (2004)
Sinha, A., Korhonen, P., Wallenius, J., Deb, K.: An interactive evolutionary multi-objective optimization algorithm with a limited number of decision maker calls. Eur. J. Oper. Res. 233(3), 674–688 (2014). https://doi.org/10.1016/j.ejor.2013.08.046
Tan, C.S., Gupta, A., Ong, Y.S., Pratama, M., Tan, P.S., Lam, S.K.: Pareto optimization with small data by learning across common objective spaces. Sci. Rep. 13(1), 7842 (2023). https://doi.org/10.1038/s41598-023-33414-6
Tanabe, R., Ishibuchi, H.: An easy-to-use real-world multi-objective optimization problem suite. Appl. Soft Comput. 89, 106078 (2020). https://doi.org/10.1016/j.asoc.2020.106078
Van Veldhuizen, D.A., Lamont, G.B.: Multiobjective evolutionary algorithm research: a history and analysis. Technical report, Citeseer (1998)
Vanschoren, J., van Rijn, J.N., Bischl, B., Torgo, L.: OpenML: networked science in machine learning. SIGKDD Explor. Newsl. 15(2), 49–60 (2014). https://doi.org/10.1145/2641190.2641198
Wei, T., Liu, J., Gupta, A., Tan, P.S., Ong, Y.S.: Bayesian forward-inverse transfer for multiobjective optimization - supplementary materials (2024). https://doi.org/10.5281/zenodo.11665260
Wei, T., Wang, S., Zhong, J., Liu, D., Zhang, J.: A review on evolutionary multitask optimization: trends and challenges. IEEE Trans. Evol. Comput. 26(5), 941–960 (2022). https://doi.org/10.1109/TEVC.2021.3139437
Wei, T., Zhong, J.: Towards generalized resource allocation on evolutionary multitasking for multi-objective optimization. IEEE Comput. Intell. Mag. 16(4), 20–37 (2021). https://doi.org/10.1109/MCI.2021.3108310
Zhang, Q., Li, H.: MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11(6), 712–731 (2007)
Zhang, Q., Liu, W., Tsang, E., Virginas, B.: Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Trans. Evol. Comput. 14(3), 456–474 (2010). https://doi.org/10.1109/TEVC.2009.2033671
Zitzler, E., Thiele, L.: Multiobjective optimization using evolutionary algorithms — a comparative case study. In: Eiben, A.E., Bäck, T., Schoenauer, M., Schwefel, H.P. (eds.) PPSN V, pp. 292–301. Springer, Berlin, Heidelberg (1998). https://doi.org/10.1007/BFb0056872
Acknowledgement
This research is partly supported by the National Research Foundation, Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No.: AISG2-GC-2023-010, "Design Beyond What You Know: Material-Informed Differential Generative AI (MIDGAI) for Light-Weight High-Entropy Alloys and Multi-functional Composites (Stage 1a)"), the Distributed Smart Value Chain programme, which is funded under the Singapore RIE2025 Manufacturing, Trade and Connectivity (MTC) Industry Alignment Fund-Pre-Positioning (Award No.: M23L4a0001), the Centre for Frontier AI Research (CFAR) under the Agency for Science, Technology and Research (A*STAR), and the College of Computing and Data Science, Nanyang Technological University.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wei, T., Liu, J., Gupta, A., Tan, P.S., Ong, Y.S.: Bayesian Forward-Inverse Transfer for Multiobjective Optimization. In: Affenzeller, M., et al. (eds.) Parallel Problem Solving from Nature – PPSN XVIII. PPSN 2024. Lecture Notes in Computer Science, vol. 15151. Springer, Cham (2024). https://doi.org/10.1007/978-3-031-70085-9_9
DOI: https://doi.org/10.1007/978-3-031-70085-9_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70084-2
Online ISBN: 978-3-031-70085-9