Florent Duchaine, Thierry Morel, and L.Y.M. Gicquel. Computational-fluid-dynamics-based Kriging optimization tool for aeronautical combustion chambers. AIAA Journal, 47(3):631-645, March 2009.
E.G. Talbi. Metaheuristics: From Design to Implementation. Wiley Series on Parallel and Distributed Computing. Wiley, 2009.
R. Duvigneau and M. Visonneau. Hybrid genetic algorithms and artificial neural networks for complex design optimization in CFD. International Journal for Numerical Methods in Fluids, 44(11):1257-1278, 2004. doi: https://doi.org/10.1002/fld.688.
Yaochu Jin. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation, 1(2):61-70, 2011. doi: https://doi.org/10.1016/j.swevo.2011.05.001.
Alan Diaz-Manriquez, Gregorio Toscano, Jose Hugo Barron-Zambrano, and Edgar Tello-Leal. A review of surrogate assisted multiobjective evolutionary algorithms. Computational Intelligence and Neuroscience, 2016:14, 2016. doi: https://doi.org/10.1155/2016/9420460.
Kalyanmoy Deb and Pawan Nain. An Evolutionary Multi-objective Adaptive Meta-modeling Procedure Using Artificial Neural Networks. In Shengxiang Yang, Yew-Soon Ong, and Yaochu Jin, editors, Evolutionary Computation in Dynamic and Uncertain Environments, volume 51, chapter 13, pages 297-322. Springer, Berlin, Heidelberg, 2007. doi: https://doi.org/10.1007/978-3-540-49774-5_13.
Antonio Gaspar-Cunha and Armando Vieira. A multi-objective evolutionary algorithm using neural networks to approximate fitness evaluations. International Journal of Computers, Systems and Signals, 6:18-36, January 2005.
Juliane Muller and Christine A. Shoemaker. Influence of ensemble surrogate models and sampling strategy on the solution quality of algorithms for computationally expensive black-box global optimization problems. Journal of Global Optimization, 60(2):123-144, Oct 2014. doi: https://doi.org/10.1007/s10898-014-0184-0.
Carlo Poloni, Andrea Giurgevich, Luka Onesti, and Valentino Pediroda. Hybridization of a multi-objective genetic algorithm, a neural network and a classical optimizer for a complex design problem in fluid dynamics. Computer Methods in Applied Mechanics and Engineering, 186(2):403-420, 2000. doi: https://doi.org/10.1016/S0045-7825(99)00394-1.
A. Syberfeldt, H. Grimm, A. Ng, and R. I. John. A parallel surrogate-assisted multi-objective evolutionary algorithm for computationally expensive optimization problems. In 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pages 3177-3184, June 2008. doi: https://doi.org/10.1109/CEC.2008.4631228.
Miha Mlakar, Dejan Petelin, Tea Tusar, and Bogdan Filipic. GP-DEMO: Differential evolution for multiobjective optimization based on Gaussian process models. European Journal of Operational Research, 243(2):347-361, 2015. doi: https://doi.org/10.1016/j.ejor.2014.04.011.
S. Huband, P. Hingston, L. Barone, and L. While. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10(5):477-506, Oct 2006. doi: https://doi.org/10.1109/TEVC.2005.861417.
T. Okabe, Y. Jin, and B. Sendhoff. A critical survey of performance indices for multi-objective optimisation. In Proceedings of the 2003 Congress on Evolutionary Computation (CEC '03), volume 2, pages 878-885, Dec 2003. doi: https://doi.org/10.1109/CEC.2003.1299759.
K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197, Apr 2002. doi: https://doi.org/10.1109/4235.996017.
Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989. doi: https://doi.org/10.1016/0893-6080(89)90020-8.
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2):173-195, June 2000. doi: https://doi.org/10.1162/106365600568202.
Francesco Biscani, Dario Izzo, and Marcus Martens. esa/pagmo2: pagmo 2.6, November 2017. doi: https://doi.org/10.5281/zenodo.1054110.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and Mike Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249-256, Chia Laguna Resort, Sardinia, Italy, 13-15 May 2010. PMLR.
A. Colin Cameron and Frank A.G. Windmeijer. An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics, 77(2):329-342, 1997. doi: https://doi.org/10.1016/S0304-4076(96)01818-0.
Leo Breiman. Random forests. Mach. Learn., 45(1):5-32, October 2001. doi: https://doi.org/10.1023/A:1010933404324.