Abstract
Bayesian optimization (BO) is a popular method for optimizing black-box functions; it makes sequential decisions based on a Bayesian model of the function, typically a Gaussian process (GP). To ensure the quality of the model, transfer learning approaches have been developed to automatically design GP priors by learning from data on "training" functions. These training functions are typically required to share the same domain as the "test" function (the black-box function to be optimized). In this paper, we introduce MPHD, a model pre-training method on heterogeneous domains, which uses a neural network to map domain descriptions to specifications of a hierarchical GP. MPHD can be seamlessly integrated with BO to transfer knowledge across heterogeneous search spaces. Our theoretical and empirical results demonstrate the validity of MPHD and its superior performance on challenging black-box function optimization tasks.
Authors
Zhou Fan, Xinran Han, Zi Wang
Venue
Transactions on Machine Learning Research (TMLR)