Stochastic gradient descent for Gaussian process inference

Guest lecture by José Miguel Hernández-Lobato

April 10, 2024, 11:00 AM – 12:00 PM
Ernst-Abbe-Platz 2, 07743 Jena
Room: 3319

Gaussian processes are a flexible framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems. In general, this has a cubic cost in dataset size and is sensitive to conditioning. We explore stochastic gradient algorithms as a computationally efficient method of approximately solving these linear systems. Experimentally, stochastic gradient descent achieves state-of-the-art performance on large-scale regression tasks. Its uncertainty estimates match the performance of significantly more expensive baselines on a large-scale Bayesian optimization task. On a molecular binding affinity prediction task, our method places Gaussian processes with a Tanimoto kernel on par with state-of-the-art graph neural networks.
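To make the approach concrete, the following is a minimal NumPy sketch of the core idea, not the speaker's actual method: the GP representer weights alpha solve the linear system (K + noise * I) alpha = y, and running SGD on the equivalent convex quadratic objective avoids a direct cubic-cost solve. The kernel choice, the column-subsampled gradient estimator, and all hyperparameters below are illustrative assumptions.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between input sets A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def sgd_solve(K, y, noise_var=0.1, lr=1e-3, batch=64, steps=5000, seed=0):
    # Approximately solve (K + noise_var * I) alpha = y by running SGD on
    # the convex quadratic 0.5 * a^T (K + noise_var * I) a - y^T a, whose
    # unique minimizer is the exact solution. The expensive K @ alpha
    # product is replaced by an unbiased estimate built from a random
    # minibatch of kernel-matrix columns (an illustrative estimator).
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    alpha = np.zeros(n)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        k_alpha_est = (n / batch) * K[:, idx] @ alpha[idx]
        grad = k_alpha_est + noise_var * alpha - y
        alpha -= lr * grad
    return alpha

# Toy 1-D regression: the posterior mean at test inputs is K(X*, X) @ alpha.
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
alpha = sgd_solve(rbf_kernel(X, X), y)
X_test = np.linspace(-3.0, 3.0, 7)[:, None]
posterior_mean = rbf_kernel(X_test, X) @ alpha

Each iteration of this sketch touches only a minibatch of kernel columns, costing O(n * batch) rather than the O(n^3) of a Cholesky-based solve, which is where the computational savings come from.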

José Miguel Hernández-Lobato is a University Lecturer in Machine Learning at the University of Cambridge, a Visiting Researcher at Microsoft Research Cambridge, and a Fellow at the Alan Turing Institute. Before that, he did postdoctoral research at the University of Cambridge and at Harvard University. His research is in probabilistic machine learning, with interests in Bayesian deep learning, Bayesian optimization, automatic chemical design, reinforcement learning, and compression. His research has been used commercially by companies such as Infosys, Tencent, Siemens, Samsung, and Microsoft. He is a frequent Area Chair for UAI, IJCAI, ICML, AISTATS, and AAAI, and a reviewer for NeurIPS, ICLR, and the Journal of Machine Learning Research. He is a member of the ELLIS society and the director of the ELLIS unit Cambridge.