I am a senior researcher at Microsoft Research AI4Science where I work on machine learning methods for chemistry with Frank Noé and Jan Hermann, focusing on Quantum Monte Carlo. I also have a strong interest in Bayesian experimental design and active learning. I am driven by the desire to understand how machine learning can help us to solve critical problems in the sciences and to build new, sustainable technology. Previously, I did my PhD in Statistical Machine Learning at the University of Oxford, supervised by Yee Whye Teh and Tom Rainforth in the Computational Stats and Machine Learning Group in the Department of Statistics. Before starting my PhD, I studied mathematics at Cambridge where my Director of Studies was Julia Gog.
A large part of my PhD work was on Bayesian experimental design: how do we design experiments that will be most informative about the process being investigated? One approach is to optimize the Expected Information Gain (EIG), which can be seen as a mutual information, over the space of possible designs. In my work, I have developed variational methods to estimate the EIG, stochastic gradient methods to optimize over designs, and techniques for obtaining unbiased gradient estimators of the EIG. In more recent work, we have studied policies that can choose a sequence of designs automatically. These two talks (the Corcoran Memorial Prize Lecture and a SIAM minisymposium) offer introductions to experimental design and my research in this area.
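To make the EIG concrete, here is a minimal sketch (illustrative only, not code from my papers) of the standard nested Monte Carlo EIG estimator on a toy conjugate model, y = dθ + ε with θ ~ N(0, 1) and ε ~ N(0, σ²). For this model the EIG is known in closed form, EIG(d) = ½ log(1 + d²/σ²), so the estimate can be sanity-checked; the function name and model are my own choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def eig_nmc(d, n_outer=2000, n_inner=2000, sigma=1.0):
    """Nested Monte Carlo EIG estimate for y = d*theta + eps,
    with prior theta ~ N(0, 1) and noise eps ~ N(0, sigma^2).

    EIG(d) = E_{theta, y}[log p(y|theta, d)] - E_y[log p(y|d)].
    """
    # Outer samples from the joint p(theta) p(y | theta, d)
    theta = rng.standard_normal(n_outer)
    y = d * theta + sigma * rng.standard_normal(n_outer)
    # Gaussian log-likelihood log p(y | theta, d)
    log_lik = -0.5 * ((y - d * theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    # Inner prior samples to estimate the marginal p(y | d) by averaging
    theta_in = rng.standard_normal(n_inner)
    log_marg_terms = (
        -0.5 * ((y[:, None] - d * theta_in[None, :]) / sigma) ** 2
        - np.log(sigma * np.sqrt(2 * np.pi))
    )
    log_marg = np.log(np.mean(np.exp(log_marg_terms), axis=1))
    return np.mean(log_lik - log_marg)

# Compare against the closed form 0.5 * log(1 + d^2 / sigma^2)
for d in [0.5, 1.0, 2.0]:
    print(f"d={d}: NMC={eig_nmc(d):.3f}, exact={0.5 * np.log1p(d ** 2):.3f}")
```

Larger |d| gives a more informative experiment here, which is the quantity a stochastic gradient method would climb; the nested estimator is consistent but biased for finite inner samples, which is one motivation for the variational and unbiased-gradient estimators mentioned above.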
I am also keen on open-source code: highlights include experimental design tools in the deep probabilistic programming language Pyro, forward Laplacians, Redis<->Python interfacing, and a reproduction of SimCLR.
Recent work
Highly Accurate Real-space Electron Densities with Neural Networks
Lixue Cheng, P. Bernát Szabó, Zeno Schätzle, Derk Kooi, Jonas Köhler, Klaas J. H. Giesbertz, Frank Noé, Jan Hermann, Paola Gori-Giorgi, Adam Foster. Preprint.

Amortized Active Causal Induction with Deep Reinforcement Learning
Yashas Annadani, Panagiotis Tigas, Stefan Bauer, Adam Foster. NeurIPS 2024 (to appear).

Making Better Use of Unlabelled Data in Bayesian Active Learning
Freddie Bickford Smith, Adam Foster, Tom Rainforth. AISTATS 2024.

Concepts in Modern Bayesian Experimental Design (Corcoran Memorial Prize)

Stochastic-gradient Bayesian Optimal Experimental Design with Gaussian Processes
Adam Foster. Blog post.

Folx - Forward Laplacian for JAX
Nicholas Gao, Jonas Köhler, Adam Foster.

Prediction-Oriented Bayesian Active Learning
Freddie Bickford Smith, Andreas Kirsch, Sebastian Farquhar, Yarin Gal, Adam Foster, Tom Rainforth. AISTATS 2023.

Modern Bayesian Experimental Design
Tom Rainforth, Adam Foster, Desi R Ivanova, Freddie Bickford Smith. Statistical Science.

CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design
Desi R Ivanova, Joel Jennings, Tom Rainforth, Cheng Zhang, Adam Foster. ICML 2023.

Differentiable Multi-Target Causal Bayesian Experimental Design
Yashas Annadani, Panagiotis Tigas, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, Stefan Bauer. ICML 2023.

Optimising adaptive experimental designs with RL
Adam Foster. Blog post.

ICML 2022 Long Presentation: Contrastive Mixture of Posteriors

Efficient Real-world Testing of Causal Decision Making via Bayesian Experimental Design for Contextual Optimisation
Desi R. Ivanova, Joel Jennings, Cheng Zhang, Adam Foster. ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World.

Deep Adaptive Design and Bayesian reinforcement learning
Adam Foster. Blog post.

Learning Instance-Specific Augmentations by Capturing Local Invariances
Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim. ICML 2023.