I am a senior researcher at Microsoft Research AI4Science where I work on machine learning methods for chemistry with Frank Noé and Jan Hermann, focusing on quantum Monte Carlo (QMC). I also have a strong interest in Bayesian experimental design and active learning.
I am driven by the desire to understand how machine learning can help us to solve critical problems in the sciences and to build new, sustainable technology.
Previously, I did my PhD in Statistical Machine Learning at the University of Oxford, supervised by Yee Whye Teh and Tom Rainforth in the Computational Stats and Machine Learning Group in the Department of Statistics.
Before starting my PhD, I studied mathematics at Cambridge where my Director of Studies was Julia Gog.
A large part of my PhD work was on Bayesian experimental design: how do we design experiments that will be most informative about the process being investigated? One approach is to optimize the Expected Information Gain (EIG), which can be seen as a mutual information, over the space of possible designs. In my work, I have developed variational methods to estimate the EIG, stochastic gradient methods to optimize over designs, and unbiased gradient estimators of the EIG. In more recent work, we have studied policies that can choose a sequence of designs automatically. This talk offers a 30-minute introduction to experimental design and my research in this area. To use Bayesian experimental design in practice, we have developed a range of tools in the deep probabilistic programming language Pyro: our aim is to allow automatic experimental design for any Pyro model.
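To make the EIG concrete, here is a minimal NumPy sketch of the classic nested Monte Carlo EIG estimator on a hypothetical conjugate Gaussian toy model (the model, function names, and sample sizes are illustrative choices of mine, not code from the papers or from Pyro):

```python
import numpy as np

# Toy model for illustration: theta ~ N(0, 1) and y | theta, d ~ N(d * theta, sigma^2).
# Here the EIG at design d has the closed form 0.5 * log(1 + d^2 / sigma^2),
# which lets us sanity-check the estimator.
sigma = 1.0

def log_lik(y, theta, d):
    """Gaussian log-likelihood log p(y | theta, d)."""
    return -0.5 * ((y - d * theta) ** 2 / sigma**2 + np.log(2 * np.pi * sigma**2))

def nmc_eig(d, N=2000, M=2000, seed=0):
    """Nested Monte Carlo estimate of EIG(d) = E[log p(y|theta,d) - log p(y|d)]."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(N)                     # outer samples from the prior
    y = d * theta + sigma * rng.standard_normal(N)     # simulated experiment outcomes
    theta_inner = rng.standard_normal(M)               # inner samples for the marginal
    ll = log_lik(y[:, None], theta_inner[None, :], d)  # (N, M) inner log-likelihoods
    # log p(y_n | d) ~= logsumexp_m log p(y_n | theta_m, d) - log M
    m = ll.max(axis=1, keepdims=True)
    log_marg = (m + np.log(np.exp(ll - m).sum(axis=1, keepdims=True))).squeeze(1) - np.log(M)
    return float(np.mean(log_lik(y, theta, d) - log_marg))

analytic = 0.5 * np.log(1 + 2.0**2 / sigma**2)  # closed-form EIG at d = 2
estimate = nmc_eig(2.0)
```

The inner loop over `theta_inner` is what makes this estimator expensive and biased for finite M; the variational estimators replace that inner marginal with a learned approximation.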
Since the EIG is a mutual information, I am also interested in the intersection of information theory and machine learning. This led me to study contrastive representation learning and the role of invariance in these methods, and to reproduce SimCLR in PyTorch.
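The connection runs through the contrastive objective itself: SimCLR's NT-Xent loss is an InfoNCE-style lower bound on a mutual information. A minimal NumPy sketch of that loss (my own illustrative code, not from the PyTorch reproduction):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent (InfoNCE) loss on embeddings of two augmented views.

    z1[i] and z2[i] are embeddings of two augmentations of the same image;
    every other sample in the 2N-element batch acts as a negative.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)   # (2N, D) normalized embeddings
    sim = z @ z.T / temperature            # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)         # a sample is never its own negative
    n = z1.shape[0]
    # the positive partner of row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    m = sim.max(axis=1, keepdims=True)     # stable log-sum-exp over each row
    log_denom = (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True))).squeeze(1)
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))
```

Perfectly aligned views (identical embeddings for both augmentations) give a lower loss than unrelated ones, which is exactly the invariance pressure studied in this line of work.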
Recent work
Prediction-Oriented Bayesian Active Learning
Freddie Bickford Smith, Andreas Kirsch, Sebastian Farquhar, Yarin Gal, Adam Foster, Tom Rainforth. AISTATS 2023.

Modern Bayesian Experimental Design
Tom Rainforth, Adam Foster, Desi R Ivanova, Freddie Bickford Smith. Statistical Science (to appear).

CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design
Desi R Ivanova, Joel Jennings, Tom Rainforth, Cheng Zhang, Adam Foster. ICML 2023 (to appear).

Differentiable Multi-Target Causal Bayesian Experimental Design
Yashas Annadani, Panagiotis Tigas, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, Stefan Bauer. ICML 2023.

Optimising adaptive experimental designs with RL
Adam Foster. Blog post.

ICML 2022 Long Presentation: Contrastive Mixture of Posteriors

Efficient Real-world Testing of Causal Decision Making via Bayesian Experimental Design for Contextual Optimisation
Desi R. Ivanova, Joel Jennings, Cheng Zhang, Adam Foster. ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World.

Deep Adaptive Design and Bayesian reinforcement learning
Adam Foster. Blog post.

Learning Instance-Specific Augmentations by Capturing Local Invariances
Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim. ICML 2023.

BALD and BED: Connecting Bayesian active learning by disagreement and Bayesian experimental design
Adam Foster. Blog post.

Bayesian experimental design for model selection: variational and classification approaches
Adam Foster. Blog post.

DPhil Thesis: Variational, Monte Carlo and Policy-Based Approaches to Bayesian Experimental Design
Adam Foster. University of Oxford.

Deep End-to-end Causal Inference
Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang. Preprint.

Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods
Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, Tom Rainforth. NeurIPS 2021.

ICML 2021 Long Presentation: Deep Adaptive Design