I am a senior researcher at Microsoft Research AI4Science, where I work on machine learning methods for chemistry with Frank Noé and Jan Hermann, focusing on quantum Monte Carlo (QMC). I also have a strong interest in Bayesian experimental design and active learning.
I am driven by the desire to understand how machine learning can help us to solve critical problems in the sciences and to build new, sustainable technology.
Previously, I did my PhD in Statistical Machine Learning at the University of Oxford, supervised by Yee Whye Teh and Tom Rainforth in the Computational Stats and Machine Learning Group in the Department of Statistics.
Before starting my PhD, I studied mathematics at Cambridge where my Director of Studies was Julia Gog.
A large part of my PhD work was on Bayesian experimental design: how do we design experiments that will be most informative about the process being investigated? One approach is to optimize the Expected Information Gain (EIG), which can be seen as a mutual information, over the space of possible designs. In my work, I have developed variational methods to estimate the EIG, stochastic gradient methods to optimize over designs, and unbiased gradient estimators of the EIG. In more recent work, we have studied policies that can choose a sequence of designs automatically. This talk offers a 30-minute introduction to experimental design and my research in this area. To use Bayesian experimental design in practice, we have developed a range of tools in the deep probabilistic programming language Pyro: our aim is to allow automatic experimental design for any Pyro model.
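To make the EIG concrete, here is a small nested Monte Carlo estimator for a toy linear-Gaussian model. This is an illustrative sketch only: the model, the function `nmc_eig`, and all hyperparameters are my own choices for exposition, not code from the papers or the Pyro tools mentioned above.

```python
import numpy as np

def nmc_eig(design, n_outer=2000, n_inner=2000, sigma=1.0, seed=0):
    """Nested Monte Carlo estimate of EIG(d) = E[log p(y|theta,d) - log p(y|d)].

    Toy model (illustrative only): theta ~ N(0, 1), y | theta, d ~ N(d*theta, sigma^2).
    The intractable marginal log p(y|d) is approximated with inner prior samples.
    """
    rng = np.random.default_rng(seed)
    # Outer samples from the joint p(theta) p(y | theta, d)
    theta = rng.standard_normal(n_outer)
    y = design * theta + sigma * rng.standard_normal(n_outer)

    # Log-likelihood of each outer y under the theta that generated it
    log_lik = -0.5 * ((y - design * theta) / sigma) ** 2 \
              - np.log(sigma * np.sqrt(2 * np.pi))

    # Marginal log p(y | d) ~= log (1/M) sum_m p(y | theta_m, d), theta_m ~ prior
    theta_inner = rng.standard_normal(n_inner)
    diff = (y[:, None] - design * theta_inner[None, :]) / sigma
    log_inner = -0.5 * diff ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    log_marg = np.logaddexp.reduce(log_inner, axis=1) - np.log(n_inner)

    return np.mean(log_lik - log_marg)
```

For this model the EIG is available in closed form, 0.5 * log(1 + d^2 / sigma^2), which makes the estimator easy to sanity-check; in realistic models no such closed form exists, which is what motivates the variational and gradient-based estimators described above.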
Since the EIG is a mutual information, I am also interested in the intersection between information theory and machine learning. This led me to study contrastive representation learning and the role of invariance in these methods, and to reproduce SimCLR in PyTorch.
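The connection runs through the InfoNCE objective, which lower-bounds a mutual information and underlies SimCLR-style methods. A simplified numpy sketch (my own illustrative code, not the SimCLR reproduction itself, which uses the full symmetric NT-Xent loss over 2N views):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Simplified InfoNCE loss for two batches of embeddings.

    z1, z2: (N, D) arrays of embeddings of two augmented views of the
    same N examples. Row i of z1 and row i of z2 form the positive pair;
    the other rows in z2 serve as negatives. Returns the mean loss.
    """
    # L2-normalise so the dot product is a cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix

    # Cross-entropy with the diagonal (the positive pair) as the target
    log_prob = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
    return -np.mean(np.diag(log_prob))
```

Minimising this loss pulls the two views of each example together and pushes apart the views of different examples, which is one way of seeing why invariance to the augmentations plays such a central role in what these representations capture.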
Recent work
Optimising adaptive experimental designs with RL
Adam Foster. Blog post.

ICML 2022 Long Presentation: Contrastive Mixture of Posteriors

Efficient Real-world Testing of Causal Decision Making via Bayesian Experimental Design for Contextual Optimisation
Desi R. Ivanova, Joel Jennings, Cheng Zhang, Adam Foster. ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World.

Deep Adaptive Design and Bayesian reinforcement learning
Adam Foster. Blog post.

BALD and BED: Connecting Bayesian active learning by disagreement and Bayesian experimental design
Adam Foster. Blog post.

Bayesian experimental design for model selection: variational and classification approaches
Adam Foster. Blog post.

DPhil Thesis: Variational, Monte Carlo and Policy-Based Approaches to Bayesian Experimental Design
Adam Foster. University of Oxford.

Deep End-to-end Causal Inference
Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang. Preprint.

Implicit Deep Adaptive Design: Policy-Based Experimental Design without Likelihoods
Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, Tom Rainforth. NeurIPS 2021.

ICML 2021 Long Presentation: Deep Adaptive Design

On Contrastive Representations of Stochastic Processes
Emile Mathieu, Adam Foster, Yee Whye Teh. NeurIPS 2021.

Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness
Adam Foster, Árpi Vezér, Craig A Glastonbury, Páidí Creed, Sam Abujudeh, Aaron Sim. ICML 2022 (long presentation).

Minisymposium on Model-Based Optimal Experimental Design, SIAM CSE 21: Stochastic Gradient BOED

Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
Adam Foster, Desi R. Ivanova, Ilyas Malik, Tom Rainforth. ICML 2021 (long presentation).

Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs
Takashi Goda, Tomohiko Hironaka, Wataru Kitade, Adam Foster. SIAM Journal on Scientific Computing.