This is the long presentation that I delivered at ICML 2022 for the paper Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness. The project was born out of several “grand challenges” in computational biology and the analysis of single-cell RNA-seq data. One challenge is integrating different datasets that exhibit technical variation, such as batch effects. Another is predicting the effects of drugs and gene knock-outs on certain cells. In this project, we were able to translate these problems into a representation learning setting and then focus on computational methods to enforce the all-important constraint: that the latent representation z be independent of the condition c.
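To give a flavour of how such a constraint can be penalized, here is a toy numpy sketch of a contrastive mixture-of-posteriors style penalty: for each cell, it compares the density of its own (here diagonal Gaussian) posterior against a mixture of the posteriors of cells in *other* conditions, so the penalty is small when representations are indistinguishable across conditions. This is an illustrative sketch, not the paper's exact training objective; the function names (`comp_penalty`, `log_gaussian`) are mine, and in the real method the posterior parameters come from a VAE encoder.

```python
import numpy as np

def log_gaussian(z, mu, sigma):
    """Log-density of a diagonal Gaussian N(mu, diag(sigma^2)) at point z."""
    return -0.5 * np.sum(((z - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

def comp_penalty(z, mu, sigma, c):
    """Contrastive estimate of the average divergence between each posterior
    q(z_i | x_i) and the mixture of posteriors from cells in other conditions.

    z     : (n, d) array, one sample z_i from each posterior
    mu    : (n, d) array of posterior means
    sigma : (n, d) array of posterior standard deviations
    c     : (n,) array of condition labels
    """
    n = len(z)
    total = 0.0
    for i in range(n):
        neg = [j for j in range(n) if c[j] != c[i]]
        log_own = log_gaussian(z[i], mu[i], sigma[i])
        log_terms = np.array([log_gaussian(z[i], mu[j], sigma[j]) for j in neg])
        # Stable log of the mixture density: logsumexp minus log(mixture size)
        m = log_terms.max()
        log_mix = m + np.log(np.exp(log_terms - m).sum()) - np.log(len(neg))
        total += log_own - log_mix
    return total / n
```

When posteriors are identical across conditions the penalty is exactly zero, and it grows as the conditions separate in latent space, which is the behaviour a regularizer enforcing z independent of c should have.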
This is the long presentation that I delivered with Desi Ivanova at ICML 2021 for the paper Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design. It offers a 25-minute introduction to the DAD method of training a design policy for fast, adaptive experimental design.
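The central object in DAD is a sequential contrastive bound on the total information gained by a policy $\pi$ over the whole experiment. Writing $h_T = \big( (d_1, y_1), \dots, (d_T, y_T) \big)$ for the history of designs and outcomes generated by rolling out $\pi$, the bound (as I recall it, up to notation) takes the form

```latex
\mathcal{L}_T(\pi, L) \;=\; \mathbb{E}\!\left[ \log \frac{p(h_T \mid \theta_0, \pi)}
{\frac{1}{L+1} \sum_{\ell=0}^{L} p(h_T \mid \theta_\ell, \pi)} \right],
```

where $\theta_0 \sim p(\theta)$ is the parameter that generated the simulated history and $\theta_{1:L} \sim p(\theta)$ are independent contrastive samples. Maximizing this lower bound by stochastic gradient ascent trains the policy network once, up front, so that at deployment time each design is a single forward pass rather than an inner optimization.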
This is the talk, based on our paper A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments, that I delivered as an invited speaker at the Minisymposium on Model-Based Optimal Experimental Design at SIAM CSE 21. In the talk, I cover the basics of experimental design with Expected Information Gain (EIG), and then turn to the question of how to efficiently optimize this quantity over a large continuous design space without resorting to inefficient methods like Bayes Opt.
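For reference, the quantity being optimized is the Expected Information Gain of a design $d$ under the model $p(\theta)\,p(y \mid \theta, d)$:

```latex
\mathrm{EIG}(d)
= \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}
\left[ \log p(y \mid \theta, d) - \log p(y \mid d) \right]
= \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}
\left[ \log \frac{p(\theta \mid y, d)}{p(\theta)} \right],
```

i.e. the expected reduction in entropy about $\theta$ from prior to posterior. The difficulty the talk addresses is that the marginal $p(y \mid d)$ (equivalently the posterior $p(\theta \mid y, d)$) is intractable, so $\mathrm{EIG}(d)$ cannot simply be evaluated and handed to an off-the-shelf optimizer.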
This is the talk that I delivered at AISTATS 2020 for the paper A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments. The talk offers a short primer on Bayesian Experimental Design, before launching into the key problem of optimizing Expected Information Gain using stochastic gradient methods.
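Sketching the idea: rather than estimating EIG accurately at each candidate design and optimizing on top, the unified approach picks a variational lower bound $\mathcal{L}(d, \phi) \le \mathrm{EIG}(d)$ and ascends it in the design $d$ and the variational parameters $\phi$ *simultaneously*,

```latex
(d_{t+1}, \phi_{t+1}) \;=\; (d_t, \phi_t) + \eta_t \,\widehat{\nabla}_{d, \phi}\, \mathcal{L}(d_t, \phi_t),
```

with cheap Monte Carlo gradient estimates $\widehat{\nabla}$ at each step. This turns experimental design into a single stochastic optimization over a continuous design space, rather than a nested estimate-then-optimize loop.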
This is the spotlight talk that I delivered at NeurIPS 2019 for the paper Variational Bayesian Optimal Experimental Design (my talk starts at 14:18). It offers a 5-minute overview of Bayesian Experimental Design and variational methods for estimating Expected Information Gain.
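One of the simplest variational estimators covered in the talk is the posterior (Barber–Agakov) lower bound, which replaces the intractable posterior with a learned approximation $q_\phi(\theta \mid y, d)$:

```latex
\mathrm{EIG}(d) \;\ge\; \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}
\left[ \log q_\phi(\theta \mid y, d) \right] + H\!\left[ p(\theta) \right],
```

where $H[p(\theta)]$ is the prior entropy. The bound is tight exactly when $q_\phi$ recovers the true posterior, so maximizing it over $\phi$ both tightens the estimate and yields an amortized posterior approximation as a by-product.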