I am a final-year PhD student in Statistical Machine Learning at the University of Oxford, supervised by Yee Whye Teh and Tom Rainforth in the Computational Stats and Machine Learning Group in the Department of Statistics. Before starting my PhD, I studied mathematics at Cambridge, where my Director of Studies was Julia Gog.

I have a broad range of interests in statistical machine learning. A large part of my work in Oxford has been on Bayesian experimental design: how do we design experiments that will be most informative about the process being investigated? One approach is to optimize the Expected Information Gain (EIG), which can be seen as a mutual information, over the space of possible designs. In my work, I have developed variational methods to estimate the EIG, stochastic gradient methods to optimize over designs, and unbiased gradient estimators of the EIG. In more recent work, we have studied policies that can choose a sequence of designs automatically. This talk offers a 30-minute introduction to experimental design and my research in this area. To use Bayesian experimental design in practice, we have developed a range of tools in the deep probabilistic programming language Pyro: our aim is to allow automatic experimental design for any Pyro model.
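To make the EIG concrete, here is a minimal sketch (in NumPy rather than Pyro, and not the estimators from my papers) of the baseline nested Monte Carlo estimator, applied to a toy linear-Gaussian model where the EIG is available in closed form as a check. The model, function names, and sample sizes are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(y, theta, d, sigma=1.0):
    """log N(y; d*theta, sigma^2) for the toy model y = d*theta + noise."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - d * theta) ** 2 / (2 * sigma**2)

def eig_nmc(d, n_outer=2000, n_inner=2000, sigma=1.0):
    """Nested Monte Carlo estimate of EIG(d) = E_y[log p(y|theta,d) - log p(y|d)]."""
    theta = rng.standard_normal(n_outer)                  # theta_n ~ p(theta) = N(0, 1)
    y = d * theta + sigma * rng.standard_normal(n_outer)  # y_n ~ p(y | theta_n, d)
    # Inner Monte Carlo over fresh prior samples approximates the marginal p(y_n | d).
    theta_inner = rng.standard_normal((n_outer, n_inner))
    ll = log_lik(y[:, None], theta_inner, d, sigma)       # (n_outer, n_inner)
    m = ll.max(axis=1, keepdims=True)
    log_marginal = (m + np.log(np.mean(np.exp(ll - m), axis=1, keepdims=True))).ravel()
    return np.mean(log_lik(y, theta, d, sigma) - log_marginal)

# For theta ~ N(0,1) and y | theta ~ N(d*theta, sigma^2), the EIG has the closed
# form 0.5 * log(1 + d^2 / sigma^2), so we can sanity-check the estimator.
d = 1.0
print(eig_nmc(d), 0.5 * np.log(1 + d**2))
```

The nested estimator is consistent but expensive and converges slowly (its bias decays only as O(1/n_inner)), which is part of the motivation for the variational estimators and stochastic gradient methods mentioned above.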

Since the EIG is a mutual information, I am also interested in the intersection between information theory and machine learning. This led me to study contrastive representation learning and the role of invariance in these methods, as well as to reproduce SimCLR in PyTorch.
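At the heart of SimCLR sits the NT-Xent (InfoNCE) objective, which is itself a variational bound on a mutual information. A minimal NumPy sketch of the loss (not the full SimCLR training loop; `nt_xent` and its arguments are my own naming):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), as used in SimCLR.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs;
    row i of z1 and row i of z2 form a positive pair.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit norm -> cosine similarity
    sim = z @ z.T / temperature                       # (2N, 2N) logits
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarities
    # Anchor i's positive lives at index i+N (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    m = sim.max(axis=1, keepdims=True)
    log_denom = (m + np.log(np.sum(np.exp(sim - m), axis=1, keepdims=True))).ravel()
    log_prob = sim[np.arange(2 * n), pos] - log_denom
    return -log_prob.mean()

# Perfectly aligned, mutually orthogonal pairs give a near-zero loss.
z = np.eye(4)
print(nt_xent(z, z, temperature=0.1))
```

Each anchor must classify its positive among all other embeddings in the batch, so the loss falls as positives align and negatives spread apart; the temperature controls how sharply hard negatives are weighted.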

Within the OxCSML group, I have been fortunate enough to be introduced to a wide range of new ideas in machine learning. We run a reading group on the role of symmetry and equivariance in deep learning. We have also read the latest research in reinforcement learning, deep generative models and meta-learning, among other topics.

Future plans

I am currently searching for jobs! I am on the lookout for opportunities to use machine learning, make a contribution, learn, create, and share what I already know. Having spent some time working on Bayesian experimental design, I know there are a number of exciting directions that research in the field could go, such as natural language and language models, the connection to reinforcement learning, implicit models, and improving our theoretical understanding, as well as potential applications in science, education, politics and biotech, to name a few. I am also keen to use deep learning and probabilistic modelling more broadly to tackle interesting problems.

Recent work