Two applications of the variational form of Bayes theorem

Håvard Rue (KAUST)

Thursday 7th July, 2022 15:00-16:00 Maths 311B

Abstract

In this talk I will discuss the variational form of Bayes theorem due to Zellner (1988). This result provides a rationale for the variational (approximate) inference scheme, a rationale that is not always clear in modern presentations. I will discuss two applications of this result. First, I will show how to do a low-rank mean correction within the INLA framework (with amazing results), which is essential for the next generation of the R-INLA software currently in development. I will also discuss ongoing work on correcting marginal variances using the same idea. In the second part, I will introduce the Bayesian learning rule, which unifies many machine-learning algorithms from fields such as optimization, deep learning, and graphical models. These include classical algorithms such as ridge regression, Newton's method, and the Kalman filter, as well as modern deep-learning algorithms such as stochastic gradient descent, RMSprop, and Dropout.
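In brief, Zellner's result can be sketched as follows (a minimal sketch in standard notation, with y the data, θ the parameters, and q ranging over all probability densities; the notation is not taken from the talk itself): the exact posterior is the unique minimizer of a free-energy functional,

\[
  p(\theta \mid y)
  \;=\;
  \arg\min_{q}\;
  \Bigl\{\,
    \mathrm{KL}\bigl(q(\theta)\,\|\,p(\theta)\bigr)
    \;-\;
    \mathbb{E}_{q}\bigl[\log p(y \mid \theta)\bigr]
  \,\Bigr\}.
\]

Restricting the minimization to a tractable family of densities q then gives exactly the variational (approximate) inference scheme, which is the rationale referred to above.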

The first part of the talk is based on recent and ongoing research at KAUST (arxiv.org/abs/2111.12945), while the second part is based on arxiv.org/abs/2107.04562, joint work with Dr. Mohammad Emtiyaz Khan, RIKEN Center for Advanced Intelligence Project, Tokyo.
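In outline, the Bayesian learning rule of the second paper takes a candidate distribution q_λ(θ) in an exponential family with natural parameter λ and updates λ by natural-gradient descent on the variational objective. A minimal sketch, with ℓ̄ the loss, H the entropy, ρ_t a learning rate, and the tilde denoting the natural gradient:

\[
  \lambda_{t+1}
  \;=\;
  \lambda_t
  \;-\;
  \rho_t\,
  \widetilde{\nabla}_{\lambda}
  \Bigl[\,
    \mathbb{E}_{q_{\lambda}}\bigl[\bar{\ell}(\theta)\bigr]
    \;-\;
    \mathcal{H}(q_{\lambda})
  \,\Bigr]
  \Big|_{\lambda = \lambda_t}.
\]

Different choices of the family q_λ and of approximations to the expectation recover the classical and deep-learning algorithms listed in the abstract.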
