Generative Quantile Regression with Variability Penalty

Ray Bai (University of South Carolina)

Friday, 9 December 2022, 15:00–16:00 (Zoom)

Abstract

Quantile regression and conditional density estimation can often reveal structure that is missed by mean regression, such as heterogeneous subpopulations (i.e., multimodality) and skewness. In this talk, we introduce a deep learning generative model for simultaneous quantile regression called Penalized Generative Quantile Regression (PGQR). Our approach simultaneously generates samples from a large number of random quantile levels, allowing us to infer the conditional density of a response variable given a set of covariates. Our method also employs a novel variability penalty to avoid the common problem of vanishing variance in deep generative models. Furthermore, we introduce a new family of neural networks called partial monotonic neural networks (PMNNs) to circumvent the problem of crossing quantile planes. A major benefit of PGQR is that it can be fit with a single optimization, bypassing the need to repeatedly train the model at multiple quantile levels or to use computationally expensive cross-validation to tune the penalty parameter. We illustrate the efficacy of PGQR through extensive simulation studies and analyses of real datasets.
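
For readers unfamiliar with simultaneous quantile regression, the sketch below illustrates the general idea only: a single network takes both the covariates and a randomly drawn quantile level as input and is trained with the check (pinball) loss, with an extra term that discourages degenerate, low-variability fits. The QuantileNet class, the hinge-style variability term, and all hyperparameters are illustrative assumptions for exposition, not the PGQR architecture or penalty presented in the talk.

```python
# Minimal sketch of simultaneous quantile regression with random quantile
# levels, loosely inspired by the ideas in the abstract. The architecture
# and the variability term below are illustrative placeholders, NOT the
# paper's PGQR model or its penalty.
import torch
import torch.nn as nn


class QuantileNet(nn.Module):
    """One network G(x, tau) representing all conditional quantiles."""

    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, tau):
        # Concatenate covariates with the quantile level so a single
        # optimization covers many quantile levels at once.
        return self.net(torch.cat([x, tau], dim=1)).squeeze(-1)


def pinball_loss(y, y_hat, tau):
    # Standard check (pinball) loss for quantile regression.
    t = tau.squeeze(-1)
    u = y - y_hat
    return torch.mean(torch.maximum(t * u, (t - 1) * u))


def train_step(model, opt, x, y, lam=0.1):
    # Draw a random quantile level per observation.
    tau = torch.rand(x.size(0), 1)
    y_hat = model(x, tau)
    # Toy variability term: a hinge that activates when the estimated
    # 0.1-0.9 quantile gap is small, discouraging near-constant-in-tau
    # (vanishing-variance) solutions. Placeholder only.
    spread = model(x, torch.full_like(tau, 0.9)) - model(x, torch.full_like(tau, 0.1))
    loss = pinball_loss(y, y_hat, tau) + lam * torch.mean(torch.relu(1.0 - spread))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Example usage on synthetic data.
x = torch.randn(256, 3)
y = x[:, 0] + 0.5 * torch.randn(256)
model = QuantileNet(x_dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    train_step(model, opt, x, y)
```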
