Deep Gaussian Processes learn probabilistic data representations for supervised learning by cascading multiple Gaussian Processes. While this model family promises flexible predictive distributions, exact inference is not tractable. Approximate inference techniques trade off fidelity to the true posterior distribution against speed of convergence and computational efficiency. We propose a novel Gaussian variational family that retains covariances between latent processes while achieving fast convergence by marginalising out all global latent variables.
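The key distinction from a mean-field approximation can be illustrated with a minimal sketch (not the authors' implementation; all sizes and variable names below are illustrative assumptions): a mean-field Gaussian variational family uses a block-diagonal covariance, so inducing outputs of different GP layers are independent, whereas a structured family parameterises one full Cholesky factor over all layers, allowing nonzero cross-layer covariances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two GP "layers" with M inducing outputs each.
M = 5
D = 2 * M  # total latent dimension across both layers

# Mean-field family: block-diagonal covariance, no cross-layer coupling.
A1 = rng.standard_normal((M, M))
A2 = rng.standard_normal((M, M))
S_mf = np.zeros((D, D))
S_mf[:M, :M] = A1 @ A1.T + np.eye(M)
S_mf[M:, M:] = A2 @ A2.T + np.eye(M)

# Structured family: a single full lower-triangular Cholesky factor
# couples all latent processes (diagonal shifted for positive definiteness).
L = np.tril(rng.standard_normal((D, D))) + np.eye(D) * D
S_st = L @ L.T

mu = np.zeros(D)
z_mf = rng.multivariate_normal(mu, S_mf, size=10_000)
z_st = rng.multivariate_normal(mu, S_st, size=10_000)

# Empirical correlation between a layer-1 and a layer-2 latent:
# ~0 under mean-field, generally nonzero under the structured family.
c_mf = np.corrcoef(z_mf[:, 0], z_mf[:, M])[0, 1]
c_st = np.corrcoef(z_st[:, 0], z_st[:, M])[0, 1]
print(f"mean-field cross-layer corr ~ {c_mf:.3f}, structured ~ {c_st:.3f}")
```

The sketch shows only the shape of the two covariance parameterisations; the paper's contribution additionally concerns how the global latent variables are marginalised out so that training remains fast despite the fuller covariance structure.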
Lindinger, J., Reeb, D., Lippert, C., & Rakitsch, B. (2020). Beyond the Mean-Field: Structured Deep Gaussian Processes Improve the Predictive Uncertainties. Advances in Neural Information Processing Systems (NeurIPS 2020). https://arxiv.org/abs/2005.11110