Probabilistic graphical models (PGMs) provide a framework for reasoning based on available information. This model-based approach uses graphs to express conditional dependence between random variables. PGMs can be used to describe, represent, and manipulate interpretable models. Further, inference in the model allows one to reach conclusions and make informed decisions based on observed information. The models can also be learned automatically from data, so they can be constructed even in large, complex scenarios where hand-crafting by experts fails. Many popular models are special cases of PGMs, e.g. hidden Markov models (HMMs), conditional random fields (CRFs), Markov networks, and Bayesian networks.
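To make the idea of factorizing a joint distribution over a graph concrete, here is a minimal sketch of the classic "sprinkler" Bayesian network (Rain → Sprinkler, Rain → WetGrass, Sprinkler → WetGrass) with exact inference by enumeration. The graph structure is standard textbook material, but the probability tables below are made up for illustration:

```python
from itertools import product

# Conditional probability tables (illustrative values, not from any dataset).
P_rain = {True: 0.2, False: 0.8}                     # P(Rain)
P_sprinkler = {True: {True: 0.01, False: 0.99},      # P(Sprinkler | Rain=True)
               False: {True: 0.4, False: 0.6}}       # P(Sprinkler | Rain=False)
P_wet = {(True, True): 0.99, (True, False): 0.8,     # P(Wet=True | Rain, Sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    """Joint probability from the factorization the graph implies:
    P(R, S, W) = P(R) * P(S | R) * P(W | R, S)."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[r][s] * (pw if w else 1.0 - pw)

def posterior_rain_given_wet():
    """Exact inference by enumeration: P(Rain=True | Wet=True)."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den
```

With these (assumed) tables, observing wet grass raises the probability of rain well above its prior of 0.2, which illustrates how inference propagates evidence through the graph.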
In this seminar we want to dive into various topics of PGMs. Each student will focus on one particular topic and present the theory behind it. In a second, practical part, two teams of two students each will implement one concrete model, one learning algorithm, and one inference algorithm.
- Introduction II (Bayesian Networks, Undirected Models)
- Bayesian Network Learning
- Learning Undirected Models
- Exact Inference
- Approximate Inference
- Hidden Markov Models
- Conditional Random Fields