Generative models, demystified
Generative and probabilistic models learn full probability distributions over their data. This means they can synthesise molecules, climate scenarios, or legal cases, and also tell us how confident they are, what they have memorised, and which causal story might have produced the data.
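The "confidence" claim above can be sketched with a toy density model: fit a distribution to data, then score new points by their log-likelihood. This is a hypothetical, minimal illustration using a one-dimensional Gaussian fitted by maximum likelihood, standing in for the far richer models (diffusion models, causal simulators) the research actually concerns.

```python
import math
import random
import statistics

# Toy "generative model": a 1-D Gaussian fitted by maximum likelihood.
# (Hypothetical stand-in for richer generative models.)
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

mu = statistics.fmean(data)       # estimated mean
sigma = statistics.pstdev(data)   # estimated standard deviation

def log_density(x):
    """Log-probability of x under the fitted Gaussian: the model's 'confidence'."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def sample():
    """Synthesis: draw a new point from the learned distribution."""
    return random.gauss(mu, sigma)

# A typical point scores far higher than an outlier, so the model can
# flag inputs it has little support for.
in_dist = log_density(0.1)
out_of_dist = log_density(8.0)
assert in_dist > out_of_dist
```

The same principle, likelihoods as calibrated confidence, is what lets full-distribution models do more than generate samples: they can also say how plausible a given input is under what they have learned.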
Our research tackles questions such as:
- How can we keep probabilistic inference exact while scaling to rich, structured data?
- What guardrails ensure diffusion and large generative models stay safe, fair, and controllable?
- How do causal representations and simulators let us reason about interventions, not just correlations?