Research Theme

Generative & Probabilistic Models

We design expressive generative models with tractable inference, calibrated sampling, and causal awareness for complex data and simulations.


Generative models, demystified

Generative & probabilistic models learn full probability distributions over their data. That means they can synthesise molecules, climate scenarios, or legal cases, and also tell us how confident they are, what they have memorised, and which causal story might have produced the data.
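A toy, stdlib-only sketch of the point above (the data and model here are illustrative, not from our work): once a model has learned a distribution, it can score how plausible any input is, not just generate new samples.

```python
import math
import statistics

# Toy "training data" and a one-dimensional Gaussian fit standing in
# for a learned generative model.
data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
mu = statistics.fmean(data)
sigma = statistics.stdev(data)

def log_density(x):
    """Log-density of x under the fitted Gaussian model."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

in_dist = log_density(5.0)   # typical point: high log-density
out_dist = log_density(9.0)  # far from the data: much lower log-density
```

The same likelihood score that ranks samples here is what lets a full generative model flag inputs it has never seen and quantify its own confidence.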

Our research tackles questions such as:

  • How can we keep probabilistic inference exact while scaling to rich, structured data?
  • What guardrails ensure diffusion and large generative models stay safe, fair, and controllable?
  • How do causal representations and simulators let us reason about interventions, not just correlations?

Quick facts

  • Model families: Probabilistic circuits, diffusion & flow models, causal simulators, probabilistic programs.
  • Data we tackle: Multimodal sensor data, molecules, video, socio-technical systems.
  • Shared artefacts: Open-source libraries, evaluation harnesses, safety benchmarks, generative design toolkits.

Tractable probabilistic circuits

We craft circuit-based representations that guarantee exact, tractable inference, enabling certified uncertainty estimates for complex, structured data.
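A minimal sketch of the idea, assuming nothing beyond the standard library (this is an illustrative hand-built circuit, not our library's API): a probabilistic circuit composes Bernoulli leaves with product (factorisation) and weighted-sum (mixture) nodes, and a single bottom-up pass yields exact joints and marginals.

```python
def bernoulli_leaf(p):
    """Leaf distribution; passing value=None marginalises the variable (sums to 1)."""
    def ev(value):
        if value is None:
            return 1.0
        return p if value == 1 else 1.0 - p
    return ev

def product(children):
    """Product node: independent factors, one child per variable."""
    def ev(assignment):
        out = 1.0
        for var, child in children.items():
            out *= child(assignment.get(var))
        return out
    return ev

def mixture(weights, components):
    """Sum node: convex mixture of component circuits."""
    def ev(assignment):
        return sum(w * c(assignment) for w, c in zip(weights, components))
    return ev

# Circuit over binary X1, X2:
# 0.4 * Bern(X1; 0.9) Bern(X2; 0.2) + 0.6 * Bern(X1; 0.3) Bern(X2; 0.8)
circuit = mixture(
    [0.4, 0.6],
    [product({"X1": bernoulli_leaf(0.9), "X2": bernoulli_leaf(0.2)}),
     product({"X1": bernoulli_leaf(0.3), "X2": bernoulli_leaf(0.8)})],
)

joint = circuit({"X1": 1, "X2": 0})  # exact joint P(X1=1, X2=0)
marginal = circuit({"X1": 1})        # exact marginal P(X1=1), X2 summed out
```

The same evaluation that computes a joint also computes any marginal exactly, in one linear pass over the circuit; that tractability-by-construction is what larger, learned circuits preserve at scale.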

Responsible diffusion & synthesis

Memorisation audits, fairness constraints, and alignment strategies keep high-capacity generators creative without violating trust.
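As a first-pass illustration of a memorisation audit (a simple sketch under assumed toy data, not our benchmark suite): flag generated samples whose nearest training example is suspiciously close, a common check for verbatim copying by a high-capacity generator.

```python
import numpy as np

def memorisation_flags(train, generated, threshold):
    """Return one boolean per generated sample: True = near-duplicate of training data."""
    train = np.asarray(train, dtype=float)
    generated = np.asarray(generated, dtype=float)
    # Pairwise Euclidean distances, shape (n_generated, n_train).
    dists = np.linalg.norm(generated[:, None, :] - train[None, :, :], axis=-1)
    return dists.min(axis=1) < threshold

train = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
generated = [[1.0, 1.001],  # near-copy of a training point
             [5.0, 5.0]]    # genuinely novel sample
flags = memorisation_flags(train, generated, threshold=0.05)
```

Real audits work in learned feature spaces and use calibrated thresholds, but the logic is the same: distance to the training set separates novel synthesis from regurgitation.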

Causal generative reasoning

We connect generative models with causal discovery and intervention planning so stakeholders can ask “what if?” and get reliable answers.
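A hypothetical three-variable structural causal model makes the "what if?" concrete (the variables and probabilities here are invented for illustration): intervening with do(Sprinkler=1) cuts the Rain → Sprinkler edge, which is exactly what distinguishes an intervention from merely conditioning on an observation.

```python
import random

def sample(do_sprinkler=None, rng=random):
    """One draw from a toy SCM: Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass."""
    rain = rng.random() < 0.3
    if do_sprinkler is None:
        # Observational mechanism: sprinkler responds to rain.
        sprinkler = rng.random() < (0.1 if rain else 0.6)
    else:
        # Intervention: overwrite the mechanism, ignore the parent Rain.
        sprinkler = do_sprinkler
    wet = rain or (sprinkler and rng.random() < 0.9)
    return rain, sprinkler, wet

# Monte-Carlo estimate of P(WetGrass = 1 | do(Sprinkler = 1)).
rng = random.Random(0)
n = 10_000
p_wet_do = sum(sample(do_sprinkler=True, rng=rng)[2] for _ in range(n)) / n
```

Analytically the interventional probability is 0.3 + 0.7 × 0.9 = 0.93; a generative model equipped with the causal graph can answer such queries by simulation, where a purely correlational model cannot.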

Key Publications

Recent conference and journal work on diffusion safety, probabilistic circuits, and causal reasoning for generative models.