Research Theme

Responsible & Trustworthy AI

We build theory, methods, and tools to keep machine learning dependable under uncertainty, aligned with human values, and accountable in the field.

Trustworthy AI, spelled out

Trustworthy AI is everything that happens after a model leaves the lab: safety arguments, monitoring dashboards, alignment safeguards, and the paperwork that lets regulators and users sleep at night. We design the guardrails that keep systems reliable under stress.

Our research tackles problems such as:

  • What evidence convinces auditors, clinicians, or citizens that a model can be trusted?
  • How do we detect and mitigate harmful behaviour before it reaches production users?
  • Which governance workflows help organisations update models without losing accountability?

Quick facts

  • Focus areas: Safety assurance, robustness, human alignment, governance.
  • Tooling: Stress-test harnesses, alignment playbooks, evidence reporting pipelines, monitoring dashboards.
  • Partners: Automotive, climate tech, healthcare providers, legal and compliance teams.

Evidence Pipelines

We blend statistical stress tests with neuro-symbolic verification to surface residual risk, document failure modes, and translate them into compliance-ready artefacts.
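
To give a concrete flavour of the statistical half of such a pipeline, here is a minimal sketch of a stress-test harness in Python. The `model`, `perturb`, `inputs`, and `labels` names are placeholders for illustration, not our actual tooling, and the i.i.d. assumption behind the bound is a simplification. It estimates an empirical failure rate under random input perturbations and attaches a one-sided Clopper-Pearson upper confidence bound, the kind of figure an evidence report can quote instead of a bare point estimate.

```python
import numpy as np
from scipy import stats

def stress_test(model, perturb, inputs, labels, n_trials=100, alpha=0.05):
    """Estimate a failure rate under random input perturbations.

    Treats each perturbed prediction as an independent Bernoulli trial
    (a simplification) and returns the empirical failure rate together
    with a one-sided Clopper-Pearson upper confidence bound at level
    alpha. `model(x)` returns a predicted label; `perturb(x, rng)`
    returns a perturbed copy of x. Both are hypothetical placeholders.
    """
    rng = np.random.default_rng(0)
    failures = total = 0
    for x, y in zip(inputs, labels):
        for _ in range(n_trials):
            total += 1
            if model(perturb(x, rng)) != y:
                failures += 1
    rate = failures / total
    # Exact upper bound on the true failure probability; the degenerate
    # all-failures case is handled separately to avoid an invalid Beta.
    if failures == total:
        upper = 1.0
    else:
        upper = stats.beta.ppf(1.0 - alpha, failures + 1, total - failures)
    return rate, upper
```

The exact bound is the point of the exercise: a compliance artefact needs a defensible ceiling on the failure probability, not just an observed average.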

Operational Safeguards

Interactive dashboards combine drift detection, behaviour tracing, and counterfactual simulation so teams can intervene before small deviations escalate.
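
As an illustration of the drift-detection building block alone, here is a minimal sketch assuming two hypothetical feature matrices, `reference` (data the model was validated on) and `live` (recent production data). It runs a two-sample Kolmogorov-Smirnov test per feature with a Bonferroni correction and returns the indices a dashboard could alert on; behaviour tracing and counterfactual simulation sit on top of this first step.

```python
import numpy as np
from scipy import stats

def detect_drift(reference, live, alpha=0.01):
    """Flag features whose live distribution has drifted from the reference.

    Applies a two-sample Kolmogorov-Smirnov test to each column of the
    two arrays, with a Bonferroni correction across features, and
    returns the indices of the features that drifted.
    """
    n_features = reference.shape[1]
    threshold = alpha / n_features  # Bonferroni correction
    drifted = []
    for j in range(n_features):
        result = stats.ks_2samp(reference[:, j], live[:, j])
        if result.pvalue < threshold:
            drifted.append(j)
    return drifted

# Illustrative usage: a mean shift in one feature should be flagged.
rng = np.random.default_rng(1)
reference = rng.normal(size=(5000, 3))
live = rng.normal(size=(1000, 3))
live[:, 2] += 0.5  # simulate drift in the third feature
print(detect_drift(reference, live))  # -> [2]
```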

Ethics in the Loop

Co-design with legal, clinical, and civic partners ensures model updates respect societal values, preserve recourse, and keep human oversight meaningful.

Key Publications

Selected recent work on safety, alignment, and governance. The grid updates automatically as we publish more.