Assurance of AI-Enabled Systems

This position paper presents DNV’s perspective on how to build justified confidence in AI-enabled systems operating in high-risk, real-world environments. It sets out the new risks and challenges that AI introduces, together with potential solutions for assuring AI-enabled systems. We posit that assurance methods are available to manage AI risks, but that they must be matched with operational approaches, such as MLOps, to be effective for AI-enabled systems. We point to a new paradigm: assurance as a continuous, adaptive, system-wide, and evidence-based process.


How can we ensure that AI-enabled systems are safe, reliable, and trustworthy when they operate under uncertainty, evolve over time, and interact with people and complex environments?

In this position paper, DNV explores how artificial intelligence fundamentally changes the risk landscape and challenges conventional assurance practices. Industrial AI is often embedded in systems that affect safety, security, fairness, and societal trust, from autonomous transport and energy systems to healthcare and critical infrastructure.

The paper presents a novel perspective on AI risks and the assurance of AI‑enabled systems, matching existing assurance methods and practices with operational approaches for AI. It explains how risks emerge not only from AI components themselves, but from interactions across technical, operational, contextual, and governance levels. This sets the foundation for a new paradigm: assurance as a continuous, adaptive, and evidence‑based process.

The paper outlines:

  • Why AI introduces systemic and dynamic risks that cannot be managed through static compliance or one‑time assessments.
  • How uncertainty, learning, frequent updates, and human–AI interaction challenge traditional safety and risk management methods.
  • A structured framework for achieving trustworthy AI, covering the AI component, the AI‑enabled system, the operational context, and governance.
  • How modern assurance practices – including assurance cases, uncertainty‑aware risk modelling, continuous monitoring, and MLOps – can be combined into an integrated lifecycle approach.
  • The implications of evolving regulation, such as the EU AI Act, and why proactive assurance is a strategic enabler for innovation and trust.

Download the position paper to learn how continuous, context‑aware assurance can provide justified confidence in AI‑enabled systems, enabling organizations to deploy AI responsibly, safely, and at scale across industries.