DNV, the risk management and assurance company, has published a suite of recommended practices (RPs) that will enable companies operating critical devices, assets, and infrastructure to safely apply artificial intelligence (AI).
Høvik, Norway, October 31, 2023 - High-quality AI systems require strong building blocks: data, sensors, algorithms, and digital twins. The nine new or updated RPs cover each of these digital building blocks. DNV's deep knowledge of the maritime, energy, and healthcare sectors, among others, enables it to understand not just how AI works, but how it interacts with other systems in complex infrastructure and assets.
The advent of AI requires a new approach to risk. Whereas conventional mechanical or electrical systems degrade over years, AI-enabled systems can change within milliseconds. Consequently, a conventional certificate provided by DNV, which normally has a three- to five-year validity, could be invalidated with each newly collected data point. This necessitates a different assurance methodology and a thorough understanding of the intricate interplay between the system and its AI components, allowing for a proper assessment of failure modes as well as the potential for real-world performance enhancement.
“Many of our customers are investing significant amounts in AI and AI readiness, but often struggle to demonstrate the trustworthiness of the emerging solutions to key stakeholders. This is the trust gap that DNV seeks to close with these recommended practices, which we are publishing ahead of the imminent European Union Artificial Intelligence Act,” says Remi Eriksen, DNV Group President and CEO.
The European Union Artificial Intelligence Act will be the world’s first AI law. The Act defines AI very broadly, covering essentially any data-driven system deployed in the EU, irrespective of where it is developed or sources its data. Companies can use these recommended practices as a basis for ensuring they meet the relevant requirements.
The RPs bridge the gap between the generically written law and affected stakeholders, providing a practical interpretation of the EU AI Act. They do so through a claims-and-evidence-based approach addressing four key challenges:
- They take a systems approach to capture emergent properties and behaviours arising from how AI components interact with other components, humans, and the environment.
- To account for the dynamic nature of AI-enabled systems and their environments, the assurance happens continuously or at least with the same frequency as the system changes.
- The assurance process includes the mapping of stakeholders and their wide-ranging concerns, to identify competing interests and facilitate compromises.
- To promote collaboration between the actors responsible for different parts of a system, the RPs use modular assurance claims: each actor assures its own parts, and the system as a whole is then assured from the individual assurance modules and their interdependencies.