Insights, resources, and advice on trustworthiness and compliance for AI designers, developers, and organizations deploying or using AI systems.
As AI ushers in a new era of productivity and capabilities, it also poses new risks that must be managed in new ways. AI relies on data, which itself can change, leading to results that may be difficult to understand. AI is also ‘socio-technical’: it is affected by a complex and dynamic interplay of human behavioural and technical factors. DNV can help you develop the new risk approach that AI needs – both to ensure compliance with emerging regulations and to manage risks dynamically – so you can access the benefits of AI more rapidly, fully, and confidently.
Recommended practices
Our resources at your disposal include our Recommended Practice (RP) on AI-enabled Systems, which addresses quality assurance of AI-enabled systems and compliance with the upcoming EU AI Act. Other recommended practices developed by DNV cover the building blocks of AI systems – data quality, sensors, algorithms, simulation models, and digital twins – which we have developed through our extensive work on digitalization projects at asset-heavy and risk-intensive businesses worldwide. Cutting across all these digital building blocks is cyber security, where DNV offers world-leading industrial cyber security services.
DNV's director of AI research on the EU AI Act
Watch the video
The EU AI Act and your company
The use of artificial intelligence (AI) in the European Union will be regulated by the EU AI Act, the world’s first comprehensive AI law. With a broad definition of AI, many businesses will be affected and should start preparing for compliance.
Act now: DNV's Recommended Practice on AI-enabled Systems
The time to act is now! This is the clear message from DNV Digital Assurance Director Frank Børre Pedersen. While the EU AI Act is only expected to be enacted at the end of 2023 and to come into full force two years after that, organizations should start planning now for its consequences.
Building trust in AI
What are the trust gaps to fill as the integration of AI becomes prevalent across industries?
Beyond words?
The possibilities, limitations, and risks of large language models
Adoption of AI in healthcare
What to consider to facilitate safe and widespread adoption of AI-based tools in healthcare
The Ecosystem of Trust (EoT)
An ecosystem approach to identifying stakeholders and their trust needs when deploying autonomous technologies
Creating a secure and trustworthy digital world
Organizations that struggle to demonstrate the trustworthiness of AI to their stakeholders can close the trust gap with DNV's new services and a set of recommended practices for the safe application of industrial AI and other digital solutions.