Towards trustworthy industrial AI systems

By Mark Irvine

DNV recently released a position paper on what needs to be considered in developing verification and assurance processes for AI systems in industrial contexts. Below is a summary of its main points.

The trustworthiness of an AI system is not very different from that of a leader, expert or organization to which we delegate the authority to make decisions or provide recommendations toward a particular goal. AI systems should therefore be subject to the same quality assurance methods and principles we apply to any other technology.

The rigour with which we evaluate an expert’s recommendation depends on the importance of that recommendation and its context. This means that the rigour and effort required to build trust in the deployment of a specific AI system will depend on the severity and probability of the potential consequences.
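
As an illustration of this risk-based proportionality, the sketch below shows a toy risk matrix that maps the severity and probability of potential consequences to a level of assurance rigour. It is our own hypothetical example, not a method from the position paper; the thresholds and category names are assumptions.

```python
# Toy illustration only: a hypothetical risk matrix, not a method from the DNV paper.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def assurance_rigour(severity: Level, probability: Level) -> str:
    """Map severity and probability of potential consequences to a level of assurance effort."""
    score = int(severity) * int(probability)
    if score >= 6:
        return "independent, in-depth verification"
    if score >= 3:
        return "structured review with documented evidence"
    return "standard quality-assurance checks"

# Example: a safety-critical function with high-severity, medium-probability consequences.
print(assurance_rigour(Level.HIGH, Level.MEDIUM))  # -> independent, in-depth verification
```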

The deployment of AI systems in society introduces complexity and creates digital risks. While complexity in traditional mechanical systems is naturally limited by physical constraints and the laws of nature, complexity in integrated, software-driven systems – which do not necessarily follow well-established engineering principles – can easily exceed human comprehension. This increased complexity, driven by digitalization, is deepened by the integration of AI technologies, which introduce new risks and open substantive trust gaps.

As a contribution to the global debate on trust in AI, we put forward a characterization of trustworthy industrial AI systems, with a focus on the integration of AI into existing cyber-physical systems and other digital assets. Further, we discuss how AI-enabled digital assets require assurance of development and deployment processes as well as product assurance of the digital asset itself.

As recommended by the European Commission, we define AI as “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” 

We define trustworthy AI systems as those that display the following characteristics: 

  • Legitimacy 
  • Ability to perform and capacity to verify delegated tasks 
  • Appropriate human-machine interdependency 
  • Clearly defined purpose 
  • Transparent impact on relevant stakeholders 

Legitimacy: First and foremost, an AI system should be legitimate. Its legitimacy depends on issues related to algorithm and model training, data governance, the suitability of the chosen AI algorithm for the problem to be solved, and the context of this problem. It is essential to establish that the AI system's residual risk is acceptable to all stakeholders, regardless of the system's benefits, such as cost-efficiency. Ultimately, the legitimacy of deploying AI methods and tools will depend both on the system being fit-for-purpose and on risk management being placed at the core. 

Ability to perform and capacity to verify delegated tasks: Similar to leaders, experts or organizations, and following established quality assurance and performance principles, it is necessary to establish that AI systems are competent and have the ability to do the work delegated to them. This entails ensuring that their design, deployment and operational performance are of sufficient quality and robustness. Even though many AI algorithms are of a black box nature, transparency can be improved through explainability. Lastly, the combination of all these criteria needs to generate appropriate evidence for the eventual verification of trustworthiness. 
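
One common way to improve the transparency of a black-box model is to examine which inputs its predictions actually rely on, for example with permutation feature importance. The sketch below is our own illustration (assuming scikit-learn and synthetic data), not a DNV service or a method prescribed by the position paper.

```python
# Illustrative sketch only, assuming scikit-learn; synthetic data stands in for
# tabular inputs such as sensor readings from an industrial asset (hypothetical).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box model: a random forest classifier.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much held-out accuracy drops when each feature is shuffled,
# giving a rough, model-agnostic view of which inputs drive the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```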

Appropriate human-machine interdependency: Human-to-machine and machine-to-machine interactions and interdependencies deserve close scrutiny. A rapidly increasing number of functions in many cyber-physical systems (from cars, ships and airplanes to infrastructure such as energy systems and pipelines) are already being elevated to higher levels of autonomy. It is critical to map and understand the agents and roles involved in the development, deployment, use and maintenance of an AI system, as well as the external stakeholders affected by the AI system in operation. Transparent and understandable communication between all these types of agents and stakeholders, including machine-to-machine interaction, is key for ensuring the trustworthiness of AI. 

Clearly defined purpose: The motive and purpose of deploying an industrial AI system need to be disclosed in order to ensure trustworthiness. This disclosure includes revealing the potential benefits and risks for all stakeholders. The motive and purpose of the AI system also need to be assessed in relation to corporate accountability processes. 

Transparent impact on relevant stakeholders: The trustworthiness of AI systems must also be judged by looking at the impact they may have. A large share of the ethical considerations relates to the possible impact of AI systems on people’s rights to privacy, non-discrimination, unbiased decision-making, etc. In industrial safety-critical application contexts, the deployment of AI systems could be subject to the same impact assessment methods that are common for other technologies. Establishing the impact of a particular AI system presupposes ascribing responsibility to different agents and distinguishing between intentional and unintentional actions. Lastly, the impact of an AI system would have to be monitored continuously and throughout the system's lifecycle. 

We propose these characteristics of Trustworthy Industrial AI Systems as best practices emerging from our ongoing work on the assurance of digital assets. 

At DNV, we are taking steps to provide assurance of digital assets, including those that incorporate AI systems. We are engaged in multiple collaborations and partnerships with academia and industry, both building new knowledge and learning from use cases. We are also already providing the market with services focused on data and machine learning assurance.

