Implementing Trustworthy and Responsible AI for critical infrastructure:
The role of stakeholders in fostering and maintaining trust
Tita Alissa Bach
Artificial Intelligence (AI) systems are transforming the way we live and work. They help organizations, businesses, and governmental agencies run more efficiently, make faster decisions, and even transform everyday services. AI systems are already embedded in high-stakes, safety-critical environments, supporting doctors in diagnosing illnesses [1], assisting caregivers for the elderly [2, 3], deciding who receives certain types of medical treatment [4], powering self-driving vehicles [5], managing hospital cleaning and airport maintenance [6], and helping cities handle waste [4]. Beyond industrial applications, AI systems also shape major life decisions, influencing who gets hired [7] and promoted [8], and who qualifies for loans or life insurance [9].
But while AI systems offer incredible benefits, they also introduce serious risks. When not designed or monitored properly, AI systems can cause real harm.
AI systems’ failures in high-stakes contexts can have devastating real-world consequences for stakeholders, communities, and organizations. In industries such as energy, maritime, or healthcare, errors or biases in AI systems can lead to critical safety risks, operational disruptions, financial losses, or unfair treatment of individuals. Such situations can cause severe hardship, reputational damage, and, in some cases, lasting harm to people’s lives. While these risks are not unique to AI, the speed, scale, and opacity with which AI systems can operate can amplify the impact when something goes wrong. These examples highlight why managing both existing and newly introduced risks associated with AI systems is essential. In our article, Painting the AI Risk Picture, AI risk is understood as the risk of a system with AI minus the risk of the same system without AI [10].
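To make that risk framing concrete, here is a minimal sketch of the difference it describes, assuming purely hypothetical risk scores; the function name and numbers are illustrative and are not taken from the cited article [10].

```python
# Hypothetical illustration of "AI risk = risk of the system with AI minus risk of the system without AI".
# The risk scores below are made-up placeholders, not figures from the cited article.

def ai_risk_delta(risk_with_ai: float, risk_without_ai: float) -> float:
    """Return the additional (or reduced, if negative) risk attributable to the AI component."""
    return risk_with_ai - risk_without_ai

# Example: a navigation function assessed at 0.12 expected incidents/year with AI support
# and 0.20 without it shows a negative delta, i.e. the AI reduces overall system risk.
print(ai_risk_delta(risk_with_ai=0.12, risk_without_ai=0.20))  # roughly -0.08
```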
Trustworthy and Responsible AI
As AI adoption grows, AI systems need to demonstrate that they are trustworthy and responsibly managed. Trustworthy and Responsible AI must serve as the foundation for ethical use, transparency, and strong human-AI collaboration. Trustworthy AI addresses the ‘what’: the technical and social qualities that must be present for an AI system to earn confidence – for example, safety, performance, robustness, transparency, privacy, cybersecurity, and meaningful human oversight. Responsible AI, on the other hand, defines the ‘why’: the ethical and societal purpose that guides AI use towards the public good, grounded in principles such as beneficence, non-maleficence, autonomy, and justice.
Together, these perspectives establish a holistic foundation: trustworthy AI ensures that AI systems function as intended, while responsible AI ensures that their impacts align with societal values and ethical expectations. By integrating both, organizations can build and use AI systems that not only work as intended but also deserve stakeholder trust.
Importantly, in addition to regulatory compliance, Trustworthy and Responsible AI practices can help foster and maintain stakeholder trust, mitigate risks, and ensure AI systems align with societal as well as business values. Implementing Trustworthy and Responsible AI is particularly crucial in critical infrastructure – and even more so when it involves high-impact systems and potentially severe consequences in case of failures. Beyond being a societal necessity, Trustworthy and Responsible AI also offers a competitive advantage by helping businesses earn and maintain stakeholder trust.
Trustworthy and Responsible AI practices are defined as a set of practices to ensure that AI-enabled systems are developed, designed, deployed, used, and managed in a way that upholds the legitimate interests, fundamental rights, and justified trust of stakeholders by ensuring both technical reliability and alignment with ethical and societal values in the settings where the AI systems are (to be) deployed [11].
Understanding stakeholder dynamics to foster trust
Real-world examples across industries and regions show why fostering Trustworthy and Responsible AI is essential. It provides the foundation for meaningful human-AI interaction, enabling collaboration that is both effective and trustworthy. In general, AI systems can only reach their full potential when this collaboration is carefully established and continuously improved.
Trustworthy and Responsible AI should be treated as prerequisite infrastructure, especially when AI systems are integrated into high-risk settings affecting health, safety, or fundamental rights. In such high-risk settings, Trustworthy and Responsible AI must be established before people are expected to engage with AI systems. Without this foundation, there is a risk of harm, bias, or system failure for users and other stakeholders.
To explore what it takes to implement Trustworthy and Responsible AI successfully, DNV conducted a comprehensive systematic literature review of real-world empirical studies [12]. One key finding is the importance of identifying and involving all relevant stakeholders from the outset, and understanding how AI outputs impact them in different ways. In industrial settings, the users of AI systems are not always those who are directly affected by the outcomes. While some AI users, such as passengers in self-driving cars, directly experience the systems’ benefits and risks, this is not always the case. For example, doctors use AI systems to assist in diagnosing patients, but it is the patients who experience the direct impact of the AI’s outcomes. Operations engineers serve as another example: they may use AI systems to optimize production parameters, but it is the downstream workers or equipment that experience the practical consequences of those decisions, such as changes in workload, process stability, or maintenance needs. Identifying and categorizing stakeholders based on how they are impacted by AI outputs helps uncover the dynamics within the industrial AI ecosystem. This, in turn, enables Trustworthy and Responsible AI practices to be more effectively tailored to the specific roles, responsibilities, and risks faced by different stakeholder groups.
Here is a simple illustration using healthcare as an example:
1. Primary stakeholders (e.g. patients): These are the individuals who are directly affected by the outcomes of AI systems, such as patients receiving medical diagnoses. If an AI system makes an error, it can seriously impact the health or lives of patients, but they do not interact directly with the AI system itself. For patients, trusting AI systems and doctors is crucial because they rely on the doctors who use AI systems to make decisions on their behalf. Trustworthy and Responsible AI means that patients’ legitimate interests and fundamental rights are being protected while their doctors use AI systems to make the best clinical decisions.
2. User stakeholders (e.g. doctors): Doctors are the users who interact directly with AI systems. They rely on AI systems to help diagnose patients, interpret medical images, or recommend treatments, and they benefit from the AI systems’ support. Unlike patients, however, they are not directly affected if an AI system makes an error. Their role is essential, as they must critically assess AI outputs and make the final decisions.
3. Non-user stakeholders (e.g. AI developers and regulators): These stakeholders do not interact with AI systems directly, nor do they experience the benefits or risks of AI systems firsthand. However, they play a critical role in creating, monitoring, and regulating the technology to ensure it is safe, effective, and responsible. AI developers build the systems, while regulators ensure that they meet legal and ethical standards.
Let us use the maritime industry as another example of critical infrastructure.
- Primary stakeholders (e.g. passengers): These are the individuals directly affected by AI systems used in maritime operations, such as AI-enabled navigation systems or cargo management tools. If an AI system malfunctions or makes an error – such as miscalculating a ship’s position or selecting an inefficient route – it can lead to accidents, delays, or safety risks. Passengers do not interact directly with the AI systems, but their safety and well-being depend on the systems’ accuracy and reliability.
- User stakeholders (e.g. onshore route planners, remote operators, fleet managers): These are the people who interact directly with marine AI systems in day-to-day operations, using them to navigate vessels, optimize routes, or manage cargo. While they benefit from AI systems’ efficiency and support in decision-making, they are not directly affected if the AI makes an error, unless it impacts the operation of the ship or management of cargo. However, they must closely monitor the AI’s output and use their expertise to make final decisions, especially in complex or high-risk scenarios.
- Non-user stakeholders (e.g. AI developers, classification societies, and regulators): These stakeholders do not interact directly with AI systems, but they play a crucial role in designing, developing, and regulating the technology. AI developers create the algorithms that power navigation systems, predictive maintenance tools, and other AI-driven applications, while classification societies and regulators set safety standards and legal requirements and verify that these systems meet them.
In the maritime industry, each stakeholder group plays a vital role in ensuring AI is used effectively and operates safely. Passengers depend on reliable AI and effective oversight by human operators; operators use it to support better real-time decisions; and developers and regulators ensure that AI systems are safe and compliant with standards.
These examples illustrate an AI ecosystem and its dynamics as each stakeholder group plays a unique but interconnected role. To ensure that AI systems are developed, used, and managed in a way that is not only trustworthy and responsible but also effective, it is crucial that all three groups be involved in the process from development through deployment and post-monitoring.
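As a rough illustration of how this categorization might be captured in practice, the sketch below encodes the three stakeholder groups using the two questions that distinguish them in the examples above: does the stakeholder interact with the AI system, and are they directly affected by its outputs? The class and field names are assumptions for illustration, not an established schema.

```python
from dataclasses import dataclass
from enum import Enum

class StakeholderGroup(Enum):
    PRIMARY = "directly affected by AI outputs, does not interact with the system"
    USER = "interacts with the system, not necessarily directly affected by its outputs"
    NON_USER = "neither interacts nor is directly affected; builds or regulates the system"

@dataclass
class Stakeholder:
    name: str
    interacts_with_ai: bool
    directly_affected: bool

    def group(self) -> StakeholderGroup:
        # Categorize based on the two distinguishing questions used in the examples above.
        if self.directly_affected and not self.interacts_with_ai:
            return StakeholderGroup.PRIMARY
        if self.interacts_with_ai:
            return StakeholderGroup.USER
        return StakeholderGroup.NON_USER

# Illustrative examples drawn from the healthcare and maritime cases above.
for s in [Stakeholder("patient", interacts_with_ai=False, directly_affected=True),
          Stakeholder("doctor", interacts_with_ai=True, directly_affected=False),
          Stakeholder("regulator", interacts_with_ai=False, directly_affected=False)]:
    print(s.name, "->", s.group().name)
```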
Guidance to implement Trustworthy and Responsible AI
Ensuring that AI systems are both trustworthy and responsible means going beyond intention and into action. While principles and frameworks are essential, they become meaningful only when translated into actionable practices across the entire AI lifecycle.
To operationalize these ideals, in addition to our study of empirical real-world cases [12], we interviewed 19 industry experts in Trustworthy and Responsible AI, such as discipline leads and practitioners specializing in Responsible and ethical AI, to explore what practices are being adopted or overlooked. They were in the following industries: consulting (N=1), AI startup (N=3), AI technology (N=2), healthcare (N=3), public administration/government (N=4), logistics and postal services (N=1), testing, inspection, and certification (TIC) (N=1), fishing and aquaculture (N=1), non-profit organizations (N=2), and telecommunications (N=1).
We have synthesized the findings into best practices and defined 11 key guiding questions designed to support the step-by-step implementation of Trustworthy and Responsible AI practices. These questions are intentionally sequenced to help practitioners focus on one aspect at a time, ensuring that each foundational element is addressed before moving on to the next. The goal is to provide practical, actionable guidance for integrating these practices more effectively into real-world projects.
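As one possible way to operationalize this sequencing, the sketch below models guiding questions as an ordered checklist in which each item is addressed before moving to the next. The example questions are hypothetical placeholders drawn from themes in this article, not the actual 11 questions from the DNV guidance.

```python
# Hypothetical, minimal representation of sequenced guiding questions.
# The question texts are illustrative placeholders only.
from dataclasses import dataclass, field

@dataclass
class GuidingQuestion:
    text: str
    answered: bool = False

@dataclass
class Checklist:
    questions: list[GuidingQuestion] = field(default_factory=list)

    def next_open(self) -> GuidingQuestion | None:
        # Enforce the step-by-step sequencing: return the first unanswered question.
        for q in self.questions:
            if not q.answered:
                return q
        return None

checklist = Checklist([
    GuidingQuestion("Have all relevant stakeholder groups been identified and involved?"),
    GuidingQuestion("Is meaningful human oversight in place for high-risk decisions?"),
    GuidingQuestion("Is post-deployment monitoring defined and resourced?"),
])
print(checklist.next_open().text)  # focus on one aspect at a time
```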
By integrating these Trustworthy and Responsible AI key guiding questions into the development, use, and post-monitoring of AI systems, organizations can ensure that AI systems serve as a tool for positive transformation while mitigating risks and earning their stakeholders’ trust.
Reference list
1. Fan, X., et al., Utilization of Self-Diagnosis Health Chatbots in Real-World Settings: Case Study. J Med Internet Res, 2021. 23(1): p. e19928.
2. Sinclair, A.J., A.J. Girling, and A.J. Bayer, Cognitive dysfunction in older subjects with diabetes mellitus: impact on diabetes self-management and use of care services. All Wales Research into Elderly (AWARE) Study. Diabetes Research and Clinical Practice, 2000. 50(3): p. 203.
3. Kim, J.-W., et al., A care robot with ethical sensing system for older adults at home. Sensors, 2022. 22(19): p. 7515.
4. Kang, E.Y. and S.E. Fox. Stories from the Frontline: Recuperating Essential Worker Accounts of AI Integration. in Proceedings of the 2022 ACM Designing Interactive Systems Conference. 2022.
5. Chu, M., et al. Work with AI and Work for AI: Autonomous Vehicle Safety Drivers’ Lived Experiences. in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.
6. Fox, S.E., et al., Patchwork: the hidden, human labor of AI integration within essential work. Proceedings of the ACM on Human-Computer Interaction, 2023. 7(CSCW1): p. 1-20.
7. Li, L., et al. Algorithmic hiring in practice: Recruiter and HR Professional's perspectives on AI use in hiring. in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. 2021.
8. Houser, K.A., Can AI solve the diversity problem in the tech industry: Mitigating noise and bias in employment decision-making. Stan. Tech. L. Rev., 2019. 22: p. 290.
9. Maier, M., et al., Improving the accuracy and transparency of underwriting with AI to transform the life insurance industry. AI Magazine, 2020. 41(3): p. 78-93.
10. DNV. Painting the AI Risk Picture. [cited 2025; Available from: https://www.dnv.com/research/future-of-digital-assurance/painting-the-ai-risk-picture/].
11. DNV. DNV-RP-0671 Assurance of AI-enabled systems. 2023 [cited 2024; Available from: https://standards.dnv.com/hearing/5BDF814775DB435C8AF32242F73FCB71/01].
12. Bach, T.A., et al., Insights into suggested Responsible AI (RAI) practices in real-world settings: a systematic literature review. AI and Ethics, 2025: p. 1-48.