
Building trust in AI

Perspectives from different industries

DISCLAIMER: Statements in this article do not necessarily represent the positions of DNV or any of the NorwAI partners. This article is the authors’ summary and interpretation of the discussions that were held at two workshops with NorwAI partners in the spring/summer of 2023.


Trust is not easy

For a technology to be accepted, used, and beneficial for society, it must be trusted. This is especially true if the technology is supposed to exhibit human-like intelligence. Trust is the firm belief in the reliability, truth, or ability of someone or something, and it underlies all socio-economic relations. Trust in AI matters because investments, societal acceptance, political support, knowledge development, and innovation all depend on it. Accordingly, the Norwegian Center for AI Innovation (NorwAI) gives high priority to this topic and focuses on understanding the trust needs of industries, as well as the conditions for establishing trust in safe and responsible AI. This includes ensuring privacy preservation in AI technologies, creating guidelines for the sustainable and beneficial use of AI, and developing principles for explainable and transparent AI and for the independent assurance of AI-enabled systems.

Trust is central to contracts, business relations, and technological innovation, and it takes shape differently in different contexts, locations, and organizational cultures. Efforts to build trust in AI therefore have to be adapted to these particularities. As part of NorwAI’s ongoing research and innovation, we carried out a series of co-production and co-design workshops diving into the particularities of the media, banking, and Industry 4.0 sectors. Each workshop aimed to gather knowledge about industry-specific trust needs, concerns, and innovation opportunities, and an interdisciplinary, science-based industry perspective was used to frame the discussions and process the information. Identifying commonalities and differences in trust needs among these industries is crucial because each industry has its own culture, innovation processes, and technical needs. Our findings from these workshops highlight that industries and policymakers must actively cultivate trust in AI by concretely demonstrating and communicating trustworthiness to the public and relevant stakeholders.


Shared needs 

User acceptance. The landscape of AI governance is rapidly evolving, with both nations and industry bodies developing frameworks for trustworthy and responsible AI. In this context, it is a business advantage to anticipate and stay ahead of regulatory changes, because compliance with relevant regulations and best practices is a ticket to trade. Yet the industry participants in the workshops expressed that regulation is not the main driver for pursuing trustworthy AI. Above all, they associated trust with user acceptance. ‘Without trust, everything breaks,’ one participant said. Trust leads to more use and enables scaling up AI technologies. 

Alignment of objectives and values. Trust between stakeholders is fundamentally the belief that other actors are open and honest about their motivations. Trust is further facilitated when there is alignment between the objectives and values of the involved stakeholders, e.g. business partners, customers, and the wider public. Ethical issues, such as fairness and the impact of AI on the workforce and environment, are important even in a business-to-business context because they reflect the values of a company. This highlights the importance of engaging and involving stakeholders in the development and implementation of AI in products and services across all sectors.

Technical robustness and human oversight. Several participants in the workshops stated that technical robustness is key to fostering trust in AI, specifically reliability and technical validation. Moreover, participants from across industries agreed that trust is related to human oversight in assuring models and algorithms, and they emphasized the importance of transparency and explainability of AI systems.

AI literacy. Participants from across all the represented industries pointed to AI knowledge gaps between organizational levels and between managers and those actually working with AI. Managers are often sceptical of technologies they do not understand, which can be a hindrance to AI adoption. At the same time, employees who do not work with AI, or who have little AI competence, may resist AI solutions that impact how they perform their jobs. Raising AI literacy therefore appears to be key to enabling wider AI adoption.

Inside trust and outside trust. It was pointed out that creating acceptance and trust in AI within an organization is very different from gaining trust from customers. Many customers may prefer ‘human brands’ (i.e. brands not associated with AI), and companies may therefore see it as safest to stick to familiar approaches and technologies. As a result, AI adoption is unlikely to happen unless there is clear evidence that AI can increase profits.

 

My trust needs and your trust needs 

While many trust needs and concerns are shared across industries, the workshop discussions highlighted certain trust needs and concerns of particular relevance to each sector. 

Media  

Participants from the media sector said that transparency, accuracy, and truthfulness are vital for fostering user trust. They raised the problem of ‘fake news’ and concerns about AI-generated misinformation. In journalism, it is not good enough to be correct 90% of the time, so journalists always need to be in the loop to check AI output for hallucinations, bias, and factual errors. One participant expressed that ‘journalists don’t like to be told how to do their job’. Still, AI was mainly perceived as a useful tool to improve productivity rather than as a threat to journalists’ jobs.

In the workshop with media partners, the participants discussed how content owners could be incentivized to share data with AI developers. As producers of content, media companies are naturally concerned that AI models could violate copyrights. For example, plagiarism is a growing concern as the use of large language models that produce content without referencing (credible) sources is becoming increasingly widespread. Content providers also fear that their revenue streams could be threatened by AI models leaking content that resides behind paywalls. On the other side of the table, AI developers and AI service providers want access to content behind paywalls to improve their models. Beyond security assurances from AI developers, one possible incentive for media companies to share data is early access to AI models (e.g. the right to use AI models before competitors). Improved searchability of content could be another possible benefit in the future, but this was not seen as a major incentive by the media participants. 

Banking  

In the banking industry, trust is of the utmost importance: customer trust is a bank’s main asset. Banks have detailed data about their customers, possibly more than any other industry or even governments. Norwegian banks in particular enjoy a very high level of trust from consumers, and this is something they want to maintain.

Banks are already mature in using AI for fraud detection, credit risk assessments, and so on. However, banking is highly regulated, and ‘the closer you get to the money, the stricter it is,’ as one participant expressed it. Some of the workshop participants mentioned existing financial regulations and GDPR as hurdles to more AI adoption in the sector. Other workshop participants speculated that upcoming AI regulations may have a limited impact on banking precisely because the sector is already accustomed to strict requirements concerning data, privacy, and transparency. The threshold to adopt AI is lower in the corporate market, where there are fewer privacy issues.  

AI has the potential to improve efficiency in banking by automating processes. However, banks need to be fair and able to provide reasons for their decisions, and this places requirements on AI model fairness, transparency, reliability, and explainability. While the requirement for explainability can be a hurdle, it can also be an opportunity. For example, explainable AI tools could help bank advisors give better and more personal justifications for decisions than they can today, which has the potential to give customers more trust in their bank.
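
To make this opportunity concrete, the sketch below shows how per-feature contributions to a credit decision could be surfaced for an advisor. It is a minimal illustration under purely hypothetical assumptions: a toy linear model, where contributions can be read off directly from the coefficients; real systems with more complex models would need dedicated explanation tools such as SHAP, and all feature names and data here are invented.

```python
# Illustrative sketch only (not a real bank system): a toy credit-scoring model
# whose individual decisions are decomposed into per-feature contributions that
# an advisor could translate into a plain-language justification.
# All feature names, data, and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=1)
feature_names = ["income", "debt_ratio", "missed_payments", "years_as_customer"]
X = rng.normal(size=(1000, len(feature_names)))
# Synthetic label: 1 = application declined.
y = (X[:, 1] + X[:, 2] - 0.7 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
logit = model.intercept_[0] + model.coef_[0] @ applicant
print(f"P(declined) = {1 / (1 + np.exp(-logit)):.2f}")

# For a linear model the decision score decomposes exactly into per-feature terms;
# non-linear models would need tools such as SHAP to obtain analogous contributions.
contributions = model.coef_[0] * applicant
for name, contribution in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {contribution:+.3f}")
```

In practice, such raw contributions would of course have to be translated into language and context appropriate for the customer, and the underlying model would still need to meet the fairness and reliability requirements discussed above.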

AI also has the potential to enhance the customers’ banking experience and provide them with better personalized advice. For example, one participant outlined how the bank of the future may take the form of a personal AI assistant with a language and style tailored to each individual customer. Nevertheless, such personalization of bank services raises concerns about data privacy, especially if data are combined across different segments of the bank, such as accounts, lending, insurance, and property. It also raises concerns that banks may use their detailed knowledge to manipulate customers in unethical ways. This accentuates the need for banks to involve customers and public stakeholders in the AI transformation.  

The banking workshop participants agreed that AI transparency, compliance with relevant regulations, and human oversight are key factors to foster and maintain customer trust. The application of AI in banking should be negotiated with regulators and the industry as a whole to create a basis for trust.

Industry 4.0  

For industries such as energy, AI can be a tool for automation and decision support. One participant stated that AI adoption often feels inevitable because of the high pressure to improve efficiency. One of the major concerns when deploying AI is whether the AI models will be accurate enough to be relied upon. This is because wrong or suboptimal actions can have dangerous and expensive consequences or could negatively affect the reputation of a company. Even if an AI model is claimed to be accurate today, can it be trusted for the future or when conditions change? There are challenges associated with predicting the future behaviour of an AI model, as well as uncertainties surrounding the degree of explainability of this behaviour. Data quality can also be a problem when developing AI models. One workshop participant stressed the importance of being able to estimate AI model uncertainty and to evaluate the effects of such uncertainty on systems and decisions.   
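
As a purely illustrative sketch of this last point, one simple approach is to train an ensemble of models and use the spread of their predictions as a rough uncertainty estimate, deferring to a human operator when the spread is too large to act on automatically. The data, models, and threshold below are hypothetical assumptions, not a description of any partner’s actual system.

```python
# Illustrative sketch only: ensemble-based uncertainty estimate for a toy
# condition-monitoring regression task, with a simple rule that defers to a
# human operator when the ensemble disagrees. Data, models, and the threshold
# are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)
X = rng.uniform(0, 10, size=(500, 3))                      # e.g. sensor readings
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500)

# Train an ensemble of models on bootstrap resamples of the training data.
ensemble = []
for i in range(20):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(RandomForestRegressor(n_estimators=50, random_state=i).fit(X[idx], y[idx]))

def predict_with_uncertainty(x_new, defer_threshold=0.5):
    """Return the ensemble mean, its spread, and a suggested course of action."""
    preds = np.array([m.predict(x_new.reshape(1, -1))[0] for m in ensemble])
    mean, spread = preds.mean(), preds.std()
    action = "automate" if spread < defer_threshold else "defer to human operator"
    return mean, spread, action

mean, spread, action = predict_with_uncertainty(np.array([5.0, 1.0, 3.0]))
print(f"prediction={mean:.2f}, uncertainty~{spread:.2f} -> {action}")
```

The choice of threshold is itself a risk decision: it determines how often the system acts autonomously versus handing control back to a human, which connects directly to the human-oversight concerns raised in the workshops.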

The Industry 4.0 representatives in the workshops said that there is considerable fear of transferring too much responsibility to AI, and that there must always be a human backup if the AI fails. This implies that companies need to retain human competence even in tasks that they delegate to AI. As more responsibility is given to AI, knowing how to assign accountability for the actions of AI technologies can be a challenge, and this is something regulators and industry actors may have to grapple with.

Industry actors need to establish trust in their own products and systems, but they also need to build trust with business partners and the public. Trust needs may depend on where an actor sits in the value chain (development, implementation, etc.), which could mean that AI developers and industry actors do not always foresee later trust needs during the initial phases of AI development. More cooperation, joint development, and sharing of best practices were mentioned by workshop participants as possible ways to integrate a trust mindset throughout AI life cycles and value chains. Participants agreed that trustworthy AI is something that must be demonstrated, but noted that this is rather complex and costly to do. Another concern was that AI governance can be difficult to implement, especially in large organizations.


What and where are the opportunities for trustworthy AI?

In addition to exploring trust needs and concerns related to AI, the workshops explored potential solutions and innovation possibilities to build more trust in AI. These include the development of explainable AI methods, risk management approaches, systematic evaluation of uncertainty impacts, digital maturity assessments, and education and capacity-building initiatives. The creation of best practices, guidance, and standardization is essential in developing a common understanding and fostering trust. Engaging stakeholders, making value choices explicit, and implementing centralized AI governance were identified as crucial steps towards building trust. 


Conclusion 

Building trust in AI is essential for scaling up innovation and ensuring widespread acceptance and adoption of AI. The workshops organized by NorwAI provided valuable insights into industry-specific trust needs, challenges, and potential opportunities for innovation. It is important to note that trust needs cannot be met through technical advancements alone; they must equally be addressed through appropriate governance mechanisms, organizational and behavioural change, stakeholder engagement, and education initiatives. With this combination of actions and mechanisms, trust in AI can be fostered and expanded across all necessary organizational and societal layers, like ripples spreading across water, leading to enhanced innovation and the safeguarding of societal needs.