User Trust in AI-Enabled Systems

User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized, and demonstrated, as a key element in fostering the adoption and use of AI.

Fostering and maintaining user trust is key to achieving trustworthy AI and unlocking its potential for society. This research provides an overview of user trust definitions, the factors that influence user trust, and methods for measuring user trust in AI-enabled systems.

User trust in AI-enabled systems is found to be influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of involving users from the development of AI-enabled systems through to their monitoring. Context and the characteristics of both users and systems also influence user trust, highlighting the importance of selecting and tailoring system features to the characteristics of the targeted user group. Importantly, socio-ethical considerations can help ensure that the environment in which user-AI interactions take place is conducive to establishing and maintaining a trusted relationship.

This work was supported by AI-Mind, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 964220.