Recent research (Bach et al., 2022) has found that trust in AI is influenced by the features of AI and the characteristics of its users.
Technical and design features that facilitate trust:
- AI systems designed to assist and communicate with users, such as chatbots and smart speakers, can build trust by incorporating human-like features (such as smiling or showing interest), using non-verbal communication like body language and eye contact, satisfying users by completing their tasks, providing text-based responses, and using a lower-pitched voice in spoken systems.
- For AI systems that rely on automated algorithms, trust is influenced by explanations and transparency about how the algorithms work, the reliability of the AI and its actions, the risks associated with predictive models, and the accuracy of the AI's predictions.
User characteristics and their impact on trust:
- People differ in how readily they trust AI, depending on factors such as previous positive experiences with AI, their intention to use it, their perceptions of its technical quality and the relevance of the information it provides, and its perceived usefulness.
- Mismatches between user expectations and actual experiences can erode trust. The quality of a user's interactions with AI and the time spent using it both influence trust, while high cognitive demand reduces it.
- Interestingly, the more frequently people use AI, the more they tend to trust it.
In conclusion, building trust in AI is essential for its successful integration into various aspects of our lives. Understanding the complex psychology of trust in the context of AI and addressing issues like transparency, explanation, and user experience are critical steps toward fostering confidence and adoption.
As AI continues to evolve, striking a balance between functionality, transparency, and reliability will be key to earning and maintaining the trust of individuals, organizations, and societies at large.
Here at Immersive Labs, we help organizations to continuously build and prove their cyber workforce resilience, including managing the potential impact of AI. Get in touch with us to find out more.
Visit our Resources Page to learn more.
Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 1-16.