Artificial Intelligence (AI) has the potential to revolutionize society, but concerns about bias, a lack of ethics, ‘black box’ algorithms, and AI’s capacity to mislead users into erroneous decisions all raise doubts about its trustworthiness.

As AI becomes more prevalent, trust is an essential factor that must be addressed. Trust is the foundation of human relationships, and understanding the psychology of trust in the context of AI is critical for shaping its function within organizations. Exploring issues of trust matters because it helps us understand not only the rejection of AI by individuals or organizations, but also its misuse (e.g., employees uploading sensitive corporate information to public AI tools) and abuse (e.g., attackers using AI to craft persuasive phishing emails).

The complexity of trust in human nature

Trust is a complex psychological concept deeply ingrained in human nature. When we interact with others, we often rely on trust to establish meaningful connections and cooperative relationships. In childhood, we build trust in those closest to us; later in life, we learn how much to trust, and whom. We tend to trust individuals who consistently demonstrate competence in their actions, uphold their commitments with integrity, and show empathy by understanding and responding to our needs.

Challenges building trust in AI

Building trust in AI is challenging. Unlike humans, AI lacks consciousness, emotions, and personal accountability, making it difficult to apply the same trust markers we use in human interactions. AI operates on algorithms and data, making decisions based on patterns rather than a genuine understanding of human emotions or intentions.

Additionally, AI’s lack of transparency can lead to trust issues. Black-box algorithms, which provide results without explaining the reasoning behind them, can leave users feeling uncertain about the AI’s decision-making process. Trust requires understanding, and when we cannot comprehend how AI arrives at a conclusion, our trust in its accuracy and fairness diminishes.
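To make this transparency gap concrete, here is a minimal Python sketch (using scikit-learn, with an entirely hypothetical loan-approval model and made-up feature names) contrasting a bare ‘black box’ verdict with a simple per-feature explanation of the same prediction:

```python
# A minimal sketch of the transparency gap: the same prediction delivered
# with and without an explanation. The model, features, and data below are
# illustrative only, not a real credit-scoring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval data: [income_score, credit_history_score]
X = np.array([[0.9, 0.8], [0.2, 0.3], [0.7, 0.9], [0.1, 0.2]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)
applicant = np.array([[0.4, 0.7]])

# The "black box" answer: a verdict with no reasoning attached.
print("Decision:", "approved" if model.predict(applicant)[0] else "denied")

# A simple explanation: how strongly each feature pushed the decision.
feature_names = ["income_score", "credit_history_score"]
contributions = model.coef_[0] * applicant[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: contribution {value:+.2f}")
```

Even a rudimentary explanation like this gives users something to interrogate, which is a precondition for the understanding that trust requires.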

Influencing trust with AI

Recent research (Bach et al., 2022) found that trust in AI is influenced by both the features of the AI system itself and the characteristics of its users.

Technical and design features that facilitate trust:

  • AI systems designed to assist and communicate with users, such as chatbots and smart speakers, can build trust by incorporating human-like features (such as smiling or showing interest), using non-verbal communication like body language and eye contact, fulfilling users’ tasks to their satisfaction, providing text-based responses, and speaking at a lower pitch in voice-based systems.
  • For AI systems relying on automated algorithms, factors that influence trust include explanations of, and transparency about, how the algorithms work; the reliability and behavior of the AI; the risks associated with predictive models; and the correctness of the AI’s predictions.

User characteristics and their impact on trust:

  • Individual differences shape how much people trust AI, including their prior positive experiences with it, their intention to use it, their perceptions of its technical and information quality, and its perceived relevance and usefulness.
  • Mismatches between user expectations and actual experiences can put trust at risk. The quality of a user’s interaction with AI and the time spent using it both influence trust, while high cognitive demand reduces it.
  • Interestingly, the more frequently people use AI, the more they tend to trust it.

In conclusion, building trust in AI is essential for its successful integration into various aspects of our lives. Understanding the complex psychology of trust in the context of AI and addressing issues like transparency, explanation, and user experience are critical steps toward fostering confidence and adoption.

As AI continues to evolve, striking a balance between functionality, transparency, and reliability will be key to earning and maintaining the trust of individuals, organizations, and societies at large.

Here at Immersive Labs, we help organizations continuously build and prove their cyber workforce resilience, including managing the potential impact of AI. Get in touch with us to find out more.

References

Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S. (2022). A systematic literature review of user trust in AI-enabled systems: An HCI perspective. International Journal of Human–Computer Interaction, 1–16.

Published: October 25, 2023

Written by: John Blythe