The influence of mental state attributions on trust in large language models

Colombatto, C., Birch, J., & Fleming, S. M. (2025). The influence of mental state attributions on trust in large language models. Communications Psychology, 3(1). https://doi.org/10.1038/s44271-025-00262-1

Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for ‘experience’ (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion and are distinct from attributions of ‘intelligence’ (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task where they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users’ attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs—with different aspects of mental state attribution predicting people’s trust in these systems.
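To give a concrete sense of the kind of Bayesian correlation analysis described above, the sketch below shows how a Bayes factor for a Pearson correlation could be computed in Python with the pingouin library. It is purely illustrative and is not the authors' analysis pipeline: the variable names (experience_ratings, intelligence_ratings, advice_taking) and the synthetic data are assumptions standing in for the study's measures.

```python
# Illustrative sketch only: synthetic data standing in for the study's measures.
# Variable names (experience_ratings, intelligence_ratings, advice_taking) are
# assumptions for illustration, not the authors' actual data or code.
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
n = 410  # sample size matching the preregistered experiment

# Hypothetical attribution ratings for 'experience' and 'intelligence' dimensions
experience_ratings = rng.uniform(1, 100, n)
intelligence_ratings = rng.uniform(1, 100, n)

# Hypothetical advice-taking scores, loosely tied to intelligence attributions
advice_taking = 0.4 * intelligence_ratings + rng.normal(0, 20, n)

# Bayesian Pearson correlations: BF10 > 1 favours a correlation,
# BF10 < 1 favours evidence against one
for label, ratings in [("experience", experience_ratings),
                       ("intelligence", intelligence_ratings)]:
    result = pg.corr(ratings, advice_taking, method="pearson")
    print(f"{label}: r = {result['r'].iloc[0]:.2f}, BF10 = {result['BF10'].iloc[0]}")
```

In this framing, a Bayes factor well below 1 for the experience dimension and well above 1 for the intelligence dimension would mirror the pattern of results summarised in the abstract.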

Published Version (Creative Commons: Attribution 4.0)
