This dissertation investigates the role of AI, particularly Large Language Models, in influencing risk-taking behaviour in a decision-making context, hypothesizing a diffusion of responsibility in human-AI interactions. A Randomized Controlled Trial was employed in which participants completed a risk elicitation task – the Bomb Risk Elicitation Task – across two sequential rounds. Participants were either assisted by an AI-powered chatbot during the task or placed in a control group without AI assistance. Trust in and attitudes towards AI, as well as general risk aversion, were measured to serve as control variables. Participants' locus of control was also measured to test the diffusion-of-responsibility hypothesis. A total of 138 participants completed the online experiment. Results indicate that AI assistance had a significant effect on participants' risk preferences, particularly in the second round of the task. Notably, the outcome of the first round emerged as an important factor in this dynamic: among participants who did not have a successful outcome in the first round, those in the control group exhibited greater risk aversion in the subsequent round, a pattern that was not observed in the AI-assisted group. Further analyses indicated that trust in AI and an external locus of control marginally moderated this effect, pointing to a diffusion of responsibility onto the AI. Additional findings suggest that AI assistance had a rationalizing effect on participants: the proportion of risk-neutral participants increased from 6% in the control group to 28% in the treatment group, indicating a closer approximation of rational decision-making with AI assistance. These findings suggest that AI assistance can alter risk preferences, potentially through mechanisms of increased confidence or diffusion of responsibility.
This dissertation contributes to our understanding of human-AI interaction and highlights the need for further studies to disentangle these effects and explore their implications for decision-making in high-stakes environments.
Date of Award | 15 Nov 2024
Original language | English
Awarding Institution | Universidade Católica Portuguesa
Supervisor | Filipa de Almeida (Supervisor)
- Artificial intelligence
- Human-AI interaction
- Risk-taking behaviour
- Diffusion of responsibility
- AI-assisted decision-making
- Mestrado em Psicologia na Gestão e Economia
AI and decision-making under risk: a behavioural study exploring how large language models may affect our risk preferences
Seabra, L. B. D. (Student). 15 Nov 2024
Student thesis: Master's Thesis