Articles | Open Access | https://doi.org/10.54640/gn0njh67

Delegating to AI: How Perceived Losses Influence Human Decision-Making Autonomy

Abstract

As artificial intelligence (AI) becomes increasingly integrated into decision-making processes, understanding the psychological factors that shape human willingness to delegate tasks to AI is critical. This study explores how perceived losses—such as diminished control, accountability, or personal value—affect individuals’ autonomy in decision-making when interacting with AI systems. Through a series of behavioral experiments and surveys, we find that higher perceptions of loss significantly reduce the likelihood of AI delegation, even when delegation improves efficiency or accuracy. The results also indicate that trust in AI and perceived competence partially mediate this relationship. These insights have implications for AI interface design, organizational decision policies, and ethical considerations in human-AI collaboration.

Keywords

AI delegation, decision-making autonomy, perceived loss

References

Armantier, O., & Boly, A. (2015). Framing of incentives and effort provision. International Economic Review, 56(3), 917–938.

Baird, A., & Maruping, L. M. (2021). The next generation of research on IS use: A theoretical framework of delegation to and from agentic IS artifacts. MIS Quarterly, 45(1), 315–341.

Canessa, N., Crespi, C., Baud-Bovy, G., Dodich, A., Falini, A., Antonellis, G., & Cappa, S. F. (2017). Neural markers of loss aversion in resting-state brain activity. Neuroimage, 146, 257–265.

Babic, B., Gerke, S., Evgeniou, T., & Cohen, I. G. (2021). Beware explanations from AI in health care. Science, 373(6552), 284–286.

Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., & Horvitz, E. (2019a). Beyond accuracy: The role of mental models in human-AI team performance. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 2–11.

Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2019b). Updates in human-AI teams: Understanding and addressing the performance/compatibility trade-off. Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 2429–2437.

Bauer, K., & Gill, A. (2024). Mirror, mirror on the wall: Algorithmic assessments, transparency, and self-fulfilling prophecies. Information Systems Research, 35(1), 226–248.

Bauer, K., von Zahn, M., & Hinz, O. (2023). Expl(AI)ned: The impact of explainable artificial intelligence on users’ information processing. Information Systems Research, 34(4), 1582–1602.

Beam, E. A., Masatlioglu, Y., Watson, T., & Yang, D. (2023). Loss aversion or lack of trust: Why does loss framing work to encourage preventive health behaviors? Journal of Behavioral and Experimental Economics, 104, 1–17.

Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34.

Breiter, H. C., Aharon, I., Kahneman, D., Dale, A., & Shizgal, P. (2001). Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron, 30(2), 619–639.

Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton, New York.

Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. Quarterly Journal of Economics, 140(2), 889–942.

Burton, J. W., Stein, M. K., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239.

Bussone, A., Stumpf, S., & O’Sullivan, D. (2015). The role of explanations on trust and reliance in clinical decision support systems. 2015 International Conference on Healthcare Informatics (Dallas), 160–169.

Camerer, C. (2000). Prospect theory in the wild. In D. Kahneman & A. Tversky (Eds.), Choices, Values, and Frames (pp. 288–300). Russell Sage, New York.

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825.

Cheng, L., & Chouldechova, A. (2023). Overcoming algorithm aversion: A comparison between process and outcome control. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (ACM, Hamburg, Germany), 1–27.

Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302–1314.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170.

Durso, F. T., Hackworth, C. A., Truitt, T. R., Crutchfield, J., Nikolic, D., & Manning, C. A. (1998). Situation awareness as a predictor of performance in en route air traffic controllers. Air Traffic Control Quarterly, 6(1), 1–20.

Endsley, M. R. (1988). Situation awareness global assessment technique (SAGAT). Proceedings of the IEEE 1988 National Aerospace and Electronics Conference (IEEE, Piscataway, NJ), 789–795.

Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32–64.

How to Cite

Delegating to AI: How Perceived Losses Influence Human Decision-Making Autonomy. (2025). The Pinnacle Research Journal of Scientific and Management Sciences, 2(06), 1–5. https://doi.org/10.54640/gn0njh67