How does user trust influence the adoption of AI-powered information systems? A structural equation modeling approach

Authors

O. D. Gavilánez Alvarez, J. J. Cruz Garzón, & C. L. Inca Balseca

DOI:

https://doi.org/10.70577/ASCE/395.413/2025

Keywords:

User trust; Perceived usefulness; AI adoption; Structural equation modeling (SEM); Algorithmic transparency.

Abstract

This study employs Structural Equation Modeling (SEM) to analyze the factors influencing the adoption of AI systems, highlighting the central role of user trust (β = 0.49) and perceived usefulness (β = 0.089) as key predictors. The results validate the Technology Acceptance Model (TAM) but extend its framework by demonstrating that trust acts as a critical mediator, especially in contexts of technological uncertainty. Perceived security showed a moderate effect (β = 0.23), relevant in sensitive applications, while usability had a minimal impact (β = 0.04), suggesting that users prioritize reliability over ease of use. Organizational variables (e.g., organizational size) had a marginal effect, emphasizing the predominance of individual-level factors. The findings underscore the need for designs centered on transparency and explainability (XAI) to strengthen trust and facilitate adoption.
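
As a rough illustration of the kind of structural model described in the abstract, the sketch below shows how a comparable SEM could be specified and estimated in Python with the semopy package. The construct names (Trust, Usefulness, Security, Usability, Transparency, Adoption), the item names, and the survey file are hypothetical placeholders rather than the authors' actual measurement instrument; the snippet only demonstrates the general workflow of defining latent constructs, fitting the model, and inspecting path coefficients.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical survey data: one column per questionnaire item (e.g., Likert-scale responses).
data = pd.read_csv("survey_responses.csv")  # placeholder file name

# lavaan-style model description:
# "=~" defines latent constructs from observed items; "~" defines structural (regression) paths.
MODEL_DESC = """
# Measurement model (item names are illustrative)
Transparency =~ tp1 + tp2 + tp3
Trust        =~ tr1 + tr2 + tr3
Usefulness   =~ pu1 + pu2 + pu3
Security     =~ se1 + se2 + se3
Usability    =~ us1 + us2 + us3
Adoption     =~ ad1 + ad2 + ad3

# Structural model: trust mediates the effect of transparency on adoption
Trust    ~ Transparency
Adoption ~ Trust + Usefulness + Security + Usability
"""

model = Model(MODEL_DESC)
model.fit(data)                 # maximum-likelihood estimation by default

print(model.inspect())          # parameter estimates (the path coefficients, i.e., the betas)
print(calc_stats(model).T)      # global fit indices such as CFI and RMSEA
```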

Published

2025-07-11

How to Cite

Gavilánez Alvarez, O. D., Cruz Garzón, J. J., & Inca Balseca, C. L. (2025). How does user trust influence the adoption of AI-powered information systems? A structural equation modeling approach. ASCE, 4(3), 395–413. https://doi.org/10.70577/ASCE/395.413/2025
