LLM Agents & Trust - Role of Deterministic Code: Design Science Research for Trust in Human-AI Interaction
Virpiö, Miika (2025)
Permanent address of the publication:
https://urn.fi/URN:NBN:fi:amk-2025090124327
Abstract
The relentless diffusion of Artificial Intelligence (AI) into society, powered by research and development of Large Language Models (LLMs), is causing ripples of change. LLMs have displaced classical Machine Learning (ML), and AI Agents have transformed the digital environment for developers and end-users alike. Hype has abandoned trustworthy deterministic code for the magic of probabilistic LLM generation, at the cost of incorrect outputs and hallucinations. In a digital environment already riddled with targeted messaging and nudging, calibrating trust in novel tools like LLM Agents is hard. This is especially true in high-stakes domains, such as finance, where decisions based on hallucinations can have major repercussions. To find trust, this thesis explores the technology of LLM Agents to identify causes of untrustworthy behavior, tools and mechanisms for improvement, and the role of deterministic code as a grounding factor. The research was conducted in parallel with an MBA thesis on the same topic by the same author, both using the Design Science Research (DSR) methodology. Experiments in the DSR Design Cycles cover retrieval strategies, memory systems, fine-tuning, and prompt masking. The findings point to the importance of balancing probabilism and determinism and of careful cross-discipline prompt engineering of system instructions, but many other findings emerge in the dramatic recounting of the experimentation. As the main DSR research tool and contribution, the Artifact is shared on GitHub.