Integrating Dynamic AI Dialogue Systems for Non-Player Characters in Unreal Engine: Practical Implementation of the Convai Framework
Skodova, Jarmila (2025)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2025060520972
Abstract
The purpose of this thesis was to implement a dynamic AI-powered dialogue system within Unreal Engine as part of a larger project commissioned by HAMK Tech. The overarching goal was to create a production-ready training tool with broad applicability in training, educational, and healthcare virtual environments. The research questions were grounded in interaction design principles and the technical requirements of deploying dialogue systems. In the future, the project could be improved by implementing custom components, adopting newer models, and transitioning from cloud-based to local deployment to enhance performance and privacy.
This thesis is functional in nature, written alongside the creation of a working prototype. It begins by reviewing the theoretical foundations of dialogue systems, covering key components such as automatic speech recognition, large language model reasoning, neural text-to-speech, facial animation, memory architectures, narrative integration, and Unreal Engine's Blueprint system. It then outlines the practical implementation of these components within the Convai platform and their integration into Unreal Engine. The development process followed the Scrum framework and was structured around four main stages: research, AI character configuration, integration with Unreal Engine, and iterative testing. The system was evaluated through functional and user acceptance testing, with participants providing feedback on speech quality, response naturalness, and the overall believability of the character.
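To make the component list above more concrete, the sketch below models, in plain C++, the speech-to-speech loop such a system runs on each conversational turn: automatic speech recognition, language-model reasoning over a rolling conversation memory, and neural text-to-speech. All type and function names here (SpeechRecognizer, DialogueModel, SpeechSynthesizer, DialoguePipeline) are hypothetical illustrations and do not correspond to the Convai SDK or Unreal Engine APIs used in the actual implementation.

// Minimal conceptual sketch of a speech-to-speech NPC dialogue turn.
// All interfaces are hypothetical stand-ins for the ASR, LLM, and TTS
// services described in the thesis; they are NOT the Convai or Unreal APIs.
#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// Microphone input and synthesized speech are represented as byte buffers.
using AudioBuffer = std::vector<std::uint8_t>;

// Automatic speech recognition: audio in, transcribed text out.
struct SpeechRecognizer {
    virtual ~SpeechRecognizer() = default;
    virtual std::string Transcribe(const AudioBuffer& audio) = 0;
};

// Large language model reasoning over the conversation history.
struct DialogueModel {
    virtual ~DialogueModel() = default;
    virtual std::string Respond(const std::string& userUtterance,
                                const std::vector<std::string>& history) = 0;
};

// Neural text-to-speech: reply text in, audio out (the same output would
// also drive facial animation and lip sync in the engine).
struct SpeechSynthesizer {
    virtual ~SpeechSynthesizer() = default;
    virtual AudioBuffer Synthesize(const std::string& text) = 0;
};

// One conversational turn: ASR -> LLM (with memory) -> TTS.
class DialoguePipeline {
public:
    DialoguePipeline(std::unique_ptr<SpeechRecognizer> asr,
                     std::unique_ptr<DialogueModel> llm,
                     std::unique_ptr<SpeechSynthesizer> tts)
        : asr_(std::move(asr)), llm_(std::move(llm)), tts_(std::move(tts)) {}

    AudioBuffer ProcessTurn(const AudioBuffer& playerSpeech) {
        const std::string utterance = asr_->Transcribe(playerSpeech);
        const std::string reply = llm_->Respond(utterance, history_);

        // Store both sides of the exchange so later turns stay in context.
        history_.push_back("Player: " + utterance);
        history_.push_back("NPC: " + reply);

        return tts_->Synthesize(reply);
    }

private:
    std::unique_ptr<SpeechRecognizer> asr_;
    std::unique_ptr<DialogueModel> llm_;
    std::unique_ptr<SpeechSynthesizer> tts_;
    std::vector<std::string> history_;  // contextual memory across turns
};

In the prototype described here, these stages are handled by Convai's cloud services and the Convai plugin inside Unreal Engine rather than hand-written code; the sketch only illustrates the data flow between the components.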
In terms of outcomes, a functional AI NPC was successfully implemented and integrated into a virtual nursing environment. The system supports real-time voice interaction, contextual memory, and environmental awareness, meeting the success criteria defined for each research question. Although developed as a prototype, the project has proven the feasibility of deploying production-scale AI dialogue systems in interactive 3D environments.
Overall, the project demonstrated how modern AI technologies, cloud services, and game engines can be combined to create believable, responsive and context-aware virtual characters suitable for use in education and healthcare.