The Role of Large Language Models in Creating Dialog Systems
Skalozub, Egor (2025)
All rights reserved. This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-202503053708
Abstract
This thesis examines the role of large language models (LLMs) in the creation and development of dialogue systems, highlighting their transformative potential and associated challenges. The study’s objectives included analyzing the evolution of dialogue systems, exploring the concept of meta-control for LLM-driven interactions, evaluating practical applications, and proposing an ethical integration framework.
The research methodology combined a literature review, comparative analysis of traditional and LLM-based dialogue systems, experimental prototyping, and expert insights. Key findings revealed that LLMs enhance conversational quality by improving context awareness, natural language generation, and adaptability. However, they also present challenges such as computational demands, bias, privacy concerns, and unpredictable outputs. The proposed framework introduces a meta-control layer to regulate dialogue flow, ensure ethical practices, and maintain user trust.
Experimental results demonstrated the effectiveness of meta-control mechanisms in reducing errors and enhancing system coherence, particularly when addressing diverse user scenarios. Despite these advancements, limitations such as computational overhead and interpretability persist, underscoring the need for further research into scalable, transparent, and fair LLM implementations.
The study concludes that while LLMs significantly advance dialogue systems, their integration requires careful design, robust control mechanisms, and adherence to ethical guidelines. Recommendations for future research include refining meta-control strategies, enhancing model transparency, and addressing domain-specific challenges.