Bridging Theory, AI, and Practice: measuring the impact of UI design choices
Kuusisto, Kasperi (2026)
All rights reserved. This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-202603315342
Abstract
This thesis investigated how an existing user interface can be evaluated and improved by combining established usability theory, AI‑generated insights, and empirical user testing. The study focused on the web hotel (shared web hosting) interface of SpeedZone, an Estonian web hosting provider, and examined how specific design changes influenced usability, user satisfaction, and user behavior. The work was motivated by the increasing importance of intuitive and efficient interfaces in modern digital services and by the growing role of artificial intelligence in design workflows. A further motivation was to explore how strongly overall user experience is shaped by the structure and clarity of the user interface itself, and to what extent small, targeted UI changes can meaningfully influence perceived usability.
The theoretical framework drew on user experience, user interface design, usability principles, human‑computer interaction, and emerging research on AI‑assisted evaluation. These perspectives provided the criteria for defining what constitutes a "good" user interface, emphasizing clarity, consistency, cognitive simplicity, and alignment with user expectations. Based on these criteria, several potential improvements were identified, of which two were implemented for testing: one grounded in established usability theory and one derived from AI‑generated suggestions. The empirical research consisted of interviews and task‑based testing with twelve participants, divided evenly so that six interacted with the current interface and six with the improved version. Data was collected through observation and participant feedback, both qualitative and quantitative, and analyzed using a mixed‑methods approach.
The findings showed that clarity, structure, and predictable design patterns remain central to effective user interfaces. Participants responded positively to improvements that enhanced information flow and reduced cognitive load, while more subtle visual additions, such as trust badges, had little observable impact. AI‑generated suggestions supported the ideation process but required critical interpretation and validation through user testing. The results demonstrated that theory‑based improvements aligned most strongly with actual user behavior, while AI recommendations were more variable. The study also revealed differences between beginner and advanced users, highlighting the potential value of adaptive interfaces that adjust to user expertise.
The thesis concludes that combining theory, AI, and empirical testing provides a robust and evidence‑based approach to UI improvement. In doing so, it also demonstrates a practical method for conducting user interface evaluation that integrates expert principles, AI‑assisted analysis, and structured user testing. The work offers practical recommendations for interface design and outlines opportunities for future research, including broader participant groups, real‑world testing environments, longitudinal studies, and more extensive design interventions.
