Development of web application for practicing Finnish language writing skills with the help of LLMs
Bacherikov, Dmitrii (2024)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2024051411629
Abstract
This thesis examines the potential of Large Language Models (LLMs), specifically standard and fine-tuned ChatGPT models, to support the practice of Finnish writing skills in preparation for the YKI test. The primary objective was to explore how these models can generate realistic test scenarios and provide evaluative feedback to users, thereby simulating the test environment for effective practice. Additionally, the thesis aimed to develop and deploy a user-friendly application that integrates the ChatGPT API with a React frontend and a FastAPI backend, hosted on AWS Elastic Beanstalk, to facilitate seamless interaction between users and the AI model.
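The abstract does not reproduce the application's implementation, but the architecture it describes can be illustrated with a minimal sketch: a FastAPI endpoint that forwards a user's written answer to a ChatGPT model and returns the generated feedback. The endpoint path, model name, request schema, and prompt wording below are illustrative assumptions, not code from the thesis.

# Hypothetical sketch of the backend integration described above.
# Endpoint path, model id, and prompt text are assumptions for illustration.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class Submission(BaseModel):
    task: str    # the generated writing task shown to the user
    answer: str  # the user's written response in Finnish

@app.post("/evaluate")
def evaluate(submission: Submission):
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a fine-tuned model id could be substituted here
        messages=[
            {"role": "system",
             "content": "You are a YKI examiner. Give feedback on the Finnish text."},
            {"role": "user",
             "content": f"Task: {submission.task}\n\nAnswer: {submission.answer}"},
        ],
    )
    return {"feedback": completion.choices[0].message.content}

A React frontend would call this endpoint over HTTP, and the whole service would be packaged and deployed to AWS Elastic Beanstalk.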
Methodologically, the study employed a mixed-method approach involving the design, development, deployment, and iterative testing of the application. The performance of the ChatGPT models was assessed through a series of evaluations using both a training set and a validation set to fine-tune the accuracy and relevance of the model's responses. Key findings suggest that while the standard models provide a robust foundation for task generation, fine-tuned models significantly enhance the specificity and usefulness of the feedback provided to users, mimicking evaluation by a human.
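As a rough illustration of the fine-tuning step mentioned above, the sketch below launches an OpenAI fine-tuning job with separate training and validation files. The file names and base model are assumptions; the thesis does not specify them in this abstract.

# Hypothetical sketch: fine-tuning with a training set and a validation set.
# File names and base model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

training_file = client.files.create(
    file=open("yki_training_examples.jsonl", "rb"), purpose="fine-tune"
)
validation_file = client.files.create(
    file=open("yki_validation_examples.jsonl", "rb"), purpose="fine-tune"
)

job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
    validation_file=validation_file.id,
)
print(job.id)  # the resulting fine-tuned model id can later replace the base model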
The deployment of the application demonstrated the practical viability of using LLMs for educational purposes, particularly in language proficiency assessment. The study also discusses potential improvements and future directions for the application. Overall, this research underscores the transformative potential of LLMs in language education and test preparation, offering extensive opportunities for both learners and educators in the field of language proficiency.