Jyväskylän ammattikorkeakoulu · Theses (open collection)

Fine-Tuning Techniques of LLM

Tarkiainen, Suvi (2025)

 
Tarkiainen_Suvi.pdf (5.817 MB)


All rights reserved. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2025120833753
Abstract
Large Language Models (LLMs) based on transformer architectures have become challenging to fully fine-tune for specific tasks as their size grows, which can limit their use or adaptation in real-world scenarios with limited computational resources. Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), have been developed to address these challenges. The study examined recently published PEFT techniques by evaluating their computational requirements, their influence on model performance, and how they changed bias in generated outputs. Additional objectives included exploring how dataset size affected fine-tuning outcomes and whether different optimizers could reduce computational cost.
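The core idea behind LoRA, mentioned above, can be illustrated with a minimal NumPy sketch: instead of updating a full weight matrix, a low-rank correction is learned while the pretrained weight stays frozen. The dimensions, rank, and scaling factor below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Minimal sketch of the LoRA idea: learn a low-rank update delta_W = B @ A
# for a frozen pretrained weight W. Sizes here are illustrative only.
d, k, r = 1024, 1024, 8          # layer dimensions and LoRA rank (assumed)
alpha = 16                        # LoRA scaling factor (assumed)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, r x k
B = np.zeros((d, r))                     # trainable, d x r, zero-initialised

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself is never updated.
    return x @ (W + (alpha / r) * (B @ A)).T

# Only A and B are trained, a small fraction of the full parameter count.
full_params = d * k
lora_params = r * (d + k)
print(f"trainable fraction: {lora_params / full_params:.4%}")  # 1.5625%
```

Because B is zero-initialised, the adapted model starts out exactly equal to the base model, which is part of what makes LoRA stable to train.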

The experiment was conducted by fine-tuning an instruction-tuned LLaMA-type base model with LoRA, combined with quantized model loading and an instruction-tuned conversational dataset. Because both the base model and the fine-tuning data were instruction-focused, improvements in output quality were limited: the base model had already been exposed to similar content on a larger scale. The fine-tuned model was then evaluated against the base model to compare overall performance and to measure changes in bias.
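A setup like the one described, LoRA on a quantized instruction-tuned base model, is commonly expressed with the Hugging Face `transformers` and `peft` libraries. The sketch below is a hypothetical configuration under that assumption; the model id, rank, target modules, and other hyperparameters are illustrative guesses, not the thesis's actual settings.

```python
# Hypothetical QLoRA-style setup; model id and hyperparameters are
# illustrative assumptions, not the configuration used in the thesis.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantized model loading
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",     # placeholder instruction-tuned base
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # freezes base weights, adds adapters
model.print_trainable_parameters()
```

Training then proceeds with a standard trainer over the conversational dataset; only the adapter weights receive gradient updates.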

The results showed that the base model was preferred in both quantitative and qualitative evaluations, indicating that fine-tuning with a PEFT method may not provide advantages when both the model and the dataset are already instruction-tuned. Bias evaluation demonstrated clearer benefits: biased output became less frequent and less severe, and the overall tone shifted toward more neutral or positive sentiment.

The study found that PEFT-based fine-tuning could reduce harmful or systematic bias even when it did not enhance general output quality. The findings highlight the importance of dataset and model selection, as well as computational constraints, when applying PEFT methods, offering insight into adapting LLMs under limited resources.