The Impact of AI on Unit Testing within CICD Processes for Software Development
Al-Asili, Akram (2024)
All rights reserved. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-2024111728498
Abstract
This thesis compares the efficiency of unit tests autogenerated by AI with hand-coded ones when integrated into continuous integration (CI) processes. By analyzing test coverage, time taken, error reduction, and compliance with standards, the research shows that AI-generated unit tests perform considerably better than conventional manual tests. The study finds that unit tests generated with the AI tool achieve an average line coverage of 97%, versus 77% for the manual tests. Similarly, branch and complexity coverage are significantly higher for the AI-generated tests. Execution times were also shorter: the AI-generated suite ran in 6 seconds while executing more tests, compared with 14 seconds for the manual suite. Some areas still need improvement; however, error reduction and standards compliance also benefited from the AI-generated tests, as these align with coding conventions more closely than handwritten tests do.
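As an illustration of the kind of coverage compared above, the following minimal sketch shows a small Java method and a unit test suite exercising both of its branches, the sort of case where a coverage tool reports line, branch, and complexity figures in a CI run. The abstract does not name the thesis's tooling or code under test; JUnit 5 and the `Discount` class here are assumptions made purely for illustration.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical class under test; not code from the thesis itself.
class Discount {
    // Two branches: the guard clause and the normal path.
    static double apply(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price - price * percent / 100.0;
    }
}

class DiscountTest {
    @Test
    void appliesDiscountOnNormalPath() {
        // Covers the lines on the non-throwing branch.
        assertEquals(90.0, Discount.apply(100.0, 10.0), 1e-9);
    }

    @Test
    void rejectsOutOfRangePercent() {
        // Covers the guard-clause branch; without this test,
        // branch coverage for apply() would be incomplete.
        assertThrows(IllegalArgumentException.class,
                () -> Discount.apply(100.0, 150.0));
    }
}
```

A suite that exercises both tests reaches full line and branch coverage of `apply`; dropping either one lowers the corresponding metric, which is the difference the coverage comparison in the study captures.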
The results indicate that the use of AI tools can strengthen CI by offering reliable, efficient, and consistent test coverage, which contributes to the creation of high-quality software products. The study also highlights the need to choose and implement AI models with high-quality input data, and to continually monitor and fine-tune them. While AI-generated tests have clear benefits, their impact depends heavily on the performance of the underlying AI systems.