AI-Driven Code Review System: Leveraging Artificial Intelligence for Enhanced Code Quality Assessment and Bug Detection
Rehman, Abdul (2025)
All rights reserved. This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-2025060319591
Abstract
This study investigates techniques for developing AI-based tools that improve software bug detection and code quality assessment. The resulting tools combine static and dynamic program analysis with natural language processing and machine learning to evaluate code against established standards and increase developer productivity. By integrating AI into the code review process, the system enables more accurate error detection, supports the production of higher-quality code, and reduces the time required for manual review.
The study explains how these technologies address existing gaps in the software development life cycle by automating routine operations, allowing developers to concentrate on more complex and important problems. It begins by identifying weaknesses in current code review practices and then examines how AI implementations can build on traditional evaluation techniques. Effective AI models are developed and deployed to identify defective code segments, suggest fixes that restore intended functionality, and provide constructive feedback. Additionally, the research explores the technical challenges of AI-driven solutions and developers' reluctance to adopt AI tools in their personal workflows.