Towards transparent and fair credit risk assessment : SHAP, LIME, and bias analysis on loan data
Peiris, Telge Methruchi Randeshi (2025)
All rights reserved. This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-2025120934198
Abstract
The modern finance industry leverages artificial intelligence to assess credit risk more efficiently. However, these decisions must be interpretable, accountable, and fair to build customer trust and meet regulatory requirements.
This study aimed to develop a predictive loan approval model for managing credit risk and to identify the best-performing model through evaluation metrics and fine-tuning. Explainable AI tools, specifically SHAP and LIME, were used to investigate transparency, while bias metrics were employed to audit the model's fairness.
The analysis demonstrated that SHAP and LIME are effective tools for interpreting models, and that bias metrics are effective for evaluating fairness. Furthermore, LIME-based analysis of how predictions respond to feature changes can make lending decisions more transparent.
Despite these strengths, challenges remain in identifying exactly which changeable and sensitive features improve LIME-based transparency, and in developing fair, interpretable, and efficient AI models for finance that promote transparency and regulatory compliance.
