EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) IN FINANCIAL FRAUD DETECTION: CHALLENGES AND OPPORTUNITIES FOR INCREASING TRANSPARENCY IN BANKING SYSTEMS
DOI: https://doi.org/10.63330/armv1n5-014

Keywords: Explainable Artificial Intelligence, XAI, Fraud Detection, Machine Learning, Algorithmic Transparency, Banking Systems, Compliance, Algorithmic Governance

Abstract
The increasing sophistication of financial fraud has driven banking institutions to adopt artificial intelligence (AI) systems for the automated detection of suspicious transactions. However, the "black-box" nature of traditional machine learning algorithms raises critical questions about transparency, regulatory compliance, and user trust. This article investigates the application of Explainable Artificial Intelligence (XAI) as a means of increasing the interpretability of financial fraud detection systems. Through a systematic literature review and an analysis of practical cases, we examine how XAI techniques can balance predictive effectiveness with algorithmic transparency. The results indicate that tools such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) offer promising paths toward detection systems that are simultaneously accurate and interpretable. The study concludes that implementing XAI in banking systems not only meets growing regulatory demands but also strengthens customer trust and improves the operational efficiency of compliance teams.
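To make the idea behind the LIME approach mentioned above concrete, the sketch below fits a locally weighted linear surrogate around a single prediction of a black-box fraud classifier. The dataset, feature names, perturbation scale, and kernel width are all illustrative assumptions, not the article's methodology or the `lime` library's API; this is a minimal LIME-style illustration using scikit-learn only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical transaction features: amount, hour-of-day, recent-txn count
# (synthetic data; fraud label here depends mainly on feature 0).
X = rng.normal(size=(1000, 3))
y = ((X[:, 0] + 0.5 * X[:, 2]) > 1).astype(int)

# The "black-box" model whose individual decisions we want to explain.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def lime_explain(model, x, n_samples=500, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x (LIME-style)."""
    # 1. Perturb the instance to probe the model's local behavior.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box for its fraud probabilities on the perturbations.
    probs = model.predict_proba(Z)[:, 1]
    # 3. Weight perturbations by proximity to the original instance.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear model on the weighted neighborhood.
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=w)
    return surrogate.coef_  # local per-feature contributions

# Explain an instance near the decision boundary.
weights = lime_explain(model, np.array([1.0, 0.0, 0.0]))
```

The surrogate's coefficients give a compliance analyst a per-feature account of why this particular transaction was flagged, which is exactly the transparency the abstract argues black-box scores lack.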
License
Copyright (c) 2025 José Henrique Salles Pinheiro (Autor)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.