EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) IN FINANCIAL FRAUD DETECTION: CHALLENGES AND OPPORTUNITIES FOR INCREASING TRANSPARENCY IN BANKING SYSTEMS

Authors

  • José Henrique Salles Pinheiro

DOI:

https://doi.org/10.63330/armv1n5-014

Keywords:

Explainable Artificial Intelligence, XAI, Fraud Detection, Machine Learning, Algorithmic Transparency, Banking Systems, Compliance, Algorithmic Governance

Abstract

The increasing sophistication of financial fraud has driven banking institutions to adopt artificial intelligence (AI) systems for automated detection of suspicious transactions. However, the "black-box" nature of traditional machine learning algorithms raises critical questions about transparency, regulatory compliance, and user trust. This article investigates the application of Explainable Artificial Intelligence (XAI) as a solution to increase the interpretability of financial fraud detection systems. Through a systematic literature review and analysis of practical cases, we examine how XAI techniques can balance predictive effectiveness with algorithmic transparency. The results indicate that tools such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) offer promising paths to create detection systems that are simultaneously accurate and interpretable. The study concludes that implementing XAI in banking systems not only meets growing regulatory demands but also strengthens customer trust and improves operational efficiency of compliance teams.
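The SHAP values cited in the abstract are grounded in the Shapley value from cooperative game theory: a feature's contribution is its weighted average marginal effect on the model's output across all feature coalitions. As a minimal sketch of that idea (the `score` function, feature names, and baseline values below are hypothetical, not taken from the article; production systems would use the `shap` library rather than this brute-force enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, instance):
    """Exact Shapley values for a small feature set.

    model: callable taking a dict of feature values and returning a score.
    baseline: feature values representing an "average" transaction.
    instance: the transaction whose score we want to explain.
    """
    features = list(instance)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                # Coalition members take the instance's values; absent
                # features fall back to the baseline.
                with_f = {g: instance[g] if (g in subset or g == f) else baseline[g]
                          for g in features}
                without_f = {g: instance[g] if g in subset else baseline[g]
                             for g in features}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

# Hypothetical linear fraud score over three transaction features.
def score(x):
    return 0.5 * x["amount"] + 0.3 * x["foreign"] + 0.2 * x["night"]

baseline = {"amount": 0.0, "foreign": 0.0, "night": 0.0}
txn = {"amount": 1.0, "foreign": 1.0, "night": 0.0}
print(shapley_values(score, baseline, txn))
```

For a linear model the Shapley values reduce to coefficient times the deviation from the baseline, and they always sum to the difference between the instance's score and the baseline's score (the "local accuracy" property) — which is what makes them useful as per-transaction explanations for compliance analysts.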

References

ACFE - ASSOCIATION OF CERTIFIED FRAUD EXAMINERS. Report to the Nations: 2023 Global Study on Occupational Fraud and Abuse. Austin: ACFE, 2023.

ALVAREZ-MELIS, David; JAAKKOLA, Tommi S. On the robustness of interpretability methods. Proceedings of the 35th International Conference on Machine Learning, v. 80, p. 66-75, 2018.

ARRIETA, Alejandro Barredo et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, v. 58, p. 82-115, 2020.

BAHNSEN, Alejandro Correa et al. Feature engineering strategies for credit card fraud detection. Expert Systems with Applications, v. 51, p. 134-142, 2016.

BANCO CENTRAL DO BRASIL. Resolução nº 4.893, de 26 de fevereiro de 2021. Dispõe sobre a política de gerenciamento de riscos e a política de gerenciamento de capital. Brasília: BCB, 2021.

BANCO CENTRAL DO BRASIL. Relatório de Economia Bancária 2022. Brasília: BCB, 2023.

BOLTON, Richard J.; HAND, David J. Statistical fraud detection: A review. Statistical Science, v. 17, n. 3, p. 235-249, 2002.

BRASIL. Lei nº 13.709, de 14 de agosto de 2018. Lei Geral de Proteção de Dados Pessoais (LGPD). Diário Oficial da União, Brasília, DF, 15 ago. 2018.

CHEN, Zheng et al. Machine learning techniques for credit card fraud detection: A comparative study. Proceedings of the 2018 International Conference on Computer Science and Artificial Intelligence, p. 80-84, 2018.

CHOULDECHOVA, Alexandra. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, v. 5, n. 2, p. 153-163, 2017.

DOSHI-VELEZ, Finale; KIM, Been. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017.

GDPR - GENERAL DATA PROTECTION REGULATION. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016. Official Journal of the European Union, L 119/1, 2016.

GUIDOTTI, Riccardo et al. A survey of methods for explaining black box models. ACM Computing Surveys, v. 51, n. 5, p. 1-42, 2018.

GUNNING, David; AHA, David W. DARPA's explainable artificial intelligence program. AI Magazine, v. 40, n. 2, p. 44-58, 2019.

LIU, Fei Tony; TING, Kai Ming; ZHOU, Zhi-Hua. Isolation forest. In: 2008 Eighth IEEE International Conference on Data Mining. IEEE, 2008. p. 413-422.

LUNDBERG, Scott M.; LEE, Su-In. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, v. 30, p. 4765-4774, 2017.

MEHRABI, Ninareh et al. A survey on bias and fairness in machine learning. ACM Computing Surveys, v. 54, n. 6, p. 1-35, 2021.

MOLNAR, Christoph. Interpretable machine learning: A guide for making black box models explainable. 2. ed. München: Christoph Molnar, 2019.

MURDOCH, W. James et al. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, v. 116, n. 44, p. 22071-22080, 2019.

PHUA, Clifton et al. A comprehensive survey of data mining-based fraud detection research. arXiv preprint arXiv:1009.6119, 2010.

RIBEIRO, Marco Tulio; SINGH, Sameer; GUESTRIN, Carlos. "Why should I trust you?" Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, p. 1135-1144, 2016.

RUDIN, Cynthia. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, v. 1, n. 5, p. 206-215, 2019.

WEST, Jarrod; BHATTACHARYA, Maumita. Intelligent financial fraud detection: A comprehensive review. Computers & Security, v. 57, p. 47-66, 2016.

ZHANG, Xiang et al. Deep learning for fraud detection: A survey. IEEE Access, v. 6, p. 3097-3118, 2018.

Published

2025-07-24

How to Cite

EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) IN FINANCIAL FRAUD DETECTION: CHALLENGES AND OPPORTUNITIES FOR INCREASING TRANSPARENCY IN BANKING SYSTEMS. (2025). Aurum Revista Multidisciplinar, 1(5), 166-183. https://doi.org/10.63330/armv1n5-014