Explainable Artificial Intelligence: Analysis of Methodologies and Applications
Date
2025-10-22
Authors
Pezzini, Maria Cecilia
Pons, Claudia Fabiana
Publisher
Facultad de Informática, Universidad Nacional de La Plata
Abstract
Explainability is essential in healthcare, finance, and security, where black-box models can undermine trust and decisions. Recent advances in eXplainable Artificial Intelligence (XAI) across structured/tabular data, computer vision, and natural language processing are surveyed. Thirty articles (2022–2024) were selected through a structured search with explicit inclusion criteria, and emerging approaches are compared with established techniques such as LIME and SHAP, alongside rule-, logic-, and ontology-based methods. Methods are organized along key dimensions—post-hoc vs. ante-hoc, model-agnostic vs. model-specific, scope, problem type, input data, and output format—and their effectiveness and applicability are evaluated. The review highlights innovations including spatially explainable architectures (e.g., SAMCNet) and entropy-based logic explanations, and identifies persistent challenges in robustness, cross-domain generalization, and deployment. Overall, findings consolidate the evolving XAI landscape and indicate directions toward reproducible techniques that strengthen transparency, accountability, and user trust in AI systems.
Keywords
artificial intelligence
explainability
explainable artificial intelligence
machine learning
Citation
Pezzini, M. C., & Pons, C. F. (2025). Explainable Artificial Intelligence: Analysis of Methodologies and Applications. Journal of Computer Science and Technology, 25(2), e07.