Explainable Artificial Intelligence (XAI) for Transparent Decision Systems

  • Authors

    • Dr. Leena, Department of Artificial Intelligence and Data Science, Ramaiah Institute of Technology, Bangalore, India
    • Ragav Chandran, Department of Artificial Intelligence and Data Science, Ramaiah Institute of Technology, Bangalore, India

    Published 2026-01-05

  • Keywords: Explainable Artificial Intelligence, Transparency, Interpretability, Trustworthy AI, Decision Support Systems, Machine Learning Ethics

    Section: Articles

    How to Cite

    [1] Leena and R. Chandran, “Explainable Artificial Intelligence (XAI) for Transparent Decision Systems”, IJADSMC, vol. 1, no. 1, pp. 40–52, Jan. 2026. Accessed: Mar. 02, 2026. [Online]. Available: https://worldcometresearchgroup.com/index.php/ijadsmc/article/view/51
  • Abstract

    Explainable Artificial Intelligence (XAI) has become a key research focus as increasingly intricate machine learning and deep learning models are deployed in high-stakes settings. Although contemporary artificial intelligence (AI) methods achieve impressive predictive accuracy, their opaque (black-box) nature raises serious concerns about transparency, trust, accountability, and regulatory compliance. This lack of interpretability is a critical drawback because sensitive domains such as healthcare, finance, autonomous systems, and public governance require AI-supported decisions to be comprehensible and explainable to the humans involved. XAI addresses these dilemmas by developing methods and frameworks that allow human operators to understand, trust, and effectively manage AI-driven decisions. XAI aims not only to produce explanations, but to make those explanations meaningful, faithful to the underlying model, and usable by diverse groups of users such as domain experts, developers, and policymakers. By enabling transparency, XAI supports ethical AI, reduces bias, improves debugging and model validation, and facilitates compliance with emerging regulatory frameworks such as the General Data Protection Regulation (GDPR). This paper presents a thorough discussion of XAI as applied to transparent decision systems. It begins with an introduction to the motivation for and conceptualization of explainability in AI, and then provides a comprehensive literature review of model-specific and model-agnostic explainability methods. The proposed methodology combines local and global explanatory approaches within a transparency governance framework. Experimental findings show that XAI techniques can enhance interpretability without a major loss in predictive accuracy. Finally, the paper discusses practical implications, limitations, and directions for future research on explainable and trustworthy AI systems.
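
    As a concrete illustration of the local-plus-global pattern described above, the sketch below pairs a global permutation-importance view of a model with a local SHAP attribution for a single prediction. It is a minimal sketch, assuming Python with scikit-learn and the shap package and a standard tabular dataset; the dataset, model, and hyperparameters are illustrative choices, not the paper's experimental setup.

        # Minimal local + global explanation sketch (illustrative assumptions:
        # scikit-learn's breast-cancer dataset, a random forest, and the shap
        # package; none of these are taken from the paper itself).
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split
        import shap

        # Fit an opaque ("black-box") model on a tabular dataset.
        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        # Global explanation: which features matter to the model overall?
        result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                        random_state=0)
        top = sorted(zip(X.columns, result.importances_mean),
                     key=lambda pair: -pair[1])[:5]
        for name, score in top:
            print(f"global importance of {name}: {score:.4f}")

        # Local explanation: why did the model score this one instance as it did?
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X_test.iloc[:1])
        print("local SHAP attributions for one instance:", shap_values)

    Permutation importance answers "which features drive the model overall?", while the per-instance SHAP values answer "why this particular prediction?"; pairing the two is one common way to realize a combined local/global explanation methodology.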

