VisionCam: A Comprehensive XAI Toolkit for Interpreting Image-Based Deep Learning Models


Walid Abdullah
Ahmed Tolba
Ahmed Elmasry
Nihal N. Mostafa

Abstract

Artificial intelligence (AI), a rapidly developing technology, has revolutionized many aspects of our lives. However, the inner workings of many AI models remain opaque, a problem frequently described as the "black box." This lack of explainability hinders responsible AI research and erodes public confidence, particularly in critical domains, and it is accompanied by a growing demand for transparency and interpretability in AI decision-making. In response, this paper introduces VisionCam, an extensible Python toolkit for explainable AI (XAI). The toolkit comprises nine state-of-the-art techniques for explaining the decisions of AI models (especially deep learning models) in image processing: GradCAM, GradCAM++, GradCAMElementWise, HiResCAM, RespondCAM, ScoreCAM, SmoothGradCAM++, XGradCAM, and AblationCAM. Each technique offers a distinct perspective on the decision-making process of deep learning models that operate on image data, addressing a different aspect of interpretability. Through case studies, we demonstrate the toolkit's impact on improving transparency and interpretability in AI systems that analyze visual information. The source code for the VisionCam toolkit is accessible at https://github.com/VisionCAM.
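All nine methods in the toolkit belong to the class-activation-mapping (CAM) family, which weights a convolutional layer's activation maps by class-specific importance scores and sums them into a saliency heatmap. As an illustration of the underlying idea (a minimal sketch, not the VisionCam API, which may differ), the following PyTorch snippet implements the original GradCAM, where each channel's weight is the spatial average of the gradient of the target class score; the model and hook names here are assumptions chosen for the example.

import torch
import torch.nn.functional as F
from torchvision import models

# Any CNN with a 2D feature map works; ResNet-18 is used here for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]  # last convolutional block

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(x, class_idx=None):
    """Return a GradCAM heatmap (H, W) for an input batch x of shape (1, 3, H, W)."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Global-average-pool the gradients to get per-channel weights (alpha_k in the paper).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of activation maps, then ReLU (the GradCAM definition).
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    # Upsample to input resolution and normalize to [0, 1] for visualization.
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0], class_idx

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image tensor
heatmap, cls = grad_cam(x)

The other variants in the toolkit differ mainly in how the channel weights are computed: GradCAM++ and SmoothGradCAM++ refine the gradient-based weighting, ScoreCAM replaces gradients with forward-pass confidence scores, and AblationCAM measures the drop in the class score when each activation map is ablated.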

Article Details

How to Cite
Abdullah, W. (2024) “VisionCam: A Comprehensive XAI Toolkit for Interpreting Image-Based Deep Learning Models”, Sustainable Machine Intelligence Journal, 8(4), pp. 46–55. doi:10.61356/SMIJ.2024.8290.
Section
Original Article
