Overview

Aims and Scope

The Explainable Artificial Intelligence in Healthcare (XAIH) journal is an international, peer-reviewed scholarly journal that focuses on advancing the understanding, development, and application of transparent and interpretable artificial intelligence (AI) systems in healthcare. As AI technologies increasingly influence clinical decision-making, diagnosis, and patient management, XAIH serves as a dedicated platform for researchers, clinicians, data scientists, and policymakers to explore how explainable AI (XAI) can ensure trust, accountability, and fairness in healthcare environments.

The journal bridges the gap between AI innovation and medical ethics, promoting interdisciplinary collaboration across computer science, medicine, data analytics, cognitive science, and bioinformatics. Through its publications, XAIH aims to advance safe, interpretable, and human-centric AI models that empower medical professionals and enhance patient care through transparency and understanding.

Note: The journal does not consider systematic literature review (SLR) submissions.

The Explainable Artificial Intelligence in Healthcare (XAIH) journal aims to promote research and innovation at the intersection of AI, transparency, and healthcare. It provides a global forum for the publication of original studies, reviews, and practical applications that demonstrate how explainable and trustworthy AI models can be effectively integrated into clinical workflows, diagnostics, and healthcare management. The journal supports the ethical deployment of AI by fostering research that prioritizes interpretability, fairness, and accountability in medical decision-making systems.

Topics of interest include (but are not limited to):

  • Interpretable deep learning architectures for medical imaging and genomics.

  • Transparency, bias mitigation, and fairness in healthcare AI systems.

  • Human-AI interaction and clinical decision support systems.

  • Trust, ethics, and accountability in algorithmic healthcare.

  • Regulatory and policy frameworks for AI explainability in medicine.

  • Explainable machine learning models for clinical diagnosis and prediction.

  • Visualization, reasoning, and explainability tools for medical practitioners.
