How informative is your XAI? Assessing the quality of explanations through information power
A growing consensus emphasizes the efficacy of user-centered and personalized approaches within the field of explainable artificial intelligence (XAI). The proliferation of diverse explanation strategies in recent years promises to improve the interaction between humans and explainable agents. This...
Main Authors: Marco Matarese, Francesco Rea, Katharina J. Rohlfing, Alessandra Sciutti
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-01-01
Series: Frontiers in Computer Science
Online Access: https://www.frontiersin.org/articles/10.3389/fcomp.2024.1412341/full
Similar Items
- Dual feature-based and example-based explanation methods
  by: Andrei Konstantinov, et al.
  Published: (2025-02-01)
- Retrospectively understanding the multifaceted interplay of COVID-19 outbreak, air pollution, and sociodemographic factors through explainable AI
  by: Mohmmed Talib, et al.
  Published: (2025-03-01)
- Computer-aided cholelithiasis diagnosis using explainable convolutional neural network
  by: Dheeraj Kumar, et al.
  Published: (2025-02-01)
- Balancing Explainability and Privacy in Bank Failure Prediction: A Differentially Private Glass-Box Approach
  by: Junyoung Byun, et al.
  Published: (2025-01-01)
- Integration and Application of Information and Energy Technologies in Computing Power Integrated Energy Systems
  by: ZHANG Tian, GAO Jianwei, LIU Haoyu, LIU Jiangtao, TAN Qinliang
  Published: (2025-02-01)