How informative is your XAI? Assessing the quality of explanations through information power
A growing consensus emphasizes the efficacy of user-centered and personalized approaches within the field of explainable artificial intelligence (XAI). The proliferation of diverse explanation strategies in recent years promises to improve the interaction between humans and explainable agents. This...
Main Authors: | Marco Matarese, Francesco Rea, Katharina J. Rohlfing, Alessandra Sciutti |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2025-01-01 |
Series: | Frontiers in Computer Science |
Subjects: | explainable artificial intelligence; XAI objective assessment; human-in-the-loop; information power; qualitative explanations' quality |
Online Access: | https://www.frontiersin.org/articles/10.3389/fcomp.2024.1412341/full |
_version_ | 1825206629179588608 |
---|---|
author | Marco Matarese; Francesco Rea; Katharina J. Rohlfing; Alessandra Sciutti |
author_facet | Marco Matarese; Francesco Rea; Katharina J. Rohlfing; Alessandra Sciutti |
author_sort | Marco Matarese |
collection | DOAJ |
description | A growing consensus emphasizes the efficacy of user-centered and personalized approaches within the field of explainable artificial intelligence (XAI). The proliferation of diverse explanation strategies in recent years promises to improve the interaction between humans and explainable agents. This poses the challenge of assessing the goodness and efficacy of the proposed explanation, which so far has primarily relied on indirect measures, such as the user's task performance. We introduce an assessment task designed to objectively and quantitatively measure the goodness of XAI systems, specifically in terms of their “information power.” This metric aims to evaluate the amount of information the system provides to non-expert users during the interaction. This work has a three-fold objective: to propose the Information Power assessment task, provide a comparison between our proposal and other XAI goodness measures with respect to eight characteristics, and provide detailed instructions to implement it based on researchers' needs. |
format | Article |
id | doaj-art-7d0fc0fd55144743a7d8727626cf641a |
institution | Kabale University |
issn | 2624-9898 |
language | English |
publishDate | 2025-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Computer Science |
spelling | doaj-art-7d0fc0fd55144743a7d8727626cf641a; 2025-02-07T07:43:49Z; eng; Frontiers Media S.A.; Frontiers in Computer Science; 2624-9898; 2025-01-01; 6; 10.3389/fcomp.2024.1412341; 1412341; How informative is your XAI? Assessing the quality of explanations through information power; Marco Matarese (CONTACT Unit, Italian Institute of Technology, Genoa, Italy); Francesco Rea (CONTACT Unit, Italian Institute of Technology, Genoa, Italy); Katharina J. Rohlfing (Faculty of Arts and Humanities, Paderborn University, Paderborn, Germany); Alessandra Sciutti (CONTACT Unit, Italian Institute of Technology, Genoa, Italy); A growing consensus emphasizes the efficacy of user-centered and personalized approaches within the field of explainable artificial intelligence (XAI). The proliferation of diverse explanation strategies in recent years promises to improve the interaction between humans and explainable agents. This poses the challenge of assessing the goodness and efficacy of the proposed explanation, which so far has primarily relied on indirect measures, such as the user's task performance. We introduce an assessment task designed to objectively and quantitatively measure the goodness of XAI systems, specifically in terms of their “information power.” This metric aims to evaluate the amount of information the system provides to non-expert users during the interaction. This work has a three-fold objective: to propose the Information Power assessment task, provide a comparison between our proposal and other XAI goodness measures with respect to eight characteristics, and provide detailed instructions to implement it based on researchers' needs.; https://www.frontiersin.org/articles/10.3389/fcomp.2024.1412341/full; explainable artificial intelligence; XAI objective assessment; human-in-the-loop; information power; qualitative explanations' quality |
spellingShingle | Marco Matarese; Francesco Rea; Katharina J. Rohlfing; Alessandra Sciutti; How informative is your XAI? Assessing the quality of explanations through information power; Frontiers in Computer Science; explainable artificial intelligence; XAI objective assessment; human-in-the-loop; information power; qualitative explanations' quality |
title | How informative is your XAI? Assessing the quality of explanations through information power |
title_full | How informative is your XAI? Assessing the quality of explanations through information power |
title_fullStr | How informative is your XAI? Assessing the quality of explanations through information power |
title_full_unstemmed | How informative is your XAI? Assessing the quality of explanations through information power |
title_short | How informative is your XAI? Assessing the quality of explanations through information power |
title_sort | how informative is your xai assessing the quality of explanations through information power |
topic | explainable artificial intelligence; XAI objective assessment; human-in-the-loop; information power; qualitative explanations' quality |
url | https://www.frontiersin.org/articles/10.3389/fcomp.2024.1412341/full |
work_keys_str_mv | AT marcomatarese howinformativeisyourxaiassessingthequalityofexplanationsthroughinformationpower AT francescorea howinformativeisyourxaiassessingthequalityofexplanationsthroughinformationpower AT katharinajrohlfing howinformativeisyourxaiassessingthequalityofexplanationsthroughinformationpower AT alessandrasciutti howinformativeisyourxaiassessingthequalityofexplanationsthroughinformationpower |