Moving beyond post hoc explainable artificial intelligence: a perspective paper on lessons learned from dynamical climate modeling

Bibliographic Details
Main Authors: R. J. O'Loughlin (Philosophy Department, Queens College, City University of New York, New York, NY 11367, USA); D. Li (Department of Philosophy, Baruch College, City University of New York, New York, NY 10010, USA); R. Neale (National Center for Atmospheric Research, Boulder, CO 80305, USA); T. A. O'Brien (Department of Earth and Atmospheric Sciences, Indiana University, Bloomington, IN 47405, USA; Lawrence Berkeley Lab Climate and Ecosystem Sciences Division, Berkeley, CA 94720, USA)
Format: Article
Language: English
Published: Copernicus Publications, 2025-02-01
Series: Geoscientific Model Development, vol. 18, pp. 787–802
DOI: 10.5194/gmd-18-787-2025
ISSN: 1991-959X; 1991-9603
Collection: DOAJ
Online Access: https://gmd.copernicus.org/articles/18/787/2025/gmd-18-787-2025.pdf
Full description
AI models are criticized as being black boxes, potentially subjecting climate science to greater uncertainty. Explainable artificial intelligence (XAI) has been proposed to probe AI models and increase trust. In this review and perspective paper, we suggest that, in addition to using XAI methods, AI researchers in climate science can learn from past successes in the development of physics-based dynamical climate models. Dynamical models are complex but have gained trust because their successes and failures can sometimes be attributed to specific components or sub-models, such as when model bias is explained by pointing to a particular parameterization. We propose three types of understanding as a basis to evaluate trust in dynamical and AI models alike: (1) instrumental understanding, which is obtained when a model has passed a functional test; (2) statistical understanding, obtained when researchers can make sense of the modeling results using statistical techniques to identify input–output relationships; and (3) component-level understanding, which refers to modelers' ability to point to specific model components or parts in the model architecture as the culprit for erratic model behaviors or as the crucial reason why the model functions well. We demonstrate how component-level understanding has been sought and achieved via climate model intercomparison projects over the past several decades. Such component-level understanding routinely leads to model improvements and may also serve as a template for thinking about AI-driven climate science. Currently, XAI methods can help explain the behaviors of AI models by focusing on the mapping between input and output, thereby increasing the statistical understanding of AI models. Yet, to further increase our understanding of AI models, we will have to build AI models that have interpretable components amenable to component-level understanding. We give recent examples from the AI climate science literature to highlight some recent, albeit limited, successes in achieving component-level understanding and thereby explaining model behavior. The merit of such interpretable AI models is that they serve as a stronger basis for trust in climate modeling and, by extension, downstream uses of climate model data.
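To make the abstract's notion of "statistical understanding" concrete, the sketch below shows permutation importance, one common XAI-style statistical technique for probing the input–output relationships of a trained model. It is not taken from the paper; the synthetic data, the stand-in model, and all names here are hypothetical illustrations only.

```python
# Minimal sketch (illustrative, not from the paper): permutation importance
# as one statistical technique for identifying input-output relationships
# in a trained model, i.e., the "statistical understanding" described above.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: two informative drivers plus one pure-noise field.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

# Stand-in "trained model": an ordinary least-squares fit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(inputs):
    """Stand-in for a trained model's forward pass."""
    return inputs @ coef

def mse(a, b):
    return float(np.mean((a - b) ** 2))

baseline = mse(predict(X), y)

# Shuffle one input at a time and measure how much prediction error grows;
# a larger increase indicates a stronger input-output relationship.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    print(f"input {j}: importance = {mse(predict(X_perm), y) - baseline:.3f}")
```

Note that this kind of probe treats the model as a black box: it ranks inputs by influence but cannot attribute behavior to any internal component, which is exactly the gap the paper's "component-level understanding" is meant to address.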