Article

Uncertainty in XAI: Human Perception and Modeling Approaches

1  Berliner Hochschule für Technik, Department of Data Science, Luxemburger Str. 10, 13353 Berlin, Germany
2  Einstein Center Digital Future, Wilhelmstraße 67, 10117 Berlin, Germany
*  Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2024, 6(2), 1170-1192; https://doi.org/10.3390/make6020055
Submission received: 14 April 2024 / Revised: 6 May 2024 / Accepted: 20 May 2024 / Published: 27 May 2024

Abstract:
Artificial Intelligence (AI) plays an increasingly integral role in decision-making processes. In order to foster trust in AI predictions, many approaches towards explainable AI (XAI) have been developed and evaluated. Surprisingly, one factor that is essential for trust has been underrepresented in XAI research so far: uncertainty, both with respect to how it is modeled in Machine Learning (ML) and XAI as well as how it is perceived by humans relying on AI assistance. This review paper provides an in-depth analysis of both aspects. We review established and recent methods to account for uncertainty in ML models and XAI approaches and we discuss empirical evidence on how model uncertainty is perceived by human users of XAI systems. We summarize the methodological advancements and limitations of methods and human perception. Finally, we discuss the implications of the current state of the art in model development and research on human perception. We believe highlighting the role of uncertainty in XAI will be helpful to both practitioners and researchers and could ultimately support more responsible use of AI in practical applications.

1. Introduction

The increasing adoption of ML models across various domains necessitates interpretability and trust. While XAI approaches aim to shed light on model decision making, a crucial aspect often remains opaque: uncertainty. Ignoring uncertainty can lead to overconfident and potentially risky decisions. Conversely, effectively communicating uncertainty within XAI frameworks can enhance trust and facilitate responsible use of AI.
Uncertainty in AI systems stems from various sources. Data limitations play a significant role. Real-world data often contain noise, biases, and inherent randomness. Models trained on such data inherit these uncertainties and may struggle with generalizing to unseen situations. Additionally, the inherent complexity of some models themselves can contribute to uncertainty. Deep Learning models, for instance, can be highly non-linear, making it challenging to pinpoint the precise reasoning behind a prediction. Furthermore, the inherent complexity of the world we aim to model can also be a source of uncertainty. Real-world phenomena are often noisy and subject to unforeseen variables, which AI systems may struggle to capture perfectly. Unlike mathematics, where clear, logical steps lead to a definite outcome, many real-world domains often defy such clear-cut explanations, such that even highly experienced specialists may struggle to articulate the intricate reasoning behind their decisions, relying on a blend of intuition, experience, and incomplete information. All these uncertainty sources and their standard categorization into aleatoric vs. epistemic are discussed thoroughly in Section 3.
Uncertainty quantification has been studied extensively in ML and statistics. Established approaches include, for instance, model agnostic calibration methods such as conformal prediction, which aim to calibrate the unnormalized outputs of neural networks to represent true uncertainty estimates [1]. Additionally, techniques for specific ML model classes were proposed, such as Bayesian Neural Networks (BNNs), which model the uncertainty of individual model parameters in order to generate probability distributions for model predictions [2]. These and other established approaches will be further explored in Section 4. However, a significant research gap exists in measuring the uncertainty associated with explanations generated by XAI methods, regardless of whether they are post hoc explanations or inherently derived from interpretable models. This paper aims to address this gap by providing an overview of the concept of uncertainty in explanations, exploring methods for its modeling and measurement and discussing effective communication strategies for conveying uncertainty to the target audience.
Understanding how humans perceive and interact with uncertainty information presented through XAI is pivotal. Research in this area explores the correlation between the level of uncertainty conveyed by the XAI system and the user’s own level of uncertainty or trust in the explanation and prediction [3]. Currently, there is a lack of human studies investigating the users’ perception of explanations, when the uncertainty in those explanations is also presented to them. One hypothesis is that users are more likely to trust and accept a prediction when an XAI system effectively communicates uncertainty. On the other hand, a lack of awareness of uncertainty can cause users to over- or underestimate the reliability of model predictions, potentially leading to poor decisions. These aspects will be discussed in Section 5.
Note that this study focuses on uncertainty of explanations and how they are perceived by human subjects. A complementary interesting research angle would be explanations of model uncertainty—for instance to debug models or clean data. For the sake of clarity, we did not include this research and refer the interested reader to, e.g., [4,5,6,7].
The following sections first provide a summary of the most relevant concepts of XAI (Section 2) and uncertainty modeling in ML (Section 3.1). In Section 3.2, we motivate in more detail why uncertainty in XAI is a timely and relevant subject of research. We continue in Section 4 with a comprehensive summary of recent approaches towards uncertainty modeling in XAI, along with a detailed conceptualization of approaches and a formal treatment of modeling decisions. In Section 5, we discuss empirical evidence on the impact of uncertainty in explanations on human cognition, followed by a summarizing discussion in Section 6. To the best of our knowledge, this is the first comprehensive review that integrates both the modeling aspect of uncertainty in explanations and the human perceptual dimension of explanatory uncertainty.

2. What Is XAI?

Over the past several decades, and particularly in recent years, automated decision systems have found their way into an increasing array of high-stakes and safety-critical domains, such as self-driving cars, medical diagnosis, and legal proceedings. While these intelligent artificial agents often produce remarkably accurate results, even surpassing human performance in some cases, their sheer complexity often exceeds our ability to fully comprehend their inner workings. The substantial research and funding dedicated to AI development have yielded models of immense scale; consider, for instance, GPT 3.5 with its 175 billion parameters and its successor GPT 4, boasting a staggering 1.8 trillion parameters (https://the-decoder.com/gpt-4-architecture-datasets-costs-and-more-leaked (accessed on 19 May 2024)). At such unprecedented magnitudes, it becomes increasingly challenging to track the intricate decision-making processes employed by these systems. This opacity raises significant concerns regarding the transparency of software products developed by major technology companies and their potential impact on critical decision-making scenarios.
Fortunately, there appears to be a burgeoning consciousness and sense of urgency regarding this matter within the broader AI community and at the highest levels of governing bodies. The General Data Protection Regulation (GDPR), introduced in May 2018, marked a significant step forward by establishing the “right of explanation” for users, granting them the entitlement to obtain “meaningful explanations of the logic involved” in automated decision-making processes [8,9]. More recently, in December 2023, the EU AI Act was adopted, classifying AI applications according to their inherent risks (https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 19 May 2024)). This categorization aims to establish a framework for regulating AI development and deployment, ensuring that AI systems are designed and utilized in a responsible and ethical manner.
As for the nomenclature, a nuanced debate persists within the XAI literature regarding the precise distinction between the terms “explainability” and “interpretability”. While some authors employ these terms synonymously, others advocate for a more granular differentiation, i.e., further differentiating explainability and interpretability from “transparency” [10] or “comprehensibility” [9,11,12]. In this paper, we will only be using the first two standard terms. For example, ref. [13] associates interpretability with the end-user’s ability to grasp cause-and-effect relationships within the AI system, independent of comprehending the intricate internal logic that generates specific outputs. Conversely, explainability focuses on elucidating the inner workings of the AI system itself and requires an understanding of how ML/AI systems are built. On a similar note, [10] differentiates between the terms explanation as “the information provided by a system to outline the cause and reason for a decision” and interpretation as “the understanding gained by an agent with regard to the cause for a system’s decision when presented with an explanation”.

2.1. Why XAI?

As discussed in [12], the priority of XAI is the development of human-interpretable models or techniques to render models’ predictions interpretable, particularly in sensitive domains such as healthcare, finance, or the military. This focus stems from the dual needs of domain experts: effective problem-solving assisted by AI and a level of transparency that fosters trust in the provided solutions. However, examining interpretable outputs benefits not only domain specialists; the value of explanations extends beyond individual stakeholders. For end-users (possibly laymen), explanations are paramount for calibrating their trust in the AI system. Ideally, explanations should render the system’s decision-making process understandable and transparent. For AI model developers, in the case of incorrect or questionable outputs, explanations can prompt further investigation and debugging cycles to uncover hidden biases, identify blind spots, and potentially improve the system’s performance. Lastly, as noted above, the growing emphasis on explainability is (also) driven by legal and regulatory considerations. Enforcing the EU’s right of explanation necessitates transparency and understandability in AI systems, ensuring accountability and mitigating potential risks for the “AI-consumers”. All in all, it appears that only trustworthy AI solutions will be widely accepted by users and industries in the long run [14]. Yet, developing trustworthy models poses notable challenges for all the parties involved, since a reliable intelligent system is expected to fulfill several desiderata, as discussed in the following section.

2.2. Requirements of Explanations

With these various applications and purposes in mind, the XAI community has been debating for several years which qualities good explanations should fulfill [12,15,16,17,18,19,20,21,22,23,24,25]. While some debate persists, a degree of consensus has been reached concerning the importance of several key explanation properties. Firstly, faithfulness (or fidelity) ensures that the XAI method accurately reflects the underlying model’s decision-making process. Secondly, stability necessitates consistency in the explanations generated for the same input and model across multiple executions. Thirdly, robustness dictates that slight variations in the input or model predictions should not lead to significant changes in the explanations themselves, provided the underlying model itself is also robust.
Another important desideratum of explanations is usefulness for humans in assistive decision making, often measured by the impact of explanations in annotation tasks [23]. Usefulness could be regarded as the most relevant quality criterion of explanations: the very reason why researchers develop XAI methods is to enable humans to make better AI-assisted decisions. While faithfulness, stability and robustness have the decisive advantage that they are relatively easy to measure, they are not related to how useful or comprehensible an explanation is to humans [26]. Measuring usefulness for humans requires studies with humans in the loop, which is time-consuming and cost-intensive. It is reasonable to assume that faithfulness, stability, and robustness are popular metrics just because they are easy to measure and not because they measure what XAI methods would ideally optimize.
Depending on the application case, the human subjects may be recruited either as AI practitioners or domain experts or laypersons, as further discussed in Section 2.6. The participants’ level of expertise is an important factor to consider when designing such experiments (as shown in [27]), but also their personality traits (see [28]), which adds a new dimension to the challenges of building efficient user studies.
Beyond these foundational properties, the contemporary XAI discourse increasingly emphasizes additional requirements. Fairness considerations ensure, on the one hand, that explanations are inherently unbiased and do not produce additional discriminatory correlations between the model’s predictions and its inputs. On the other hand, fair explanations should ideally inform the user about potential biases learned by the predictor. Privacy concerns necessitate that explanations do not reveal sensitive information about the model or the data it is trained on. This focus on broader ethical considerations reflects the growing interest in AI regulation, particularly within the political sphere, where policymakers strive to govern the responsible deployment of AI at scale, as mentioned at the beginning of this section.

2.3. Categorization of XAI Methods

The catalog of XAI methods has been growing steadily throughout the past years, with more and more new explainability approaches being developed every year, especially as new breakthroughs in foundation models such as text-to-image or conversational AI are achieved. Already, there have been exhaustive survey/review papers released aiming at structuring the wide array of XAI techniques available nowadays [9,11,12,13,19,29,30,31].
The taxonomy of XAI techniques can be built upon various categorization criteria (see Figure 1). For instance, one can distinguish between methods that explain the model locally (one instance at a time), e.g., saliency maps computed via methods such as Integrated Gradients [32], or globally (attempting to explain the decision-making mechanisms as a whole), e.g., Partial Dependence Plots [31]. Secondly, there are XAI methods tailored for only a specific type of model. For instance, some can only be applied to the layers of a (convolutional) neural network, such as Testing with Concept Activation Vectors (TCAV) [33]; these are called model specific. More versatile methods that can be applied to any model are referred to as model agnostic, for example Local Interpretable Model-agnostic Explanations (LIME) [34]. Thirdly, post hoc explanations are delivered after the model has been fully trained, e.g., Individual Conditional Expectation (ICE) plots [31], while inherently interpretable models are designed such that their decisions and the corresponding explanations can be understood by humans simply by inspecting model parameters, such as coefficients of parametric models, e.g., Decision Trees, or intermediate layer weights, e.g., Concept Bottleneck Models [35]. Lastly, as far as the explanation format is concerned, XAI methods can be categorized into feature attribution methods (highlighting relevant features in the input for the prediction), e.g., saliency maps, feature visualization methods (visualizing in a humanly intuitive manner what features the model has learned during training), e.g., Network Dissection [36], concept-based methods (formulating explanations with the help of easily understandable concepts) such as TCAV [33], example-based methods (searching for similar samples that led to similar predictions), e.g., Representer Point Selection [37], or text-based methods (captioning/describing the input via text) such as NLX-GPT [38].

2.4. Predictive Performance vs. Explainability

The notion of explainability as a cornerstone of responsible AI development raises a compelling question: why not simply employ ML models that inherently possess interpretability, such as k-Nearest-Neighbors or Decision Trees? While these algorithms are much easier to understand and debug compared to Deep Neural Networks (DNNs), they often fall short in achieving the same level of predictive performance. This trade-off between performance and explainability is a well-known impediment in the field of AI, where the complexity of a model often correlates directly with its predictive performance, while its transparency inversely relates to its complexity (see Figure 2).

2.5. Human Perception and XAI

One of the most fundamental assumptions of XAI research is that explanations improve human–AI team performance. Hence, a key research question in XAI is how explanations impact human–AI collaboration. While there is a large and ever growing body of literature on this topic (see the exhaustive reviews in [40,41]), the results are often complex and suggest that the impact of explanations on human perception depends on a variety of factors.
There are a number of studies that report positive effects of explanations on human perception [23,41,42,43,44,45,46,47]. These experiments investigated tasks in various modalities, ranging from asking users to detect hate speech or predict students’ grades to having them judge an agent’s performance in frozen frames of video games or decide on (virtual) patients’ insulin intake.
Other studies report the opposite effect: explanations either do not have an exclusively positive impact or may even have a negative impact on human subjects in cognitive tasks [48,49,50,51]. Performance drops in cognitive tasks are often attributed to the higher cognitive load imposed when explanations are provided [51].
In general, the negative impact of explanations on human performance in cognitive tasks can be categorized into algorithm aversion [52], meaning that subjects tend not to trust the AI, and blind trust, meaning that subjects follow even wrong predictions of an assistive AI. We return to this distinction in Section 3.2. Not all studies on human perception and XAI investigate this difference when they analyze the impact of XAI. However, there is empirical evidence suggesting that blind trust could be increased when explanations are provided. These effects can be associated with psychological traits such as risk aversion [28]. In this respect, other studies [53] have investigated how human subjects can correctly use their intuition to reduce their overreliance on AI assistance.

2.6. Evaluation of Explanations

Despite decades of research and a growing body of literature [22,29,54,55,56,57], the evaluation of XAI methods remains to this day a subject of active research. Two primary paradigms dominate the discourse: fully automated evaluation and human-in-the-loop (HIL) approaches. Proponents of the former approach advocate for quantitative metrics to assess explanations objectively [15,58,59]. However, defining universally applicable metrics that capture the essence of explanations and allow comparisons across diverse XAI methods proves challenging. The latter approach emphasizes the importance of human judgment through experiments involving human subjects [60,61,62,63,64]. However, scalability and reproducibility are noteworthy concerns. Additionally, reliance on intuition and potential biases in human subjects, especially when recruited through online crowdsourcing platforms, such as Amazon Mechanical Turk or Toloka, which are largely unregulated, can introduce subjectivity and limit the generalizability of findings [65]. A recent extensive survey of human-based XAI evaluations is given in [40].
In [19], the authors discuss the importance of bridging both worlds of XAI evaluation and propose a three-level categorization of interpretability evaluation: application-grounded (involving human domain experts), human-grounded (including laypersons as end-users), and functionally grounded (using proxy measures for quantifying the quality of explanations).
Regardless of the evaluation paradigm (with or without humans), several works draw attention to the perils of conflating the evaluation of an explanation’s plausibility with the evaluation of its correctness. A plausible explanation is not guaranteed to faithfully mirror the decision-making processes of the model [66]. The authors of [67] argue that “An explanation that seems unreasonable could indicate either a failure of the system to process information in a reasonable way, or it could indicate the failure of the explanation generator to create a reasonable description”. Visual inspection might not distinguish between the two possibilities in all cases.

3. What Is Uncertainty?

Uncertainty, as defined by [68], refers to a “lack of knowledge about some outcome”. In this section, we shortly review uncertainty in the context of ML and discuss the importance of considering not only uncertainty in a model’s prediction but also in its explanations.

3.1. Uncertainty in ML

The manner of expressing uncertainty varies depending on the specific ML problem being addressed (classification vs. regression, tabular vs. text, and so on). A commonly used categorization of uncertainty in the literature differentiates between aleatoric and epistemic uncertainty [69,70]. Following the notation convention in [70], let $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ be the training set on which the supervised learner is induced. After training, its task will be to predict the outcome $y_{n+1}$ for a new query point $x_{n+1}$. Let $\mathcal{F}$ be the functional space of all possible predictors and $f^* \in \mathcal{F}$ the perfect predictor modeling the real-world phenomena associated with the task at hand and always outputting the ground truth. Since learning $f^*$ is unrealistic, several modeling assumptions need to be made in order to reduce the complexity of the learning process. These assumptions delineate a hypothesis subspace $\mathcal{H} \subset \mathcal{F}$. The optimization objective of the learning algorithm is, therefore, to find the best predictor $h^* \in \mathcal{H}$ that lies closest to the true predictor $f^*$. However, the objective to optimize is typically dependent on the available training data and, hence, only an estimation of the true objective. This is often referred to as minimizing the empirical risk $R_{\mathrm{emp}}(h)$ instead of the true risk $R(h)$. Consequently, the predictor induced by the learning algorithm will also only be an estimation of the best possible predictor: $\hat{h} \approx h^*$ (we direct the interested reader to Figure 4 in [70] for a concise visualization).
Notice that attempting to predict $y_{n+1}$ implies a series of approximations that all get accumulated into the uncertainty of the final model’s outcome $\hat{h}(x_{n+1}) = \hat{y}_{n+1} \approx y_{n+1} = f^*(x_{n+1})$. It is at this point that we can concretely illustrate the main types of uncertainty that flow into the model’s prediction:
  • Aleatoric: Irreducible, due to the non-deterministic nature of the input/output dependency and random noise in the available data. As an example, imagine a self-driving car that relies on various sensors (cameras, LiDAR) to perceive its surroundings and make navigation decisions. Sensor data can be inherently noisy due to factors such as bad weather conditions, sensor limitations, or temporary occlusions. This noise in the input data translates to uncertainty in the car’s perception of the environment and ultimately in its predictions about safe navigation paths.
  • Epistemic: Reducible through additional information about the perfect predictor $f^*$. It comprises two parts:
    (a) model uncertainty: How close is the hypothesis space, with its best model choice $h^*$, to the perfect predictor $f^*$? This is very difficult to describe and is often ignored by assuming that $f^*$ is included in the hypothesis space: $f^* \in \mathcal{H} \subset \mathcal{F}$.
    (b) approximation error: How close is the learned predictor $\hat{h}$ to the best $h^*$? This error vanishes as the size of the training data increases indefinitely.
In general, both aleatoric and epistemic uncertainty depend on the interaction between data and prior knowledge (used for specifying the model assumptions). The stronger the prior knowledge, the fewer data are needed to reduce uncertainty. Furthermore, the more restrictive the model assumptions, the smaller the overall uncertainty will be (see Figure 8 in [70]). In other words, more complex/flexible models tend to lead to higher degrees of uncertainty.
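To make this split more tangible for probabilistic classifiers, the math block below sketches a commonly used entropy-based formulation (e.g., for ensembles or Bayesian models). The notation $q$ for the (approximate) posterior over parameters $\theta$ is our own addition and not taken from [70]; it is one possible operationalization, not the only one.

```latex
% Common entropy-based decomposition of predictive uncertainty for a
% classifier p(y | x, theta) with parameters theta drawn from an
% (approximate) posterior q. Notation is ours, not that of [70].
\begin{equation*}
\underbrace{\mathbb{H}\big[\,\mathbb{E}_{\theta\sim q}\,p(y\mid x,\theta)\,\big]}_{\text{total predictive uncertainty}}
\;=\;
\underbrace{\mathbb{E}_{\theta\sim q}\,\mathbb{H}\big[p(y\mid x,\theta)\big]}_{\text{aleatoric part (expected data noise)}}
\;+\;
\underbrace{\mathbb{I}\big[\,y;\theta\mid x\,\big]}_{\text{epistemic part (model disagreement)}}
\end{equation*}
```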

3.2. Uncertainty in XAI

Accounting for uncertainty in explanations is decisive for both the end-user (who may be a layperson in AI) and the ML expert. For the end-user, explanations lacking uncertainty information can either create a false sense of security and overreliance (leading to misuse of AI) or, on the other side of the spectrum, a sense of mistrust and reluctance (leading to disuse of AI [71], a form of algorithm aversion [52]). An explanation presented with absolute certainty, even if seemingly clear, might not reflect the true capabilities of the model. This can cause the user to blindly trust the AI’s decision, potentially overlooking biases or errors, if the explanations match the user’s expectations of reality (cognitive bias [59]); conversely, the user may discard explanations completely if they appear misleading in relation to reality. On the other hand, explanations that acknowledge and quantify their own uncertainty empower the user to make more informed decisions by understanding the limitations of the model’s reasoning. For the ML expert, uncertainty quantification plays a vital role in debugging and improving the AI system. “Unexplained explanations” could mask underlying issues within the model, such as data biases or flaws in the learning process. By identifying explanations with high uncertainty, the expert can pinpoint areas where the model might be less confident or reaching conclusions based on unreliable information. This allows for targeted troubleshooting and refinement of the AI system, ultimately leading to more robust and reliable model construction. Just as it is important to assess the confidence level of a model’s prediction to determine whether a second opinion from a human expert is warranted, it is equally critical to evaluate the uncertainty in the explanations generated for a model’s decision (some approaches aim at training models on special datasets annotated with ground-truth explanations, so that the model predicts the explanation as (part of) its objective [72,73]; in such cases, analysis of the uncertainty of the explanation would intertwine with that of the model’s prediction, as presented in the preceding subsection). Ideally, investigating the uncertainty of explanations should reveal whether the requirements discussed in Section 2.2 are fulfilled.

4. Modeling Uncertainty in XAI

Uncertainty in the explanations can be due to a variety of reasons: (aleatoric) uncertainty in the input data, uncertainty of the ML model, as well as uncertainty of the explanation method itself. Depending on the assumptions about the origin of the uncertainty, different modeling approaches can be derived, as outlined in the next section. Figure 3 sketches the general framework of explaining the prediction of an AI model. An input x is presented to the model/predictor f. The input can be of any data type (image, text, audio, and so forth), but needs to match the data format the model has seen during training. The (fully trained) model f analyzes x and delivers a prediction f(x). Depending on the task at hand, the output can be a class label in the case of classification, a real number for regression, a set of actions for an agent trained via Reinforcement Learning, and so on. Regardless of its form, it is the output f(x) that the user needs an explanation for. Let e denote the explainer. In simple terms, e is a functional that either takes the input sample x and the model f as arguments and outputs a local explanation specific to that particular input and its corresponding prediction, or takes the model f alone as an argument and outputs a global explanation of the overall decision-making process. Note that the explanation is not necessarily derived directly from the prediction f(x). While some (model-agnostic post hoc) XAI methods do not need the model’s parameters to construct the explanation—taking the form e(x, f(x))—e.g., LIME [34] or Shapley Additive Explanations (SHAP) [74], others do require access to the inner mechanics of f, such as gradient-based feature attribution maps or prototypical networks.
In order to analyze and assess the uncertainty of an explanation, the framework in Figure 3 needs to be expanded from a pointwise explanation paradigm into a probabilistic one. The following subsections deal with the three main approaches for achieving this: propagation of uncertainty of the input data (Section 4.1), uncertainty of the prediction model (Section 4.2), or uncertainty of the explainer itself (Section 4.3).
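As a purely illustrative sketch (our own, with hypothetical type names), the Python snippet below encodes the pointwise pipeline e(f, x) and the operation shared by all three approaches: collecting a set of explanations and summarizing its spread.

```python
# Minimal sketch (ours, hypothetical names) of the pointwise pipeline e(f, x)
# and its probabilistic extension used throughout Sections 4.1-4.3.
from typing import Callable, Sequence
import numpy as np

Predictor = Callable[[np.ndarray], np.ndarray]             # f: input -> prediction
Explainer = Callable[[Predictor, np.ndarray], np.ndarray]  # e: (f, x) -> attribution

def explanation_samples(explainers: Sequence[Explainer],
                        predictors: Sequence[Predictor],
                        inputs: Sequence[np.ndarray]) -> np.ndarray:
    """Stack one explanation per (e, f, x) triple. In Approach 1 only the
    inputs vary, in Approach 2 only the predictors, in Approach 3 only the
    explainers; the other two sequences simply repeat a constant element."""
    return np.stack([e(f, x) for e, f, x in zip(explainers, predictors, inputs)])

def summarize(samples: np.ndarray, qs=(0.05, 0.5, 0.95)) -> dict:
    """Per-feature quantiles of the explanation distribution."""
    return {q: np.quantile(samples, q, axis=0) for q in qs}
```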

4.1. Approach 1: Perturbed Input (Variation in x)

Figure 4 highlights the first approach for rendering a pointwise explanatory pipeline into a probabilistic one. The input x is perturbed within a small region $\epsilon_x$ around x, which results in n new variants $x^{(i)}, i = 1, \ldots, n$. A more precise specification of $\epsilon_x$ may be rather challenging. It may, e.g., reflect the different uncertainties of measurements of input attributes as well as different model and/or explanation sensitivities with respect to different input attributes. For every perturbed input, the model f delivers a prediction $f(x^{(i)})$. Likewise, for every prediction, the explainer computes an explanation $e(f, x^{(i)})$.
The literature abounds with input perturbation methods for investigating the robustness and confidence of intelligent systems. For instance, one may mask/occlude certain relevant parts of the input. In the specific case of Computer Vision, this is known as pixel flipping. Several pixel flipping strategies are extensively discussed in [75], one of which we exemplify in the next section. Similar perturbations may be achieved by injecting random noise into the input or—for images only—applying geometric or morphological transformations. Adversarial attacks [76] are another established method of perturbing the input. In this case, the goal is to search for minimal modifications that alter the input such that the model’s prediction is flipped.
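The following minimal Python sketch (ours; the linear predictor and the gradient-times-input explainer are toy stand-ins) illustrates the basic loop of Approach 1: draw n perturbed variants $x^{(i)}$ from a small neighborhood $\epsilon_x$ around x, recompute the explanation for each variant, and inspect the spread of the resulting attributions.

```python
# Sketch of Approach 1 (ours): perturb x within a small neighborhood eps_x
# and look at the spread of the resulting feature attributions.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.5, -2.0, 0.3, 0.0])              # toy linear predictor f(x) = w.x
f = lambda x: float(w @ x)
grad_times_input = lambda f_, x: w * x           # analytic gradient of f, times input

x = np.array([0.8, 0.1, -1.2, 0.5])
eps_x = 0.05                                     # assumed perturbation scale
variants = x + eps_x * rng.standard_normal((200, x.size))   # x^(1), ..., x^(n)

attributions = np.stack([grad_times_input(f, xi) for xi in variants])
print("median attribution:", np.quantile(attributions, 0.5, axis=0))
print("5%-95% spread:     ",
      np.quantile(attributions, 0.95, axis=0) - np.quantile(attributions, 0.05, axis=0))
```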

Pixel Flipping

Pixel Flipping [77], as the name suggests, is an occlusion strategy originating from and popular in the Computer Vision community. The idea is to mask certain regions (pixels or superpixels) within the input image and measure what effect this occlusion has on the performance of the AI system. The two most important hyperparameters of this method are (1) which regions to mask and (2) what values should replace the selected regions.
Figure 5 shows an example. The image is of a wild bee species named Andrena fulva, scraped from the iNaturalist (https://www.inaturalist.org/ (accessed on 19 May 2024)) dataset. Three standard feature attribution methods were selected: (Vanilla) Gradient-based Saliency Maps [78], the Input × Gradient method [79], and Integrated Gradients [32]. The explanations were computed w.r.t. a ResNet50 model trained as described in [80]. By iteratively masking 0.5% of the pixels randomly (up until 3/4 of the image), 150 perturbed versions of the original input were created. For each of these versions, a separate heatmap was computed with the three XAI methods, resulting in 150 maps per method. The heatmap pixel values (normalized between 0 and 1) define a discrete distribution, out of which only five quantiles are shown. To flip the pixels, random values were drawn from a normal distribution defined by the mean and standard deviation of the pixel values across the classifier’s training set.
Figure 5. Example of Random Pixel Flipping (see repository (https://github.com/TeodorChiaburu/beexplainable/tree/master/notebooks/XAI_Preliminary (accessed on 5 May 2024))). The first image on the left is a sample from the wild bee subset of the iNaturalist dataset. The class Andrena fulva was correctly predicted by the model. Its pixels are iteratively masked until we reach the final perturbed version shown beneath the original version. For every perturbed version (150 in total), a new saliency map is computed via the three XAI methods named on the right. For each of these methods, five quantiles are displayed from the pixel-wise distributions of the feature maps. Note the high variation in the explanation distribution for all three methods, marked by more and more prominent regions from lower to higher quantiles. A more confident explanation would, in contrast, display prominent relevant regions across all quantiles (compare Figure 6). The colormaps of the quantiles have been normalized from the lowest to the highest occurring pixel value across all saliency maps to ensure comparability between methods and quantiles.
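The following simplified sketch (ours) mimics the procedure behind Figure 5, with a tiny untrained CNN standing in for the ResNet50: pixels are flipped iteratively, a vanilla gradient saliency map is recomputed after every step, and per-pixel quantiles summarize the resulting explanation distribution.

```python
# Simplified sketch (ours) of Random Pixel Flipping as in Figure 5.
# The tiny CNN is a hypothetical stand-in for the ResNet50 used in the figure.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(                      # toy classifier with 5 classes
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))
model.eval()

def saliency(img: torch.Tensor, target: int) -> torch.Tensor:
    """Vanilla gradient saliency: |d logit_target / d input|, max over channels."""
    img = img.clone().requires_grad_(True)
    model(img.unsqueeze(0))[0, target].backward()
    return img.grad.abs().max(dim=0).values

img = torch.rand(3, 64, 64)                 # stand-in input image
target, n_pix = 2, 64 * 64
flip_fraction, n_steps = 0.005, 150         # flip 0.5% of the pixels per step
maps, flipped = [], img.clone()
for _ in range(n_steps):
    idx = torch.randperm(n_pix)[: int(flip_fraction * n_pix)]
    rows, cols = idx // 64, idx % 64
    flipped[:, rows, cols] = torch.randn(3, len(idx))   # replace with random values
    maps.append(saliency(flipped, target))

maps = torch.stack(maps)                    # (n_steps, 64, 64)
quantile_maps = torch.quantile(maps, torch.tensor([0.05, 0.25, 0.5, 0.75, 0.95]), dim=0)
print(quantile_maps.shape)                  # (5, 64, 64): one map per quantile
```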
Figure 6. Example of Monte Carlo Dropout for a difficult sample of the iNaturalist dataset (see repository (https://github.com/TeodorChiaburu/beexplainable/tree/master/notebooks/XAI_Preliminary (accessed on 5 May 2024))). The class Dasypoda hirtipes was correctly predicted by the model. Beneath the original image on the left side, the distribution of the 500 Softmax probabilities for the true class is given (the broader the distribution, the more uncertain the classifier). The distribution of the explanations (for all the 500 runs) at pixel level is visualized via five quantile maps. The less variation in certain regions of the quantile maps, the higher the explainer’s certainty that those regions are relevant for recognizing the class. Notice the relatively low variation in the relevant regions across all quantiles for all methods, which suggests highly confident explanations (as opposed to those shown in Figure 5). The colormaps of the quantiles have been normalized from the lowest to the highest occurring pixel value across all saliency maps to ensure comparability between methods and quantiles.

4.2. Approach 2: Probabilistic Predictor (Variation in f)

The second approach is showcased in Figure 7. Multiple predictions are generated for the same constant input x. This can be achieved either by rendering the pointwise predictor f into an (approximate) BNN or into a Conformal Predictor. In both cases, similar to the first approach, the explanations are computed w.r.t. every separate prediction $f^{(i)}(x)$ of the predicted discrete distribution.
We note that there are many more standard ML approaches that inherently model the uncertainty of the prediction, which we will not discuss here further. Just to name a few: Gaussian Processes (http://gaussianprocess.org/gpml (accessed on 19 May 2024)) (with their heteroscedastic version [81]), Quantile Regression [82] (with an extension for conformal prediction [83]), or Gaussian Process Probes [84]. For explaining their predictions, one may consider XAI techniques such as PDP or ICE Plots [31].
The following subsections give an overview of several established strategies for upgrading a pointwise predictor into a probabilistic one (we note that some authors, e.g., [85], use the term “probabilistic predictor” to describe the usual predictor outputting a point probability for every possible outcome, reserving the term “multi-probabilistic predictor” for models outputting whole probability distributions for every outcome; throughout this paper, we exclusively use the term “pointwise predictor” to denote the former case and “probabilistic predictor” to denote the latter):

4.2.1. BNN: Monte Carlo Dropout

A popular and computationally efficient technique for transforming a point-wise predictor into a Bayesian one is Monte Carlo Dropout, proven in [87] to be a solid approximation for Bayesian inference. The core principle lies in enabling the Dropout layer during the inference stage. By executing multiple predictions for the same input sample with Dropout activated, the enhanced model generates a distribution of probabilities for each potential class. The authors in [86] leveraged this concept to generate a distribution of explanations for individual image samples classified by a standard computer vision classifier. They argued that a lower level of variance observed within the feature attribution maps across various quantiles signifies a reduced degree of uncertainty associated with the explanation (see Figure 6).
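A minimal sketch of this idea is given below (ours; the small MLP on tabular data is a stand-in for the image classifier used in [86]): Dropout is kept active at inference time, and repeated forward/backward passes yield a distribution of Softmax probabilities and of gradient-times-input attributions. The spread of the probabilities mirrors the distribution shown beneath the original image in Figure 6, while the quantiles of the stacked attributions correspond to the quantile maps.

```python
# Sketch (ours) of Monte Carlo Dropout for a distribution of predictions and
# gradient-based explanations; the MLP and data are hypothetical stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.3),
                      nn.Linear(64, 3))

def mc_dropout_explanations(x: torch.Tensor, target: int, n_runs: int = 500):
    model.train()                      # keep Dropout active at inference time
    probs, attributions = [], []
    for _ in range(n_runs):
        xg = x.clone().requires_grad_(True)
        out = torch.softmax(model(xg.unsqueeze(0)), dim=-1)[0]
        out[target].backward()
        probs.append(out[target].item())
        attributions.append((xg.grad * x).detach())   # gradient x input attribution
    return torch.tensor(probs), torch.stack(attributions)

x = torch.randn(10)
probs, attrs = mc_dropout_explanations(x, target=1, n_runs=200)
print("predictive spread (std of Softmax prob.):", probs.std().item())
print("per-feature attribution quantiles:")
print(torch.quantile(attrs, torch.tensor([0.05, 0.5, 0.95]), dim=0))
```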

4.2.2. Conformal Predictor: Calibrated Explanations

The authors in [85] proposed an XAI method compatible with the concept of conformal prediction. The idea behind it is to extend a pointwise predictor (classifier or regressor) into a probabilistic predictor via the Venn–Abers calibration method [96]. The benefit is that the scores for the features’ relevance are accompanied by a confidence interval (see example from the wine dataset from OpenML (https://www.openml.org/search?type=data&sort=runs&id=287&status=active (accessed on 19 May 2024)) in Figure 8).
The visualization technique is inspired by LIME [34]. The underlying model for the class prediction used in Figure 8 is a Random Forest Classifier from sklearn (https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html (accessed on 19 May 2024)), while the local explanations themselves are computed via Decision Trees. The top two bars in the plot inform about the prediction of the calibrated model. The solid red line indicates a calibrated probability estimate of about 0.55 for the positive class (meaning that the wine is of high quality). Similarly, the solid blue line gives an estimate of roughly 0.45 for the negative class. The light red and light blue areas mark the uncertainty of the class prediction; the narrower the lightly colored area, the lower the prediction uncertainty. The main plot deals with the feature weights (relevance scores). Each feature is shown on the left side along with a conditional rule for defining that weight. The values taken by the features for the explained instance are given on the right side. The bars in between show the feature weights for every rule, starting from the solid black line at 0, which corresponds to the calibrated probability estimate for the positive class. There is only one feature that contributes towards the negative class: "sulphates", highlighted by the blue bar. All the other features contribute towards the positive class, with "volatile acidity" weighing the most in this respect. The light grey area depicts the same probability interval for the prediction as given by the light red and blue areas in the top two bars. As for the features, their weights are solidly coloured from the zero line to the lower bound of the weight interval and lightly coloured between the lower and upper bound of that interval. Again, a narrower lightly shaded area in the feature weights means a lower uncertainty.
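The sketch below (ours, on synthetic data) illustrates only the inductive Venn–Abers calibration step that underlies such a probability interval; it is not the full Calibrated Explanations method of [85], which additionally derives rule-based feature weights with their own intervals. The width of the resulting interval plays the role of the lightly colored areas in the top bars of Figure 8.

```python
# Simplified sketch (ours) of an inductive Venn-Abers probability interval
# for a point prediction; synthetic data, not the method of [85] itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=11, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
s_cal = clf.predict_proba(X_cal)[:, 1]            # calibration scores

def venn_abers_interval(s_test: float) -> tuple[float, float]:
    """Fit isotonic regression twice, once assuming label 0 and once label 1
    for the test point; the two fitted values bound the calibrated probability."""
    bounds = []
    for assumed_label in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(np.r_[s_cal, s_test], np.r_[y_cal, assumed_label])
        bounds.append(float(iso.predict([s_test])[0]))
    return min(bounds), max(bounds)

s0 = clf.predict_proba(X_te[:1])[0, 1]
p_low, p_high = venn_abers_interval(s0)
print(f"calibrated probability interval for the positive class: [{p_low:.2f}, {p_high:.2f}]")
```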

4.3. Approach 3: Stochastic Explainers (Variation in e)

Incorporation of uncertainty into the explainer leads to a third approach as depicted in Figure 9. For a fixed input x and a non-probabilistic predictor f, multiple explanations are generated by a stochastic explainer. We emphasize that, for intrinsically interpretable models, e.g., ProtoPNets [97], there is no explicit distinction of the model f and the explainer e, i.e., in this case, Approach 2 and Approach 3 coincide. A (not exhaustive) list of currently available stochastic XAI methods is given in Table 1 and illustrated in the following subsections.

4.3.1. CXPlain

Causal Explanations for Model Interpretation under Uncertainty (CXPlain) [98] presents a methodology that involves training a parallel explainer model to predict the explanations for the decisions of an original model. This explainer can be any ML model, although the authors advocate for the use of a UNet [102] architecture as a strong default choice. The original model does not require re-training for CXPlain to function. For each input in the training set, the relevance scores of the features are calculated by masking individual features and evaluating the prediction error of the modified input. These generated explanations serve as ground truth for training the UNet, akin to the approach employed in semantic segmentation tasks where the true labels comprise scores for masked regions. To address uncertainty, multiple UNets are trained on the (input, explanation) pairs through the technique of bootstrap ensembles [103]. During inference, the distribution of explanations estimated by this ensemble provides a measure of uncertainty in the feature importance estimates for each sample in the test set. See Figure 10 for two examples.
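The following simplified sketch (ours) transfers the CXPlain idea to tabular data with off-the-shelf scikit-learn models: masking-based attributions serve as training targets for a bootstrap ensemble of surrogate explainers, and the ensemble's disagreement serves as an uncertainty estimate. The original work uses neural explainers such as a U-Net on images, so this is only an illustration of the principle.

```python
# Simplified sketch (ours) of the CXPlain principle [98] on tabular data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=6, noise=5.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)   # the model to explain

def masking_attributions(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-sample relevance: increase in squared error when a feature is masked
    (replaced by its mean), normalized across features."""
    base_err = (model.predict(X) - y) ** 2
    scores = np.zeros_like(X)
    for j in range(X.shape[1]):
        X_masked = X.copy()
        X_masked[:, j] = X[:, j].mean()
        scores[:, j] = np.maximum((model.predict(X_masked) - y) ** 2 - base_err, 0)
    return scores / (scores.sum(axis=1, keepdims=True) + 1e-12)

targets = masking_attributions(X, y)
ensemble = []
for b in range(10):                                   # bootstrap ensemble of explainers
    idx = rng.integers(0, len(X), len(X))
    ensemble.append(RandomForestRegressor(random_state=b).fit(X[idx], targets[idx]))

preds = np.stack([e.predict(X[:1]) for e in ensemble])   # (10, 1, 6)
print("mean attribution: ", preds.mean(axis=0))
print("uncertainty (std):", preds.std(axis=0))
```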

4.3.2. BayesLIME

BayesLIME [99] and its modification BayLIME [100] are an extension of the popular LIME [34] method. Similar to LIME, it generates local explanations for individual predictions by fitting a simple linear model that approximates the behavior of the original black-box model around a specific instance.
In addition, BayesLIME leverages the power of Bayesian statistics to estimate a confidence interval around the feature importances. Furthermore, similar to LIME, it creates synthetic data points near the instance of interest by perturbing its features. However, instead of using fixed weights for each feature, BayesLIME assigns random weights according to a pre-defined probability distribution. This allows for more diverse perturbations and captures the uncertainty in the data. Figure 11 gives an example from the BayesLIME authors themselves on the COMPAS (https://github.com/propublica/compas-analysis (accessed on 19 May 2024)) dataset [99] and Figure 12 shows an example for image data.
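As a hedged illustration, the sketch below (ours, not the authors' implementation) fits a Bayesian linear surrogate on a perturbed neighborhood of a single instance, so that each feature importance comes with a posterior mean and a credible interval.

```python
# Simplified, LIME-style Bayesian surrogate (ours): posterior mean and std
# of local feature importances; synthetic data and a stand-in black box.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]
n, sigma = 2000, 0.5
Z = x0 + sigma * rng.standard_normal((n, x0.size))          # local perturbations
p = black_box.predict_proba(Z)[:, 1]                        # black-box outputs
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * sigma**2))  # proximity kernel

surrogate = BayesianRidge().fit(Z - x0, p, sample_weight=weights)
mean = surrogate.coef_                                       # posterior mean importances
std = np.sqrt(np.diag(surrogate.sigma_))                     # posterior std of importances
for j, (m, s) in enumerate(zip(mean, std)):
    print(f"feature {j}: importance {m:+.3f} +/- {1.96 * s:.3f}")
```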

4.3.3. TCAV

Testing with Concept Activation Vectors (TCAV) [33] (and its successor that automatically searches for concepts, Automated Concept-based Explanations (ACE) [101]) fits a linear classifier within a designated layer of the model. This classifier is trained to differentiate between exemplars of a given concept and randomly generated counterexamples. Consequently, the Concept Activation Vectors (CAVs) are constructed as vectors orthogonal to the decision boundaries established by the linear classifier. To quantify the model’s class prediction sensitivity towards each concept, TCAV employs directional derivatives. These derivatives are computed iteratively for the class members with respect to the CAVs, utilizing multiple partitions of the randomly generated dataset, resulting in an average TCAV score for this class member together with a distribution of scores representing the uncertainty of this concept (explanation). The TCAV score for an entire class relative to a specific concept is then defined as the proportion of class members exhibiting positive directional derivatives across all the CAVs associated with that concept. When evaluating a single sample, the score only considers positive derivatives with respect to the concept’s CAVs, without incorporating a class-wise perspective (see Figure 13).
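The sketch below (ours, on synthetic activations and with a toy one-hidden-layer head so that the gradient is analytic) illustrates the core TCAV computation: fit one CAV per random counterexample set, compute directional derivatives of the target logit along the CAV, and report the distribution of TCAV scores across the random partitions.

```python
# Simplified sketch (ours) of TCAV [33]: CAVs from a linear classifier on
# concept vs. random activations, TCAV score = fraction of class members with
# positive directional derivative. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_act = 16
W1, w2 = rng.standard_normal((32, d_act)), rng.standard_normal(32)  # toy head

def logit_grad(a: np.ndarray) -> np.ndarray:
    """Gradient of the target-class logit w2 . relu(W1 a) w.r.t. the activation a."""
    return W1.T @ (w2 * (W1 @ a > 0))

concept_acts = rng.standard_normal((50, d_act)) + 1.0     # activations of concept images
class_acts = rng.standard_normal((80, d_act)) + 0.5       # activations of class members
random_sets = [rng.standard_normal((50, d_act)) for _ in range(10)]

tcav_scores = []
for random_acts in random_sets:                 # one CAV per random counterexample set
    A = np.vstack([concept_acts, random_acts])
    labels = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    cav = LogisticRegression(max_iter=1000).fit(A, labels).coef_[0]
    cav /= np.linalg.norm(cav)
    directional = np.array([logit_grad(a) @ cav for a in class_acts])
    tcav_scores.append(float(np.mean(directional > 0)))   # fraction with positive derivative

print(f"TCAV score: {np.mean(tcav_scores):.2f} +/- {np.std(tcav_scores):.2f}")
```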

4.3.4. CoProNN

Concept-based Prototypical Nearest Neighbors (CoProNN) (https://github.com/TeodorChiaburu/beexplainable (accessed on 19 May 2024)) generates explanations by computing similarities between the feature vectors of input images in the latent space of a network’s backbone and feature vectors of concept images. The concept dataset can be a pre-annotated one, e.g., Broden [36], but the authors advocate for formulating higher-level concept images by leveraging text-to-image generative models such as Stable Diffusion [107]. Along with the concept set, a random dataset partitioned into multiple subsets is transformed through the predictor’s backbone. The CoProNN algorithm gathers the k nearest neighbors from the concept set and the random set in multiple runs (one for every random partition) and delivers the averaged number of neighbors for every concept as the explanation. Figure 14 shows two examples from the ImageNet [108] dataset computed for the predictions of a pretrained ResNet50 [109].
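A simplified sketch of this idea is given below (ours; the embeddings and concept names are synthetic, hypothetical stand-ins for backbone features of concept and random images).

```python
# Simplified sketch (ours) of the CoProNN idea: the explanation is the average
# share of nearest neighbors per concept, computed over several random partitions.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_runs = 128, 10, 5
concepts = {"fluffy_fur": rng.standard_normal((40, d)) + 0.8,      # hypothetical concepts
            "striped_abdomen": rng.standard_normal((40, d)) - 0.3}
random_pool = rng.standard_normal((400, d))                        # random image embeddings
query = rng.standard_normal(d) + 0.6                               # embedding of the input

counts = {name: [] for name in concepts}
for _ in range(n_runs):                        # one run per random partition
    rand_subset = random_pool[rng.choice(len(random_pool), 40, replace=False)]
    bank, names = [], []
    for name, emb in concepts.items():
        bank.append(emb)
        names += [name] * len(emb)
    bank.append(rand_subset)
    names += ["random"] * len(rand_subset)
    dists = np.linalg.norm(np.vstack(bank) - query, axis=1)
    nearest = [names[i] for i in np.argsort(dists)[:k]]            # k nearest neighbors
    for name in concepts:
        counts[name].append(nearest.count(name))

explanation = {name: np.mean(c) / k for name, c in counts.items()}  # avg. share per concept
print(explanation)
```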

5. Human Perception and Uncertainty in XAI

The core assumption in XAI research is that explanations foster trust. The definition of trust is inherently tied to the notion of uncertainty: trusting an agent implies relying on their advice in decisions under uncertainty [110,111]. To what extent a decision is made under uncertainty depends on a variety of factors including the task itself, the volume and quality of information provided and human perceptual capacity, including personality traits [28]. In the following sections, we discuss empirical evidence related to these factors reported in the context of XAI.

5.1. Neural and Cognitive Aspects of Uncertainty

Research around human perception of uncertainty in decision making has explored the topic on different levels of abstraction. There is a substantial body of literature on the neural basis of uncertainty and how it is represented in the physiological processes underlying our cognition [112,113]. Furthermore, psychologists have investigated models to conceptualize the relevant aspects involved in human perception, for instance by differentiating notions of confidence and certainty [114]. In the context of human–AI assistance, studies on the impact of uncertainty are underrepresented and often difficult to compare. This could also be due to the fact that uncertainty models in XAI (see Section 4) are only beginning to emerge. However, the growing body of literature studying the effects of fragility of explanations as well as the effects of various aspects of model transparency also allows us to draw some conclusions about the impact of uncertainty in explanations.

5.2. Uncertainty via Explanation Fragility

There is ample evidence that unreliable explanations can erode user trust in the outcomes and raise doubts about the model’s reliability. While these effects are not directly related to uncertainty, they are closely related to the requirements of fidelity, stability, and robustness outlined in Section 2.2. When the inputs are subject to changes that are barely noticeable to the human observer, but have severe impact on the explanations, these explanations might appear unreliable. These effects related to the fragility of conventional XAI methods in the presence of perturbations have been investigated in various studies with and without humans in the loop.
For instance, some researchers argue that methods such as LIME introduce a certain degree of uncertainty due to factors such as randomness in the sampling procedure or variation in the sampling proximity [115]. Other researchers show that XAI methods often fail to exhibit invariance across constant shifts in the input [62]. Likewise, other studies report sensitivity of saliency methods to perceptually irrelevant perturbations in the explained image [116]. Even though the perturbed image is indistinguishable from the original image to the human eye and the model’s prediction remains unchanged, the two explanations produced differ significantly. These effects appear counterintuitive: good explanations should be invariant to constant shifts or offsets in the input.
While the above studies show that XAI methods appear to be too sensitive to perceptually irrelevant changes in the inputs, there is also evidence for the opposite effect. Some explanation methods are invariant with respect to changes in the data-generating process and, hence, lack fidelity. If XAI methods are insensitive to such changes, they will appear unreliable [63].

5.3. Effects of Communicating Uncertainty

Next to these indirect effects of uncertainty in XAI through fragility or insensitivity of explanations, communicating uncertainty also has direct effects on human observers. Even when using point estimates of explanations, uncertainty can impact human subjects. Communicating the accuracy of a model on held-out datasets to human observers in an AI-assisted annotation task can be considered a global proxy for the uncertainty of individual predictions. Researchers find that communicating the predictive performance of a model is positively correlated with trust in that model [117]. Importantly, the results of [117] also show that if the communicated predictive performance of a model does not match the observed predictive performance, humans will adapt their behaviour and increase or decrease their trust accordingly. Such results highlight the importance of communicating uncertainty—human observers will take uncertainty into account. However, at the same time, these results raise the question: would more direct communication of uncertainty, for instance uncertainty for individual predictions rather than just global metrics like predictive performance, enable humans to make even better decisions in human–AI collaboration?
The effects of communicating a model’s uncertainty directly are unfortunately less clear than one might hope. When the confidence (or uncertainty) of individual predictions is provided, explanations for these predictions have been reported to lead to decreased trust in human–AI collaboration [48]. Somewhat in contrast to these negative effects, other studies report that masking pixels that are associated with low uncertainty leads to higher error rates in digit recognition tasks [99]. These effects suggest that low uncertainty is a sensible objective for feature relevance and local explanations of model predictions. On the other hand, experimental paradigms such as the one used in [99] are somewhat academic in that the usefulness of an explanation is purposefully degraded by masking pixels; these types of experiments can, thus, not provide direct evidence on the usefulness of explanations and of communicating their uncertainty.

6. Discussion

As detailed in Section 4, the current approaches for modeling uncertainty in XAI can be broadly categorized into three groups: input variation, model prediction variation, and explainer variation. Perturbing the input allows for analyzing how sensitive explanations are to small changes in the data, offering insights into how aleatoric uncertainty may propagate down to the explanations. Similarly, varying model predictions measures the impact of epistemic uncertainty arising from model limitations on the explanations. Thirdly, analyzing the distributions of explanations generated by stochastic explainers allows for assessing the explainability method’s own contribution to uncertainty.
While valuable insights can be drawn from each approach, limitations exist. Distinguishing between aleatoric and epistemic uncertainty remains a challenge. Additionally, for some intrinsically interpretable models, such as prototypical networks, it can be difficult to communicate uncertainty inherent to their predictions. This raises the question: would a combined approach, considering uncertainty in all three elements (input, model, explainer), be desirable, even necessary, for a more comprehensive analysis of explanation uncertainty? However, such a combined approach would need to address the computational complexity and potential redundancy that might arise from simultaneously exploring all sources of variation.
While these approaches deal with the abstract modeling of uncertainty in explanations, an important pragmatic question remains: how well does this modeled uncertainty align with human perception? Do these techniques effectively capture the nuances of uncertainty that humans intuitively understand and utilize in decision making? A potential gap may exist between the technical quantification of uncertainty and a user’s ability to interpret and integrate this information. To bridge this gap, XAI methods can be further improved by incorporating insights from human perception research. User studies and psychological experiments can shed light on how humans best understand and utilize uncertainty cues. This knowledge can then be used to design XAI systems that present uncertainty information in a way that is more aligned with human intuition, ultimately leading to better trust calibration and informed decision making alongside AI recommendations.
To some extent, the discussion about uncertainty in XAI also delves into the realm of ethics. Presenting explanations with high levels of uncertainty, particularly in high-stakes decision-making scenarios, necessitates careful consideration. On the one hand, complete transparency regarding limitations is crucial to prevent users from developing misplaced trust in the AI’s recommendations. However, overwhelming users with overly complex uncertainty information could lead to confusion or so-called “paralysis by analysis”. Finding the right balance is key, as there is evidence that cognitive friction due to overly complex explanations can reduce the trust in or usefulness of explanations [51]. XAI systems should strive to present uncertainty in a way that is clear, actionable, and tailored to the specific decision-making context.

7. Conclusions

This review paper has explored the role of uncertainty in XAI. We discussed the importance of XAI and uncertainty in building trust, transparency and responsible AI. We reviewed existing approaches for modeling uncertainty in explanations, highlighting their strengths and limitations. We emphasized the role of human perception in this context.
Several promising directions appear worth exploring in future research. Can emerging techniques from other fields, such as probabilistic programming, be leveraged to enhance uncertainty quantification in XAI? Additionally, the development of standardized metrics and benchmarks for evaluating the effectiveness of uncertainty modeling approaches is much needed to advance the field.
By continuing research in these areas, we can ensure that XAI systems not only provide explanations but also effectively communicate the inherent uncertainties associated with those explanations. This will hopefully lead to a future where humans and AI can work together more effectively and responsibly.

Author Contributions

Conceptualization, T.C., F.H. and F.B.; Methodology, T.C., F.H. and F.B.; Software, T.C.; Validation, T.C., F.H. and F.B.; Formal analysis, T.C., F.H. and F.B.; Investigation, T.C., F.H. and F.B.; Resources, T.C., F.H. and F.B.; Data curation, T.C., F.H. and F.B.; Writing—original draft preparation, T.C., F.H. and F.B.; Writing—review and editing, T.C., F.H. and F.B.; Visualization, T.C.; Supervision, F.H. and F.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets and code repository are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACE: Automated Concept-based Explanations
BNN: Bayesian Neural Network
CNN: Convolutional Neural Network
CoProNN: Concept-based Prototypical Nearest Neighbors
CXPlain: Causal Explanations for Model Interpretation under Uncertainty
DNN: Deep Neural Network
HIL: Human-in-the-Loop
ICE: Individual Conditional Expectation
LIME: Local Interpretable Model-agnostic Explanations
ML: Machine Learning
PDP: Partial Dependence Plots
SHAP: Shapley Additive Explanations
TCAV: Testing with Concept Activation Vectors
XAI: Explainable Artificial Intelligence

References

  1. Angelopoulos, A.N.; Bates, S. A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification. arXiv 2021, arXiv:2107.07511. [Google Scholar]
     Blücher, S.; Vielhaben, J.; Strodthoff, N. Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks. arXiv 2024. [Google Scholar]
  2. Xu, H.; Ma, Y.; Liu, H.; Deb, D.; Liu, H.; Tang, J.; Jain, A.K. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. arXiv 2019, arXiv:1909.08072. [Google Scholar] [CrossRef]
  3. Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef]
  4. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv 2014, arXiv:1312.6034. [Google Scholar]
  5. Shrikumar, A.; Greenside, P.; Kundaje, A. Learning Important Features Through Propagating Activation Differences. arXiv 2019, arXiv:1704.02685. [Google Scholar]
  6. Chiaburu, T.; Biessmann, F.; Hausser, F. Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations. arXiv 2022, arXiv:2206.07497. [Google Scholar]
  7. Griffiths, R.R.; Aldrick, A.A.; Garcia-Ortegon, M.; Lalchand, V.; Lee, A.A. Achieving robustness to aleatoric uncertainty with heteroscedastic Bayesian optimisation. Mach. Learn. Sci. Technol. 2021, 3, 015004. [Google Scholar] [CrossRef]
  8. Koenker, R.; Hallock, K.F. Quantile Regression. J. Econ. Perspect. 2001, 15, 143–156. [Google Scholar] [CrossRef]
  9. Romano, Y.; Patterson, E.; Candès, E.J. Conformalized Quantile Regression. arXiv 2019, arXiv:1905.03222. [Google Scholar]
  10. Wang, Z.; Ku, A.; Baldridge, J.; Griffiths, T.L.; Kim, B. Gaussian Process Probes (GPP) for Uncertainty-Aware Probing. arXiv 2023, arXiv:2305.18213. [Google Scholar]
  11. Lofstrom, H.; Lofstrom, T.; Johansson, U.; Sonstrod, C. Calibrated Explanations: With Uncertainty Information and Counterfactuals. arXiv 2023, arXiv:2305.02305. [Google Scholar] [CrossRef]
  12. Bykov, K.; Höhne, M.M.C.; Müller, K.R.; Nakajima, S.; Kloft, M. How Much Can I Trust You?—Quantifying Uncertainties in Explaining Neural Networks. arXiv 2020, arXiv:2006.09000. [Google Scholar]
  13. Gal, Y.; Ghahramani, Z. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; Balcan, M.F., Weinberger, K.Q., Eds.; Volume 48, Proceedings of Machine Learning Research. pp. 1050–1059. [Google Scholar]
  14. Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. arXiv 2017, arXiv:1612.01474. [Google Scholar]
  15. Yang, C.I.; Li, Y.P. Explainable uncertainty quantifications for deep learning-based molecular property prediction. J. Cheminform. 2023, 15, 13. [Google Scholar] [CrossRef]
  16. Blundell, C.; Cornebise, J.; Kavukcuoglu, K.; Wierstra, D. Weight Uncertainty in Neural Networks. arXiv 2015, arXiv:1505.05424. [Google Scholar]
  17. Alaa, A.M.; van der Schaar, M. Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions. arXiv 2020, arXiv:2007.13481. [Google Scholar]
  18. Graves, A. Practical Variational Inference for Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Nice, France, 2011; Volume 24. [Google Scholar]
  19. Bishop, C.M. Pattern Recognition and Machine Learning (Information Science and Statistics); Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  20. Hernández-Lobato, J.M.; Adams, R.P. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks. arXiv 2015, arXiv:1502.05336. [Google Scholar]
  21. Li, Y.; Hernandez-Lobato, J.M.; Turner, R.E. Stochastic Expectation Propagation. arXiv 2015, arXiv:1506.04132. [Google Scholar]
  22. Vovk, V.; Petej, I. Venn-Abers predictors. arXiv 2014, arXiv:1211.0025. [Google Scholar]
  23. Chen, C.; Li, O.; Tao, C.; Barnett, A.J.; Su, J.; Rudin, C. This Looks Like That: Deep Learning for Interpretable Image Recognition. arXiv 2019, arXiv:1806.10574. [Google Scholar]
  24. Schwab, P.; Karlen, W. CXPlain: Causal Explanations for Model Interpretation under Uncertainty. arXiv 2019, arXiv:1910.12336. [Google Scholar]
  25. Slack, D.; Hilgard, S.; Singh, S.; Lakkaraju, H. Reliable Post hoc Explanations: Modeling Uncertainty in Explainability. arXiv 2021, arXiv:2008.05030. [Google Scholar]
  26. Zhao, X.; Huang, W.; Huang, X.; Robu, V.; Flynn, D. BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations. arXiv 2021, arXiv:2012.03058. [Google Scholar]
  27. Ghorbani, A.; Wexler, J.; Zou, J.; Kim, B. Towards Automatic Concept-based Explanations. arXiv 2019, arXiv:1902.03129. [Google Scholar]
  28. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  29. Efron, B. The Jackknife, the Bootstrap and Other Resampling Plans; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1982. [Google Scholar] [CrossRef]
  30. Deng, L. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 2012, 29, 141–142. [Google Scholar] [CrossRef]
  31. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, Canada, 2012. [Google Scholar]
  32. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
  33. Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. arXiv 2021, arXiv:2112.10752v2. [Google Scholar]
  34. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the CVPR09, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
  36. Jacovi, A.; Marasović, A.; Miller, T.; Goldberg, Y. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual, 3–10 March 2021. [Google Scholar] [CrossRef]
  37. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
  38. Kepecs, A.; Mainen, Z.F. A computational framework for the study of confidence in humans and animals. Philos. Trans. R. Soc. B Biol. Sci. 2012, 367, 1322–1337. [Google Scholar] [CrossRef] [PubMed]
  39. Walker, E.Y.; Pohl, S.; Denison, R.N.; Barack, D.L.; Lee, J.; Block, N.; Ma, W.J.; Meyniel, F. Studying the neural representations of uncertainty. Nat. Neurosci. 2023, 26, 1857–1867. [Google Scholar] [CrossRef]
  40. Pouget, A.; Drugowitsch, J.; Kepecs, A. Confidence and certainty: Distinct probabilistic quantities for different goals. Nat. Neurosci. 2016, 19, 366–374. [Google Scholar] [CrossRef] [PubMed]
  41. Zhang, Y.; Song, K.; Sun, Y.; Tan, S.; Udell, M. “Why Should You Trust My Explanation?” Understanding Uncertainty in LIME Explanations. arXiv 2019, arXiv:1904.12991. [Google Scholar]
  42. Ghorbani, A.; Abid, A.; Zou, J. Interpretation of Neural Networks is Fragile. arXiv 2018, arXiv:1710.10547. [Google Scholar] [CrossRef]
  43. Yin, M.; Vaughan, J.W.; Wallach, H. Understanding the effect of accuracy on trust in machine learning models. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–12. [Google Scholar] [CrossRef]
Figure 1. Taxonomy of XAI methods considering four levels of categorization.
Figure 2. Performance–explainability tradeoff as sketched in [39]: the more complex the model, the higher its predictive performance, but the lower its explainability.
Figure 3. General framework for generating a pointwise (non-probabilistic) explanation for a predictor’s decision. Input x is fed into the model f which outputs a prediction f ( x ) . The explainer e delivers an explanation for that prediction, based on the input x and the model f. We note that this framework can be applied to both post hoc methods as well as to inherently interpretable models, which is why the explanation is written in the more general form e ( f , x ) , not e ( x , f ( x ) ) . The latter would suggest the idea of explaining the prediction in a post hoc fashion. Indeed, for inherently interpretable models, explainer e and prediction model f may be treated as a combined unit. We return to this issue in Section 4.3.
Figure 4. Approach 1: Perturbed Input. Multiple perturbed inputs are fed into the model f, yielding an ensemble of predictions. For every prediction, the explanation is computed w.r.t. the model f and the perturbed input. The resulting distribution is supposed to represent the explanation uncertainty.
Figure 7. Approach 2: Probabilistic Predictor. For a fixed input x, multiple predictions are obtained by upgrading the model f into a BNN or a Conformal Predictor. For each prediction f ( i ) ( x ) , the explanation is computed. The resulting distribution is supposed to represent the explanation uncertainty.
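The pipeline of Figure 7 can be illustrated with a deliberately simple stand-in: instead of a BNN or Conformal Predictor, the sketch below trains a bootstrap ensemble of logistic regressions (scikit-learn is assumed to be available) and treats the per-member attribution w_i · x_i as the explanation. The spread of these attributions over the ensemble then plays the role of the epistemic explanation uncertainty. The dataset, attribution rule, and ensemble size are illustrative assumptions only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=500, n_features=5, random_state=1)

# Approach 2 (model variation): a bootstrap ensemble stands in for a BNN or
# conformal predictor; each member yields its own explanation for the same x.
x = X[0]
explanations = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))       # bootstrap resample
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    explanations.append(clf.coef_[0] * x)            # simple local attribution w_i * x_i
explanations = np.array(explanations)

print("mean attribution:", explanations.mean(axis=0))
print("std attribution: ", explanations.std(axis=0))  # epistemic spread of the explanation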
Figure 8. Visualization of calibrated explanations (see repository (https://github.com/TeodorChiaburu/calibrated_explanations/tree/main/notebooks (accessed on 5 May 2024))): decision trees are used to explain the model’s prediction locally for a sample from OpenML’s wine dataset. The light red and light blue areas mark the uncertainties of the predictions (top two bars) and the feature weights (main plot).
Figure 9. Approach 3: Stochastic explainers. Due to the non-deterministic nature of the explainer e, a discrete distribution of explanations can be generated for the same input x and model f, which is supposed to represent the explanation uncertainty.
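The stochastic-explainer setting of Figure 9 can likewise be sketched with a toy, LIME-style surrogate: the same model f and input x are explained repeatedly, each time with a different random seed for the perturbation sampling, and the seed-to-seed spread of the surrogate coefficients serves as the explainer-induced uncertainty. The toy model, kernel width, and sample counts are assumptions for illustration; this is not the official LIME implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Fixed toy model f and instance x (weights are assumed for illustration).
w_true = np.array([1.5, -2.0, 0.5])
f = lambda X: 1.0 / (1.0 + np.exp(-X @ w_true))
x = np.array([0.3, 1.2, -0.7])

def lime_like(f, x, n_samples=100, seed=0, width=0.5):
    """Minimal LIME-style surrogate: sample around x, fit a locally weighted linear model."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, width, size=(n_samples, len(x)))
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, f(Z), sample_weight=weights)
    return surrogate.coef_

# Approach 3 (explainer variation): same f and x, different explainer seeds.
coefs = np.array([lime_like(f, x, seed=s) for s in range(50)])
print("mean feature weights:", coefs.mean(axis=0))
print("std feature weights: ", coefs.std(axis=0))  # explainer-induced uncertainty
```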
Figure 10. Examples for the CXPlain method (see repository (https://github.com/TeodorChiaburu/cxplain/tree/master/examples (accessed on 5 May 2024))). The red colored attribution maps are the explanations computed by masking various regions in the original image. The blue colored plots give an estimated uncertainty via the spread over the predictions of 100 UNets trained to reproduce the attribution maps. (a) Example from MNIST [104]. (b) Example from CIFAR10 [105].
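The masking step that produces the red attribution maps can be approximated as follows: occlude one patch at a time and record the drop in the model output as that patch’s attribution, which is the Granger-causal idea CXPlain uses to create its training targets. The bootstrap ensemble of U-Nets that CXPlain then fits to these targets (and whose spread yields the blue uncertainty maps) is omitted here; the toy "model" f and the patch size are assumptions for illustration only.

```python
import numpy as np

# Toy image "model": scores the mean brightness of a central region (a stand-in only).
def f(img):
    return img[8:20, 8:20].mean()

def occlusion_map(f, img, patch=4):
    """Occlusion-style attribution: drop a patch, measure the change in the model output."""
    base = f(img)
    attr = np.zeros(img.shape)
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            attr[i:i + patch, j:j + patch] = base - f(masked)
    return attr

img = np.random.default_rng(3).random((28, 28))
attr = occlusion_map(f, img)
print("attribution range:", attr.min(), attr.max())
```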
Figure 11. BayesLIME example explanations on tabular data (taken from Figure 1 in [99]). The same instance from the COMPAS dataset was explained with LIME and different amounts of perturbations. The model used was a Random Forest Classifier from sklearn. The solid lines in both plots mark the feature importance scores computed by LIME. Green bars have a positive contribution towards the positive class (recidivism within two years after release), while red bars have a negative contribution. The shaded areas depict the uncertainties computed by BayesLIME. The narrower the shaded region, the lower the uncertainty in the relevance of that feature. Note that the explanation uncertainty decreases as the number of LIME perturbations increases. (a) 100 LIME perturbations. (b) 2000 LIME perturbations.
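The effect highlighted in this figure, namely that more perturbations shrink the explanation uncertainty, can be reproduced qualitatively with the same kind of minimal LIME-style surrogate as sketched after Figure 9. The toy model, kernel width, and sample counts are again illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

w = np.array([1.5, -2.0, 0.5])                      # assumed toy model weights
f = lambda X: 1.0 / (1.0 + np.exp(-X @ w))
x = np.array([0.3, 1.2, -0.7])

def surrogate_coefs(n_samples, seed, width=0.5):
    """Fit one LIME-style weighted linear surrogate around x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, width, size=(n_samples, len(x)))
    w_prox = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    return Ridge(alpha=1.0).fit(Z, f(Z), sample_weight=w_prox).coef_

# Seed-to-seed spread of the explanation shrinks as the number of perturbations grows.
for n in (100, 500, 2000):
    coefs = np.array([surrogate_coefs(n, seed=s) for s in range(30)])
    print(n, np.round(coefs.std(axis=0), 4))
```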
Figure 12. BayesLIME example explanations on the iNaturalist dataset, computed for the predictions of a VGG16 network [106]. The explanations are visualized via gif images (original gif available here (https://github.com/TeodorChiaburu/Modeling-Uncertainty-Local-Explainability/blob/master/visualization/bee_30ptg_10focus.gif) (accessed on 5 May 2024)), out of which only five frames are shown here. Green superpixel regions contribute positively towards the true class and red regions contribute negatively. The method generated 20 explanations, which ensures a confidence level of 95%. Regions that are consistently highlighted across multiple frames are viewed as relevant regions for the explanation with high certainty, while regions that only appear sporadically are more uncertain.
Figure 13. TCAV explanations for two samples from the iNaturalist dataset (see repository (https://github.com/TeodorChiaburu/beexplainable/tree/master/notebooks/XAI_Preliminary (accessed on 5 May 2024))). The black lines through the middle of the bars encode the standard deviation of the TCAV scores across all the considered random partitions (here 30), while the bar heights are the average TCAV scores for that concept. The longer the black line, the higher the uncertainty in the relevance of that concept’s score. The correct concept for Andrena fulva (a) is “fuzzy dark orange” (the other two are wrong), identified by TCAV correctly with some degree of uncertainty. A Bombus pratorum (b) has a “fuzzy yellow-black-striped” thorax and abdomen and also a “fuzzy dark orange” needle region; its body is not at all “smooth shiny dark brown”. Both correct concepts were predicted by TCAV, yet with an obviously higher degree of uncertainty than in the first example. A diamond ♢ marks a concept relevance score of zero.
Figure 14. CoProNN explanations for two samples from the ImageNet dataset (see repository (https://github.com/TeodorChiaburu/beexplainable/tree/master/notebooks/XAI_Preliminary (accessed on 5 May 2024))). The visualization technique is similar as in Figure 13. Here, the bar heights mark the average percentage of neighbors found to belong to that concept set across all considered random partitions (30 in total and k = 8 neighbors), while the black lines show the standard deviation. A longer line highlights a higher uncertainty in that concept’s relevance score. The cheetah in (a) would be expected to be explained via the concept “dotted” and the tiger in (b) via “striped”. The explanation for the latter is correct and confident, while the explanation for the former is wrong and also more uncertain. A diamond ⋄ marks a concept relevance score of zero.
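The bar-plus-whisker scores in this figure can be emulated with a rough, self-contained approximation of the CoProNN-style scoring: the relevance of a concept is the fraction of the k nearest prototype neighbors belonging to that concept, and the standard deviation is taken over random subsamples ("partitions") of the prototype sets. The concept names, embedding dimensionality, and the synthetic Gaussian "embeddings" below are invented for illustration; the actual method operates on deep feature embeddings of real concept images.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Hypothetical prototype embeddings: 30 per concept, 3 concepts, 64-d features.
concepts = ["dotted", "striped", "plain"]
prototypes = {c: rng.normal(i, 1.0, size=(30, 64)) for i, c in enumerate(concepts)}
query = rng.normal(1.0, 1.0, size=(1, 64))   # embedding of the image to explain

def concept_scores(prototypes, query, k=8, seed=0, subsample=20):
    """CoProNN-style score: fraction of the k nearest prototypes per concept,
    computed on a random subsample of prototypes (one 'partition')."""
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for c, P in prototypes.items():
        idx = rng.choice(len(P), size=subsample, replace=False)
        feats.append(P[idx])
        labels += [c] * subsample
    feats, labels = np.vstack(feats), np.array(labels)
    nn = NearestNeighbors(n_neighbors=k).fit(feats)
    _, neigh = nn.kneighbors(query)
    return {c: np.mean(labels[neigh[0]] == c) for c in concepts}

runs = [concept_scores(prototypes, query, seed=s) for s in range(30)]
for c in concepts:
    vals = np.array([r[c] for r in runs])
    print(f"{c:8s} mean={vals.mean():.2f} std={vals.std():.2f}")
```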
Table 1. Overview of post hoc XAI methods with a stochastic explainer.
Name | Local/Global | Specific/Agnostic | Modality | Input Type | Task
CXPlain (https://github.com/d909b/cxplain (accessed on 5 May 2024)) [98] | local | agnostic | feature attribution | image, tabular, text | classification, regression
BayesLIME (https://github.com/dylan-slack/Modeling-Uncertainty-Local-Explainability (accessed on 5 May 2024)) [99] & BayLIME (https://github.com/x-y-zhao/BayLime (accessed on 5 May 2024)) [100] | local | agnostic | feature attribution | image, tabular | classification
TCAV (https://github.com/tensorflow/tcav (accessed on 5 May 2024)) [33] & ACE [101] | local & global | specific | concepts | image | classification
CoProNN (https://github.com/TeodorChiaburu/beexplainable (accessed on 5 May 2024)) | local & global | specific | concepts, examples | image | classification
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
