TransWikia.com

Uncertainty in connection to explainability

Data Science Asked on September 5, 2021

When I write "uncertainty" in this post I mean:

If I have a classifier over categories $a_1, \dots, a_n$ and, for an observation $x$, it assigns $x$ to category $a_i$ with probability $p_i$, then the uncertainty of this decision is $1 - p_i$.
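To make the definition concrete, here is a minimal sketch (with a made-up probability vector, not from any real classifier) of how this quantity would be computed for one observation:

```python
# Hypothetical probability vector from a 3-class classifier for one observation x.
probs = [0.7, 0.2, 0.1]  # p_1, p_2, p_3

# The classifier assigns x to the highest-probability category a_i.
i = max(range(len(probs)), key=lambda k: probs[k])  # argmax over categories

# The "uncertainty" of the decision, as defined in the question.
uncertainty = 1 - probs[i]

print(i, uncertainty)
```

Here the classifier picks category 0 (probability 0.7), so the uncertainty of the decision would be 0.3.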

I'd like to inquire about the connections between this notion and those of accuracy and explainability.

For example, if I have a classifier that is "very certain" (on average, measured by the mean/median over the test/training set), how often does this property correlate with accurate predictions on new data? And vice versa?

Moreover, if my classifier is "certain" how does this affect my ability to explain its decision in any sense?

I couldn't find good resources on this notion of uncertainty and these questions, so I would really appreciate some references as well!

One Answer

There is a bit of confusion, I'm afraid:

  • The definition you propose for uncertainty doesn't really represent the concept of uncertainty: if $p_i$ is the probability that $x$ belongs to category $a_i$, then $1-p_i$ is just the probability that $x$ doesn't belong to category $a_i$.
  • Yes, if the classifier assigns a very high $p_i$, say 0.99, this is supposed to mean that the classifier is very confident in its prediction. But the same is true for a very low probability: if $p_i = 0.01$, the classifier is very confident that $x$ doesn't belong to $a_i$. According to your definition, the uncertainty in this case would be very high (0.99) even though the classifier is very confident, so your definition is inconsistent.
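The inconsistency in the second point can be shown in two lines (the value of $p_i$ is just an illustrative number):

```python
# The classifier outputs p_i = 0.01 for category a_i:
# it is very confident that x does NOT belong to a_i.
p_i = 0.01

# Under the question's definition, the "uncertainty" is nevertheless very high.
uncertainty = 1 - p_i

# A more consistent reading: 1 - p_i is simply the predicted
# probability that x does not belong to a_i, not an uncertainty.
print(uncertainty)
```

The printed value is 0.99, a near-maximal "uncertainty" for a prediction the classifier is actually very sure about.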

Now the main problem: whatever confidence measure you build from the probability predicted by the classifier, it's not reliable. A prediction is, at best, an informed decision by the classifier given the data it has seen in the training set and the features of the instance. But the classifier could be a random classifier, or a majority-class classifier: in these cases the probability it "predicts" is arbitrary. Imagine you are a teacher and one of your students says "the answer to x = 2 + 2 is x = 5, I'm 100% sure". The fact that the student is "100% sure" doesn't make them right, and the same goes for the classifier. In other words, any reliable measure of uncertainty involves the gold-standard answer, so it's usually part of the evaluation process. That's not to say that the predicted probability is useless, but in general it has no direct link to accuracy, and it would be a mistake to interpret it that way.
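The majority-class case is easy to simulate. The sketch below (with made-up labels and a toy hand-written classifier, not any particular library's API) shows a classifier that reports 100% confidence on every prediction yet is wrong on a fifth of the data:

```python
# Toy gold labels for an imbalanced binary task: 80 instances of class 0, 20 of class 1.
y_true = [0] * 80 + [1] * 20

def predict_proba(x):
    """A majority-class 'classifier': always class 0, with probability 1.0.

    Like the student in the analogy, it is "100% sure" about every answer.
    """
    return {0: 1.0, 1: 0.0}

# Predict the highest-probability class for each instance.
y_pred = [max(predict_proba(x), key=predict_proba(x).get) for x in y_true]

# Accuracy requires the gold-standard labels -- the predicted
# probabilities alone tell us nothing about it.
accuracy = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)
print(accuracy)
```

The accuracy comes out as 0.8, far from the 100% confidence the classifier reports on every single prediction; the reported probability and the actual accuracy are entirely decoupled.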

Interpretability (or explainability) is a completely different matter: the general idea is to determine whether the answer predicted by a classifier can be understood by a human. Typically, traditional models like Naive Bayes or decision trees are more directly interpretable (at least when there are not too many features) than deep neural network models.

Answered by Erwan on September 5, 2021
