Understanding lime.submodular_pick on text

Data Science Asked on July 28, 2021

I have a binary document classification task (class labels 0 and 1). I use a Keras network (functional API) and run the LimeTextExplainer:

from lime.lime_text import LimeTextExplainer
from lime import lime_text

explainer = LimeTextExplainer(class_names=['negative','positive'])

Running submodular_pick, i.e. feeding a list of strings (the documents) and my custom predict_prob function, works fine. However, how do I retrieve the globally most relevant features? Is it sp_obj.explanations or sp_obj.sp_explanations? What is the difference between the two?

from lime import submodular_pick
sp_obj = submodular_pick.SubmodularPick(explainer, document, predict_prob, sample_size=10, num_features=feature_n, num_exps_desired=2)
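For context, this is how I currently try to turn the picked explanations into a global ranking: sum the absolute LIME weights per feature across explanations. The input format below mirrors what exp.as_list(label=...) returns, but the weights here are made up, and I am not sure this is the intended way to get global importances:

```python
from collections import defaultdict

def aggregate_weights(explanations_as_lists):
    """Sum absolute LIME weights per feature across picked explanations.

    `explanations_as_lists` is a list of lists of (feature, weight) tuples,
    e.g. [exp.as_list(label=1) for exp in sp_obj.sp_explanations].
    """
    totals = defaultdict(float)
    for pairs in explanations_as_lists:
        for feature, weight in pairs:
            totals[feature] += abs(weight)
    # highest total weight first
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# toy check with invented weights
ranking = aggregate_weights([[("good", 0.4), ("bad", -0.3)],
                             [("good", 0.2), ("awful", -0.5)]])
```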

Is the following really the global output for the negative class and the positive class? And what do the negative/positive values on the bars mean?

# negative class
[ex.as_pyplot_figure(label=0) for ex in sp_obj.sp_explanations]

# positive class
[ex.as_pyplot_figure(label=1) for ex in sp_obj.sp_explanations]

The first call, for the negative class, raises an error, although the figure is still shown:

ans = self.domain_mapper.map_exp_ids(self.local_exp[label_to_use], **kwargs)
KeyError: 0
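My current guess at the cause: lime keys the local_exp dict by label, and explanations are generated for label 1 only by default, so looking up label 0 fails. A minimal sketch of the fallback I am considering (the dict here is a hypothetical stand-in, not real lime output):

```python
# Stand-in for exp.local_exp: keyed by class label; in my run
# only label 1 seems to be present, hence KeyError: 0.
local_exp = {1: [("good", 0.4), ("bad", -0.3)]}

def safe_label(local_exp, wanted):
    """Return `wanted` if explained, else fall back to some available label."""
    return wanted if wanted in local_exp else next(iter(local_exp))

label = safe_label(local_exp, 0)  # 0 is missing, so this falls back to 1
```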

My predict_prob function looks like this:

def predict_prob(strings):
    '''Must take a list of d strings and return a (d, k) numpy array
    of prediction probabilities, where k is the number of classes.
    '''
    x_temp = count.transform(np.array(strings))  # vectorize the raw strings
    # model outputs P(class 1) with shape (d, 1)
    prediction = model.predict(convert_sparse_matrix_to_sparse_tensor(x_temp))
    class_zero = 1 - prediction                  # P(class 0)
    probability = np.append(class_zero, prediction, axis=1)  # shape (d, 2)
    return probability                           # each row is [1-p, p]
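To sanity-check the (d, k) contract without the model, here is a toy sketch with made-up probabilities showing the shape LIME expects:

```python
import numpy as np

# stand-in for model output: one P(class 1) per document, shape (d, 1)
p = np.array([[0.9], [0.2], [0.55]])

# LIME expects shape (d, k); for binary k=2 the columns are
# [P(class 0), P(class 1)], so each row must sum to 1
probs = np.hstack([1.0 - p, p])  # shape (d, 2)
```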
