Top 12 Python Libraries for AI Explainability

Authors
  • Nathan Peper

A look at Python libraries for AI explainability: their characteristics, categories, and more advanced XAI techniques.

SHAP (SHapley Additive exPlanations)

SHAP is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see the papers for details and citations).

Quickly get started by reviewing the documentation, diving into their GitHub, or installing the package as shown below:  

pip install shap

or

conda install -c conda-forge shap
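
As a quick illustration, here is a minimal sketch of a typical SHAP workflow, assuming scikit-learn is available and using its built-in diabetes dataset (the model and data here are just placeholders):

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train any model; SHAP is model-agnostic when given a prediction function
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

# Approximate Shapley values for the first 100 rows
explainer = shap.Explainer(model.predict, X.iloc[:100])
shap_values = explainer(X.iloc[:100])

shap.plots.waterfall(shap_values[0])   # break down a single prediction
shap.plots.beeswarm(shap_values)       # global feature-importance summary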

LIME (Local Interpretable Model-Agnostic Explanations)

LIME is a model-agnostic method that works by approximating the behavior of the model locally around a specific prediction. It focuses on explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images.

A few resources to help you get started with LIME are their arXiv paper, GitHub repo, an overview on O'Reilly here, or just get started with a quick install:

pip install lime
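
For example, here is a minimal tabular sketch (assuming scikit-learn and the iris dataset) that explains one prediction by fitting an interpretable surrogate model locally around it:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Perturb the instance, fit a local linear model, and list the features
# that drove this particular prediction
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())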

Intel Explainable AI Tools

The Intel Explainable AI Tools are designed to help users detect and mitigate fairness and interpretability issues, and they run best on Intel hardware. The two Python components in the repository are the Model Card Generator, which creates interactive HTML reports containing performance and fairness metrics, and the Explainer, which runs post-training model distillation and visualization methods for TensorFlow and PyTorch models.

Quickly get started with Intel's XAI Tool documentation and review the detailed GitHub repo. Then give it a quick try after installing with:

pip install intel-xai

If you run into any issues, skim the GitHub repo for more detailed install options and instructions.

ELI5

ELI5 is a library for debugging/inspecting machine learning classifiers and explaining their predictions. It provides feature importance scores, as well as "reason codes" for scikit-learn, Keras, XGBoost, LightGBM, CatBoost, lightning, and sklearn-crfsuite.

ELI5 also implements several algorithms for inspecting black-box models, such as TextExplainer and permutation importance.

Learn more by diving into the documentation or one of the GitHub repos (here and here) and get started by installing with:

pip install eli5

or using:

conda install -c conda-forge eli5
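
As a rough sketch (assuming scikit-learn; newer scikit-learn releases may need a recent ELI5 version), you can inspect a model's weights and compute permutation importance like this:

import eli5
from eli5.sklearn import PermutationImportance
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# Global feature weights ("reason codes") for the linear model
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=data.feature_names)))

# Model-agnostic permutation importance
perm = PermutationImportance(clf, n_iter=10).fit(data.data, data.target)
print(eli5.format_as_text(eli5.explain_weights(perm, feature_names=data.feature_names)))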

Shapash

Shapash is a Python library that aims to make machine learning interpretable and understandable to everyone. Shapash provides several types of visualization with explicit labels.

Data Scientists can understand their models easily and share their results. End users can understand the decision proposed by a model using a summary of the most influential criteria.

Get started by reading the docs and reviewing the GitHub repo. Then it's a quick install with:

pip install shapash
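
Here is a minimal sketch of a typical Shapash workflow, assuming a scikit-learn regressor; the exact import path can vary between Shapash versions:

from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from shapash import SmartExplainer

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor().fit(X, y)

xpl = SmartExplainer(model=model)
xpl.compile(x=X)                   # compute contributions for the dataset
xpl.plot.features_importance()     # labeled global importance plot
app = xpl.run_app()                # interactive web app for end users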

Anchor

Anchor is another technique from the University of Washington, similar to LIME. An anchor explanation is a rule that sufficiently "anchors" the prediction locally so that changes to the rest of the feature values of the instance do not matter. This method is able to explain any black-box classifier with two or more classes. All that is required is that the classifier implements a function that takes in raw text or a numpy array and outputs a prediction in the form of an integer.

To learn more read the paper or head over to the GitHub repo.
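
A minimal tabular sketch, assuming the reference implementation is installed (pip install anchor-exp) and using a scikit-learn classifier; argument names may differ slightly between versions:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from anchor import anchor_tabular

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = anchor_tabular.AnchorTabularExplainer(
    data.target_names, data.feature_names, data.data
)

# Search for a rule that "anchors" this prediction with at least 95% precision
exp = explainer.explain_instance(data.data[0], model.predict, threshold=0.95)
print("Anchor:", " AND ".join(exp.names()))
print("Precision:", exp.precision())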

XAI

XAI is a Machine Learning library that is designed with AI explainability at its core. XAI contains various tools that enable the analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning.

Review the documentation here, dive into the GitHub repo, and even watch the talk where the original idea was shared. After reviewing, quickly get started by running:

pip install xai

Break Down

Break Down is a model-agnostic tool for the decomposition of predictions from black boxes. Break Down Table shows the contributions of every variable to a final prediction. Break Down Plot presents variable contributions in a concise graphical way. This tool primarily works for binary classifiers and general regression models.

The maintained codebase has migrated over time and exists in several forms: you can review the docs and the original implementation in this GitHub repo, then look at the more recently maintained repos for a Python implementation (pyBreakDown), the migration to DALEX, and iBreakDown.

To get started in Python, install with:

git clone https://github.com/bondyra/pyBreakDown
cd ./pyBreakDown
python3 setup.py install  # (or use pip install . instead)

or use the DALEX version:

pip install dalex

or with the iBreakDown implementation:

# the easiest way to get iBreakDown is to install it from CRAN:
install.packages("iBreakDown")

# Or the development version from GitHub:
# install.packages("devtools")
devtools::install_github("ModelOriented/iBreakDown")
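
For example, taking the DALEX route in Python, a minimal sketch (assuming scikit-learn) that produces a Break Down table and plot for a single prediction might look like:

import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier().fit(X, y)

explainer = dx.Explainer(model, X, y)

# Decompose one prediction into per-variable contributions
bd = explainer.predict_parts(X.iloc[[0]], type="break_down")
print(bd.result)   # Break Down table
bd.plot()          # Break Down plot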

Interpret-ML

InterpretML is an open-source package that incorporates state-of-the-art machine-learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior, or understand the reasons behind individual predictions.

Under the umbrella of Interpret-ML is another widely known package for text classification called Interpret-Text. Interpret-Text incorporates community-developed interpretability techniques for NLP models and a visualization dashboard to view the results. Users can run their experiments across multiple state-of-the-art explainers and easily perform comparative analyses on them.

Review the Interpret-ML GitHub repo and the Interpret-Text GitHub repo, then get started by installing either package with:

pip install interpret
# OR
conda install -c conda-forge interpret
# OR
pip install interpret-text
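
As a minimal sketch (assuming scikit-learn), training a glassbox Explainable Boosting Machine and viewing its global and local explanations looks roughly like this:

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Train an interpretable "glassbox" model
ebm = ExplainableBoostingClassifier().fit(X, y)

show(ebm.explain_global())               # global, per-term model behavior
show(ebm.explain_local(X[:5], y[:5]))    # reasons behind individual predictions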

Interpretable Machine Learning (iML)

Interpretable ML (iML) is a set of data type objects, visualizations, and interfaces that can be used by any method designed to explain the predictions of machine learning models (or really the output of any function). It currently contains the interface and IO code from the Shap project, and it will potentially also do the same for the Lime project.

Review the GitHub repo and install with a simple:

pip install iml

AIX360

The AI Explainability 360 toolkit is an open-source library that supports the interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 toolkit supports tabular, text, images, and time series data.

This is also a Linux Foundation Project and has a great overview site that you can review here. However, as always the most recent changes and code updates are in their GitHub repo. There are numerous install options shown in the repo, but it's easy to get started with the quick install:

pip install aix360

OmniXAI

OmniXAI (short for Omni eXplainable AI) is a Python machine-learning library for explainable AI (XAI), offering omni-way explainable AI and interpretable machine-learning capabilities that address many pain points in interpreting decisions made by machine-learning models in practice. OmniXAI aims to be a one-stop comprehensive library that makes explainable AI easy for data scientists, ML researchers, and practitioners who need explanations for various types of data, models, and explanation methods at different stages of the ML process.

Read through the documentation here or head straight to the GitHub repo. Finally, get started with:

pip install omnixai