A Python package for NLP explainability
Project description
A Python package for benchmarking interpretability techniques.
Free software: MIT license
Documentation: https://ferret.readthedocs.io.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark

# Load any Hugging Face sequence-classification model and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Wrap them in a ferret Benchmark, explain a sample text, and display the results
bench = Benchmark(model, tokenizer)
explanations = bench.explain("You look stunning!")
bench.show_table(explanations)
Features
ferret builds on top of the transformers library. It supports explanations computed with:
Gradients
Integrated Gradients
Gradient x Input word embeddings
SHAP
LIME
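Gradient × Input, listed above, scores each token by the dot product of the model's gradient with that token's input embedding. The NumPy sketch below is purely illustrative (it is not ferret's implementation; the toy vocabulary, embeddings, and classifier weights are made up): it uses a linear bag-of-embeddings classifier so the gradient is available in closed form.

```python
import numpy as np

# Toy setup (illustrative only; ferret wraps real transformer models).
rng = np.random.default_rng(0)
vocab = {"you": 0, "look": 1, "stunning": 2}
emb_dim = 4
E = rng.normal(size=(len(vocab), emb_dim))   # token embedding table
w = rng.normal(size=emb_dim)                 # linear classifier weights

tokens = ["you", "look", "stunning"]
X = E[[vocab[t] for t in tokens]]            # (n_tokens, emb_dim)

# Model score: s = w . mean(X, axis=0), so the gradient of s with
# respect to each embedding row is simply w / n_tokens.
n = len(tokens)
grad = np.tile(w / n, (n, 1))                # (n_tokens, emb_dim)

# Gradient x Input: elementwise product of gradient and embedding,
# summed over the embedding axis, gives one saliency score per token.
saliency = (grad * X).sum(axis=1)
for tok, s in zip(tokens, saliency):
    print(f"{tok}: {s:+.3f}")
```

For a real transformer the gradient would instead come from autodiff (e.g. a backward pass in PyTorch), but the attribution step is the same elementwise product.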
TODOs
Possibility to run on a selected device ("cpu", "cuda")
Sample-And-Occlusion explanations
Discretized Integrated Gradients: https://arxiv.org/abs/2108.13654
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
Cookiecutter: https://github.com/audreyr/cookiecutter
audreyr/cookiecutter-pypackage: https://github.com/audreyr/cookiecutter-pypackage
History
0.1.0 (2022-05-30)
First release on PyPI.
Hashes for ferret_xai-0.1.0-py2.py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | cd8e468b37c8a07eb6491ac284a574ac2110942c8652281b3f7246f8bcf5b19c
MD5 | b308ed1329156a03b3c680bb02c3a8bf
BLAKE2b-256 | d4fe8f283ee2db16b8f35bfe7d4afd59da19fe83f08c28eec1205d5ad0f66965