PAIR-code / lit
The Language Interpretability Tool: Interactively analyze NLP models for model understanding in an extensible and framework-agnostic interface.
The Language Interpretability Tool (LIT) is a visual, interactive model-understanding tool for NLP models.
LIT is built to answer questions such as: What kind of examples does my model perform poorly on? Why did my model make this prediction, and can it be attributed to adversarial behavior or to undesirable priors from the training set? Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?
LIT supports a variety of debugging workflows through a browser-based UI. Features include local explanations via salience maps and attention visualization, aggregate analysis with custom metrics and embedding-space visualization, counterfactual generation via manual edits or generator plug-ins, and side-by-side comparison of two models or of one model on a pair of examples.
For a broader overview, check out our paper and the user guide.
Download the repo and set up a Python environment:
git clone https://github.com/PAIR-code/lit.git ~/lit
# Set up Python environment
cd ~/lit
conda env create -f environment.yml
conda activate lit-nlp
conda install cudnn cupti # optional, for GPU support
conda install -c pytorch pytorch # optional, for PyTorch
# Build the frontend
cd ~/lit/lit_nlp/client
yarn && yarn build
Then run a quick-start demo:
cd ~/lit
python -m lit_nlp.examples.quickstart_sst_demo --port=5432
This will fine-tune a BERT-tiny model on the Stanford Sentiment Treebank, which should take less than 5 minutes on a GPU. After training completes, it'll start a LIT server on the development set; navigate to http://localhost:5432 for the UI.
To explore predictions from a pretrained language model (BERT or GPT-2), run:
cd ~/lit
python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased \
--port=5432
And navigate to http://localhost:5432 for the UI.
More demos are available in ../lit_nlp/examples. They run similarly to the above:
cd ~/lit
python -m lit_nlp.examples.<example_name> --port=5432 [optional --args]
To learn about LIT's features, check out the user guide, or watch this short video.
You can easily run LIT with your own model by creating a custom demo.py launcher, similar to those in ../lit_nlp/examples. The basic steps are:
- Write a data loader that follows the Dataset API
- Write a model wrapper that follows the Model API
- Pass your models and datasets to the LIT server class
A minimal sketch of such a launcher is shown below. For a full walkthrough, see adding models and data.
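As an illustration, here is a minimal, hypothetical launcher. The ToyDataset and ToyModel names, the inline examples, and the uniform predictions are all placeholders; the real launchers in ../lit_nlp/examples show complete, working versions.

# demo.py -- hypothetical minimal LIT launcher; names and data are placeholders.
from absl import app

from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["0", "1"]  # placeholder label vocabulary

class ToyDataset(lit_dataset.Dataset):
  """Data loader: provides examples plus a spec describing their fields."""

  def __init__(self):
    # A real loader would read these from disk, TFDS, etc.
    self._examples = [
        {"sentence": "a masterpiece of modern cinema", "label": "1"},
        {"sentence": "dull and painfully long", "label": "0"},
    ]

  def spec(self):
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=LABELS),
    }

class ToyModel(lit_model.Model):
  """Model wrapper: declares input/output specs and does batched prediction."""

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    return {"probas": lit_types.MulticlassPreds(parent="label", vocab=LABELS)}

  def predict_minibatch(self, inputs, config=None):
    # Placeholder scores; a real wrapper would run your trained model here.
    return [{"probas": [0.5, 0.5]} for _ in inputs]

def main(_):
  models = {"toy_model": ToyModel()}
  datasets = {"toy_data": ToyDataset()}
  lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
  lit_demo.serve()

if __name__ == "__main__":
  app.run(main)

Launch it the same way as the bundled demos, e.g. python demo.py --port=5432, and open the UI in your browser.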
LIT is easy to extend with new interpretability components, generators, and more, both on the frontend or the backend. See the developer guide to get started.
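As a small illustration of a backend component, the sketch below subclasses the Interpreter base class from lit_nlp.api.components and implements run(). The TokenCounter name and the assumption of a "sentence" text field are hypothetical; registering such a component with the server is covered in the developer guide.

from lit_nlp.api import components as lit_components

class TokenCounter(lit_components.Interpreter):
  """Toy interpreter: reports whitespace-separated token counts per example."""

  def run(self, inputs, model, dataset, model_outputs=None, config=None):
    # Return one result dict per input example.
    return [{"num_tokens": len(ex["sentence"].split())} for ex in inputs]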
If you use LIT as part of your work, please cite:
@misc{tenney2020language,
title={The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models},
author={Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan},
year={2020},
eprint={2008.05122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
This is not an official Google product.