The Natural Language Decathlon: A Multitask Challenge for NLP

The Natural Language Decathlon is a multitask challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, zero-shot relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. Each task is cast as question answering, which makes it possible to use our new Multitask Question Answering Network (MQAN). This model jointly learns all tasks in decaNLP without any task-specific modules or parameters in the multitask setting. For a more thorough introduction to decaNLP, see our blog post or paper.
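To make the casting concrete, here is a small illustrative sketch of how examples from two of the tasks look once they are expressed as (context, question, answer) triples. The question wordings are paraphrases and the examples are invented for illustration; the exact prompts are defined in the paper and in this repository's dataset readers.

```python
# Illustrative only: every decaNLP task is expressed as (context, question, answer).
# The question wordings below are paraphrases, not necessarily the exact prompts
# used by the dataset readers in this repository.
examples = [
    {   # sentiment analysis (SST)
        "context": "The acting was superb and the plot kept me guessing until the end.",
        "question": "Is this review negative or positive?",
        "answer": "positive",
    },
    {   # machine translation (IWSLT English-to-German)
        "context": "Thank you very much.",
        "question": "What is the translation from English to German?",
        "answer": "Vielen Dank.",
    },
]

for example in examples:
    print(f"{example['question']} -> {example['answer']}")
```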
| Model | decaNLP (total) | SQuAD | IWSLT | CNN/DM | MNLI | SST | QA-SRL | QA-ZRE | WOZ | WikiSQL | MWSC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MQAN | 571.7 | 74.3 | 13.7 | 24.6 | 69.2 | 86.4 | 77.6 | 34.7 | 84.1 | 58.7 | 48.4 |
| S2S | 513.6 | 47.5 | 14.2 | 25.7 | 60.9 | 85.9 | 68.7 | 28.5 | 84.0 | 45.8 | 52.4 |
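The decaNLP column is the decathlon score: the sum of the ten task-specific metrics, each on a 0-100 scale. A quick sanity check of the MQAN row, using only the numbers in the table above:

```python
# The decaNLP score is the sum of the ten task-specific metrics (0-100 each).
mqan = {
    "SQuAD": 74.3, "IWSLT": 13.7, "CNN/DM": 24.6, "MNLI": 69.2, "SST": 86.4,
    "QA-SRL": 77.6, "QA-ZRE": 34.7, "WOZ": 84.1, "WikiSQL": 58.7, "MWSC": 48.4,
}
print(round(sum(mqan.values()), 1))  # 571.7, matching the decaNLP column for MQAN
```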
First, make sure you have docker and nvidia-docker installed. Then build the docker image:
```bash
cd dockerfiles && docker build -t decanlp . && cd -
```

You will also need to make a data directory and move the examples for the Winograd Schemas into it:

```bash
mkdir .data/schema
mv local_data/schema.txt .data/schema/
```

You can run a command inside the docker image using:

```bash
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) decanlp bash -c "COMMAND"
```

For example, to train a Multitask Question Answering Network (MQAN) on the Stanford Question Answering Dataset (SQuAD):

```bash
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) decanlp bash -c "python /decaNLP/train.py --train_tasks squad --gpus DEVICE_ID"
```

To multitask with the fully joint, round-robin training described in the paper (a sketch of this sampling order follows the commands below), you can add multiple tasks:

```bash
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) decanlp bash -c "python /decaNLP/train.py --train_tasks squad iwslt.en.de --train_iterations 1 --gpus DEVICE_ID"
```

To train on the entire Natural Language Decathlon:

```bash
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) decanlp bash -c "python /decaNLP/train.py --train_tasks squad iwslt.en.de cnn_dailymail multinli.in.out sst srl zre woz.en wikisql schema --train_iterations 1 --gpus DEVICE_ID"
```

`experiments.sh` lists the commands for each trained model that we used to report the validation results comparing models and training strategies in the paper.
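As a rough mental model of the round-robin strategy referenced above, the sketch below cycles through per-task batch iterators, taking one batch from each task in turn. It is only an illustration, not the actual training loop in train.py, and the iterator names are hypothetical:

```python
import itertools

def round_robin(task_iterators):
    """Yield one batch from each task iterator in turn, repeatedly."""
    for iterator in itertools.cycle(task_iterators):
        yield next(iterator)

# Hypothetical stand-ins for per-task batch iterators.
squad_batches = iter([f"squad_batch_{i}" for i in range(3)])
iwslt_batches = iter([f"iwslt_batch_{i}" for i in range(3)])

schedule = round_robin([squad_batches, iwslt_batches])
for _ in range(4):
    print(next(schedule))  # squad_batch_0, iwslt_batch_0, squad_batch_1, iwslt_batch_1
```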
If you would like to make use of tensorboard, run (typically in a tmux pane or equivalent):
```bash
docker run -it --rm -p 0.0.0.0:6006:6006 -v `pwd`:/decaNLP/ decanlp bash -c "tensorboard --logdir /decaNLP/results"
```

If you are running the server on a remote machine, you can run the following on your local machine to forward to http://localhost:6006/:

```bash
ssh -4 -N -f -L 6006:127.0.0.1:6006 YOUR_REMOTE_IP
```

If you are having trouble with the specified port on either machine, run `lsof -i:6006` and kill the process if it is unnecessary. Otherwise, try changing the port numbers in the commands above. The first port number is the one the local machine tries to bind to, and the second is the one exposed by the remote machine (or docker container).

To continue training from a checkpoint, use `--load <PATH_TO_CHECKPOINT>` and `--resume`. By default, models are stored every `--save_every` iterations in the results/ folder tree. Use the `--val_every` flag to change the frequency of validation, and adjust batch sizes with `--train_batch_tokens` (tokens per training batch) and `--val_batch_size`.

You can evaluate a model for a specific task with EVALUATION_TYPE as validation or test:

```bash
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) decanlp bash -c "python /decaNLP/predict.py --evaluate EVALUATION_TYPE --path PATH_TO_CHECKPOINT_DIRECTORY --gpus DEVICE_ID --tasks squad"
```

or evaluate on the entire decathlon by removing any task specification:

```bash
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) decanlp bash -c "python /decaNLP/predict.py --evaluate EVALUATION_TYPE --path PATH_TO_CHECKPOINT_DIRECTORY --gpus DEVICE_ID"
```

For test performance, please use the original SQuAD, MultiNLI, and WikiSQL evaluation systems.
If you use this in your work, please cite *The Natural Language Decathlon: Multitask Learning as Question Answering*.
```
@article{McCann2018decaNLP,
  title={The Natural Language Decathlon: Multitask Learning as Question Answering},
  author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
  journal={arXiv preprint ???},
  year={2018}
}
```
Contact: bmccann@salesforce.com and nkeskar@salesforce.com