This repository contains the code in both **PyTorch** and **TensorFlow** for our paper

> [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860)
>
> Zihang Dai\*, Zhilin Yang\*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov (\*: equal contribution)
- `tf/` folder, supporting (1) single-node multi-gpu training, and (2) multi-host TPU training.
- `pytorch/` folder, supporting single-node multi-gpu training via the module `nn.DataParallel` (a minimal usage sketch follows this list).
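For orientation, below is a minimal, self-contained sketch of the `nn.DataParallel` pattern the PyTorch code relies on. The toy model is a stand-in for illustration only, not the repo's actual Transformer-XL module (which lives in `pytorch/mem_transformer.py` and takes many more arguments):

```python
import torch
import torch.nn as nn

# Toy stand-in model, NOT the repo's Transformer-XL implementation.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module onto every visible GPU,
    # scatters each input batch along dim 0, and gathers the outputs.
    model = nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(64, 512, device=device)  # (batch_size, features)
logits = model(batch)  # split across GPUs when more than one is visible
print(logits.shape)    # torch.Size([64, 10])
```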
Transformer-XL achieves new state-of-the-art results on multiple language modeling benchmarks. Transformer-XL is also the first to break through the 1.0 barrier (in bits per character) on char-level language modeling. Below is a summary.
| Method | enwik8 (bpc) | text8 (bpc) | One Billion Word (ppl) | WT-103 (ppl) | PTB w/o finetuning (ppl) |
| --- | --- | --- | --- | --- | --- |
| Previous Best | 1.06 | 1.13 | 23.7 | 20.5 | 55.5 |
| Transformer-XL | **0.99** | **1.08** | **21.8** | **18.3** | **54.5** |
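A note on units: enwik8 and text8 are evaluated in bits per character (bpc), while the other benchmarks use perplexity. A bpc figure is just the model's average character-level cross-entropy converted from nats to bits, as in this small sketch (the loss value here is made up for illustration):

```python
import math

def bits_per_character(nll_nats: float) -> float:
    # Hypothetical helper: converts an average character-level
    # negative log-likelihood in nats (natural log) to bits per character.
    return nll_nats / math.log(2)

# Illustrative value only: a mean char-level loss of ~0.686 nats
# corresponds to ~0.99 bpc, i.e. just under the 1.0 barrier.
print(round(bits_per_character(0.686), 2))  # 0.99
```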
A large portion of the `getdata.sh` script comes from the [awd-lstm](https://github.com/salesforce/awd-lstm-lm) repo. Happy Language Modeling :)