alexandres / lexvec (https://github.com/alexandres/lexvec)
Friday, 29 July 2016, 03:13:27



LexVec

This is an implementation of the LexVec word embedding model (similar to word2vec and GloVe) that achieves state-of-the-art results in multiple NLP tasks, as described in this paper and this one.

Pre-trained Vectors

Installation

Binary

The easiest way to get started with LexVec is to download the binary release. We only distribute amd64 binaries for Linux.

Download binary

If you are using Windows, OS X, 32-bit Linux, or any other OS, follow the instructions below to build from source.

Building from source

  1. Install the Go compiler
  2. Make sure your $GOPATH is set
  3. Execute the following commands in your terminal:

    go get github.com/alexandres/lexvec
    cd $GOPATH/src/github.com/alexandres/lexvec
    go build

Usage

In-memory (default, faster)

To get started, run $ ./demo.sh, which trains a model on the small text8 corpus (100MB of Wikipedia text).

Basic usage of LexVec is:

$ ./lexvec -corpus somecorpus -output someoutputdirectory/vectors

Run $ ./lexvec -h for a full list of options.

Additionally, we provide a word2vec script that implements the exact same interface as the word2vec package, should you want to test LexVec using existing scripts.

External Memory

By default, LexVec stores the sparse matrix being factorized in memory. This can be a problem if your training corpus is large and your system memory is limited. We suggest you first try the in-memory implementation; if you run into out-of-memory issues, try this External Memory approximation.

env OUTPUTDIR=output ./external_memory_lexvec.sh -corpus somecorpus -dim 300 ...  # exact same options as in-memory

Pre-processing can be accelerated by installing nsort and PyPy and editing pairs_to_counts.sh.

References

Salle, A., Idiart, M., & Villavicencio, A. (2016). Matrix Factorization using Window Sampling and Negative Sampling for Improved Word Representations. arXiv preprint arXiv:1606.00819.

Salle, A., Idiart, M., & Villavicencio, A. (2016). Enhancing the LexVec Distributed Word Representation Model Using Positional Contexts and External Memory. arXiv preprint arXiv:1606.01283.

License

Copyright (c) 2016 Alexandre Salle (atsalle@inf.ufrgs.br). All work in this package is distributed under the MIT License.