microsoft/recommenders
Best Practices on Recommendation Systems
This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. The examples detail our learnings on five key tasks:
- Prepare Data: preparing and loading data for each recommender algorithm
- Model: building models using various classical and deep learning recommender algorithms such as Alternating Least Squares (ALS) or eXtreme Deep Factorization Machines (xDeepFM)
- Evaluate: evaluating algorithms with offline metrics
- Model Select and Optimize: tuning and optimizing hyperparameters for recommender models
- Operationalize: operationalizing models in a production environment on Azure
Several utilities are provided in reco_utils to support common tasks such as loading datasets in the format expected by different algorithms, evaluating model outputs, and splitting training/test data. Implementations of several state-of-the-art algorithms are included for self-study and customization in your own applications. See the reco_utils documentation.
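For example, loading a dataset and creating a train/test split takes only a few lines. This is a minimal sketch assuming the module layout at the time of writing (`reco_utils.dataset`); check the reco_utils documentation if the paths have moved.

```python
# Minimal sketch of the reco_utils data helpers; module paths reflect the
# repository layout at the time of writing and may move between releases.
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split

# Download MovieLens 100k into a pandas DataFrame with canonical column names.
data = movielens.load_pandas_df(size="100k", header=["userID", "itemID", "rating", "timestamp"])

# Per-user stratified 75/25 split, the same scheme used in the benchmark below.
train, test = python_stratified_split(data, ratio=0.75)
print(len(train), len(test))
```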
For a more detailed overview of the repository, please see the documents at the wiki page.
Please see the setup guide for more details on setting up your machine locally, on Spark, or on Azure Databricks.
To set up on your local machine:
```bash
git clone https://github.com/Microsoft/Recommenders
cd Recommenders
python scripts/generate_conda_file.py
conda env create -f reco_base.yaml
conda activate reco_base
python -m ipykernel install --user --name reco_base --display-name "Python (reco)"
cd notebooks
jupyter notebook
```
NOTE - The Alternating Least Squares (ALS) notebooks require a PySpark environment to run. Please follow the steps in the setup guide to run these notebooks in a PySpark environment.
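For reference, the ALS implementation used in those notebooks is the standard Spark MLlib estimator. A minimal, self-contained sketch looks like the following; the toy data and hyperparameters are illustrative, not the notebooks' exact settings.

```python
# Minimal Spark MLlib ALS sketch; requires a PySpark environment as noted above.
# Data and hyperparameter values here are illustrative only.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-example").getOrCreate()
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 5.0)],
    ["userId", "itemId", "rating"],
)

als = ALS(
    userCol="userId",
    itemCol="itemId",
    ratingCol="rating",
    rank=10,
    maxIter=15,
    regParam=0.05,
    coldStartStrategy="drop",  # drop NaN predictions for unseen users/items
)
model = als.fit(ratings)
model.recommendForAllUsers(10).show(truncate=False)  # top-10 items per user
```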
A setup.py file is provided to simplify installation of the utilities in this repo from the main directory. This still requires the conda environment to be created as described above. Once the necessary dependencies are installed, you can use the following command to install reco_utils as its own Python package.
```bash
pip install -e reco_utils
```
It is also possible to install directly from GitHub, or from a specific branch:
```bash
pip install -e git+https://github.com/microsoft/recommenders/#egg=pkg\&subdirectory=reco_utils
pip install -e git+https://github.com/microsoft/recommenders/@staging#egg=pkg\&subdirectory=reco_utils
```
NOTE - The pip installation does not install any of the necessary package dependencies; it is expected that conda will be used, as shown above, to set up the environment for the utilities.
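Either way, a quick sanity check after installation is simply importing the package from the activated environment:

```python
# Verify the editable install is importable (run inside the reco_base environment).
import reco_utils
print(reco_utils.__file__)
```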
The table below lists the recommender algorithms currently available in the repository. Notebooks are linked under the Environment column when different implementations are available.
| Algorithm | Environment | Type | Description |
|---|---|---|---|
| Alternating Least Squares (ALS) | PySpark | Collaborative Filtering | Matrix factorization algorithm for explicit or implicit feedback in large datasets, optimized by Spark MLlib for scalability and distributed computing capability |
| Deep Knowledge-Aware Network (DKN)* | Python CPU / Python GPU | Content-Based Filtering | Deep learning algorithm incorporating a knowledge graph and article embeddings to provide powerful news or article recommendations |
| Extreme Deep Factorization Machine (xDeepFM)* | Python CPU / Python GPU | Hybrid | Deep learning based algorithm for implicit and explicit feedback with user/item features |
| FastAI Embedding Dot Bias (FAST) | Python CPU / Python GPU | Collaborative Filtering | General purpose algorithm with embeddings and biases for users and items |
| LightGBM/Gradient Boosting Tree* | Python CPU / PySpark | Content-Based Filtering | Gradient Boosting Tree algorithm for fast training and low memory usage in content-based problems |
| Neural Collaborative Filtering (NCF) | Python CPU / Python GPU | Collaborative Filtering | Deep learning algorithm with enhanced performance for implicit feedback |
| Restricted Boltzmann Machines (RBM) | Python CPU / Python GPU | Collaborative Filtering | Neural network based algorithm for learning the underlying probability distribution for explicit or implicit feedback |
| Riemannian Low-rank Matrix Completion (RLRMC)* | Python CPU | Collaborative Filtering | Matrix factorization algorithm using Riemannian conjugate gradients optimization with small memory consumption |
| Simple Algorithm for Recommendation (SAR)* | Python CPU | Collaborative Filtering | Similarity-based algorithm for implicit feedback datasets |
| Surprise/Singular Value Decomposition (SVD) | Python CPU | Collaborative Filtering | Matrix factorization algorithm for predicting explicit rating feedback in datasets that are not very large |
| Vowpal Wabbit Family (VW)* | Python CPU (online training) | Content-Based Filtering | Fast online learning algorithms, great for scenarios where user features / context are constantly changing |
| Wide and Deep | Python CPU / Python GPU | Hybrid | Deep learning algorithm that can memorize feature interactions and generalize user features |
NOTE: * indicates algorithms invented/contributed by Microsoft.
Preliminary Comparison
We provide a benchmark notebook to illustrate how different algorithms can be evaluated and compared. In this notebook, the MovieLens dataset is split into training/test sets at a 75/25 ratio using a stratified split. A recommendation model is trained using each of the collaborative filtering algorithms below, with empirical parameter values reported in the literature. For ranking metrics we use k=10 (top 10 recommended items). We run the comparison on a Standard NC6s_v2 Azure DSVM (6 vCPUs, 112 GB memory, and 1 P100 GPU); Spark ALS is run in local standalone mode. The table below shows the results on MovieLens 100k, running the algorithms for 15 epochs; a condensed sketch of one benchmark leg follows the table.
| Algo | MAP | nDCG@k | Precision@k | Recall@k | RMSE | MAE | R2 | Explained Variance |
|---|---|---|---|---|---|---|---|---|
| ALS | 0.004732 | 0.044239 | 0.048462 | 0.017796 | 0.965038 | 0.753001 | 0.255647 | 0.251648 |
| SVD | 0.012873 | 0.095930 | 0.091198 | 0.032783 | 0.938681 | 0.742690 | 0.291967 | 0.291971 |
| SAR | 0.113028 | 0.388321 | 0.333828 | 0.183179 | N/A | N/A | N/A | N/A |
| NCF | 0.107720 | 0.396118 | 0.347296 | 0.180775 | N/A | N/A | N/A | N/A |
| FastAI | 0.025503 | 0.147866 | 0.130329 | 0.053824 | 0.943084 | 0.744337 | 0.285308 | 0.287671 |
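As a concrete illustration of how one row of this table is produced, here is a condensed sketch of the SAR leg. The benchmark notebook is authoritative; the class name and module paths below reflect the repository layout at the time of writing (earlier versions expose the class as SARSingleNode), and the hyperparameters are indicative rather than the notebook's exact configuration.

```python
# Condensed sketch of the SAR leg of the benchmark above (MovieLens 100k).
# Module paths and hyperparameters are indicative, not the notebook's exact
# configuration; earlier repo versions expose the class as SARSingleNode.
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k
from reco_utils.recommender.sar import SAR

data = movielens.load_pandas_df(size="100k", header=["userID", "itemID", "rating", "timestamp"])
train, test = python_stratified_split(data, ratio=0.75)  # same 75/25 stratified split

model = SAR(
    col_user="userID",
    col_item="itemID",
    col_rating="rating",
    col_timestamp="timestamp",
    similarity_type="jaccard",
)
model.fit(train)
top_k = model.recommend_k_items(test, top_k=10, remove_seen=True)

# Ranking metrics at k=10, matching the table above.
print("MAP@10: ", map_at_k(test, top_k, k=10))
print("nDCG@10:", ndcg_at_k(test, top_k, k=10))
```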
This project welcomes contributions and suggestions. Before contributing, please see our contribution guidelines.
| Build Type | Branches tested |
|---|---|
| Linux CPU | master, staging |
| Linux GPU | master, staging |
| Linux Spark | master, staging |
| Windows CPU | master, staging |
| Windows GPU | master, staging |
| Windows Spark | master, staging |

(Per-branch status badges are rendered on the repository page.)
These DevOps pipelines run the existing tests on AzureML.
| Build Type | Branches tested |
|---|---|
| nightly_cpu_tests | master, staging |
| nightly_gpu_tests | master, staging |
NOTE - these tests are the nightly builds, which compute the smoke and integration tests. Master is our main branch and staging is our development branch. We use pytest for testing Python utilities in reco_utils and papermill for the notebooks. For more information about the testing pipelines, please see the test documentation.
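For example, a typical local invocation of the unit tests uses a pytest marker expression like the one below; the marker names are as described in the test documentation, so adjust them to your environment.

```bash
pytest tests/unit -m "not notebooks and not spark and not gpu"
```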