Qihoo360 / XLearning
- Wednesday, December 6, 2017, 03:14:57
AI on Hadoop
XLearning is a convenient and efficient scheduling platform that combines big data with artificial intelligence and supports a variety of machine learning and deep learning frameworks. XLearning runs on Hadoop YARN and integrates deep learning frameworks such as TensorFlow, MXNet, Caffe, Theano, PyTorch, Keras, and XGBoost. XLearning offers good scalability and compatibility.
There are three essential components in XLearning:
Besides the distributed mode of the TensorFlow and MXNet frameworks, XLearning supports the standalone mode of all deep learning frameworks, such as Caffe, Theano, and PyTorch. Moreover, XLearning flexibly allows custom and multiple versions of each framework.
XLearning can specify the read strategy for the input data `--input` by setting the `--input-strategy` parameter or the `xlearning.input.strategy` configuration. XLearning supports three ways to read the HDFS input data:
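As an illustration, the read strategy can also be pinned cluster-wide in `xlearning-site.xml` instead of per submission; the value `DOWNLOAD` below is an assumed strategy name used only for illustration, not confirmed by this document:

```xml
<!-- Assumed example: set a default input read strategy in xlearning-site.xml.
     The strategy value shown is illustrative. -->
<property>
  <name>xlearning.input.strategy</name>
  <value>DOWNLOAD</value>
</property>
```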
Similar to the read strategy, XLearning can specify the write strategy for the output data `--output` by setting the `--output-strategy` parameter or the `xlearning.output.strategy` configuration. There are two kinds of result output modes:
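Analogously, the output strategy can be set in `xlearning-site.xml`; the value `UPLOAD` below is an assumed strategy name for illustration only:

```xml
<!-- Assumed example: set a default output write strategy in xlearning-site.xml.
     The strategy value shown is illustrative. -->
<property>
  <name>xlearning.output.strategy</name>
  <value>UPLOAD</value>
</property>
```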
The application interface can be divided into three parts:
Except for the automatic construction of the ClusterSpec for the distributed-mode TensorFlow framework, programs using standalone-mode TensorFlow and other deep learning frameworks can be executed on XLearning directly.
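To illustrate what the automatic ClusterSpec construction spares the user, here is a minimal sketch of how a distributed TensorFlow cluster spec is normally assembled by hand. The environment variable names (`PS_HOSTS`, `WORKER_HOSTS`) and the host addresses are hypothetical; XLearning performs an equivalent construction automatically for distributed TensorFlow jobs.

```python
import json


def build_cluster_spec(env):
    """Assemble a TensorFlow-style ClusterSpec dict from environment
    variables. The variable names PS_HOSTS/WORKER_HOSTS are hypothetical
    stand-ins for whatever the scheduler injects into each container."""
    return {
        "ps": env.get("PS_HOSTS", "").split(","),
        "worker": env.get("WORKER_HOSTS", "").split(","),
    }


# Example environment as a container might see it (hosts invented):
env = {
    "PS_HOSTS": "node1:2222",
    "WORKER_HOSTS": "node2:2222,node3:2222",
}
print(json.dumps(build_cluster_spec(env)))
```

The resulting dict has the same shape that `tf.train.ClusterSpec` accepts, which is what each distributed TensorFlow process needs at startup.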
Run the following command in the root directory of the source code:
mvn package
After compiling, a distribution package named `xlearning-1.0-dist.tar.gz` will be generated under `target` in the root directory.
After unpacking the distribution package, the following subdirectories will be generated under its root directory:
Under the "conf" directory of the unpacked distribution package "$XLEARNING_HOME", configure the related files:
xlearning-env.sh: set the environment variables, such as:
xlearning-site.xml: configure related properties. Note that the properties associated with the history service need to be consistent with what was configured when the history service started. For more details, please see the Configuration part.
log4j.properties: configure the log level.
Start the history service with `$XLEARNING_HOME/sbin/start-history-server.sh`. Use `$XLEARNING_HOME/bin/xl-submit` to submit the application to the cluster from the XLearning client.
Here is a submit example for a TensorFlow application.
Upload the "data" directory under the root of the unpacked distribution package to HDFS:
cd $XLEARNING_HOME
hadoop fs -put data /tmp/
cd $XLEARNING_HOME/examples/tensorflow
$XLEARNING_HOME/bin/xl-submit \
--app-type "tensorflow" \
--app-name "tf-demo" \
--input /tmp/data/tensorflow#data \
--output /tmp/tensorflow_model#model \
--files demo.py,dataDeal.py \
--launch-cmd "python demo.py --data_path=./data --save_path=./model --log_dir=./eventLog --training_epochs=10" \
--worker-memory 10G \
--worker-num 2 \
--worker-cores 3 \
--ps-memory 1G \
--ps-num 1 \
--ps-cores 2
The meanings of the parameters are as follows:
Property Name | Meaning |
---|---|
app-name | application name as "tf-demo" |
app-type | application type as "tensorflow" |
input | input data; the HDFS path "/tmp/data/tensorflow" is mapped to the local dir "./data" |
output | output data; the HDFS path "/tmp/tensorflow_model" is mapped to the local dir "./model" |
files | application program and required local files, including demo.py, dataDeal.py |
launch-cmd | execute command |
worker-memory | amount of memory to use for the worker process is 10GB |
worker-num | number of worker containers to use for the application is 2 |
worker-cores | number of cores to use for the worker process is 3 |
ps-memory | amount of memory to use for the ps process is 1GB |
ps-num | number of ps containers to use for the application is 1 |
ps-cores | number of cores to use for the ps process is 2 |
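As a rough capacity check, the total YARN resources requested by the example above can be computed directly from these parameters. This sketch sums only the worker and ps containers and ignores any ApplicationMaster overhead:

```python
def total_request(worker_num, worker_mem_gb, worker_cores,
                  ps_num, ps_mem_gb, ps_cores):
    """Sum memory (GB) and vcores across all worker and ps containers."""
    mem = worker_num * worker_mem_gb + ps_num * ps_mem_gb
    cores = worker_num * worker_cores + ps_num * ps_cores
    return mem, cores


# Values from the xl-submit example: 2 workers (10G, 3 cores) + 1 ps (1G, 2 cores)
mem, cores = total_request(2, 10, 3, 1, 1, 2)
print(mem, cores)  # 21 GB, 8 vcores
```

The queue the job is submitted to must have at least this much capacity available, or the containers will wait in the YARN scheduler.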
For more details, see the Submit Parameter part.
Mail: g-xlearning-dev@360.cn
QQ group: 588356340