# Cortex
Deploy machine learning models in production
Cortex is an open source platform for deploying machine learning models as production web services.
install • tutorial • docs • examples • we're hiring • email us • chat with us
Cortex is designed to be self-hosted on any AWS account. You can spin up a cluster with a single command:
```bash
# install the CLI on your machine
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.12/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up

aws region: us-west-2
aws instance type: p2.xlarge
spot instances: yes
min instances: 0
max instances: 10

○ spinning up your cluster ...

your cluster is ready!
```

Implement your predictor in predictor.py:

```python
# predictor.py

class PythonPredictor:
    def __init__(self, config):
        # load the model once, when the API starts
        self.model = download_model()

    def predict(self, payload):
        # run inference on the request payload
        return self.model.predict(payload["text"])
```
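The predictor above calls a `download_model()` helper that the snippet leaves undefined. A minimal sketch of what such a helper might look like, assuming a scikit-learn-style model serialized to S3 (the bucket name, key, and use of boto3/joblib are illustrative assumptions, not part of Cortex):

```python
# hypothetical download_model() helper; the bucket and key are
# illustrative placeholders, not part of the Cortex API
import boto3
import joblib

def download_model(bucket="my-models", key="sentiment/model.joblib"):
    # fetch the serialized model from S3 to the local filesystem
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, "/tmp/model.joblib")
    # deserialize and return the model object
    return joblib.load("/tmp/model.joblib")
```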
Configure your deployment in cortex.yaml:

```yaml
# cortex.yaml

- kind: deployment
  name: sentiment

- kind: api
  name: classifier
  predictor:
    type: python
    path: predictor.py
  tracker:
    model_type: classification
  compute:
    gpu: 1
    mem: 4G
```

Deploy to your cluster:

```bash
$ cortex deploy

creating classifier (http://***.amazonaws.com/sentiment/classifier)
```

Query your API:

```bash
$ curl http://***.amazonaws.com/sentiment/classifier \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "the movie was amazing!"}'

positive
```
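The endpoint is plain HTTP, so it can be called from any client. A minimal sketch in Python (the URL is the placeholder from the example above; substitute your cluster's actual endpoint):

```python
# calling the deployed API from Python; the URL is the placeholder
# copied from the curl example above, not a real endpoint
import requests

url = "http://***.amazonaws.com/sentiment/classifier"
response = requests.post(url, json={"text": "the movie was amazing!"})

# the API returns the predicted sentiment label
print(response.text)  # e.g. "positive"
```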
Monitor your deployment:

```bash
$ cortex get classifier --watch

status   up-to-date   requested   last update   avg inference
live     1            1           8s            24ms

class      count
positive   8
negative   4
```

Cortex is an open source alternative to serving models with SageMaker, or to building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, Fargate, and Elastic Compute Cloud (EC2), and open source projects like Docker, Kubernetes, and TensorFlow Serving.
The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request-handling code. The model is exposed as a web service using Elastic Load Balancing (ELB), TensorFlow Serving, and ONNX Runtime. The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
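Conceptually, the request-handling code wraps your predictor in a web server inside the container. A minimal sketch of that pattern using Flask (this illustrates the idea only; it is not Cortex's actual serving code, and the route and port are assumptions):

```python
# conceptual sketch of wrapping a predictor as a web service, in the
# spirit of what Cortex automates; not Cortex's actual serving code
from flask import Flask, jsonify, request

from predictor import PythonPredictor

app = Flask(__name__)
predictor = PythonPredictor(config={})  # instantiated once, at container startup

@app.route("/sentiment/classifier", methods=["POST"])
def classify():
    # forward the JSON payload to the predictor, as in predictor.predict()
    return jsonify(predictor.predict(request.get_json()))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```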