
roboflow / inference

  • ะฟัั‚ะฝะธั†ะฐ, 25 ะฐะฒะณัƒัั‚ะฐ 2023โ€ฏะณ. ะฒ 00:00:13
https://github.com/roboflow/inference

An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.



Roboflow Inference banner

👋 hello

Roboflow Inference is an opinionated tool for running inference on state-of-the-art computer vision models. With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments. Inference supports running object detection, classification, and instance segmentation models, as well as foundation models (CLIP and SAM).

🎥 Inference in action

Check out Inference running on a video of a football game:

inference.mp4

💻 Why Inference?

Inference provides a scalable method through which you can manage inferences for your vision projects.

Inference is backed by:

  • A server, so you donโ€™t have to reimplement things like image processing and prediction visualization on every project.

  • Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code (see the sketch after this list).

  • Model architecture implementations, which implement the tensor parsing glue between images and predictions for supervised models that you've fine-tuned to perform custom tasks.

  • A model registry, so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights.

  • Data management integrations, so you can collect more images of edge cases and improve your dataset and model as it sees more of the wild.

And more!
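To make the standardized-API point concrete: because every model is served behind the same endpoint shape, iterating on weights is a one-line change in your application code. A minimal sketch against a local Inference server, reusing the endpoint format from the quickstart below; the second model ID is a hypothetical placeholder for your own fine-tuned model:

import requests

# Only this identifier changes when you swap model weights or architecture;
# the request and response handling below stay the same.
model_id = "soccer-players-5fuqs/1"  # e.g. swap for "my-project/3" (hypothetical)

params = {
    "api_key": "ROBOFLOW_API_KEY",  # replace with your Roboflow API key
    "image": "https://source.roboflow.com/pwYAXv9BTpqLyFfgQoPZ/u48G0UpWfk8giSw7wrU8/original.jpg",
}

res = requests.post(f"http://localhost:9001/{model_id}", params=params)
print(res.json())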

📌 Install: pip vs Docker

  • pip: Installs inference into your Python environment. Lightweight, good for Python-centric projects.
  • Docker: Packages inference with its environment. Ensures consistency across setups; ideal for scalable deployments.

💻 install

With ONNX CPU Runtime:

For CPU-powered inference:

pip install inference

or

pip install inference-cpu

With ONNX GPU Runtime:

If you have an NVIDIA GPU, you can accelerate your inference with:

pip install inference-gpu
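
As an optional sanity check, you can ask ONNX Runtime which execution providers it detects; "CUDAExecutionProvider" should be listed when the GPU build is installed correctly:

import onnxruntime

# The GPU build reports "CUDAExecutionProvider" alongside "CPUExecutionProvider"
# when a compatible CUDA setup is found.
print(onnxruntime.get_available_providers())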

Without ONNX Runtime:

Roboflow Inference uses ONNX Runtime as its core inference engine. ONNX Runtime provides an array of execution providers that can optimize inference on different target devices. If you decide to install onnxruntime on your own, install Inference with:

pip install inference-core
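
For example, a hypothetical pairing with a vendor build of ONNX Runtime (onnxruntime-openvino is just one of several execution-provider packages; choose the one that matches your hardware):

pip install inference-core
pip install onnxruntime-openvino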

Alternatively, you can take advantage of some advanced execution providers using one of our published docker images.

Extras:

Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference.

extra   description
http    Ability to run the HTTP interface

Example install with http dependencies:

pip install inference[http]

๐Ÿ‹ docker

You can learn more about building, pulling, and running the Roboflow Inference Docker images in our documentation.

  • Run on x86 CPU:
docker run --net=host roboflow/roboflow-inference-server-cpu:latest
  • Run on Nvidia GPU:
docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
👉 more docker run options
  • Run on arm64 CPU:
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
  • Run on Nvidia GPU with TensorRT Runtime:
docker run --network=host --gpus=all roboflow/roboflow-inference-server-trt:latest
  • Run on Nvidia Jetson with JetPack 4.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-trt-jetson:latest
  • Run on Nvidia Jetson with JetPack 5.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-trt-jetson-5.1.1:latest

🔥 quickstart

Docker Quickstart:

import requests

dataset_id = "soccer-players-5fuqs"
version_id = "1"
image_url = "https://source.roboflow.com/pwYAXv9BTpqLyFfgQoPZ/u48G0UpWfk8giSw7wrU8/original.jpg"
# Replace ROBOFLOW_API_KEY with your Roboflow API Key
api_key = "ROBOFLOW_API_KEY"
confidence = 0.5

url = f"http://localhost:9001/{dataset_id}/{version_id}"

params = {
    "api_key": api_key,
    "confidence": confidence,
    "image": image_url,
}

res = requests.post(url, params=params)
print(res.json())
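
The server replies with JSON. A hedged sketch of reading it, assuming the usual Roboflow detection schema with a top-level predictions list (verify the field names against your server version):

# Each prediction typically carries a class label, a confidence score,
# and a pixel-space bounding box given as center coordinates plus size.
for det in res.json().get("predictions", []):
    print(det.get("class"), det.get("confidence"), det.get("x"), det.get("y"))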

Pip Quickstart:

After installing via pip, you can run a simple inference using:

from inference.core.data_models import ObjectDetectionInferenceRequest
from inference.models.yolov5.yolov5_object_detection import (
    YOLOv5ObjectDetectionOnnxRoboflowInferenceModel,
)

model = YOLOv5ObjectDetectionOnnxRoboflowInferenceModel(
    model_id="soccer-players-5fuqs/1", device_id="my-pc", 
    # Replace ROBOFLOW_API_KEY with your Roboflow API Key
    api_key="ROBOFLOW_API_KEY"
)

request = ObjectDetectionInferenceRequest(
    image={
        "type": "url",
        "value": "https://source.roboflow.com/pwYAXv9BTpqLyFfgQoPZ/u48G0UpWfk8giSw7wrU8/original.jpg",
    },
    confidence=0.5,
    iou_threshold=0.5,
)

results = model.infer(request)

print(results)
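
The result is a structured response object rather than raw JSON. A hedged sketch of extracting plain data, assuming a pydantic-style response model as in this release (field names can differ between versions):

# Assumes the response object is a pydantic model; adjust to your version's schema.
data = results.dict()
for det in data.get("predictions", []):
    print(det.get("class_name"), det.get("confidence"))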

📝 license

The Roboflow Inference code is distributed under an Apache 2.0 license. The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.

model                     license
inference/models/clip     MIT
inference/models/sam      Apache 2.0
inference/models/vit      Apache 2.0
inference/models/yolact   MIT
inference/models/yolov5   AGPL-3.0
inference/models/yolov7   GPL-3.0
inference/models/yolov8   AGPL-3.0

🚀 enterprise

With a Roboflow Inference Enterprise License, you can access additional Inference features, including:

  • Server cluster deployment
  • Device management
  • Active learning
  • YOLOv5 and YOLOv8 model sub-license

To learn more, contact the Roboflow team.

📚 documentation

Visit our documentation for usage examples and reference for Roboflow Inference.

๐Ÿ† contribution

We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏