google-ai-edge / LiteRT
LiteRT, the successor to TensorFlow Lite, is Google's on-device framework for high-performance ML & GenAI deployment on edge platforms, via efficient conversion, runtime, and optimization.
Get Started | Contributing | License | Security Policy | Documentation
Build status badges: Nightly Builds · Continuous Builds · Other Builds
LiteRT continues the legacy of TensorFlow Lite as the trusted, high-performance runtime for on-device AI.
LiteRT features advanced GPU/NPU acceleration and delivers superior ML & GenAI performance, making on-device ML inference easier than ever.
- **New LiteRT Compiled Model API:** Streamline development with automated accelerator selection, true async execution, and efficient I/O buffer handling.
- **Unified NPU Acceleration:** Seamless access to NPUs from major chipset providers with a consistent developer experience. LiteRT NPU, previously under an early-access program, is now available to all users: https://ai.google.dev/edge/litert/next/npu
- **Best-in-class GPU Performance:** State-of-the-art GPU acceleration for on-device ML. The new buffer interoperability enables zero-copy data exchange and minimizes latency across various GPU buffer types.
- **Superior Generative AI Inference:** The simplest integration with the best performance for GenAI models.
LiteRT is designed for cross-platform deployment on a wide range of hardware.
| Platform | CPU Support | GPU Support | NPU Support |
|---|---|---|---|
| Android | ✅ | ✅ OpenCL, OpenGL | Google Tensor*, Qualcomm, MediaTek, S.LSI*, Intel* |
| iOS | ✅ | ✅ Metal | ANE* |
| Linux | ✅ | ✅ WebGPU | N/A |
| macOS | ✅ | ✅ WebGPU, Metal | ANE* |
| Windows | ✅ | ✅ WebGPU | Intel* |
| Web | ✅ | ✅ WebGPU | Coming soon |
| IoT | ✅ | ✅ WebGPU | Broadcom*, Raspberry Pi* |
*Coming soon
For a comprehensive guide to setting up your application with LiteRT, see the Get Started guide.
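As a minimal sketch of what the runtime side looks like in Python, the snippet below runs one inference with LiteRT's Python interpreter (the `ai_edge_litert` pip package, a drop-in replacement for `tf.lite.Interpreter`). The model path and post-processing step are illustrative assumptions; consult the Get Started guide for the authoritative API.

```python
def run_model(model_path, input_array):
    """Run one inference with the LiteRT Python interpreter.

    The import is kept inside the function so the rest of this module
    loads even where the `ai_edge_litert` package is not installed.
    """
    from ai_edge_litert.interpreter import Interpreter

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()                     # allocate I/O buffers
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], input_array)  # copy the input in
    interpreter.invoke()                               # run the model
    return interpreter.get_tensor(out["index"])        # fetch the output


def top_prediction(scores):
    """Index of the highest score -- a typical classification post-process."""
    return max(range(len(scores)), key=lambda i: scores[i])
```

For an image classifier, something like `top_prediction(run_model("model.tflite", image)[0])` would then yield the predicted class index (file name hypothetical).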
You can build LiteRT from source:
Run `build_with_docker.sh` under `docker_build/`. The script automatically creates a Linux Docker image, which allows you to build artifacts for Linux and Android (through cross-compilation). See the CMake build instructions and Bazel build instructions for more information on how to build runtime libraries with the Docker container.

For more information about using the Docker interactive shell or building different
targets, please refer to `docker_build/README.md`.
Every developer's path is different. Here are a few common journeys to help you get started based on your goals:
Convert your model to the `.tflite` format, and use the AI Edge
Quantizer to optimize it for performance under resource
constraints. From there, you can deploy it using the standard LiteRT runtime.

Our commitment is to make LiteRT the best runtime for any on-device ML deployment.
We welcome contributions to LiteRT. Please see the CONTRIBUTING.md file for more information on how to contribute.
We encourage you to reach out if you need help.
LiteRT is part of a larger ecosystem of tools for on-device machine learning. Check out these other projects from Google:
This project is dedicated to fostering an open and welcoming environment. Please read our Code of Conduct to understand the standards of behavior we expect from all participants in our community.
LiteRT is licensed under the Apache-2.0 License.