oumi-ai / oumi
Everything you need to build state-of-the-art foundation models, end-to-end.
Oumi is a fully open-source platform that streamlines the entire lifecycle of foundation models, from data preparation and training to evaluation and deployment. Whether you're developing on a laptop, launching large-scale experiments on a cluster, or deploying models in production, Oumi provides the tools and workflows you need.
With Oumi, you can prepare data, train and fine-tune models, evaluate them, and serve them for inference, all with one consistent API, production-grade reliability, and the flexibility you need for research.
Learn more at oumi.ai, or jump right in with the quickstart guide.
Installing oumi in your environment is straightforward:
# Install the package (CPU & NPU only)
pip install oumi  # For local development & testing
# OR, with GPU support (requires an Nvidia or AMD GPU)
pip install "oumi[gpu]"  # Quoted so shells like zsh don't expand the brackets
# To get the latest version, install from source
pip install git+https://github.com/oumi-ai/oumi.git
For more advanced installation options, see the installation guide.
You can quickly use the `oumi` command to train, evaluate, and run inference with models using one of the existing recipes:
# Training
oumi train -c configs/recipes/smollm/sft/135m/quickstart_train.yaml
# Evaluation
oumi evaluate -c configs/recipes/smollm/evaluation/135m/quickstart_eval.yaml
# Inference
oumi infer -c configs/recipes/smollm/inference/135m_infer.yaml --interactive
For more advanced options, see the training, evaluation, inference, and llm-as-a-judge guides.
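For orientation, a recipe such as `quickstart_train.yaml` is a plain YAML file that combines model, data, and training settings in one place. The sketch below is illustrative only; the field names and values are assumptions rather than a verbatim copy, so consult the recipes in `configs/recipes` for the authoritative schema:

```yaml
# Hypothetical SFT recipe sketch; keys are assumptions,
# not copied from the actual quickstart_train.yaml.
model:
  model_name: "HuggingFaceTB/SmolLM2-135M-Instruct"

data:
  train:
    datasets:
      - dataset_name: "yahma/alpaca-cleaned"

training:
  trainer_type: "TRL_SFT"
  max_steps: 10
  per_device_train_batch_size: 4
  output_dir: "output/smollm-135m-sft"
```

You would then run it with `oumi train -c path/to/your_config.yaml`, adjusting fields to point at your own model and datasets.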
You can run jobs remotely on cloud platforms (AWS, Azure, GCP, Lambda, etc.) using the `oumi launch` command:
# GCP
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_gcp_job.yaml
# AWS
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_aws_job.yaml
# Azure
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_azure_job.yaml
# Lambda
oumi launch up -c configs/recipes/smollm/sft/135m/quickstart_lambda_job.yaml
Note: Oumi is in beta and under active development. The core features are stable, but some advanced features might change as the platform improves.
If you need a comprehensive platform for training, evaluating, or deploying models, Oumi is a great choice.
Here are some of the key features that make Oumi stand out:
Explore the growing collection of ready-to-use configurations for state-of-the-art models and training workflows:
Note: These configurations are not an exhaustive list of what's supported, just examples to get you started. You can find a more complete list of supported models and datasets (supervised fine-tuning, pre-training, preference tuning, and vision-language fine-tuning) in the Oumi documentation.
Model | Example Configurations |
---|---|
DeepSeek R1 671B | Inference (Together AI) |
Distilled Llama 8B | FFT • LoRA • QLoRA • Inference • Evaluation |
Distilled Llama 70B | FFT • LoRA • QLoRA • Inference • Evaluation |
Distilled Qwen 1.5B | FFT • LoRA • Inference • Evaluation |
Distilled Qwen 32B | LoRA • Inference • Evaluation |
Model | Example Configurations |
---|---|
Llama 3.1 8B | FFT • LoRA • QLoRA • Pre-training • Inference (vLLM) • Inference • Evaluation |
Llama 3.1 70B | FFT • LoRA • QLoRA • Inference • Evaluation |
Llama 3.1 405B | FFT • LoRA • QLoRA |
Llama 3.2 1B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
Llama 3.2 3B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
Llama 3.3 70B | FFT • LoRA • QLoRA • Inference (vLLM) • Inference • Evaluation |
Llama 3.2 Vision 11B | SFT • Inference (vLLM) • Inference (SGLang) • Evaluation |
Model | Example Configurations |
---|---|
Llama 3.2 Vision 11B | SFT • LoRA • Inference (vLLM) • Inference (SGLang) • Evaluation |
LLaVA 7B | SFT • Inference (vLLM) • Inference |
Phi3 Vision 4.2B | SFT • Inference (vLLM) |
Qwen2-VL 2B | SFT • Inference (vLLM) • Inference (SGLang) • Inference • Evaluation |
SmolVLM-Instruct 2B | SFT |
This section lists all the language models that can be used with Oumi. Thanks to the integration with the 🤗 Transformers library, you can easily use any of these models for training, evaluation, or inference.
Models prefixed with a checkmark (✅) have been thoroughly tested and validated by the Oumi community, with ready-to-use recipes available in the configs/recipes directory.
Model | Size | Paper | HF Hub | License | Open 1 | Recommended Parameters |
---|---|---|---|---|---|---|
✅ SmolLM-Instruct | 135M/360M/1.7B | Blog | Hub | Apache 2.0 | ✅ | |
✅ DeepSeek R1 Family | 1.5B/8B/32B/70B/671B | Blog | Hub | MIT | ❌ | |
✅ Llama 3.1 Instruct | 8B/70B/405B | Paper | Hub | License | ❌ | |
✅ Llama 3.2 Instruct | 1B/3B | Paper | Hub | License | ❌ | |
✅ Llama 3.3 Instruct | 70B | Paper | Hub | License | ❌ | |
✅ Phi-3.5-Instruct | 4B/14B | Paper | Hub | License | ❌ | |
Qwen2.5-Instruct | 0.5B-70B | Paper | Hub | License | ❌ | |
OLMo 2 Instruct | 7B | Paper | Hub | Apache 2.0 | ✅ | |
MPT-Instruct | 7B | Blog | Hub | Apache 2.0 | ✅ | |
Command R | 35B/104B | Blog | Hub | License | ❌ | |
Granite-3.1-Instruct | 2B/8B | Paper | Hub | Apache 2.0 | ❌ | |
Gemma 2 Instruct | 2B/9B | Blog | Hub | License | ❌ | |
DBRX-Instruct | 130B MoE | Blog | Hub | Apache 2.0 | ❌ | |
Falcon-Instruct | 7B/40B | Paper | Hub | Apache 2.0 | ❌ |
Model | Size | Paper | HF Hub | License | Open | Recommended Parameters |
---|---|---|---|---|---|---|
✅ Llama 3.2 Vision | 11B | Paper | Hub | License | ❌ | |
✅ LLaVA-1.5 | 7B | Paper | Hub | License | ❌ | |
✅ Phi-3 Vision | 4.2B | Paper | Hub | License | ❌ | |
✅ BLIP-2 | 3.6B | Paper | Hub | MIT | ❌ | |
✅ Qwen2-VL | 2B | Blog | Hub | License | ❌ | |
✅ SmolVLM-Instruct | 2B | Blog | Hub | Apache 2.0 | ✅ |
Model | Size | Paper | HF Hub | License | Open | Recommended Parameters |
---|---|---|---|---|---|---|
✅ SmolLM2 | 135M/360M/1.7B | Blog | Hub | Apache 2.0 | ✅ | |
✅ Llama 3.2 | 1B/3B | Paper | Hub | License | ❌ | |
✅ Llama 3.1 | 8B/70B/405B | Paper | Hub | License | ❌ | |
✅ GPT-2 | 124M-1.5B | Paper | Hub | MIT | ✅ | |
DeepSeek V2 | 7B/13B | Blog | Hub | License | ❌ | |
Gemma2 | 2B/9B | Blog | Hub | License | ❌ | |
GPT-J | 6B | Blog | Hub | Apache 2.0 | ✅ | |
GPT-NeoX | 20B | Paper | Hub | Apache 2.0 | ✅ | |
Mistral | 7B | Paper | Hub | Apache 2.0 | ❌ | |
Mixtral | 8x7B/8x22B | Blog | Hub | Apache 2.0 | ❌ | |
MPT | 7B | Blog | Hub | Apache 2.0 | ✅ | |
OLMo | 1B/7B | Paper | Hub | Apache 2.0 | ✅ |
Model | Size | Paper | HF Hub | License | Open | Recommended Parameters |
---|---|---|---|---|---|---|
Qwen QwQ | 32B | Blog | Hub | License | ✅ |
Model | Size | Paper | HF Hub | License | Open | Recommended Parameters |
---|---|---|---|---|---|---|
✅ Qwen2.5 Coder | 0.5B-32B | Blog | Hub | License | ❌ | |
DeepSeek Coder | 1.3B-33B | Paper | Hub | License | ❌ | |
StarCoder 2 | 3B/7B/15B | Paper | Hub | License | ✅ |
Model | Size | Paper | HF Hub | License | Open | Recommended Parameters |
---|---|---|---|---|---|---|
DeepSeek Math | 7B | Paper | Hub | License | ❌ |
To learn more about all the platform's capabilities, see the Oumi documentation.
Oumi is a community-first effort. Whether you are a developer, a researcher, or a non-technical user, all contributions are very welcome!
To contribute to the oumi repository, please check CONTRIBUTING.md for guidance on how to send your first Pull Request.

Oumi makes use of several libraries and tools from the open-source community. We would like to acknowledge and deeply thank the contributors of these projects! ✨ 🌟 💫
If you find Oumi useful in your research, please consider citing it:
@software{oumi2025,
author = {Oumi Community},
title = {Oumi: an Open, End-to-end Platform for Building Large Foundation Models},
month = {January},
year = {2025},
url = {https://github.com/oumi-ai/oumi}
}
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Open models are defined as models with fully open weights, training code, and data, and a permissive license. See Open Source Definitions for more information. ↩