google/tunix
A JAX-native LLM Post-Training Library
Tunix (Tune-in-JAX) is a JAX-based library designed to streamline the post-training of Large Language Models. It provides efficient and scalable support for:
- Supervised Fine-Tuning
- Reinforcement Learning (RL), including algorithms such as Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO)
- Preference Fine-Tuning, such as Direct Preference Optimization (DPO)
- Knowledge Distillation
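As an illustration of the group-relative advantage at the heart of GRPO, here is a short JAX sketch of the standard formula (generic code for orientation, not Tunix's implementation):

import jax.numpy as jnp

def group_relative_advantages(rewards: jnp.ndarray, eps: float = 1e-6) -> jnp.ndarray:
    # rewards: [num_prompts, group_size], one scalar reward per sampled completion.
    mean = rewards.mean(axis=-1, keepdims=True)
    std = rewards.std(axis=-1, keepdims=True)
    # Normalize each completion's reward within its prompt's group.
    return (rewards - mean) / (std + eps)

rewards = jnp.array([[1.0, 0.0, 0.5, 0.2]])  # one prompt, four sampled completions
print(group_relative_advantages(rewards))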
Tunix leverages the power of JAX for accelerated computation and integrates seamlessly with the JAX-based modeling framework Flax NNX.
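For readers new to NNX, this is what a minimal Flax NNX module looks like (a generic sketch of the modeling style Tunix builds on; the MLP itself is illustrative, not part of Tunix's API):

import jax
import jax.numpy as jnp
from flax import nnx

class MLP(nnx.Module):
    def __init__(self, din: int, dhidden: int, dout: int, *, rngs: nnx.Rngs):
        self.fc1 = nnx.Linear(din, dhidden, rngs=rngs)
        self.fc2 = nnx.Linear(dhidden, dout, rngs=rngs)

    def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
        return self.fc2(jax.nn.relu(self.fc1(x)))

model = MLP(16, 64, 4, rngs=nnx.Rngs(0))
print(model(jnp.ones((2, 16))).shape)  # (2, 4)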
Current Status: Early Development
Tunix is in early development. We're actively working to expand its capabilities and improve its usability and performance. Stay tuned for upcoming updates and new features!
Tunix is still under development; the sections below give a glimpse of what's currently available.
You can install Tunix in several ways:

From PyPI:
pip install "tunix[prod]"

From the GitHub repository (latest changes):
pip install git+https://github.com/google/tunix

From source, in editable mode (for development):
git clone https://github.com/google/tunix.git
cd tunix
pip install -e ".[dev]"
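A quick way to verify the install is to import the package and check which accelerators JAX can see (output varies by platform; any TPU/GPU/CPU listing means JAX is working):

import jax
import tunix  # import succeeds if Tunix installed correctly

print(jax.devices())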
To get started, we provide a set of detailed examples and tutorials.
To set up a Jupyter notebook on a single-host GCP TPU VM, please refer to the setup script.
We plan to provide clear, concise documentation and more examples in the near future.
We welcome contributions! As Tunix is in early development, the contribution process is still being formalized. A rough draft of the contribution process is available here. In the meantime, you can make feature requests, report issues, and ask questions in our Tunix GitHub Discussions forum.
GRL (Game Reinforcement Learning), developed by Hao AI Lab at UCSD, is an open-source framework for post-training large language models through multi-turn RL on challenging games. In collaboration with Tunix, GRL integrates seamless TPU support, letting users quickly run scalable, reproducible RL experiments (such as PPO rollouts on Qwen2.5-0.5B-Instruct) on TPU v4 meshes with minimal setup. This partnership combines Tunix's optimized TPU runtime with GRL's flexible game-RL pipeline, empowering the community to push LLM capabilities further with cutting-edge research and easy reproducibility.
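For readers unfamiliar with JAX device meshes, here is a minimal sketch of building a 2x2 mesh and sharding an array across it (generic JAX code that assumes 4 visible devices; the "data"/"model" axis names are illustrative, not GRL's or Tunix's actual configuration):

import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange the visible devices (e.g., a TPU v4 slice) into a 2x2 grid.
devices = mesh_utils.create_device_mesh((2, 2))
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard an array: rows across the "data" axis, columns across "model".
x = jax.device_put(jnp.ones((8, 8)), NamedSharding(mesh, P("data", "model")))
print(x.sharding)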
Thank you for your interest in Tunix. We're working hard to bring you a powerful and efficient library for LLM post-training. Please follow our progress and check back for updates!
Thank you to all our wonderful contributors!