SkyPilot is a framework for easily and cost-effectively running ML workloads on any cloud through a unified interface.
SkyPilot abstracts away the cloud infra burden and cuts your cloud costs. It supports your existing GPU, TPU, and CPU workloads, with no code changes.
Install with pip (choose your clouds) or from source:
```bash
pip install "skypilot[aws,gcp,azure]"
```
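If you only use one cloud, you can install a single extra instead. A minimal sketch (`sky check` reports which clouds are enabled with your current credentials):

```bash
# Install support for a single cloud only (AWS in this example).
pip install "skypilot[aws]"

# Verify which clouds are enabled with your current credentials.
sky check
```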
You can find our documentation here.
A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.
Once written in this unified interface (YAML or Python API), the task can be launched on any available cloud. This avoids vendor lock-in and makes it easy to move jobs to a different provider.
Paste the following into a file `my_task.yaml`:
```yaml
resources:
  accelerators: V100:1  # 1x NVIDIA V100 GPU

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  pip install torch torchvision

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1
```
Prepare the workdir by cloning:
```bash
git clone https://github.com/pytorch/examples.git ~/torch_examples
```
Launch with `sky launch` (note: access to GPU instances is needed for this example):

```bash
sky launch my_task.yaml
```
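The same YAML can also be launched with extra CLI flags, for example to give the cluster a name or to pin a specific cloud. A rough sketch (the cluster name `mycluster` is just a placeholder, and exact flags may vary by version):

```bash
# Give the cluster a name so it can be referenced later.
sky launch -c mycluster my_task.yaml

# Pin a specific cloud instead of letting SkyPilot pick one.
sky launch -c mycluster --cloud gcp my_task.yaml

# Re-run the task on the existing cluster without re-provisioning.
sky exec mycluster my_task.yaml
```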
SkyPilot then performs the heavy lifting for you, including:

- syncing the local `workdir` to the VM
- running the `setup` commands to prepare the VM for running the task
- running the `run` commands

Refer to Quickstart to get started with SkyPilot.
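After launching, the cluster can be inspected and torn down from the CLI. A rough sketch (again assuming the cluster was named `mycluster`; exact flags may vary by version):

```bash
# List clusters and their status.
sky status

# Stream the logs of the launched job.
sky logs mycluster

# Tear down the cluster when you are done to stop incurring charges.
sky down mycluster
```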
More information:
We are excited to hear your feedback!
For general discussions, join us on the SkyPilot Slack.
We welcome and value all contributions to the project! Please refer to CONTRIBUTING for how to get involved.
While SkyPilot is currently targeted at machine learning workloads, it supports and has been used for other general batch workloads. We're excited to hear about your use case and how we can better support your requirements; please join us in this discussion!