godly-devotion / MochiDiffusion
- Wednesday, January 4, 2023, 00:37:18
Run Stable Diffusion on Apple Silicon Macs natively
This app uses Apple's Core ML Stable Diffusion implementation to achieve maximum performance and speed on Apple Silicon-based Macs while reducing memory requirements.
Download the latest version from the releases page.
When using a model for the very first time, it may take up to 30 seconds for the Neural Engine to compile a cached version; subsequent generations will be much faster.
- CPU & Neural Engine: provides a good balance between speed and low memory usage
- CPU & GPU: may be faster on M1 Max, Ultra, and later, but will use more memory

Depending on the option chosen, you will need to use the correct model version (see the Models section for details).
You will need to convert or download Core ML models in order to use Mochi Diffusion.
A few models have been converted and uploaded here.
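If you convert a model yourself, the usual route is Apple's ml-stable-diffusion converter. A minimal sketch, assuming that package is installed via pip; the flag names and the example Hugging Face model ID come from that project's documentation and may change between versions, so verify against its current README:

```shell
#!/bin/sh
# Sketch: converting a Hugging Face checkpoint to Core ML with Apple's
# ml-stable-diffusion converter. The flags and the example model ID are
# taken from that project's documentation and may change. The command is
# printed rather than executed here, since conversion requires the
# package and a large model download.
CMD="python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --model-version runwayml/stable-diffusion-v1-5 \
  --attention-implementation SPLIT_EINSUM \
  -o ./converted-model"
printf '%s\n' "$CMD"
```

For a model intended for the CPU & GPU option, the converter's ORIGINAL attention implementation would be used instead of SPLIT_EINSUM.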
- split_einsum version: compatible with all compute unit options, including Neural Engine
- original version: only compatible with the CPU & GPU option

Models are stored at: ~/Documents/MochiDiffusion/models/[Model Folder Name]/[Model's Files]
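The on-disk layout above can be sketched as follows; the folder name `my-model` is hypothetical, and each model gets its own folder containing that model's converted Core ML files:

```shell
#!/bin/sh
# Sketch: the folder layout Mochi Diffusion reads models from.
# "my-model" is a hypothetical folder name used for illustration.
MODELS_ROOT="$HOME/Documents/MochiDiffusion/models"
mkdir -p "$MODELS_ROOT/my-model"
# Copy the converted model's files into that folder, e.g.:
# cp -R ./converted-model/* "$MODELS_ROOT/my-model/"
ls "$MODELS_ROOT"
```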
All generation happens locally and absolutely nothing is sent to the cloud.
Mochi Diffusion is always looking for contributions, whether it's through bug reports, code, or new translations.
If you find a bug, or would like to suggest a new feature or enhancement, try searching for your problem first, as this helps avoid duplicates. If you can't find your issue, feel free to create a new discussion.
If you're looking to contribute code, feel free to open a Pull Request or create a new discussion to talk about it first.