bmaltais / kohya_ss
This repository provides a Windows-focused Gradio GUI for Kohya's Stable Diffusion trainers. The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.
If you run on Linux and would like to use the GUI, there is now a port of it available as a Docker container. You can find the project here.
Tutorials:
How to Create a LoRA Part 1: Dataset Preparation
How to Create a LoRA Part 2: Training the Model
Give unrestricted script access to PowerShell so the venv can work. Open an administrator PowerShell window and run:
Set-ExecutionPolicy Unrestricted
and answer 'A'.
Open a regular user PowerShell terminal and run the following commands:
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
python -m venv venv
.\venv\Scripts\activate
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --use-pep517 --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
accelerate config
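When prompted by accelerate config, answers along the following lines suit a typical single-GPU Windows setup (an illustrative set; pick what matches your hardware, with fp16 matching the install above):
- This machine
- No distributed training
- NO (run on CPU only)
- NO (torch dynamo)
- NO (DeepSpeed)
- all (which GPUs to use)
- fp16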
This step is optional but can improve training speed for NVIDIA 30X0/40X0 owners by allowing larger training batch sizes and faster throughput.
Due to the file size, I can't host the DLLs needed for CUDNN 8.6 on GitHub. I strongly advise you to download them for a speed boost in sample generation (almost 50% on a 4090 GPU); you can get them here.
To install, simply unzip the directory and place the cudnn_windows folder in the root of this repo.
Run the following commands to install:
.\venv\Scripts\activate
python .\tools\cudann_1.8_install.py
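As a quick sanity check (not part of the official steps), you can ask PyTorch which cuDNN version it loaded, running inside the venv:
.\venv\Scripts\activate
python -c "import torch; print(torch.backends.cudnn.version())"
After the install this should report a cuDNN 8.6 version number (e.g. 8600) instead of the version bundled with the PyTorch wheel.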
When a new release comes out, you can upgrade your repo with the following commands in the root directory:
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
Once the commands have completed successfully you should be ready to use the new version.
To run the GUI, simply use this command:
.\gui.ps1
or you can run it manually with:
.\venv\Scripts\activate
python.exe .\kohya_gui.py
You can find the Dreambooth-specific documentation here: Dreambooth README
You can find the finetuning-specific documentation here: Finetune README
You can find the train network-specific documentation here: Train network README
Training a LoRA currently uses the train_network.py
code. You can create a LoRA network by using the all-in-one gui.cmd
or by running the dedicated LoRA training GUI with:
.\venv\Scripts\activate
python lora_gui.py
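For reference, the command the GUI generates and runs is an accelerate launch of train_network.py. The sketch below shows its general shape only; every path and hyperparameter value is a placeholder to replace with your own (see the Train network README for the full list of options):
.\venv\Scripts\activate
accelerate launch train_network.py `
  --pretrained_model_name_or_path="D:\models\model.safetensors" `
  --train_data_dir="D:\lora\img" `
  --output_dir="D:\lora\output" `
  --resolution=512,512 `
  --network_module=networks.lora `
  --network_dim=8 `
  --train_batch_size=1 `
  --learning_rate=1e-4 `
  --max_train_epochs=10 `
  --mixed_precision=fp16 `
  --save_model_as=safetensors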
Once you have created the LoRA network, you can generate images via auto1111 by installing this extension.
Page file: if you get an error relating to the page file, increase the page file size limit in Windows.
FileNotFoundError: this is usually related to an installation issue. Make sure you do not have any Python modules installed locally that could conflict with the ones installed in the venv:
pip freeze > uninstall.txt
pip uninstall -r uninstall.txt
This will save a backup file listing your currently installed pip packages and then uninstall them. Afterwards, redo the installation instructions within the kohya_ss venv.
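Before reinstalling, you can also confirm that the venv is active and that its python resolves first (the top result should point inside the repo's venv folder):
.\venv\Scripts\activate
where.exe python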
Recent changes:
- The Use 8bit adam checkbox in kohya_gui.py and gui.ps1 is now superseded by the Optimizer dropdown selection. The field will eventually be removed; it is kept for now for backward compatibility.
- Fixed a bug with the dtype of the weights in train_network.py. fp16 training is probably not affected by this issue. float for SD2.x models will work now. Also training with bf16 might be improved.
- The new optimizer options (--optimizer_type and --use_8bit_adam) are also handled by gen_img_diffusers.py (no documentation yet.)
- Supported optimizers: AdamW, AdamW8bit, Lion, SGDNesterov, SGDNesterov8bit, DAdaptation, AdaFactor.
- Added the --noise_offset option to each training script.
- Added the --optimizer_type option to each training script. Please see help. Japanese documentation is here. The --use_8bit_adam and --use_lion_optimizer options also work and will override the options above for backward compatibility.
- To use DAdaptation with train_network.py, install it first with pip install dadaptation (it is not in requirements.txt currently.)
- Added the --max_grad_norm option to each training script for gradient clipping. 0.0 disables clipping.
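For example, trying one of the new optimizers only requires the new flag (and, for D-Adaptation, the optional package noted above); the flag values below are illustrative:
# D-Adaptation is optional and not in requirements.txt
pip install dadaptation
# then append the new flags to the train_network.py command shown earlier, e.g.:
#   --optimizer_type=DAdaptation
#   --max_grad_norm=1.0   (0.0 disables gradient clipping)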