sail-sg / EditAnything
Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc.
This is an ongoing project that aims to Edit and Generate Anything in an image, powered by Segment Anything, ControlNet, BLIP2, Stable Diffusion, etc.
A project for fun. Any form of contribution or suggestion is very welcome!
2023/04/12 - An initial version of text-guided edit-anything is in sam2groundingdino_edit.py (object-level) and sam2vlpart_edit.py (part-level).
2023/04/10 - An initial version of edit-anything is in sam2edit.py.
2023/04/10 - We transferred the pretrained model into diffusers style; it is auto-loaded when using sam2image_diffuser.py. Now you can easily combine our pretrained model with different base models (a minimal loading sketch follows the news list)!
2023/04/09 - We released a pretrained model of Stable Diffusion-based ControlNet that generates images conditioned on SAM segmentation.
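For illustration, here is a minimal sketch of pairing the diffusers-style ControlNet with a different Stable Diffusion base model. The Hugging Face model id, the base-model choice, and the segmentation-map input below are assumptions; sam2image_diffuser.py is the authoritative loader.

# Hedged sketch: pair the SAM-conditioned ControlNet with an arbitrary base model.
# The model id "shgao/edit-anything-v0-1" is taken from the download link further
# below and is an assumption here; check sam2image_diffuser.py for the exact ids.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "shgao/edit-anything-v0-1", torch_dtype=torch.float16)   # assumed diffusers weights
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",                      # any compatible SD 2.1 base
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

seg_map = Image.open("sam_segmentation.png")                 # color-coded SAM masks (assumption)
result = pipe("a large white and red ferry", image=seg_map,
              num_inference_steps=30).images[0]
result.save("generated.png")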
Highlighted features:
Text Grounding: "dog head"
Text Grounding: "cat eye"
Human Prompt: "A cute small humanoid cat"
Text Grounding: "bench"
Human Prompt: "esplendent sunset sky, red brick wall"
Human Prompt: "chairs by the lake, sunny day, spring"
An initial version of edit-anything. (We will add more controls on masks very soon.)
BLIP2 Prompt: "a large white and red ferry"
(1: input image; 2: segmentation mask; 3-8: generated images.)
BLIP2 Prompt: "a black drone flying in the blue sky"
Note: Due to privacy protection in the SAM dataset, faces in the generated images are blurred as well. We are training new models on unblurred images to solve this.
- Conditional generation trained with 85k samples from the SAM dataset.
- Training with more images from LAION and SAM.
- Interactive control on different masks for image editing.
- Using Grounding DINO for category-related automatic editing.
- ChatGPT-guided image editing.
Create an environment
conda env create -f environment.yaml
conda activate control
Install BLIP2 and SAM
Put these models in the models folder.
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/facebookresearch/segment-anything.git
# For text-guided editing
pip install git+https://github.com/openai/CLIP.git
pip install git+https://github.com/facebookresearch/detectron2.git
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
Download pretrained models
# Segment-anything ViT-H SAM model.
cd models/
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
# BLIP2 model will be auto downloaded.
# Part Grounding Swin-Base Model.
wget https://github.com/Cheems-Seminar/segment-anything-and-name-it/releases/download/v1.0/swinbase_part_0a0000.pth
# Grounding DINO Model.
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth
# Get the edit-anything-ckpt-v0-1.ckpt pretrained model from Hugging Face.
# No need to download this if you are using sam2image_diffuser.py, but please install safetensors for reading the ckpt.
https://huggingface.co/shgao/edit-anything-v0-1
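Once the checkpoints are in place, the sketch below shows one way to load SAM and BLIP2, roughly the building blocks the demo scripts rely on. The BLIP2 model id and the exact wiring are assumptions; see sam2edit.py and the other demo scripts for the real pipeline.

# Hedged sketch: load the downloaded SAM checkpoint and a BLIP2 captioner.
# The BLIP2 model id is an assumed choice; the demo scripts may use another one.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# SAM ViT-H checkpoint downloaded into models/ (see the wget command above).
sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)

# BLIP2 is auto-downloaded by transformers on first use.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
blip2 = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device)

image = Image.open("input.png").convert("RGB")
masks = mask_generator.generate(np.array(image))          # list of per-region masks
inputs = processor(images=image, return_tensors="pt").to(device)
caption = processor.decode(blip2.generate(**inputs)[0], skip_special_tokens=True)
print(len(masks), "masks; caption:", caption)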
Run Demo
python sam2image_diffuser.py
# or
python sam2image.py
# or
python sam2edit.py
# or
python sam2vlpart_edit.py
# or
python sam2groundingdino_edit.py
Set 'use_gradio = True' in these files to run the Gradio demo if you have a GUI.
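For the text-guided scripts, the object-level flow is roughly: ground a text phrase to boxes with Grounding DINO, then prompt SAM with the box to obtain an editable mask. The sketch below illustrates this under the assumption that the stock groundingdino.util.inference helpers and config path apply; sam2groundingdino_edit.py is the authoritative implementation.

# Hedged sketch of the text-grounding step behind sam2groundingdino_edit.py:
# text phrase -> boxes (Grounding DINO) -> mask (SAM). Paths are assumptions
# based on the downloads above; the real script may differ.
import numpy as np
import torch
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

dino = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinB_cfg.py",  # assumed config path
    "models/groundingdino_swinb_cogcoor.pth")

image_source, image = load_image("input.png")     # (numpy HWC image, normalized tensor)
boxes, logits, phrases = predict(
    model=dino, image=image, caption="dog head",   # text grounding, as in the examples above
    box_threshold=0.35, text_threshold=0.25)

sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_source)

# Grounding DINO returns normalized cxcywh boxes; SAM expects pixel xyxy.
h, w = image_source.shape[:2]
cx, cy, bw, bh = (boxes[0] * torch.tensor([w, h, w, h])).tolist()
box_xyxy = np.array([cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2])
masks, scores, _ = predictor.predict(box=box_xyxy, multimask_output=False)
# masks[0] can now serve as the editing region for mask-conditioned generation.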
Train
1. Generate the training dataset with dataset_build.py.
2. Transfer the Stable Diffusion model with tool_add_control_sd21.py.
3. Train the model with sam_train_sd21.py.

This project is based on:
Segment Anything, ControlNet, BLIP2, MDT, Stable Diffusion, Large-scale Unsupervised Semantic Segmentation, Grounded Segment Anything: From Objects to Parts, Grounded-Segment-Anything
Thanks to these amazing projects!