anuragxel / salt
- Thursday, April 13, 2023, 00:14:21
Segment Anything Labelling Tool
Uses the Segment-Anything Model by Meta AI and adds a barebones interface to label images and save the masks in COCO format.
Under active development, apologies for rough edges and bugs. Use at your own risk.
- Run conda env create -f environment.yaml on the labelling machine (a GPU is not needed).
- Set up your dataset as <dataset_name>/images/* and create an empty folder <dataset_name>/embeddings. Annotations will be saved in <dataset_name>/annotations.json by default.
- Copy the helpers scripts to the base folder of your segment-anything folder.
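The expected layout can be sketched in a few lines of Python (the dataset name my_dataset is just a placeholder, not something the tool requires):

```python
from pathlib import Path

# Hypothetical dataset name; substitute your own.
dataset = Path("my_dataset")

# SALT expects images under <dataset_name>/images and an
# (initially empty) <dataset_name>/embeddings folder.
(dataset / "images").mkdir(parents=True, exist_ok=True)
(dataset / "embeddings").mkdir(parents=True, exist_ok=True)

# annotations.json is created by the tool on save;
# nothing needs to exist there up front.
print(sorted(p.name for p in dataset.iterdir()))  # → ['embeddings', 'images']
```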
- Call extract_embeddings.py to extract embeddings for your images.
- Call generate_onnx.py to generate the *.onnx files in models.
- Copy the models into the models folder.
- Symlink your dataset in the SALT folder as <dataset_name>.
- Call segment_anything_annotator.py with argument <dataset_name> and categories cat1,cat2,cat3..
There are a few keybindings that make the annotation process fast:

- n adds the predicted mask to your annotations. (Add button)
- r rejects the predicted mask. (Reject button)
- a and d cycle through the images in your set. (Next and Prev buttons)
- l and k increase and decrease the transparency of the other annotations.
- Ctrl + S saves progress to the COCO-style annotations file.

View your annotations with: python cocoviewer.py -i <dataset> -a <dataset>/annotations.json
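The annotations file follows the standard COCO layout. A minimal sketch of what such a file contains, with made-up ids, sizes, and coordinates (the category names cat1 and cat2 mirror the launch arguments above):

```python
import json

# A minimal COCO-style annotations structure (illustrative values only).
annotations = {
    "images": [{"id": 1, "file_name": "0001.jpg", "width": 640, "height": 480}],
    "categories": [
        {"id": 1, "name": "cat1"},
        {"id": 2, "name": "cat2"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "segmentation": [[10, 10, 100, 10, 100, 100, 10, 100]],  # polygon
            "bbox": [10, 10, 90, 90],  # x, y, width, height
            "area": 8100,
            "iscrowd": 0,
        }
    ],
}

# Round-trip through JSON, as the tool saves and reloads annotations.json.
text = json.dumps(annotations)
loaded = json.loads(text)
print(len(loaded["annotations"]), loaded["categories"][0]["name"])  # → 1 cat1
```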
Follow these guidelines to ensure that your contributions can be reviewed and merged. Help with improving the UI is especially needed.
If you have found a bug or have an idea for an improvement or new feature, please create an issue on GitHub. Before creating a new issue, please search existing issues to see if your issue has already been reported.
When creating an issue, please include as much detail as possible, including steps to reproduce the issue if applicable.
Create a pull request (PR) to the original repository. Please use the black formatter when making code changes.
The code is licensed under the MIT License. By contributing to SALT, you agree to license your contributions under the same license as the project. See LICENSE for more information.