sensity-ai / dot
The Deepfake Offensive Toolkit
dot (aka Deepfake Offensive Toolkit) makes real-time, controllable deepfakes ready for virtual camera injection. dot is built for performing penetration tests against, for example, identity verification and video conferencing systems, and is intended for use by security analysts, Red Team members, and biometrics researchers.
If you want to learn more about how dot is used for penetration tests with deepfakes in the industry, read these articles by The Verge and Biometric Update.
dot is developed for research and demonstration purposes. As an end user, you have the responsibility to obey all applicable laws when using this program. The authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.
In a nutshell, dot works like this:
__________________ _____________________________ __________________________
| your webcam feed | -> | suite of realtime deepfakes | -> | virtual camera injection |
------------------ ----------------------------- --------------------------
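As a rough sketch of that pipeline (illustrative only, not dot's actual code), the loop below reads webcam frames with OpenCV and pushes them to a virtual camera via the third-party pyvirtualcam library; both libraries are assumptions for illustration, and apply_deepfake is a hypothetical placeholder for one of the swap methods listed below.

```python
# Illustrative pipeline sketch: webcam feed -> deepfake -> virtual camera.
# `apply_deepfake` is a hypothetical placeholder, NOT dot's actual API.
import cv2
import pyvirtualcam

def apply_deepfake(frame):
    # A real implementation would swap or reenact the face here.
    return frame

cap = cv2.VideoCapture(0)  # your webcam feed
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

with pyvirtualcam.Camera(width=width, height=height, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = apply_deepfake(frame)
        # pyvirtualcam expects RGB frames; OpenCV delivers BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
cap.release()
```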
None of the deepfake methods supported by dot require additional training; they work in real time, on the fly, on a single photo of the person to impersonate. Supported methods:
- face swap via SimSwap, at resolutions 224 and 512, with optional face superresolution via GPEN at resolutions 256 and 512
- face reenactment via FOMM (First Order Motion Model)
- lower-quality face swap via OpenCV (faceswap_cv2)
Linux
sudo apt install ffmpeg cmake
MacOS
brew install ffmpeg cmake
These instructions assume that you have Miniconda installed on your machine. If you don't, you can refer to this link for installation instructions.
With GPU support:
conda env create -f envs/environment-gpu.yaml
conda activate dot
Install the torch and torchvision dependencies based on the CUDA version installed on your machine:
- Install cudatoolkit from conda: conda install cudatoolkit=<cuda_version_no> (replace <cuda_version_no> with the version on your machine).
- Install the torch and torchvision dependencies: pip install torch==1.9.0+<cuda_tag> torchvision==0.10.0+<cuda_tag> -f https://download.pytorch.org/whl/torch_stable.html, where <cuda_tag> is the CUDA tag defined by PyTorch. For example, pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html for CUDA 11.1. See here for a list of all available torch and torchvision versions.
- To check that torch and torchvision are installed correctly, run python -c "import torch; print(torch.cuda.is_available())". If the output is True, the dependencies are installed with CUDA support.
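For a slightly more detailed check than the one-liner above, the optional script below (a convenience sketch, not part of dot) also reports the installed torch build and the detected GPU:

```python
# Optional sanity check for the CUDA-enabled torch install.
import torch

print("torch version:", torch.__version__)           # e.g. 1.9.0+cu111
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA build:", torch.version.cuda)          # CUDA version torch was built with
    print("GPU:", torch.cuda.get_device_name(0))
```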
With CPU only (slower, not recommended):
conda env create -f envs/environment-cpu.yaml
conda activate dot
Finally, install the dot package itself:
pip install -e .
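To confirm the editable install worked, you can import the package and print where Python resolved it from; this assumes the package imports under the name dot (matching the CLI name) and should point into your checkout:

```python
# Quick check that `pip install -e .` exposed the package from this checkout.
import dot
print(dot.__file__)
```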
There are two options for downloading the model weights:
- GDrive: download the folder from here, unzip it, and place it in the root directory of the repository.
- gdown: run the following command:
gdown https://drive.google.com/drive/folders/1FX1QoXragN4aKJZFo2DLiDE8fqKHeXEB -O ./saved_models --folder
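Once the weights are downloaded, a quick check like the sketch below (an optional convenience; the paths are taken from the usage commands that follow) confirms nothing is missing before you launch dot:

```python
# Verify the downloaded weights match the paths used by the commands below.
from pathlib import Path

expected = [
    "saved_models/simswap/parsing_model/checkpoint/79999_iter.pth",
    "saved_models/simswap/arcface_model/arcface_checkpoint.tar",
    "saved_models/simswap/checkpoints",
    "saved_models/fomm/vox-adv-cpk.pth.tar",
    "saved_models/faceswap_cv/shape_predictor_68_face_landmarks.dat",
]

for rel in expected:
    status = "ok" if Path(rel).exists() else "MISSING"
    print(f"{status:8} {rel}")
```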
Run dot --help to get a full list of available options.
SimSwap
dot \
--swap_type simswap \
--target 0 \
--source "./data" \
--parsing_model_path ./saved_models/simswap/parsing_model/checkpoint/79999_iter.pth \
--arcface_model_path ./saved_models/simswap/arcface_model/arcface_checkpoint.tar \
--checkpoints_dir ./saved_models/simswap/checkpoints \
--show_fps \
--use_gpu
SimSwapHQ
dot \
--swap_type simswap \
--target 0 \
--source "./data" \
--parsing_model_path ./saved_models/simswap/parsing_model/checkpoint/79999_iter.pth \
--arcface_model_path ./saved_models/simswap/arcface_model/arcface_checkpoint.tar \
--checkpoints_dir ./saved_models/simswap/checkpoints \
--crop_size 512 \
--show_fps \
--use_gpu
Additionally, to enable face superresolution, use the flag --gpen_type gpen_256 or --gpen_type gpen_512.
FOMM
dot \
--swap_type fomm \
--target 0 \
--source "./data" \
--model_path ./saved_models/fomm/vox-adv-cpk.pth.tar \
--show_fps \
--use_gpu
FaceSwap
dot \
--swap_type faceswap_cv2 \
--target 0 \
--source "./data" \
--model_path ./saved_models/faceswap_cv/shape_predictor_68_face_landmarks.dat \
--show_fps \
--use_gpu
Note: To use dot on CPU (not recommended), do not pass the --use_gpu flag.
Disclaimer: We use the SimSwap technique for the following demonstration.
Running dot via any of the above methods generates a real-time deepfake on the input video feed, using the source images from the ./data folder.
When dot is running, a list of available control options appears in the terminal window, as shown above. You can toggle through the source images and select a different one by pressing the associated control key.
Watch the following demo video for a better understanding of the control options:
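Under the hood, this kind of control scheme is just a keyboard loop. Purely as an illustration (not dot's actual implementation; the key bindings here are made up), cycling through source images could look like this:

```python
# Illustrative sketch of toggling source images with key presses.
# Key bindings ("n", "q") are hypothetical, not dot's real controls.
import glob
import cv2
import numpy as np

sources = sorted(glob.glob("./data/*"))
if not sources:
    raise SystemExit("no source images found in ./data")
idx = 0

cv2.namedWindow("controls")  # waitKey only fires while an OpenCV window has focus
while True:
    cv2.imshow("controls", np.zeros((50, 200, 3), dtype=np.uint8))
    key = cv2.waitKey(30) & 0xFF
    if key == ord("n"):        # next source image
        idx = (idx + 1) % len(sources)
        print("source:", sources[idx])
    elif key == ord("q"):      # quit
        break
cv2.destroyAllWindows()
```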
Instructions vary depending on your operating system.
Windows
Install OBS Studio.
Install VirtualCam plugin.
Choose Install and register only 1 virtual camera
.
Run OBS Studio.
In the Sources section, click the Add button ("+" sign), select Window Capture and press OK. In the window that appears, choose "[python.exe]: fomm" in the Window drop-down menu and press OK. Then select Edit -> Transform -> Fit to screen.
In OBS Studio, go to Tools -> VirtualCam. Check AutoStart, set Buffered Frames to 0 and press Start.
Now the OBS-Camera camera should be available in Zoom (or other videoconferencing software).
Ubuntu
sudo apt update
sudo apt install v4l-utils v4l2loopback-dkms v4l2loopback-utils
sudo modprobe v4l2loopback devices=1 card_label="OBS Cam" exclusive_caps=1
v4l2-ctl --list-devices
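Before wiring up OBS, you can confirm the loopback device accepts frames by pushing a test pattern to it. The sketch below uses the third-party pyvirtualcam library (an assumption for illustration) and assumes the device enumerated as /dev/video2, as used later in this guide; check the v4l2-ctl output above for the actual path:

```python
# Send a moving test pattern to the v4l2loopback device to confirm it works.
import numpy as np
import pyvirtualcam

with pyvirtualcam.Camera(width=640, height=480, fps=20, device="/dev/video2") as cam:
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    for i in range(200):
        frame[:] = (i % 255, 128, 255 - i % 255)  # shifting solid color
        cam.send(frame)
        cam.sleep_until_next_frame()
```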
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt install obs-studio
Open OBS Studio and check whether tools --> v4l2sink exists.
If it doesn't, follow these instructions:
mkdir -p ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
ln -s /usr/lib/obs-plugins/v4l2sink.so ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
Use the virtual camera with OBS Studio:
- Open OBS Studio.
- Go to tools --> v4l2sink.
- Set the device to /dev/video2 and the format to YUV420.
- Press start.
- Select the OBS Cam camera in your videoconferencing software.
Use the --use_cam flag to enable the camera feed.

This is not a commercial Sensity product, and it is distributed freely with no warranties.
The software is distributed under the BSD 3-Clause license. dot utilizes several open-source libraries; if you use dot, make sure you agree with their licenses too. In particular, this codebase is built on top of the following research projects:
This repository follows the Google Python Style Guide for code formatting.
If you have ideas for improving dot, feel free to open relevant Issues and PRs. Please read CONTRIBUTING.md before contributing to the repository.
If you are working on improving the speed of dot, please first read our guide on code profiling.
Install Dev Requirements
pip install -r requirements-dev.txt
Install Pre-Commit Hooks
pre-commit install
Run Unit Tests (with coverage)
pytest --cov=dot --cov-report=term --cov-fail-under=10