alexkimxyz / nsfw_data_scrapper

Collection of scripts to aggregate image data for the purposes of training an NSFW Image Classifier

NSFW Data Scrapper


This is a set of scripts that allows for the automatic collection of tens of thousands of images for the following (loosely defined) categories, to be used later for training an image classifier:

  • porn - pornography images
  • hentai - hentai images, but also includes pornographic drawings
  • sexy - sexually explicit images, but not pornography. Think nude photos, Playboy, bikini, beach volleyball, etc.
  • neutral - safe for work neutral images of everyday things and people
  • drawings - safe for work drawings (including anime)

Note: the scripts have only been tested on the Ubuntu 16.04 Linux distribution

Here is what each script (located under the scripts directory) does:

  • iterates through the text files under scripts/source_urls, downloading the image URLs for each of the 5 categories above. The Ripme application performs all the heavy lifting. The source URLs are mostly links to various subreddits, but could be any website that Ripme supports. Note: I already ran this script for you, and its outputs are located in the raw_data directory. No need to rerun unless you edit the files under scripts/source_urls
  • downloads the actual images from the URLs found in the text files in the raw_data directory
  • (optional) downloads SFW anime images from the Danbooru2018 database
  • (optional) downloads SFW neutral images from the Caltech256 dataset
  • creates the data/train directory and copies all *.jpg and *.jpeg files into it from raw_data; also removes corrupted images
  • creates the data/test directory and moves N=2000 random files for each class from data/train to data/test (change this number inside the script if you need a different train/test split). Alternatively, you can run it multiple times; each time it will move N images per class from data/train to data/test.
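The final step's "move N random files per class" logic can be sketched with shuf. This is an illustrative snippet, not the repo's actual script: it builds a throwaway toy tree under mktemp so it runs on its own, and uses N=2 instead of the real script's N=2000.

```shell
#!/usr/bin/env bash
# Sketch of the train/test split step (illustrative, not the repo's script).
set -euo pipefail

N=2                                   # toy value; the real script uses N=2000
classes="drawings hentai neutral porn sexy"

# Build a throwaway data/train tree so this snippet is self-contained.
root="$(mktemp -d)"
for c in $classes; do
  mkdir -p "$root/data/train/$c" "$root/data/test/$c"
  for i in 1 2 3 4 5; do touch "$root/data/train/$c/img_$i.jpg"; done
done

# Pick N files at random per class and move them into data/test.
for c in $classes; do
  ls "$root/data/train/$c" | shuf -n "$N" | while read -r f; do
    mv "$root/data/train/$c/$f" "$root/data/test/$c/"
  done
done

echo "train left per class: $(ls "$root/data/train/drawings" | wc -l)"
echo "test moved per class: $(ls "$root/data/test/drawings" | wc -l)"
```

Because shuf samples without replacement, rerunning the loop moves a fresh batch of N files each time, which is why the real script can be run repeatedly to grow the test set.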


Prerequisites

  • Python3 environment: conda env create -f environment.yml
  • Java runtime environment:
    • Ubuntu Linux: sudo apt-get install default-jre
  • Linux command-line tools: wget, convert (ImageMagick suite of tools), rsync, shuf
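A quick way to confirm the tools above are installed is to probe PATH with command -v. This check is a convenience sketch, not part of the repo (java stands in for the Java runtime requirement):

```shell
#!/usr/bin/env bash
# Sanity check (not part of the repo): report which required tools are on PATH.
for tool in wget convert rsync shuf java; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "MISSING: $tool"
  fi
done
```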

How to run

Change the working directory to scripts and execute each script in the sequence indicated by the number in the file name, e.g.:

$ bash # has already been run
$ find ../raw_data -name "urls_*.txt" -exec sh -c "echo Number of URLs in {}: ; cat {} | wc -l" \;
Number of URLs in ../raw_data/drawings/urls_drawings.txt:
Number of URLs in ../raw_data/hentai/urls_hentai.txt:
Number of URLs in ../raw_data/neutral/urls_neutral.txt:
Number of URLs in ../raw_data/sexy/urls_sexy.txt:
Number of URLs in ../raw_data/porn/urls_porn.txt:
$ bash
$ bash # optional
$ bash # optional
$ bash
$ bash
$ cd ../data
$ ls train
drawings hentai neutral porn sexy
$ ls test
drawings hentai neutral porn sexy
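To verify the split, you can count the images per class in each of data/train and data/test. The snippet below is a sketch: the toy tree it builds only makes it runnable on its own, and you would point root at your actual checkout instead.

```shell
#!/usr/bin/env bash
# Sketch: count *.jpg/*.jpeg files per class directory after the split.
set -euo pipefail

# Throwaway stand-in for the real data/ tree (one image per class).
root="$(mktemp -d)"
for split in train test; do
  for c in drawings hentai neutral porn sexy; do
    mkdir -p "$root/data/$split/$c"
    touch "$root/data/$split/$c/sample.jpg"   # stand-in image file
  done
done

# The actual counting: one line per class directory.
for split in train test; do
  for dir in "$root/data/$split"/*/; do
    printf '%s: %d images\n' "$dir" "$(find "$dir" -name '*.jp*g' | wc -l)"
  done
done
```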

I was able to train a CNN classifier to 91% accuracy with the following confusion matrix:

As expected, the drawings (anime) and hentai classes are confused with each other more frequently than with other classes.

The same holds for the porn and sexy categories.
