garnele007 / SwiftOCR
Fast and simple OCR library written in Swift
SwiftOCR is a fast and simple OCR library written in Swift. It uses a neural network for image recognition. As of now, SwiftOCR is optimized for recognizing short, one-line alphanumeric codes (e.g. DI4C9CM). We currently support iOS and OS X.
Why choose SwiftOCR instead of Tesseract? This is a really good question. If you want to recognize normal text like a poem or a news article, go with Tesseract, but if you want to recognize short alphanumeric codes (e.g. gift cards), I would advise you to choose SwiftOCR, because that's where it excels.
Tesseract is written in C++ and is over 30 years old. To use it, you first have to write an Objective-C++ wrapper for it. The main issue slowing Tesseract down is the way it manages memory: too many memory allocations and releases slow it down.
I did some testing on over 50 difficult images containing alphanumeric codes. The results were astonishing. SwiftOCR beat Tesseract in every category.
|          | SwiftOCR  | Tesseract |
|----------|-----------|-----------|
| Speed    | 0.08 sec. | 0.63 sec. |
| Accuracy | 97.7%     | 45.2%     |
| CPU      | ~30%      | ~90%      |
| Memory   | 45 MB     | 73 MB     |
First, SwiftOCR binarizes the input image. It then extracts the characters from the image using a technique called connected-component labeling. Finally, the separated characters are converted into numbers, which are then fed into the neural network.
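To make those two preprocessing steps concrete, here is a minimal sketch of thresholding a grayscale buffer and labeling its 4-connected ink regions. This is illustrative only and not SwiftOCR's actual implementation; the function names and the fixed threshold are assumptions.

```swift
// Illustrative sketch (not SwiftOCR's actual code): threshold a grayscale
// buffer into ink/background, then label 4-connected ink regions.
func binarize(_ pixels: [UInt8], threshold: UInt8 = 127) -> [Bool] {
    // Pixels darker than the threshold are treated as ink.
    return pixels.map { $0 < threshold }
}

// Stack-based flood fill that assigns a label (0 = background) to every pixel.
func labelComponents(_ ink: [Bool], width: Int, height: Int) -> [Int] {
    var labels = [Int](repeating: 0, count: ink.count)
    var nextLabel = 1

    for start in 0..<ink.count where ink[start] && labels[start] == 0 {
        labels[start] = nextLabel
        var stack = [start]

        while let index = stack.popLast() {
            let x = index % width
            let y = index / width
            // Visit the four directly adjacent neighbours.
            for (nx, ny) in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            where nx >= 0 && nx < width && ny >= 0 && ny < height {
                let neighbour = ny * width + nx
                if ink[neighbour] && labels[neighbour] == 0 {
                    labels[neighbour] = nextLabel
                    stack.append(neighbour)
                }
            }
        }
        nextLabel += 1
    }
    return labels
}
```

Each non-zero label then corresponds to one candidate character blob, which is what gets converted into the network's input.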
If you have ever used Tesseract, you know how exhausting it can be to implement OCR in your project. SwiftOCR is the exact opposite of Tesseract. It can be implemented using just six lines of code.
```swift
import SwiftOCR

let swiftOCRInstance = SwiftOCR()
swiftOCRInstance.image = myImage
swiftOCRInstance.recognize({ recognizedString in
    print(recognizedString)
})
```
To improve your experience with SwiftOCR, you should set your Build Configuration to Release.
Training SwiftOCR is pretty easy. There are only a few steps you have to follow before it can recognize a new font.
1. In the `SwiftOCR.swift` file, replace `internal let network = FFNN.fromFile(...)` with `internal let network = FFNN(inputs: 321, hidden: 100, outputs: 36, learningRate: 0.7, momentum: 0.4, weights: nil, activationFunction: .Sigmoid, errorFunction: .CrossEntropy(average: false))`.
2. Change the `errorThreshold` value in the training file to something like 15.
3. Add the fonts you want to train with to the `trainingFontNames` array at the beginning of the `SwiftOCRTraining.swift` file.
4. Call `trainWithCharSet()` and wait (see the sketch below).
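As a rough illustration of step 4, the snippet below shows what kicking off training might look like. The `SwiftOCRTraining` class name is assumed from the file name above; treat this as a sketch rather than the project's exact training API.

```swift
import SwiftOCR

// Hypothetical sketch: instantiate the training helper defined in
// SwiftOCRTraining.swift and start training on the fonts listed in
// trainingFontNames. Training keeps iterating until the network's
// error falls below the errorThreshold you set in step 2.
let trainingInstance = SwiftOCRTraining()
trainingInstance.trainWithCharSet()
```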
Here is an example image. SwiftOCR has no problem recognizing it. If you try to recognize the same image using Tesseract, the output is 'LABMENSW' ?!?!?
This image is difficult to recognize for two reasons:
The code in this repository is licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
NOTE: This software depends on other packages that may be licensed under different open source licenses.