intel / neural-compressor
- Saturday, December 24, 2022, 00:35:39
Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library that runs on Intel CPUs and GPUs. It provides unified APIs across multiple deep-learning frameworks for popular network compression technologies such as low-precision quantization, sparsity, pruning, and knowledge distillation, in pursuit of optimal inference performance. The tool supports automatic accuracy-driven tuning strategies to help users quickly find the best quantized model, implements a variety of weight-pruning algorithms to generate pruned models at a predefined sparsity goal, and supports distilling knowledge from a teacher model to a student model. Intel® Neural Compressor is a critical AI software component of the Intel® oneAPI AI Analytics Toolkit.
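As a hedged sketch of the accuracy-driven tuning mentioned above: in the 2.x config API, tuning and accuracy criteria are attached to the quantization config. The class and parameter names below follow that API; the threshold values are illustrative assumptions, not defaults stated in this README.
from neural_compressor.config import AccuracyCriterion, PostTrainingQuantConfig, TuningCriterion

# Accept at most 1% relative accuracy loss versus the fp32 baseline (illustrative value)
accuracy_criterion = AccuracyCriterion(tolerable_loss=0.01)
# Stop the search after 100 candidate quantization configurations (illustrative value)
tuning_criterion = TuningCriterion(max_trials=100)
config = PostTrainingQuantConfig(accuracy_criterion=accuracy_criterion,
                                 tuning_criterion=tuning_criterion)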
Visit the Intel® Neural Compressor online documentation at https://intel.github.io/neural-compressor.
Python version: 3.7, 3.8, 3.9, 3.10
# install stable basic version from pip
pip install neural-compressor
# Or install stable full version from pip (including GUI)
pip install neural-compressor-full
# Or install from source
git clone https://github.com/intel/neural-compressor.git
cd neural-compressor
pip install -r requirements.txt
python setup.py install
# install nightly basic version from pip
pip install -i https://test.pypi.org/simple/ neural-compressor
# Or install nightly full version from pip (including GUI)
pip install -i https://test.pypi.org/simple/ neural-compressor-full
More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.
# A TensorFlow Example
pip install tensorflow
# Prepare fp32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import Datasets
from neural_compressor.data.dataloaders.dataloader import DataLoader
from neural_compressor.quantization import fit

# Dummy dataset matching MobileNet's NHWC input, used for both calibration and evaluation
dataset = Datasets('tensorflow')['dummy'](shape=(1, 224, 224, 3))
config = PostTrainingQuantConfig()
q_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",
    conf=config,
    calib_dataloader=DataLoader(framework='tensorflow', dataset=dataset),
    eval_dataloader=DataLoader(framework='tensorflow', dataset=dataset))
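fit returns the quantized model object; a one-line sketch of persisting it, assuming the save method of the 2.x model wrapper (the output path is illustrative):
q_model.save("./quantized_mobilenet")  # persist the quantized model for later inference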
Search for jupyter-lab-neural-compressor in the JupyterLab Extension Manager and install it with one click.
# An ONNX Example
pip install onnx==1.12.0 onnxruntime==1.12.1 onnxruntime-extensions
# Prepare fp32 model
wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx
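The snippet above only prepares the fp32 model; below is a minimal quantization sketch mirroring the TensorFlow example, assuming 'onnxrt_qlinearops' as the Datasets key, 'onnxruntime' as the DataLoader framework key, and ResNet-50's (1, 3, 224, 224) NCHW input shape.
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import Datasets
from neural_compressor.data.dataloaders.dataloader import DataLoader
from neural_compressor.quantization import fit

# Dummy calibration data shaped like ResNet-50's input (assumed NCHW layout)
dataset = Datasets('onnxrt_qlinearops')['dummy'](shape=(1, 3, 224, 224))
config = PostTrainingQuantConfig()
q_model = fit(
    model="./resnet50-v1-12.onnx",
    conf=config,
    calib_dataloader=DataLoader(framework='onnxruntime', dataset=dataset))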
# Start the GUI (shipped with the full version, neural-compressor-full)
inc_bench
Framework | TensorFlow | Intel TensorFlow | PyTorch | Intel® Extension for PyTorch* | ONNX Runtime | MXNet
---|---|---|---|---|---|---
Version | 2.10.0, 2.9.1, 2.8.2 | 2.10.0, 2.9.1, 2.8.0 | 1.12.1+cpu, 1.11.0+cpu, 1.10.0+cpu | 1.12.0, 1.11.0, 1.10.0 | 1.12.1, 1.11.0, 1.10.0 | 1.8.0, 1.7.0, 1.6.0
Note: Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable oneDNN optimizations if you are using TensorFlow v2.6 to v2.8; oneDNN is the default as of TensorFlow v2.9.
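For example, in a bash shell:
export TF_ENABLE_ONEDNN_OPTS=1  # enable oneDNN optimizations for TensorFlow v2.6 to v2.8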
Intel® Neural Compressor has validated 420+ examples for quantization, with a geomean performance speedup of 2.2x, and up to 4.2x on VNNI, while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details on validated models are available here.
View our full publication list.
We are actively hiring. Send your resume to inc.maintainers@intel.com if you are interested in model compression techniques.