
Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)



Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library running on Intel CPUs and GPUs. It delivers unified interfaces across multiple deep learning frameworks for popular network compression techniques such as quantization, pruning, and knowledge distillation. The tool supports automatic, accuracy-driven tuning strategies to help users quickly find the best quantized model. It also implements several weight pruning algorithms to generate pruned models that meet a predefined sparsity goal, and supports knowledge distillation to transfer knowledge from a teacher model to a student model. Intel® Neural Compressor is one of the critical AI software components in the Intel® oneAPI AI Analytics Toolkit.

Note: GPU support is under development.
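The quantization technique described above can be made concrete with a toy example. The following is a minimal sketch of post-training affine (scale/zero-point) quantization in plain NumPy; it illustrates the arithmetic only, not Intel® Neural Compressor's actual implementation, and the helpers `quantize_int8` and `dequantize` are hypothetical names introduced here for illustration:

```python
# Conceptual sketch of affine uint8 quantization (illustration only; Intel
# Neural Compressor performs this internally via its framework adaptors).
import numpy as np

def quantize_int8(weights):
    """Map fp32 values to uint8 using an affine scale/zero-point mapping."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against an all-equal tensor
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate fp32 values from the quantized tensor."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
# The round trip introduces at most one quantization step of error.
assert float(np.max(np.abs(w - w_hat))) <= scale
```

Accuracy-driven tuning exists precisely because this rounding error, accumulated across many layers, can degrade model accuracy in ways that must be measured rather than assumed.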

Visit the Intel® Neural Compressor online document website at: https://intel.github.io/neural-compressor.

Installation

Prerequisites

  • Python version: 3.7, 3.8, 3.9, or 3.10

Install on Linux

  • Release binary install
    # install stable basic version from pip
    pip install neural-compressor
    # Or install stable full version from pip (including GUI)
    pip install neural-compressor-full
    
  • Nightly binary install
    git clone https://github.com/intel/neural-compressor.git
    cd neural-compressor
    pip install -r requirements.txt
    # install nightly basic version from pip
    pip install -i https://test.pypi.org/simple/ neural-compressor
    # Or install nightly full version from pip (including GUI)
    pip install -i https://test.pypi.org/simple/ neural-compressor-full
    

More installation methods can be found at Installation Guide. Please check out our FAQ for more details.

Getting Started

  • Quantization with Python API
# Shell: install TensorFlow and prepare an fp32 model
pip install tensorflow
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
# Python: quantize the model with a dummy calibration dataset
import tensorflow as tf
from neural_compressor.experimental import Quantization, common
tf.compat.v1.disable_eager_execution()
quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()
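The quantizer.fit() call above drives an accuracy-driven tuning loop: candidate quantization configurations are evaluated until one stays within an accuracy tolerance of the fp32 baseline. The sketch below shows that idea in plain Python; the function and configuration names are illustrative assumptions, not the library's API:

```python
# Toy sketch of accuracy-driven tuning (illustration only; the real strategy
# inside Intel Neural Compressor is framework-aware and far more capable).
def accuracy_driven_tuning(baseline_acc, candidates, evaluate, max_drop=0.01):
    """Return the first candidate whose accuracy drop stays within max_drop."""
    for config in candidates:
        acc = evaluate(config)
        if baseline_acc - acc <= max_drop:
            return config, acc
    return None, None  # no candidate met the accuracy criterion

# Hypothetical accuracies observed for three quantization configurations.
accuracies = {"int8_per_tensor": 0.740, "int8_per_channel": 0.758, "bf16": 0.761}
best, acc = accuracy_driven_tuning(
    baseline_acc=0.762,
    candidates=["int8_per_tensor", "int8_per_channel", "bf16"],
    evaluate=accuracies.get,
)
# int8_per_tensor drops 2.2% (rejected); int8_per_channel drops 0.4% (accepted).
```

In the real library, the evaluation step runs the user-supplied metric or evaluation function on the quantized model, so tuning cost grows with the number of candidate configurations tried.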
  • Quantization with GUI
# An ONNX example
pip install onnx==1.12.0 onnxruntime==1.12.1 onnxruntime-extensions
# Prepare fp32 model
wget https://github.com/onnx/models/raw/main/vision/classification/resnet/model/resnet50-v1-12.onnx
# Start the GUI (requires the neural-compressor-full package)
inc_bench
  • Quantization with Neural Coder
from neural_coder import auto_quant
auto_quant(
    code="https://github.com/huggingface/transformers/blob/v4.21-release/examples/pytorch/text-classification/run_glue.py",
    args="--model_name_or_path albert-base-v2 \
          --task_name sst2 \
          --do_eval \
          --output_dir result \
          --overwrite_output_dir",
)

System Requirements

Intel® Neural Compressor supports systems based on Intel 64 architecture or compatible processors, and is specially optimized for the following CPUs:

  • Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, and Ice Lake)
  • Future Intel Xeon Scalable processor (code name Sapphire Rapids)

Validated Software Environment

  • OS version: CentOS 8.4, Ubuntu 20.04
  • Python version: 3.7, 3.8, 3.9, 3.10
Framework          Validated versions
TensorFlow         2.9.1, 2.8.2, 2.7.3
Intel TensorFlow   2.9.1, 2.8.0, 2.7.0
PyTorch            1.12.0+cpu, 1.11.0+cpu, 1.10.0+cpu
IPEX               1.12.0, 1.11.0, 1.10.0
ONNX Runtime       1.11.0, 1.10.0, 1.9.0
MXNet              1.8.0, 1.7.0, 1.6.0

Note: If you are using stock TensorFlow v2.6 to v2.8, please set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable oneDNN optimizations. oneDNN optimizations are enabled by default starting with TensorFlow v2.9.

Validated Models

Intel® Neural Compressor has validated 420+ examples for quantization, achieving a geomean performance speedup of 2.2x (up to 4.2x) on VNNI-capable hardware while minimizing accuracy loss. It also provides 30+ pruning and knowledge distillation samples.
More details on validated models are available here.
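The pruning samples mentioned above are built on algorithms such as magnitude pruning, which zeroes out the smallest-magnitude weights until a sparsity goal is reached. Below is a minimal NumPy sketch of that idea; `magnitude_prune` is a hypothetical helper for illustration, not the library's API, which additionally supports pruning schedules and structured patterns:

```python
# Conceptual magnitude-pruning sketch (illustration only; Intel Neural
# Compressor implements several pruning algorithms with schedules/patterns).
import numpy as np

def magnitude_prune(weights, target_sparsity):
    """Zero out the smallest-magnitude weights to reach target_sparsity."""
    k = int(weights.size * target_sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest weight (flattened view).
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.9, -0.05, 0.4], [-0.01, 0.7, 0.2]], dtype=np.float32)
p = magnitude_prune(w, target_sparsity=0.5)
sparsity = float((p == 0).mean())
# The three smallest magnitudes (0.01, 0.05, 0.2) are zeroed; sparsity is 0.5.
```

Accuracy-driven tuning applies here as well: the sparsity goal is typically reached gradually during fine-tuning so the remaining weights can compensate for the pruned ones.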

Architecture

(The architecture diagram is available on the online documentation site.)

Documentation

  • Overview: Architecture, Examples, GUI, APIs, Intel oneAPI AI Analytics Toolkit, AI and Analytics Samples
  • Basic API: Transform, Dataset, Metric, Objective
  • Deep Dive: Quantization, Pruning (Sparsity), Knowledge Distillation, Mixed Precision, Orchestration, Benchmarking, Distributed Training, Model Conversion, TensorBoard
  • Advanced Topics: Adaptor, Strategy, Reference Example

Selected Publications

Please check out our full publication list.

Additional Content

Hiring :star:

We are actively hiring. Please send your resume to inc.maintainers@intel.com if you are interested in model compression techniques.
