
Releases: intel/neural-compressor

Intel® Low Precision Optimization Tool v1.5.1 Release

25 Jul 14:26

The Intel® Low Precision Optimization Tool v1.5.1 release features:

  • Gradient-sensitivity pruning for CNN models
  • Static quantization support for ONNX NLP models
  • Dynamic sequence length support in the NLP dataloader
  • Enriched quantization statistics
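
The dynamic sequence length feature can be pictured as per-batch padding: each batch is padded only to its own longest sequence instead of a fixed global maximum, so batches of short inputs waste less compute. A minimal pure-Python sketch of the idea (illustrative only, not the LPOT dataloader API; `pad_batch` and the token ids are invented for the example):

```python
def pad_batch(batch, pad_id=0):
    """Pad each token-id sequence to the length of the batch's longest
    sequence, rather than to a fixed global maximum such as 512."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in batch]

# Two sequences of different lengths: the batch is padded to length 5 only.
batch = [[101, 7592, 102],
         [101, 7592, 2088, 999, 102]]
padded = pad_batch(batch)  # [[101, 7592, 102, 0, 0], [101, 7592, 2088, 999, 102]]
```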

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8 & 3.9
  • CentOS 8.3 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2 & UP3
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, 1.8.0+cpu, ipex
  • MXNet 1.6.0, 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.5 Release

12 Jul 14:23

The Intel® Low Precision Optimization Tool v1.5 release features:

  • Pattern-lock sparsity algorithm for NLP fine-tuning tasks
    • Up to 70% unstructured sparsity and 50% structured sparsity with <2% accuracy loss on 5 BERT fine-tuning tasks
  • NLP head-pruning algorithm for HuggingFace models
    • Performance speedup of up to 3.0x within 1.5% accuracy loss on HuggingFace BERT SST-2
  • Model optimization pipeline support
  • SigOpt integration with multi-metric optimization
    • Complementary to the basic strategy to speed up tuning
  • Support for TensorFlow 2.5, PyTorch 1.8, and ONNX Runtime 1.8
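
Pattern-lock sparsity can be understood as freezing the zero pattern of an already-sparse model: the mask is captured once, then re-applied after every fine-tuning update so the pattern never drifts. A rough pure-Python sketch of the mechanism (the helper names are hypothetical, not the LPOT API):

```python
def lock_mask(weights, eps=0.0):
    """Record which weights are currently zero -- the pattern to lock."""
    return [abs(w) <= eps for w in weights]

def apply_mask(weights, mask):
    """Re-apply the locked pattern after a weight update."""
    return [0.0 if locked else w for w, locked in zip(weights, mask)]

w = [0.5, 0.0, -0.3, 0.0]                  # sparse weights from a pruned model
mask = lock_mask(w)                        # [False, True, False, True]
w_after_step = [0.6, 0.1, -0.2, -0.05]     # a gradient step revived the zeros...
w_locked = apply_mask(w_after_step, mask)  # ...so lock them back: [0.6, 0.0, -0.2, 0.0]
```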

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8 & 3.9
  • CentOS 8.3 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2 & UP3
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, 1.8.0+cpu, ipex
  • MXNet 1.6.0, 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.4.1 Release

25 Jun 16:20

The Intel® Low Precision Optimization Tool v1.4.1 release features:

  1. Support for TensorFlow 2.5.0
  2. Support for PyTorch 1.8.0
  3. Support for the TensorFlow Object Detection YOLOv3 model

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.4 Release

30 May 18:21

The Intel® Low Precision Optimization Tool v1.4 release features:

Quantization

  1. PyTorch FX-based quantization support
  2. TensorFlow & ONNX Runtime quantization enhancements

Pruning

  1. Pruning/sparsity API refinement
  2. Magnitude-based pruning on PyTorch
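
Magnitude-based pruning keeps the largest-magnitude weights and zeroes the rest. A simplified plain-Python sketch of the idea (the real implementation operates on framework tensors; this is illustrative only):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest |w|."""
    k = int(len(weights) * sparsity)   # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
pruned = magnitude_prune(w, 0.5)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Note that ties at the threshold magnitude may prune slightly more weights than the requested fraction; production implementations handle this per tensor or per layer.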

Model Zoo

  1. Key INT8 models updated (BERT on TensorFlow, DLRM on PyTorch, etc.)
  2. Quantization of 20+ HuggingFace models

User Experience

  1. More comprehensive logging messages
  2. UI enhancement with FP32 optimization, auto-mixed precision (BF16/FP32), and graph visualization
  3. Online document: https://intel.github.io/lpot

Extended Capabilities

  1. Model conversion from QAT to Intel Optimized TensorFlow model

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.3.1 Release

11 May 05:26

The Intel® Low Precision Optimization Tool v1.3.1 release features:

  1. Improved graph optimization without explicit input/output settings

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.3 Release

16 Apr 14:58

The Intel® Low Precision Optimization Tool v1.3 release features:

  1. FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
  2. Dynamic quantization support for PyTorch
  3. ONNX Runtime v1.7 support
  4. Configurable benchmarking support (multiple instances, warmup, etc.)
  5. Multiple-batch-size calibration & mAP metrics for object detection models
  6. Experimental user-facing APIs for better usability
  7. Support for various HuggingFace models
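
Dynamic quantization derives the scale and zero-point from each tensor's range as observed at run time, rather than from an offline calibration pass. A simplified sketch of the underlying affine int8 mapping (illustrative only, not PyTorch's or LPOT's implementation):

```python
def quant_params(xs, qmin=-128, qmax=127):
    """Derive scale/zero-point from the observed range, which must include 0."""
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # avoid scale 0 for all-zero input
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zp, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(x / scale) + zp)) for x in xs]

def dequantize(qs, scale, zp):
    return [(q - zp) * scale for q in qs]

xs = [-1.0, 0.0, 0.5, 2.0]
s, zp = quant_params(xs)
recon = dequantize(quantize(xs, s, zp), s, zp)
# each recon value differs from the original by at most one quantization step
```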

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.2.1 Release

02 Apr 14:53

The Intel® Low Precision Optimization Tool v1.2.1 release features:

  1. User-facing API backward compatibility with v1.1 and v1.0
  2. Refined experimental user-facing APIs for a better out-of-box experience

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.2 Release

12 Mar 15:31

The Intel® Low Precision Optimization Tool v1.2 release features:

  • Broad TensorFlow model type support
  • Operator-wise quantization scheme for ONNX Runtime
  • MSE-driven tuning for metric-free use cases
  • UX improvements, including UI web server preview support
  • Support for more key models
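
MSE-driven tuning ranks candidate configurations by comparing quantized outputs directly against the FP32 outputs, so no labeled accuracy metric is needed. A toy pure-Python sketch of the principle, here choosing among candidate int8 scales (function names are invented for illustration):

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def roundtrip(x, scale, qmin=-128, qmax=127):
    """Quantize one value to int8 and back (symmetric, zero-point 0)."""
    q = max(qmin, min(qmax, round(x / scale)))
    return q * scale

def pick_best_scale(xs, candidate_scales):
    """Rank candidate scales by round-trip MSE against the FP32 values."""
    return min(candidate_scales,
               key=lambda s: mse(xs, [roundtrip(x, s) for x in xs]))

xs = [0.1, -0.2, 0.05, 0.3]
best = pick_best_scale(xs, [0.1, 0.01, 0.001])
# 0.001 clips the range and 0.1 rounds coarsely, so 0.01 gives the lowest MSE
```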

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.1 Release

31 Dec 13:41

The Intel® Low Precision Optimization Tool v1.1 release features:

  • Preview support for new backends (PyTorch/IPEX, ONNX Runtime)
  • Built-in industry datasets/metrics and custom registration
  • Preliminary input/output node auto-detection on TensorFlow models
  • New INT8 quantization recipes: bias correction and label balance
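
The bias-correction recipe compensates for the systematic shift that quantization introduces in a layer's outputs, typically by folding each channel's mean error back into its bias. A much-simplified sketch of that idea (per-channel lists of sampled outputs; this is an assumption-laden illustration, not the exact LPOT recipe):

```python
def bias_correction(fp32_outputs, int8_outputs, bias):
    """Add each channel's mean quantization error back into its bias.

    fp32_outputs / int8_outputs: per-channel lists of sampled activations
    from the FP32 and quantized models on the same inputs.
    """
    corrected = []
    for b, fp_ch, q_ch in zip(bias, fp32_outputs, int8_outputs):
        mean_err = sum(f - q for f, q in zip(fp_ch, q_ch)) / len(fp_ch)
        corrected.append(b + mean_err)
    return corrected

# One output channel whose quantized activations are systematically 1.0 low:
corrected = bias_correction([[2.0, 4.0]], [[1.0, 3.0]], [0.5])  # [1.5]
```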

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel           Link                                Install Command
  Source (GitHub)   https://github.com/intel/lpot.git   $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot       $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot     $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact lpot.maintainers@intel.com if you have any questions.

Intel® Low Precision Optimization Tool v1.0 Release

30 Oct 15:24

The Intel® Low Precision Optimization Tool v1.0 release features:

  • Refined user-facing APIs for the best out-of-box experience
  • TPE tuning strategy (experimental)
  • Pruning POC support on PyTorch
  • TensorBoard POC support for tuning analysis
  • Built-in INT8/dummy dataloader support
  • Built-in benchmarking support
  • Tuning history for strategy fine-tuning
  • Support for TF Keras and checkpoint model types as input

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15.0 UP1
  • PyTorch 1.5.0+cpu
  • MXNet 1.7.0

Distribution:

  Channel           Link                                       Install Command
  Source (GitHub)   https://github.com/intel/lp-opt-tool.git   $ git clone https://github.com/intel/lp-opt-tool.git
  Binary (Pip)      https://pypi.org/project/ilit              $ pip install ilit
  Binary (Conda)    https://anaconda.org/intel/ilit            $ conda install ilit -c intel

Contact:

Please feel free to contact ilit.maintainers@intel.com if you have any questions.