intel/neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime
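As an illustration of what post-training quantization does, here is a minimal, generic sketch of symmetric INT8 weight quantization in plain Python. This is conceptual math only, assuming a single per-tensor scale; it is not Neural Compressor's actual API (which operates on PyTorch, TensorFlow, and ONNX Runtime models).

```python
# Generic symmetric INT8 post-training quantization sketch
# (illustrative math only; not the Neural Compressor API).

def quantize_int8(values):
    """Map floats to int8 codes using one symmetric per-tensor scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-element round-trip error is bounded by scale / 2.
```

Lower-bit formats in the description (INT4, MXFP4, NVFP4) follow the same idea with fewer code points and, for MX formats, shared block scales.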

Stars: 2.6k · Forks: 302 · Growth: +0/wk

Topics: auto-tuning, awq, fp4, gptq, int4, int8, knowledge-distillation, large-language-models, low-precision, mxformat, post-training-quantization, pruning

[Chart: Star & Fork Trend — 20 data points, Stars and Forks series]

Multi-Source Signals

Growth Velocity

intel/neural-compressor gained +0 stars this period; 7-day velocity: 0.0%.

Deep, signal-backed technical analysis is being generated for this repository and will be available soon.

Metric         neural-compressor   HarvestText   Time-LLM     evalscope
Stars          2.6k                2.6k          2.6k         2.6k
Forks          302                 339           463          301
Weekly Growth  +0                  +0            +0           +0
Language       Python              Python        Python       Python
Sources        1                   1             1            1
License        Apache-2.0          MIT           Apache-2.0   Apache-2.0

Capability Radar vs HarvestText

[Radar chart: neural-compressor vs HarvestText]
Maintenance Activity: 100

Last code push: 1 day ago.

Community Engagement: 58

Fork-to-star ratio of 11.6% indicates an active community forking and contributing.
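The ratio reported above is easy to reproduce from the comparison metrics (2.6k stars, 302 forks):

```python
# Fork-to-star ratio from the table: 302 forks / 2,600 stars.
stars, forks = 2600, 302
ratio = forks / stars * 100
print(f"{ratio:.1f}%")  # → 11.6%
```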

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under Apache-2.0, a permissive license that is generally safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
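As a sketch of how per-dimension scores like those above could be folded into a single health number, here is a hypothetical weighted average. The weights, function name, and the very idea of combining the scores this way are illustrative assumptions, not the dashboard's actual formula.

```python
# Hypothetical composite health score: weighted average of the
# per-dimension scores shown above. Weights are illustrative only.

def overall_health(scores, weights):
    """Weighted mean of dimension scores; higher means healthier."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

scores = {
    "maintenance": 100,  # last code push 1 day ago
    "community": 58,     # fork-to-star ratio
    "issues": 70,        # issue burden
    "growth": 30,        # growth momentum
    "license": 95,       # Apache-2.0
}
weights = {"maintenance": 3, "community": 2, "issues": 2, "growth": 2, "license": 1}
print(round(overall_health(scores, weights), 1))  # → 71.1
```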