
intel/auto-round

SOTA rounding-based quantization for high-accuracy low-bit LLM inference, seamlessly optimized for CPU/XPU/CUDA, with multi-datatype support and full compatibility with vLLM, SGLang, and Transformers.

952 stars · 96 forks · +1/wk
gguf int4 llms mxfp4 nvfp4 quantization rounding sglang transformers vllm vlms

[Chart: Star & Fork Trend — 49 data points tracking stars and forks over time]

Multi-Source Signals

Growth Velocity

intel/auto-round gained +1 star this period. 7-day velocity: 1.3%.


Metric         auto-round   NLP-Tutorials   GenAI_LLM_timeline   pyresparser
Stars          952          952             953                  953
Forks          96           318             584                  47
Weekly Growth  +1           +0              +0                   +0
Language       Python       Python          N/A                  Python
Sources        1            1               1                    1
License        Apache-2.0   MIT             N/A                  GPL-3.0
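The per-repository figures above (stars, forks, language, license) correspond to fields returned by the GitHub REST API's `/repos/{owner}/{repo}` endpoint. A minimal sketch of extracting them from a fetched payload — the sample dict below is illustrative, shaped like the API response for intel/auto-round, not live data:

```python
import json

def summarize_repo(payload: dict) -> dict:
    """Pull the dashboard's metrics out of a GitHub /repos/{owner}/{repo} payload."""
    lic = payload.get("license") or {}
    return {
        "stars": payload["stargazers_count"],
        "forks": payload["forks_count"],
        "language": payload.get("language") or "N/A",
        "license": lic.get("spdx_id") or "N/A",
    }

# Illustrative payload (hardcoded here; a real fetch would hit api.github.com)
sample = json.loads("""{
    "stargazers_count": 952,
    "forks_count": 96,
    "language": "Python",
    "license": {"spdx_id": "Apache-2.0"}
}""")
print(summarize_repo(sample))
```

Repositories without a detected language or license return `null` for those fields, which is why the table shows `N/A` for GenAI_LLM_timeline.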

Capability Radar vs NLP-Tutorials

[Radar chart comparing auto-round and NLP-Tutorials across the metrics below]
Maintenance Activity: 100

Last code push: 1 day ago.

Community Engagement: 50

Fork-to-star ratio: 10.1%. Active community forking and contributing.
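The fork-to-star ratio quoted here is simply forks divided by stars, expressed as a percentage. A quick check against the numbers in the comparison table (96 forks, 952 stars):

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars, rounded to one decimal place."""
    return round(forks / stars * 100, 1)

print(fork_to_star_ratio(96, 952))  # prints 10.1, matching the reported 10.1%
```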

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 46

+1 star this period, a 0.11% growth rate.
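The growth rate is the period's star gain relative to the repository's star count. Reproducing the figure from the numbers above (1 new star against roughly 952 total):

```python
def growth_rate(new_stars: int, star_base: int) -> float:
    """Stars gained this period as a percentage of the star base, to two decimals."""
    return round(new_stars / star_base * 100, 2)

print(growth_rate(1, 952))  # prints 0.11, matching the reported 0.11% figure
```

Whether the base is the star count before or after the gain barely matters at this scale; both 951 and 952 round to the same 0.11%.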

License Clarity: 95

Licensed under Apache-2.0. Permissive; safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.