
SemiAnalysisAI/InferenceX

Open Source Continuous Inference Benchmarking Qwen3.5, DeepSeek, GPTOSS - GB200 NVL72 vs MI355X vs B200 vs GB300 NVL72 vs H100 & soon™ TPUv6e/v7/Trainium2/3

774 stars · 121 forks · +4/wk
Topics: ai, amd, benchmark, cuda, gb200, llm, nvidia, pytorch, rocm, sglang, vllm

[Chart: Star & Fork Trend (28 data points), series: Stars, Forks]

Multi-Source Signals

Growth Velocity

SemiAnalysisAI/InferenceX gained +4 stars this period. 7-day velocity: 1.3%.
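The velocity figure is a simple ratio of stars gained to the star count. A minimal sketch, assuming the rate is taken relative to the current total (the page does not state its exact base or window, so treat the formula as an assumption):

```python
# Assumed formula: percentage of current stars gained during the window.
# The tracker's real base and window are undocumented.

def growth_velocity(stars_now: int, stars_then: int) -> float:
    """Return star growth over the window as a percentage of current stars."""
    if stars_now == 0:
        return 0.0  # avoid division by zero for brand-new repos
    return 100.0 * (stars_now - stars_then) / stars_now
```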

Signal-backed deep analysis is being generated for this repository and will be available soon.

| Metric        | InferenceX | QiZhenGPT | vllm-mlx | CodeFuse-muAgent |
|---------------|------------|-----------|----------|------------------|
| Stars         | 774        | 774       | 775      | 775              |
| Forks         | 121        | 88        | 177      | 82               |
| Weekly Growth | +4         | +0        | +1       | -1               |
| Language      | Python     | Python    | Python   | Python           |
| Sources       | 1          | 1         | 1        | 1                |
| License       | Apache-2.0 | GPL-3.0   | N/A      | NOASSERTION      |

Capability Radar vs QiZhenGPT

[Radar chart comparing InferenceX and QiZhenGPT across the axes below]
Maintenance Activity: 100

Last code push: 0 days ago.

Community Engagement: 78

Fork-to-star ratio: 15.6%. Active community forking and contributing.

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 71

+4 stars this period, a 0.52% growth rate.

License Clarity: 95

Licensed under Apache-2.0. Permissive; safe for commercial use.
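The two arithmetic scores in the cards above can be reproduced directly from the raw counts. A sketch with formulas inferred from the displayed numbers, not taken from the tracker's source:

```python
# Reproducing the displayed percentages from the raw repo counts.
# Rounding precision is chosen to match the page, and is an assumption.

def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars, rounded to one decimal place."""
    return round(100.0 * forks / stars, 1)

def weekly_growth_rate(new_stars: int, stars: int) -> float:
    """Stars gained this period as a percentage of total stars."""
    return round(100.0 * new_stars / stars, 2)

# InferenceX: 121 forks and +4 stars on 774 total.
print(fork_to_star_ratio(121, 774))  # prints 15.6
print(weekly_growth_rate(4, 774))    # prints 0.52
```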

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.