vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
75.7k stars · 15.3k forks · +140 stars/wk
Sources: GitHub, PyPI
Topics: amd, blackwell, cuda, deepseek, deepseek-v3, gpt, gpt-oss, inference, kimi, llama, llm, llm-serving
[Chart] Star & Fork Trend (53 data points)
Multi-Source Signals
Growth Velocity
vllm-project/vllm gained +140 stars this period, with cross-source activity across 2 platforms (GitHub, PyPI). 7-day velocity: 0.4%.
| Metric | vllm | gpt4all | ragflow | llm-course |
|---|---|---|---|---|
| Stars | 75.7k | 77.3k | 77.5k | 78.0k |
| Forks | 15.3k | 8.3k | 8.7k | 9.1k |
| Weekly Growth | +140 | -3 | +113 | +41 |
| Language | Python | C++ | Python | N/A |
| Sources | 2 | 3 | 2 | 2 |
| License | Apache-2.0 | MIT | Apache-2.0 | Apache-2.0 |
[Chart] Capability Radar: vllm vs gpt4all
- Maintenance Activity: 100. Last code push: 0 days ago.
- Community Engagement: 100. Fork-to-star ratio: 20.3%; an active community is forking and contributing.
- Issue Burden: 70. Issue data not yet available.
- Growth Momentum: 51. +140 stars this period (0.18% growth rate).
- License Clarity: 95. Licensed under Apache-2.0; permissive, safe for commercial use.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
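The two percentage figures above follow directly from the headline numbers. As a minimal sketch, the stated ratios can be reproduced from the rounded figures shown on this page (75.7k stars, 15.3k forks, +140 stars this period); the dashboard's full score formulas are not published here, and exact star/fork counts would shift the last digit slightly:

```python
# Recompute the two displayed ratio metrics from the rounded headline figures.
# These are approximations: the dashboard presumably uses exact counts,
# which is why its fork-to-star ratio (20.3%) differs slightly from the
# value derived from the rounded numbers.
stars, forks, period_gain = 75_700, 15_300, 140

fork_to_star = forks / stars * 100       # community-engagement input
growth_rate = period_gain / stars * 100  # growth-momentum input

print(f"Fork-to-star ratio: {fork_to_star:.1f}%")  # ~20.2% (page shows 20.3%)
print(f"Growth rate: {growth_rate:.2f}%")          # 0.18%, matching the page
```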