jmaczan/tiny-vllm
Build your own high-performance LLM inference engine in C++ and CUDA - a smaller version of vLLM
Stars: 87 · Forks: 2 · Weekly growth: +0 · Source: GitHub
Topics: ai, attention, batching, course, cpp, cuda, hpc, inference, llm, llm-inference, pagedattention, tiny-vllm
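The pagedattention and batching topics point at the core vLLM technique the project reimplements: the KV cache is split into fixed-size blocks, and each sequence maps its logical token positions to physical blocks through a block table, so memory is allocated on demand rather than reserved per sequence. Purely as a hedged illustration (none of these names come from tiny-vllm's code), a minimal block-table sketch in C++ might look like:

```cpp
// Hypothetical sketch of the paged KV-cache idea behind the
// "pagedattention" topic. Names and block size are assumptions,
// not tiny-vllm's actual API.
#include <cstdio>
#include <stdexcept>
#include <vector>

constexpr int kBlockSize = 16;  // tokens per KV-cache block (assumed)

class BlockAllocator {
 public:
  explicit BlockAllocator(int num_blocks) {
    for (int i = num_blocks - 1; i >= 0; --i) free_.push_back(i);
  }
  int Allocate() {
    if (free_.empty()) throw std::runtime_error("KV cache exhausted");
    int id = free_.back();
    free_.pop_back();
    return id;
  }
  void Free(int id) { free_.push_back(id); }

 private:
  std::vector<int> free_;  // pool of free physical block ids
};

// Per-sequence block table: logical block index -> physical block id.
struct Sequence {
  std::vector<int> block_table;
  int num_tokens = 0;
};

// Append one token's KV entry, allocating a new block only when the
// current block is full.
void AppendToken(Sequence& seq, BlockAllocator& alloc) {
  if (seq.num_tokens % kBlockSize == 0) {
    seq.block_table.push_back(alloc.Allocate());
  }
  ++seq.num_tokens;
}

int main() {
  BlockAllocator alloc(/*num_blocks=*/256);
  Sequence seq;
  for (int t = 0; t < 40; ++t) AppendToken(seq, alloc);
  // 40 tokens at 16 tokens/block -> 3 physical blocks allocated.
  std::printf("blocks used: %zu\n", seq.block_table.size());
  return 0;
}
```

During attention, kernels resolve physical blocks through the table, which is what lets an engine like vLLM avoid fragmentation and batch many sequences at once.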
Trend: 3
[Chart: Star & Fork Trend, 48 data points, plotting Stars and Forks over time]
Multi-Source Signals
Growth Velocity
jmaczan/tiny-vllm gained +0 stars this period; its 7-day velocity is 2.4%.
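The dashboard does not publish its velocity formula. One plausible reading, consistent with the reported 2.4%, is stars gained over the trailing seven days as a fraction of the star count at the start of the window; the sketch below (hypothetical function name `SevenDayVelocity`) shows that reading:

```cpp
// Hypothetical reading of the "7-day velocity" metric: stars gained
// over the trailing week as a fraction of the star count at the
// start of the window. The dashboard's real formula is not published.
#include <cstdio>

double SevenDayVelocity(int stars_now, int stars_week_ago) {
  if (stars_week_ago <= 0) return 0.0;  // avoid division by zero on new repos
  return 100.0 * (stars_now - stars_week_ago) / stars_week_ago;
}

int main() {
  // 87 stars today vs. an assumed ~85 a week ago gives roughly the
  // reported 2.4%.
  std::printf("%.1f%%\n", SevenDayVelocity(87, 85));
  return 0;
}
```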
| Metric | tiny-vllm | Spice | minimind-notes | LLMOne |
|---|---|---|---|---|
| Stars | 87 | 87 | 87 | 87 |
| Forks | 2 | 2 | 10 | 3 |
| Weekly Growth | +0 | +0 | -1 | +0 |
| Language | C++ | Python | Python | TypeScript |
| Sources | 1 | 1 | 1 | 1 |
| License | Apache-2.0 | NOASSERTION | Apache-2.0 | MulanPSL-2.0 |
Capability Radar vs Spice
[Radar chart comparing tiny-vllm and Spice across the dimensions listed below]
- Maintenance Activity: 100. Last code push: 0 days ago.
- Community Engagement: 62. Fork-to-star ratio: 2.3% (see the sketch after this list); a lower fork ratio may indicate passive usage.
- Issue Burden: 70. Issue data not yet available.
- Growth Momentum: 30. No measurable growth in the current period (a first-day cold start is expected).
- License Clarity: 95. Licensed under Apache-2.0, which is permissive and safe for commercial use.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
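How the individual scores roll up into an overall health figure is not documented. Purely as a hedged sketch, assuming each dimension is scored 0-100 and combined by a weighted mean (the weights below are invented), the numbers above could be aggregated as follows; the fork-to-star computation reproduces the 2.3% shown under Community Engagement:

```cpp
// Hypothetical sketch of how the radar scores could be combined.
// The per-dimension scores come from the dashboard; the weights and
// the aggregation are assumptions, not the site's documented method.
#include <cstdio>

// Fork-to-star ratio as reported under Community Engagement:
// 2 forks / 87 stars = ~2.3%.
double ForkToStarRatio(int forks, int stars) {
  return stars > 0 ? 100.0 * forks / stars : 0.0;
}

int main() {
  const double scores[]  = {100, 62, 70, 30, 95};        // the five dimensions above
  const double weights[] = {0.25, 0.2, 0.2, 0.2, 0.15};  // assumed weights
  double total = 0.0;
  for (int i = 0; i < 5; ++i) total += scores[i] * weights[i];
  std::printf("fork/star: %.1f%%, weighted health score: %.1f\n",
              ForkToStarRatio(2, 87), total);
  return 0;
}
```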