EM-GeekLab/LLMOne
Enterprise-grade LLM automated deployment tool that makes AI servers truly "plug-and-play".
87 stars · 3 forks · +0 stars/week
Topics: agent, ai-server, llm, llm-inference, llm-serving, mindie, ollama, transformer, vllm
[Chart: Star & Fork Trend, 16 data points, with Stars and Forks series]
Multi-Source Signals
Growth Velocity
EM-GeekLab/LLMOne has gained +0 stars this period. Velocity data will be available once more historical data has been collected.
Deep analysis is being generated for this repository; signal-backed technical analysis will be available soon.
| Metric | LLMOne | Spice | tiny-vllm | SciDER |
|---|---|---|---|---|
| Stars | 87 | 87 | 87 | 87 |
| Forks | 3 | 2 | 2 | 6 |
| Weekly Growth | +0 | +0 | +1 | +1 |
| Language | TypeScript | Python | C++ | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | MulanPSL-2.0 | NOASSERTION | Apache-2.0 | Apache-2.0 |
Capability Radar vs Spice
[Radar chart comparing LLMOne and Spice across the metrics below]
- Maintenance Activity: 100. Last code push 4 days ago.
- Community Engagement: 65. Fork-to-star ratio: 3.4%. A lower fork ratio may indicate passive usage.
- Issue Burden: 70. Issue data not yet available.
- Growth Momentum: 30. No measurable growth in the current period (a first-day cold start is expected).
- License Clarity: 60. Licensed under MulanPSL-2.0; review the license terms for your use case.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
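The fork-to-star ratio cited above (3.4%) follows directly from the table's counts (3 forks, 87 stars). A minimal sketch of that computation, in the repository's own language; the `forkToStarRatio` helper is a hypothetical name, not part of LLMOne or the dashboard:

```typescript
// Hypothetical helper: derive a fork-to-star ratio (as a percentage)
// from raw repository counts, as the Community Engagement score does.
function forkToStarRatio(stars: number, forks: number): number {
  if (stars === 0) return 0; // avoid division by zero for brand-new repos
  return (forks / stars) * 100;
}

// LLMOne's figures from the comparison table: 87 stars, 3 forks.
const ratio = forkToStarRatio(87, 3);
console.log(ratio.toFixed(1) + "%"); // prints "3.4%"
```

The same function applied to the peers in the table gives Spice and tiny-vllm roughly 2.3% and SciDER roughly 6.9%, which is how such a ratio supports cross-repository comparison.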