
raketenkater/llm-server

Smart launcher for llama.cpp / ik_llama.cpp — auto-detects GPUs, optimizes MoE placement, crash recovery

Stars: 144 · Forks: 6 · Weekly growth: +2
GitHub Breakout: +171.7%
Topics: apple-silicon, cli, cuda, gguf, ik-llama-cpp, inference, llama-cpp, llm, local-ai, metal, moe, multi-gpu

[Chart: Star & Fork Trend, 88 data points, plotting stars and forks over time]

Multi-Source Signals

Growth Velocity

raketenkater/llm-server gained +2 stars this period. 7-day velocity: 171.7%.


Metric          llm-server   claude-code-plus   sagemaker-xgboost-container   amux
Stars           144          144                144                           144
Forks           6            20                 96                            14
Weekly Growth   +2           +0                 +0                            +1
Language        Shell        JavaScript         Python                        Python
Sources         1            1                  1                             1
License         MIT          MIT                Apache-2.0                    NOASSERTION

Capability Radar vs claude-code-plus

[Radar chart comparing llm-server and claude-code-plus across the metrics below]
Maintenance Activity 100

Last code push 0 days ago.

Community Engagement 68

Fork-to-star ratio: 4.2%. A lower fork ratio may indicate passive usage.

Issue Burden 70

Issue data not yet available.

Growth Momentum 100

+2 stars this period — 1.39% growth rate.

License Clarity 95

Licensed under MIT. Permissive — safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
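The percentage figures above (fork-to-star ratio and growth rate) are simple derivations from the raw counts on this page. A minimal sketch of the arithmetic, with illustrative function names (not part of the repository or the dashboard):

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars; a low value suggests passive usage."""
    return forks / stars * 100

def growth_rate(new_stars: int, total_stars: int) -> float:
    """Stars gained this period as a percentage of the current total."""
    return new_stars / total_stars * 100

# Using the figures shown for raketenkater/llm-server:
print(round(fork_to_star_ratio(6, 144), 1))  # 4.2
print(round(growth_rate(2, 144), 2))         # 1.39
```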
