huggingface/llm_training_handbook
An open collection of methodologies to help with successful training of large language models.
Stars: 557 · Forks: 44 · Weekly growth: +0
Source: GitHub
Topics: cuda, large-language-models, llm, nccl, nlp, performance, python, pytorch, scalability, troubleshooting
[Chart: Star & Fork Trend (31 data points) plotting Stars and Forks over time]
Multi-Source Signals
Growth Velocity
huggingface/llm_training_handbook has gained +0 stars this period. Velocity data will be available after more historical data is collected.
Deep analysis is being generated for this repository; signal-backed technical analysis will be available soon.
| Metric | llm_training_handbook | Flame-Code-VLM | AgentLab | Lemur |
|---|---|---|---|---|
| Stars | 557 | 557 | 557 | 557 |
| Forks | 44 | 48 | 112 | 35 |
| Weekly Growth | +0 | -1 | +0 | +0 |
| Language | Python | Python | Python | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | CC-BY-SA-4.0 | Apache-2.0 | NOASSERTION | Apache-2.0 |
Capability Radar vs Flame-Code-VLM
[Radar chart comparing llm_training_handbook and Flame-Code-VLM]
Maintenance Activity (0): Last code push was 784 days ago.
Community Engagement (39): Fork-to-star ratio is 7.9%. A lower fork ratio may indicate passive usage.
Issue Burden (70): Issue data not yet available.
Growth Momentum (30): No measurable growth in the current period (first-day cold start expected).
License Clarity (60): Licensed under CC-BY-SA-4.0. Review the license terms for your use case.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
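The fork-to-star ratio cited above can be reproduced from the repository's raw counts. A minimal sketch in Python (the function name and rounding to one decimal place are illustrative assumptions, not part of the dashboard's published methodology):

```python
def fork_to_star_ratio(stars: int, forks: int) -> float:
    """Fork-to-star ratio as a percentage, rounded to one decimal place.

    Guards against division by zero for brand-new repositories.
    """
    if stars == 0:
        return 0.0
    return round(forks / stars * 100, 1)


# llm_training_handbook: 557 stars, 44 forks
print(fork_to_star_ratio(557, 44))  # → 7.9
```

With 44 forks against 557 stars this yields 44 / 557 ≈ 0.079, i.e. the 7.9% shown in the Community Engagement score.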