
seal-rg/recurrent-pretraining

Pretraining and inference code for a large-scale depth-recurrent language model

Stars: 870 · Forks: 78 · Growth: +0/wk
Source: GitHub
Topics: llms, pretraining, reasoning, recurrent-depth

Star & Fork Trend

[Chart: 22 data points; series: Stars, Forks]

Multi-Source Signals

Growth Velocity

seal-rg/recurrent-pretraining has gained +0 stars this period. Velocity data will be available once more historical data has been collected.
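The weekly figure is read as stars gained per seven-day window. A minimal sketch of that arithmetic, using hypothetical sample data (the dashboard's actual sampling scheme is not published):

```python
from datetime import date

# Hypothetical (date, cumulative stars) samples; not this dashboard's real data.
star_history = [
    (date(2025, 5, 1), 862),
    (date(2025, 5, 8), 870),
    (date(2025, 5, 15), 870),
]

def weekly_velocity(history):
    """Stars gained over the most recent interval, scaled to a 7-day rate."""
    (d0, s0), (d1, s1) = history[-2], history[-1]
    days = (d1 - d0).days or 1  # guard against same-day samples
    return round((s1 - s0) * 7 / days)

print(f"+{weekly_velocity(star_history)}/wk")  # prints: +0/wk
```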

Deep analysis is being generated for this repository; signal-backed technical analysis will be available soon.

Metric          recurrent-pretraining   asreview     pypostal   LLM-Finetuning-Toolkit
Stars           870                     870          870        870
Forks           78                      159          92         105
Weekly Growth   +0                      +0           +0         +0
Language        Python                  Python       C          Python
Sources         1                       1            1          1
License         Apache-2.0              Apache-2.0   MIT        Apache-2.0

Capability Radar vs asreview

[Radar chart comparing recurrent-pretraining and asreview across the five dimensions below]
Maintenance Activity: 46

Last code push 101 days ago.
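The "days since last push" figure can be reproduced from GitHub's public REST API, whose repository object carries a `pushed_at` timestamp. A minimal sketch (no error handling; unauthenticated requests are rate-limited):

```python
import json
import urllib.request
from datetime import datetime, timezone

# The GitHub REST API's repository object includes a `pushed_at` timestamp.
url = "https://api.github.com/repos/seal-rg/recurrent-pretraining"
with urllib.request.urlopen(url) as resp:
    repo = json.load(resp)

pushed_at = datetime.fromisoformat(repo["pushed_at"].replace("Z", "+00:00"))
days_ago = (datetime.now(timezone.utc) - pushed_at).days
print(f"Last code push {days_ago} days ago.")
```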

Community Engagement: 45

Fork-to-star ratio: 9.0%. A lower fork ratio may indicate passive usage (users star the project without forking it to contribute).
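The 9.0% figure is simply forks divided by stars (78 / 870). As a quick check:

```python
stars, forks = 870, 78

ratio = forks / stars
print(f"Fork-to-star ratio: {ratio:.1%}")  # prints: Fork-to-star ratio: 9.0%
```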

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (a cold start is expected on the first day of tracking).

License Clarity: 95

Licensed under Apache-2.0, a permissive license suitable for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
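The aggregation behind these five scores is not published. The sketch below shows one plausible approach, a weighted mean across dimensions, with every weight invented purely for illustration:

```python
# Illustrative only: the dashboard does not publish its scoring formula.
# Dimension weights below are invented for this sketch.

def overall_health(scores: dict[str, int], weights: dict[str, int]) -> int:
    """Weighted mean of per-dimension scores; higher means healthier."""
    total = sum(weights[k] for k in scores)
    return round(sum(scores[k] * weights[k] for k in scores) / total)

scores = {
    "maintenance": 46, "community": 45, "issues": 70,
    "growth": 30, "license": 95,          # the five scores shown above
}
weights = {
    "maintenance": 3, "community": 2, "issues": 2,
    "growth": 2, "license": 1,            # hypothetical weighting
}
print(overall_health(scores, weights))    # prints: 52
```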