
hiyouga/ChatGLM-Efficient-Tuning

Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT

Stars: 3.7k · Forks: 466 · Growth: +0/wk
Source: GitHub
Topics: alpaca, chatglm, chatglm2, chatgpt, fine-tuning, huggingface, language-model, lora, peft, pytorch, qlora, rlhf

[Chart: Star & Fork Trend (26 data points) — series: Stars, Forks]

Multi-Source Signals

Growth Velocity

hiyouga/ChatGLM-Efficient-Tuning has gained +0 stars this period. Velocity data will be available after more historical data is collected.

Deep analysis is being generated for this repository. Signal-backed technical analysis will be available soon.

Metric          ChatGLM-Efficient-Tuning   ReAct              PhiCookBook        olivia
Stars           3.7k                       3.7k               3.7k               3.7k
Forks           466                        361                486                342
Weekly Growth   +0                         +4                 +0                 +0
Language        Python                     Jupyter Notebook   Jupyter Notebook   Go
Sources         1                          1                  1                  1
License         Apache-2.0                 MIT                MIT                MIT

Capability Radar vs ReAct

[Radar chart: ChatGLM-Efficient-Tuning vs ReAct]
Maintenance Activity: 0

Last code push 909 days ago.
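The "days ago" figure can be derived from a GitHub-style `pushed_at` timestamp. A minimal sketch, assuming an illustrative push date and reference date chosen to match the 909-day figure above:

```python
from datetime import datetime, timezone

# Assumption: an ISO-8601 pushed_at value as returned by the GitHub API.
pushed_at = "2023-06-15T00:00:00Z"                 # illustrative push date
pushed = datetime.fromisoformat(pushed_at.replace("Z", "+00:00"))

# Illustrative "today" for the report; a live dashboard would use datetime.now(timezone.utc).
now = datetime(2025, 12, 10, tzinfo=timezone.utc)

days = (now - pushed).days
print(f"Last code push {days} days ago.")          # 909 with these example dates
```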

Community Engagement: 63

Fork-to-star ratio: 12.5%, indicating an active community that forks and contributes.
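The ratio is simple arithmetic on the two counts shown above. A minimal sketch; since "3.7k" is rounded, the exact star count is an assumption chosen so that the result rounds to 12.5%:

```python
stars = 3_728   # assumption: an exact star count consistent with the rounded "3.7k"
forks = 466     # fork count from this page

ratio = forks / stars * 100
print(f"Fork-to-star ratio: {ratio:.1f}%")   # 12.5% with these counts
```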

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under Apache-2.0. Permissive and safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.