
onejune2018/Awesome-LLM-Eval

Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs, aiming to explore the technical boundaries of generative AI.

630 stars · 54 forks · +1/wk (GitHub)
Topics: awsome-list awsome-lists benchmark bert chatglm chatgpt dataset evaluation gpt3 large-language-model leaderboard llama

[Chart: Star & Fork Trend, 50 data points; Stars and Forks series]

Multi-Source Signals

Growth Velocity

onejune2018/Awesome-LLM-Eval gained +1 star this period. 7-day velocity: 0.2%.


| Metric        | Awesome-LLM-Eval | Deepdive-llama3-from-scratch | agents-from-scratch | ai_wiki          |
|---------------|------------------|------------------------------|---------------------|------------------|
| Stars         | 630              | 630                          | 630                 | 632              |
| Forks         | 54               | 52                           | 156                 | 115              |
| Weekly Growth | +1               | +0                           | +1                  | +0               |
| Language      | N/A              | Jupyter Notebook             | Python              | Jupyter Notebook |
| Sources       | 1                | 1                            | 1                   | 1                |
| License       | MIT              | MIT                          | MIT                 | N/A              |

[Chart: Capability Radar, Awesome-LLM-Eval vs Deepdive-llama3-from-scratch]
- Maintenance Activity (26): Last code push was 136 days ago.
- Community Engagement (43): Fork-to-star ratio is 8.6%; a lower fork ratio may indicate passive usage.
- Issue Burden (70): Issue data not yet available.
- Growth Momentum (50): +1 star this period, a 0.16% growth rate.
- License Clarity (95): Licensed under MIT. Permissive; safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
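The percentage figures above (fork-to-star ratio and growth rate) follow from simple arithmetic on the table's star and fork counts. A minimal sketch, assuming the dashboard rounds as shown (the function names here are illustrative, not part of any actual tool):

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars, rounded to one decimal place."""
    return round(forks / stars * 100, 1)

def growth_rate(new_stars: int, stars: int) -> float:
    """Stars gained this period as a percentage of total stars,
    rounded to two decimal places."""
    return round(new_stars / stars * 100, 2)

# Using Awesome-LLM-Eval's figures: 630 stars, 54 forks, +1 this period.
print(fork_to_star_ratio(54, 630))  # 8.6
print(growth_rate(1, 630))          # 0.16
```

With 630 stars, 54 forks, and +1 star this period, these reproduce the 8.6% and 0.16% figures shown above.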