openai/evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
18.2k 2.9k +8/wk
[Chart: Star & Fork Trend (25 data points), plotting stars and forks over time]
Multi-Source Signals
GitHub
stars 18.2k
forks 2.9k
PyPI
package evals
version 3.0.1.post1
weekly downloads 0
HuggingFace
models 38
downloads 88.7M
spaces 4
Growth Velocity
openai/evals gained +8 stars this period, with cross-source activity across 3 platforms (GitHub, Hugging Face, PyPI). 7-day velocity: 0.1%.
No comparable projects found in the same topic categories.
Maintenance Activity 100
Last code push 2 days ago.
Community Engagement 80
Fork-to-star ratio: 16.0%, indicating an active community that forks and contributes.
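As a minimal sketch of the arithmetic behind this metric (the function name is illustrative, not from any real API), the ratio is simply forks divided by stars. Note that the rounded counts shown on the card (2.9k / 18.2k) give roughly 15.9%; the displayed 16.0% presumably comes from the exact counts.

```python
def fork_to_star_ratio(forks: int, stars: int) -> float:
    """Forks as a percentage of stars."""
    return 100 * forks / stars

# Using the rounded counts from the card above:
ratio = fork_to_star_ratio(2_900, 18_200)
print(f"{ratio:.1f}%")  # ~15.9% with rounded inputs
```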
Issue Burden 70
Issue data not yet available.
Growth Momentum 43
+8 stars this period (0.04% growth rate).
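A quick sanity check of the growth figure, under the assumption that the rate is stars gained in the period divided by the current total (the helper name is hypothetical):

```python
def period_growth_rate(new_stars: int, total_stars: int) -> float:
    """Stars gained in the period as a percentage of the current total."""
    return 100 * new_stars / total_stars

# openai/evals: +8 stars on a base of ~18.2k
rate = period_growth_rate(8, 18_200)
print(f"{rate:.2f}%")  # ~0.04%, matching the stated growth rate
```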
License Clarity 30
No clear license detected; proceed with caution.
Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.