CHATS-lab/verbalized-sampling
Verbalized Sampling is a training-free prompting strategy that mitigates mode collapse in LLMs by asking the model for several responses together with their probabilities. It achieves a 2-3x diversity improvement while maintaining quality, and ships as a model-agnostic framework with a CLI/API for creative writing, synthetic data generation, and dialogue simulation.
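The core idea can be sketched in a few lines of Python. This is an illustrative sketch under our own naming assumptions, not the package's actual API: the prompt asks the model to verbalize a distribution over candidate responses, and the client samples from that distribution instead of taking a single greedy answer.

```python
import json
import random

def build_vs_prompt(task: str, k: int = 5) -> str:
    """Wrap a task in a verbalized-sampling style instruction (illustrative wording)."""
    return (
        f"{task}\n\n"
        f"Generate {k} different responses. Return a JSON list of objects "
        f'with keys "text" and "probability", with probabilities summing to 1.'
    )

def sample_from_verbalized(reply_json: str, rng: random.Random) -> str:
    """Parse the model's verbalized distribution and draw one response from it."""
    candidates = json.loads(reply_json)
    texts = [c["text"] for c in candidates]
    weights = [c["probability"] for c in candidates]
    return rng.choices(texts, weights=weights, k=1)[0]

# Hand-written stand-in for a model reply (no API call is made here):
fake_reply = json.dumps([
    {"text": "Once upon a midnight storm...", "probability": 0.5},
    {"text": "The lighthouse keeper counted waves.", "probability": 0.3},
    {"text": "Rain spoke in Morse on the roof.", "probability": 0.2},
])

print(build_vs_prompt("Write a one-line story opening.", k=3))
print(sample_from_verbalized(fake_reply, random.Random(0)))
```

Sampling client-side from the verbalized probabilities is what restores diversity: repeated calls draw different candidates in proportion to the model's stated weights, rather than collapsing onto the single highest-probability response.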
Star & Fork Trend (32 data points)
Multi-Source Signals
Growth Velocity
CHATS-lab/verbalized-sampling has +0 stars this period. 7-day velocity: 0.1%.
| Metric | verbalized-sampling | PromptKG | COMET | llm-server-docs |
|---|---|---|---|---|
| Stars | 734 | 734 | 733 | 733 |
| Forks | 83 | 74 | 106 | 56 |
| Weekly Growth | +0 | +0 | +0 | +1 |
| Language | Python | Python | Python | N/A |
| Sources | 1 | 1 | 1 | 1 |
| License | NOASSERTION | MIT | Apache-2.0 | MIT |
Capability Radar vs PromptKG
Last code push 95 days ago.
Fork-to-star ratio: 11.3% (83 forks / 734 stars), indicating an active community that forks and contributes.
Issue data not yet available.
No measurable growth in the current period (first-day cold start expected).
No clear license detected (GitHub reports NOASSERTION); proceed with caution before reusing the code.
Scores are computed from real-time repository data; higher scores indicate healthier metrics (lower risk).