
ictnlp/LLaVA-Mini

LLaVA-Mini is a unified large multimodal model (LMM) that efficiently supports understanding of images, high-resolution images, and videos.

Stars: 569 · Forks: 32 · Growth: +1/wk
GitHub
efficient gpt4o gpt4v large-language-models large-multimodal-models llama llava multimodal multimodal-large-language-models video vision vision-language-model

(Star & Fork Trend chart: 18 data points, with Stars and Forks series.)

Multi-Source Signals

Growth Velocity

ictnlp/LLaVA-Mini gained +1 star this period. 7-day velocity: 0.3%.


| Metric | LLaVA-Mini | Awesome-AGI | minigpt4.cpp | SepLLM |
| --- | --- | --- | --- | --- |
| Stars | 569 | 569 | 569 | 569 |
| Forks | 32 | 56 | 27 | 46 |
| Weekly Growth | +1 | +0 | +0 | -1 |
| Language | Python | N/A | C++ | Python |
| Sources | 1 | 1 | 1 | 1 |
| License | Apache-2.0 | MIT | MIT | N/A |

Capability Radar vs Awesome-AGI

(Radar chart comparing LLaVA-Mini and Awesome-AGI across the dimensions below.)

Maintenance Activity: 0

Last code push 284 days ago.

Community Engagement: 28

Fork-to-star ratio: 5.6%. Lower fork ratio may indicate passive usage.

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 51

+1 star this period (0.18% growth rate).
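The fork-to-star ratio and growth-rate figures above are simple percentages over the repository's star count. As a minimal sketch (assuming the dashboard rounds to one and two decimal places respectively; the variable names are illustrative, not from the site):

```python
# Hypothetical reconstruction of the two percentage metrics shown above,
# using the repository figures from this page.
stars = 569
forks = 32
stars_gained = 1  # stars gained this period

# Fork-to-star ratio, as a percentage of total stars.
fork_to_star_ratio = forks / stars * 100
print(f"Fork-to-star ratio: {fork_to_star_ratio:.1f}%")  # 5.6%

# Growth rate: stars gained this period relative to total stars.
growth_rate = stars_gained / stars * 100
print(f"Growth rate: {growth_rate:.2f}%")  # 0.18%
```

Both results match the displayed values (5.6% and 0.18%), which suggests this is how the site derives them, though the exact rounding rules are an assumption.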

License Clarity: 95

Licensed under Apache-2.0. Permissive; safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.