
triton-inference-server/dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
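To make the description concrete, here is a minimal sketch of the kind of DALI pipeline this backend serves. It is an illustrative example, not code from the repository: the data path, batch size, and normalization constants are assumptions, and running it requires the `nvidia-dali` package and an NVIDIA GPU.

```python
# A minimal DALI pre-processing pipeline sketch (paths and sizes are illustrative).
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types

@pipeline_def(batch_size=8, num_threads=2, device_id=0)
def preprocess_pipeline():
    # Read encoded JPEGs from disk (file_root is a placeholder path).
    jpegs, labels = fn.readers.file(file_root="/data/images")
    # "mixed" = parse on CPU, decode on GPU.
    images = fn.decoders.image(jpegs, device="mixed")
    # Resize and normalize into the CHW float layout a typical model expects.
    images = fn.resize(images, resize_x=224, resize_y=224)
    images = fn.crop_mirror_normalize(
        images,
        dtype=types.FLOAT,
        output_layout="CHW",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

# For the Triton DALI backend, the pipeline is serialized to a file that is
# placed in the Triton model repository (conventionally named model.dali).
pipe = preprocess_pipeline()
pipe.serialize(filename="model.dali")
```

Triton then loads the serialized pipeline as a model, so pre-processing runs on the GPU inside the inference server rather than in the client.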

Stars: 141 · Forks: 35 · +0 stars/week
Topics: dali, data-preprocessing, deep-learning, fast-data-pipeline, gpu, image-processing, nvidia-dali, python

[Star & Fork Trend chart: 14 data points, with series for Stars and Forks]

Multi-Source Signals

Growth Velocity

triton-inference-server/dali_backend has +0 stars this period. Velocity data will be available after more historical data is collected.

Deep analysis is being generated for this repository; signal-backed technical analysis will be available soon.

Metric        | dali_backend | Vibe-Workflow | awesome-awesome-artificial-intelligence | robovision
Stars         | 141          | 141           | 141                                     | 140
Forks         | 35           | 33            | 11                                      | 24
Weekly Growth | +0           | +0            | +0                                      | +0
Language      | C++          | JavaScript    | N/A                                     | Python
Sources       | 1            | 1             | 1                                       | 1
License       | MIT          | MIT           | MIT                                     | GPL-3.0

[Capability Radar chart: dali_backend vs Vibe-Workflow]
Maintenance Activity: 98

Last code push 10 days ago.

Community Engagement: 100

Fork-to-star ratio: 24.8%. Active community forking and contributing.
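The 24.8% figure follows directly from the counts shown above (35 forks, 141 stars). A short sketch of the assumed formula, with a hypothetical helper name:

```python
# Hypothetical helper illustrating the assumed fork-to-star ratio formula:
# forks as a percentage of stars, rounded to one decimal place.
def fork_to_star_ratio(forks: int, stars: int) -> float:
    return round(forks / stars * 100, 1)

print(fork_to_star_ratio(35, 141))  # → 24.8
```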

Issue Burden: 70

Issue data not yet available.

Growth Momentum: 30

No measurable growth in the current period (first-day cold start expected).

License Clarity: 95

Licensed under MIT. Permissive — safe for commercial use.

Risk scores are computed from real-time repository data. Higher scores indicate healthier metrics.
