Ant Group has open-sourced dInfer, an inference framework for diffusion-based language models that it says runs up to ten times faster than Nvidia's Fast-dLLM and also outpaces the vLLM inference engine used by Alibaba. The framework aims to boost
AI efficiency and cut computational costs as China accelerates software innovation to offset limits in advanced chip access.