
Alibaba has just unveiled its latest large language model (LLM), Qwen2.5-Max, making a splash in the rapidly evolving AI landscape. The new model is a significant upgrade to Alibaba’s Qwen series, and the company claims it outperforms not only DeepSeek’s recently released V3 model but also OpenAI’s GPT-4o, Meta’s Llama-3.1-405B, and others across a range of key benchmarks. The release lands amid a flurry of activity in the LLM space, with competitors like DeepSeek pushing the boundaries of what’s possible, and underscores the fierce competition among AI developers, particularly within China.
Qwen2.5-Max posts impressive results across reasoning, coding, and general AI benchmarks. The model is not open-source, but Alibaba is making it accessible through an API and a chat interface (Qwen Chat), allowing developers and researchers to experiment with its capabilities and probe its practical applications. The release demonstrates both the rapid progress being made in LLMs and the intensifying race among tech giants to build the most powerful and versatile models.
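For anyone who would rather hit the model programmatically than through Qwen Chat, Alibaba exposes its Qwen models via an OpenAI-compatible endpoint on Alibaba Cloud Model Studio (DashScope). Here is a minimal sketch of what a chat call might look like; the base URL and the `qwen-max-2025-01-25` model identifier are assumptions based on Alibaba’s documentation at the time of writing and may differ for your account or region.

```python
# Minimal sketch: calling Qwen2.5-Max through Alibaba Cloud's OpenAI-compatible
# endpoint (DashScope / Model Studio). The base URL and model name below are
# assumptions and may need adjusting for your region and account.
import os
from openai import OpenAI

client = OpenAI(
    # API key issued by Alibaba Cloud Model Studio, read from the environment.
    api_key=os.environ["DASHSCOPE_API_KEY"],
    # International OpenAI-compatible endpoint; mainland accounts use a different URL.
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-max-2025-01-25",  # assumed snapshot name for Qwen2.5-Max
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key ideas behind mixture-of-experts models."},
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint speaks the OpenAI wire format, existing tooling built around the OpenAI Python SDK should work largely unchanged, which lowers the barrier to experimenting with the model.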
My own experience using Qwen2.5-Max for video generation has been mixed. The generated videos are generally decent, but they don’t reach the quality of output from models like Kling or Luma’s Dream Machine. I accessed Qwen2.5-Max through the Qwen Chat interface (https://chat.qwenlm.ai) and ran into a significant number of issues. Roughly 75% of my generation attempts ended in the error message “Uh-oh! There was an issue connecting to Qwen2.5-Max. Cannot read properties of null (reading ‘messages’)”. On my first day using Qwen Chat, I hit errors saying I had made “too many requests” within one minute; by the second day, the message had escalated to “too many requests” within one day. This level of unreliability makes consistent use difficult, though it is not too surprising given that the service is free.
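Rate-limit and connection hiccups like these are easier to live with on the API side, where you can wrap calls in a simple retry with exponential backoff. The sketch below is a generic pattern rather than Qwen-specific guidance, and the retry counts and delays are arbitrary placeholders.

```python
# Sketch: retrying a chat completion call with exponential backoff when the
# service returns rate-limit or transient connection errors. Parameter values
# are arbitrary; tune them to your own quota and patience.
import time
from openai import APIConnectionError, RateLimitError


def chat_with_retries(client, model, messages, max_attempts=5, base_delay=2.0):
    """Call the chat completions endpoint, retrying on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except (RateLimitError, APIConnectionError) as exc:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = base_delay * (2 ** (attempt - 1))  # 2s, 4s, 8s, ...
            print(f"Attempt {attempt} failed ({exc.__class__.__name__}); retrying in {delay:.0f}s")
            time.sleep(delay)
```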
When video generation did succeed, wait times were another factor: a single video took anywhere from 5 to 20 minutes to generate. The output resolution is 1280×720, which is decent but hardly cutting-edge by today’s standards. Despite these hurdles, the videos that did come through show real potential; three examples appear in the video below. It’s clear that Qwen2.5-Max, while promising, still has some kinks to work out before it can be considered a reliable tool for video generation.