US companies bet on larger AI while China demonstrates efficiency with smaller models.
    Artificial Intelligence (AI)


    Gianro Compagno
    2026-04-28
    5 min read
Alibaba has surprised the artificial intelligence world with the launch of Qwen3.6-27B, an open language model that redefines expectations for compact models. Until now, the company was best known for Qwen3.5-397B-A17B, a colossal model with 397 billion parameters and a weight footprint of 807 GB, which put it out of reach for most users. The new Qwen3.6-27B, in its quantized version, drastically reduces that size to less than 17 GB without sacrificing performance.

Unlike the current trend toward Mixture-of-Experts (MoE) architectures, where only a fraction of the parameters is activated on each inference, Qwen3.6-27B is a dense model: it uses all 27 billion parameters in every operation. This simplifies deployment, eliminates the need to configure expert routing, and allows for more efficient and predictable quantization.

The results support this approach. On SWE-bench Verified, a benchmark for programming tasks, Qwen3.6-27B achieves 77.2%, surpassing the 397B model's 76.2%. On Terminal-Bench 2.0, focused on console tasks, the new model scores 59.3% against 2.5% for its predecessor, even matching Claude Opus 4.5 from Anthropic, one of the most advanced commercial models. Although these figures come from Alibaba and independent verification is still pending, initial impressions from the community are very positive.

The Alibaba team itself has highlighted Qwen3.6-27B's performance, placing it above its previous flagship. This internal recognition underscores a paradigm shift: smaller models can compete with, and even surpass, giants on specific tasks.

Another key advantage is accessibility. With just 24 GB of VRAM, cards like the RTX 3090 can run Qwen3.6-27B locally with great efficiency, something unthinkable with larger models. Although dense models do not perform as well on unified-memory systems like MacBooks, the ability to run advanced AI on relatively affordable hardware is a notable advance.
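The size figures above follow from simple arithmetic on parameter count and bits per weight. As a rough sketch (the bits-per-parameter values below are common rules of thumb for 16-bit weights and 4-5-bit quantization, not Alibaba's published settings):

```python
# Back-of-the-envelope weight-size estimates for the models discussed above.
# Parameter counts come from the article; the bits-per-parameter figures are
# illustrative assumptions, not official numbers.

def model_size_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight size in GB: parameters x bits per weight / 8."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Qwen3.5-397B stored at 16 bits per weight (bf16).
full_16bit = model_size_gb(397, 16)    # ~794 GB, close to the 807 GB cited

# Qwen3.6-27B quantized to roughly 4.5 bits per weight.
dense_quant = model_size_gb(27, 4.5)   # ~15 GB, under the 17 GB cited

print(f"397B @ 16-bit : {full_16bit:.0f} GB")
print(f"27B  @ 4.5-bit: {dense_quant:.1f} GB")
```

At these assumed precisions, 397 billion parameters work out to roughly 794 GB, in line with the 807 GB cited, while the quantized 27B dense model lands around 15 GB, which is why it fits comfortably within the 24 GB of VRAM on a card like the RTX 3090, with headroom left for activations and context.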
Alibaba had already shown its commitment to compact models with recent releases of SLMs ranging from 0.8B to 9B parameters. The small-model market is becoming dynamic, with alternatives such as Google's Gemma 4, Microsoft's Phi-4, and Mistral's Devstral 2 demonstrating that competition is also coming from the West.

Still, even with this impressive performance, experts such as Demis Hassabis estimate that Chinese open-source models remain 6 to 12 months behind the leaders from Anthropic, OpenAI, and Google. Additionally, running these models locally requires a significant hardware investment. For those seeking maximum speed and efficiency, commercial cloud services remain the preferred option, although local AI is gaining ground on privacy and control.

Source: xataka.com
    Gianro Compagno

    CTO

    Gianro brings extensive experience in managing technology projects in multinational environments. His technical background, combined with an MBA and a master's degree in Investigative Psychology, creates a unique approach to technology solutions. As an AI and Automation Expert, he applies psychological insights to design more intuitive, human-centered systems. His detail-oriented approach and positive mindset ensure that our solutions are not only innovative and reliable, but also aligned with how people naturally think and work.