China’s Moonshot AI closed a $2 billion funding round at a $20 billion valuation, led by Long-Z Investments, the venture capital arm of Meituan. The round also included Tsinghua Capital, China Mobile, and CPE Yuanfeng, bringing the company’s total raised in the past six months to $3.9 billion. The valuation jump from $4.3 billion at the end of 2025 to $20 billion in May 2026 represents one of the fastest valuation accelerations in the AI sector this cycle, and it positions Moonshot as the highest-valued of the open-weight Chinese model labs.
The financial performance behind the valuation is what makes the round notable. Moonshot’s annualized recurring revenue topped $200 million in April, driven by paid subscriptions to the Kimi chatbot and API usage from developers building on Kimi’s models. Those figures are significant when set against OpenAI’s subscription revenue at a comparable point in its own history, and they offer hard evidence that Chinese open-weight models are not just academically competitive but commercially viable at scale.
The Kimi K2.6 model is the technical foundation. According to OpenRouter usage data, Kimi K2.6 is currently the second-most-used large language model on the platform globally, behind only Claude. That ranking is a meaningful endorsement because OpenRouter usage reflects actual developer choice across hundreds of model options, not a leaderboard position driven by benchmark wins. Developers route their workloads to Kimi because the price-to-performance ratio is favorable for the use cases they care about, particularly high-volume inference tasks where cost per token matters more than the marginal capability advantage of frontier proprietary models.
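Part of why routing decisions like this happen so fluidly is that OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so moving a workload between models is a one-line change to the model slug. A minimal sketch of what that looks like; the specific slugs below are assumptions for illustration, so check OpenRouter's model list for current identifiers:

```python
import json

def build_request(model_slug: str, prompt: str) -> str:
    """Build the JSON body for an OpenRouter-style chat-completions call.

    OpenRouter's API follows the OpenAI chat-completions shape, so the
    only field that changes when switching providers is the model slug.
    """
    return json.dumps({
        "model": model_slug,
        "messages": [{"role": "user", "content": prompt}],
    })

# Routing the same workload to a different model is just a slug swap.
# Both slugs here are illustrative assumptions, not verified identifiers.
body_kimi = build_request("moonshotai/kimi-k2", "Classify this support ticket.")
body_claude = build_request("anthropic/claude-sonnet-4", "Classify this support ticket.")
```

When switching costs are this low, developers can re-route traffic as soon as a cheaper model clears their quality bar, which is exactly the dynamic the usage rankings capture.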
The competitive context is shifting rapidly. DeepSeek, the other Chinese open-weight lab that gained global attention in late 2024 and 2025, is reportedly closing its first outside investment round at a $45 billion valuation backed by Chinese state capital. The two labs together represent more than $65 billion in valuation for Chinese AI companies that operate primarily on open-weight models, a strategic posture that did not exist as a viable commercial path two years ago. The narrative that frontier AI requires the closed-weight, capital-intensive approach of OpenAI and Anthropic is being challenged by these companies’ actual revenue numbers and OpenRouter usage data.
The implications for global enterprise buyers are practical. For companies with workloads that require high-volume inference, particularly in customer service, content generation, and code completion, Chinese open-weight models like Kimi K2.6 and DeepSeek’s models offer cost-per-token economics that closed-weight competitors cannot match without significant subsidization. The performance trade-off, while real, is small enough for most production workloads that the cost savings dominate. Enterprise procurement teams that have not yet tested Chinese open-weight models for inference-cost-sensitive workloads are leaving meaningful budget on the table.
For LATAM markets specifically, the calculus has additional weight. Latin American enterprises operate with smaller AI budgets than their US counterparts, so the cost-per-token math matters proportionally more. A LATAM company spending $50,000 a month on inference can sustain that spend far more easily on Kimi K2.6 than on Claude or GPT-5. Even when the capability gap is meaningful, the cost gap is often large enough to favor the cheaper model, simply because the alternative is either to reduce AI usage or to not launch the product at all.
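To make the cost-per-token math concrete, here is a minimal sketch. The per-million-token prices are hypothetical placeholders chosen only to show the shape of the calculation, not any vendor's published rates; substitute real list prices before drawing conclusions:

```python
def monthly_cost(tokens_per_month: int, price_per_million_usd: float) -> float:
    """USD cost for a month of inference at a flat per-token rate."""
    return tokens_per_month / 1_000_000 * price_per_million_usd

# Assume a workload of 10 billion tokens per month (illustrative).
tokens = 10_000_000_000

# Hypothetical blended prices in USD per million tokens.
price_open_weight = 1.00   # placeholder for a Kimi-class open-weight model
price_frontier = 10.00     # placeholder for a frontier proprietary model

open_weight_bill = monthly_cost(tokens, price_open_weight)  # $10,000/mo
frontier_bill = monthly_cost(tokens, price_frontier)        # $100,000/mo

print(f"Open-weight: ${open_weight_bill:,.0f}/mo")
print(f"Frontier:    ${frontier_bill:,.0f}/mo")
print(f"Savings:     ${frontier_bill - open_weight_bill:,.0f}/mo")
```

Under these placeholder prices a 10x per-token gap turns a $100,000 monthly bill into $10,000, which is the difference between sustaining a product and shelving it for a budget-constrained buyer.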
The geopolitical context is worth flagging but not overstating. US-China technology decoupling continues to intensify, with successive US administrations expanding export controls on AI training compute. Whether enterprise buyers in the US can or should deploy Chinese open-weight models for production workloads is becoming a regulatory question as much as a technical one. The European Union has so far taken a less restrictive position, and LATAM markets remain largely open. For enterprise buyers in jurisdictions where Chinese open-weight models are commercially viable, the cost advantage is substantial enough to be strategically meaningful.
The broader pattern is that the AI vendor map is no longer dominated by a small number of US labs. Open-weight Chinese labs have established commercial revenue, top-of-platform ranking on developer platforms, and credible long-term funding. The next 12 months will determine how aggressively enterprise buyers in OECD markets adopt these models for production, and the answer will significantly shape AI procurement budgets globally.