DeepSeek-V4 has just burst onto the global AI scene. But the announcement goes beyond the battle of models: this new generation of DeepSeek is tailored to Huawei Ascend chips. Beijing does not yet claim to have broken free from Nvidia entirely. It shows something else: China is assembling its own AI stack, from semiconductors to open models, with the ambition of establishing a competing standard. For the United States, the risk is no longer just technological. It is becoming economic. For Europe, it could be existential.
On April 24, 2026, DeepSeek made its move. The Chinese startup, already known for shaking Wall Street with its R1 model, released a preview version of DeepSeek-V4. On paper, the announcement looks like another step in the race for big models: a V4-Pro version for complex tasks, a faster and cheaper V4-Flash version, a claimed context window of a million tokens, and an architecture designed to cut compute and memory costs. Reuters notes that V4-Pro specifically targets agentic coding and competitive programming, but still trails the best closed models from Google or OpenAI.
However, the key was not just in the technical specifications. It was in a second, almost simultaneous announcement. Huawei indicated that its Ascend 950 clusters supported DeepSeek-V4. The model would thus be adapted to Huawei Ascend chips, and those chips reportedly contributed to part of V4-Flash's training. One clarification is missing, though: DeepSeek does not clearly state whether V4-Pro was trained entirely without Nvidia. Chinese autonomy is therefore not proven. But it is no longer a mere incantation.
A few days earlier, Jensen Huang had supplied the key to reading it. On the Dwarkesh Podcast, the Nvidia CEO had warned: “DeepSeek is not a trivial advancement. The day DeepSeek is released first on Huawei, it will be a terrible outcome for our country.” The statement is not just about Nvidia’s financials. It targets American power itself.
Because the question is not just whether DeepSeek-V4 competes with GPT, Gemini, or Claude. That alone would matter. But it would mean watching the scene through a peephole. The real question is broader: what happens if China stops running behind the American stack and starts building its own?
Since 2022, Washington has been seeking to slow down China by controlling access to advanced semiconductors. The reasoning is simple: fewer high-end GPUs, less computation; less computation, fewer large models; fewer large models, less strategic power. This logic has had effects. It has complicated Chinese access to the best accelerators and kept Nvidia at the center of the global AI economy.
But a technological economy doesn’t rest on silicon alone. It depends on usage patterns, tools, and habits. Nvidia doesn’t just sell chips. Nvidia sells a complete package: GPUs, interconnects, CUDA, software libraries, partner clouds, skills, documentation, developer communities. This is what is called an “AI stack.”
It is exactly this package that China aims to bypass. Huawei is not just pushing its Ascend chips. It seeks to create a world where Chinese models run correctly and naturally on Chinese infrastructure. And in computing, the word “naturally” is often another name for dependence.
Huawei remains technologically behind Nvidia, and turning developers away from the Nvidia ecosystem remains difficult. The story to tell is therefore not one of a finished revolution, but of a shift under way. China doesn’t need to immediately produce the best chip in the world. It needs a chip that performs well enough, is available locally, integrated with its clouds, optimized for its models, and supported by its developers.
The economic risk: cracking the Nvidia tollbooth
To understand why Jensen Huang speaks of a “terrible outcome,” Nvidia should be seen not as a mere component manufacturer, but as AI’s global tollgate. In October 2025, the company became the first to reach a market capitalization of $5 trillion. Matt Britzman, an analyst at Hargreaves Lansdown, said the figure was more than a milestone; it was a statement, because Nvidia had shifted from being a chip manufacturer to being a creator of industries.
That’s where the risk lies. If Nvidia is deemed an industry creator, DeepSeek-V4 threatens not just some sales in China. It threatens the narrative that supports part of its valuation: the idea that advanced AI will need, forever, to go through Nvidia.
The precedent already exists. In January 2025, the arrival of DeepSeek-R1 was enough to cause a historic shock. Nvidia lost about 17% in one session, wiping out nearly $593 billion in market value – the largest single-day drop for a listed company. Meanwhile, companies exposed to semiconductors, energy, and AI infrastructure collectively shed over $1 trillion.
The market didn’t wait for definitive proof. It reacted to a hypothesis. If cheaper models produce comparable results, the entire AI economy is called into question: less computation? Fewer data centers? Fewer GPUs? Less Nvidia rent? DeepSeek-V4 raises another, even more political question: what if part of this demand could shift toward a Chinese stack?
In a company valued at trillions of dollars, even a simple crack in the narrative could cost hundreds of billions. Financial empires don’t always fall when walls collapse. They often waver beforehand when investors stop believing the walls are eternal.
CUDA, the invisible lock
Nvidia’s power doesn’t lie in its GPUs alone. It lies in CUDA. To use a metaphor, CUDA is to Nvidia chips what rails are to trains: you can build another locomotive, but you still need the network, the switches, the stations, the mechanics, and the schedules.
Huawei tries to respond with CANN, for “Compute Architecture for Neural Networks,” its software environment for Ascend chips. In September 2025, the company announced the opening of several key components of this ecosystem, including CANN operators and specialized libraries. The goal is clear: to provide developers with the necessary tools to run their models on Ascend and no longer depend on CUDA, Nvidia’s environment.
The move may sound technical, but it is political. Whoever controls developer tools often controls what comes next. Standards don’t always emerge from standards bodies. They stem from notebooks, code repositories, libraries developers reuse without thinking about them, the training engineers receive, the budgets IT departments renew.
DeepSeek-V4 reveals a broader strategy. It is no longer just about having a model. It is no longer just about having a chip. It is about connecting the model, the chip, the software, the cloud, the developers, and the domestic market. In computing, a single brick impresses. A stack transforms the landscape.
American sanctions, an unintentional accelerator
The American export control strategy is not absurd. It has slowed China down, made access to advanced accelerators more difficult, and kept Nvidia at the center of global competition.
However, the sanctions have a side effect. They don’t just create obstacles. They channel the opposing effort. By closing a door, Washington has forced Beijing to build a corridor.
DeepSeek-V4 illustrates Chinese efforts to reduce dependence on foreign technologies and develop a more self-sufficient AI ecosystem. But this autonomy is still incomplete. Huawei still lags behind Nvidia, and the Pro version’s costs will only come down once larger compute clusters equipped with Ascend chips can run these models at scale.
The situation can be summed up in one line: American sanctions have not made China autonomous; they have given it a compelling reason to become self-reliant.
Openness, a slow-acting tool
China isn’t just playing the chip card. It is playing the open model card. Here again, precision matters. In generative AI, the term “open source” is often used too loosely. Many so-called open models are actually open-weight models: you can download, adapt, and deploy them, but their training data and full production process are not necessarily public.
This circulation is what can shift standards. The Stanford HAI-DigiChina report released in late 2025 highlights that between August 2024 and August 2025, Chinese developers accounted for 17.1% of open model downloads on Hugging Face, slightly ahead of American developers at 15.8%. The report also shows that the Alibaba Qwen family surpassed Llama in September 2025 to become the most downloaded family of language models on Hugging Face.
This might be the most underestimated point. The United States may retain the most powerful closed models. China can win elsewhere: in the models that developers manipulate, adapt, fine-tune, and deploy. OpenAI sells access to a high-level black box. DeepSeek or Qwen distribute bricks that others can pick up. This is not the same economy. It is especially not the same influence strategy.
The possible earthquake for the American economy
The American AI economy rests today on a simple assumption: models will demand ever more computation, and therefore more chips, more data centers, more energy, and more infrastructure spending. Nvidia is the beating heart of this chain.
This is why the credible prospect of a Chinese stack weighs more heavily than a mere model launch. If DeepSeek, Huawei, CANN, Qwen, and Chinese clouds eventually form a coherent ecosystem, the risk is not that Nvidia will disappear. The risk is that the market begins to discount Nvidia’s dominance. Part of Chinese demand could swing to Huawei. Some emerging markets might prefer open, less costly models that depend less on Washington’s decisions. Some companies might choose not the most brilliant model but the most economically and politically controllable stack.
Europe and Aleph Alpha’s acquisition by the Canadian Cohere
Europe has been unjustly accused of standing still. It has already laid down its regulatory framework with the AI Act, whose obligations for general-purpose AI models have applied since August 2, 2025. The Commission also published a Code of Practice on July 10, 2025, to help general-purpose model providers demonstrate compliance. In other words, Europe knows how to write rules, even a grammar of trust.
It is also trying to build the computing power it lacks. The European “AI Continent” plan mobilizes €200 billion to boost AI development, including €20 billion to fund up to five gigafactories. These infrastructures are meant to be up to four times more powerful than existing AI factories, in order to train complex models. The European Union has established 19 AI Factories and 13 antennas to provide free computing power and support to SMEs and startups.
Yet scattered assets are not enough. Europe has rules, computing power, clouds, researchers, industrial players, public funds, and models such as those from Mistral AI. The French startup raised €1.7 billion in September 2025 in a round led by ASML, at a post-money valuation of €11.7 billion. Mistral presents this funding as a way to push scientific research and address the technological challenges of strategic industries.
But a sum of parts does not build a stack. That is where the shoe pinches. Europe has assembled neither an ecosystem as legible as the American Nvidia-CUDA-OpenAI-Microsoft stack nor a trajectory as aligned as the Chinese Huawei-Ascend-CANN-DeepSeek-Qwen path. In this battle, power doesn’t come from the mere existence of each brick. It comes from their alignment.
The announcement of Aleph Alpha’s acquisition by the Canadian Cohere on April 24 gave this fragility a concrete face. Aleph Alpha had long been seen as one of Germany’s generative AI hopes. Reuters reports that Cohere would own about 90% of the combined entity, with the remaining 10% going to Aleph Alpha shareholders, while the Schwarz Group, parent company of Lidl and Kaufland, would invest $600 million in Cohere’s next funding round. The deal aims to build a sovereign AI offering for regulated sectors (government, finance, health, energy, telecoms, defense), but it also points to an uncomfortable truth: when a European champion needs to scale, its center of gravity can quickly shift outside Europe.
The European paradox is there. Europe wants trustworthy AI, but the technical stacks that take hold are built elsewhere. It funds infrastructure, but markets, developers, and dominant models are shaped by the United States or, increasingly, by China, which advances its own standards. It talks about sovereignty, but Aleph Alpha’s attachment to Cohere shows that industrial sovereignty is not just a declaration; it must be financed, industrialized, distributed, hosted, and maintained over the long run.
Justin Hotard, CEO of Nokia, bluntly summarized it: “The problem today is that Europe doesn’t have the infrastructure.” He warns that without robust infrastructures, companies and developers will go where they already exist, mainly to the U.S. or China.
The agent: what is really at stake for CIOs
This is where the subject moves from geopolitics back to IT. Agentic AI will not be just one more module sitting on top of applications. It is meant to become an orchestration layer: reading documents, calling APIs, generating code, triggering workflows, interacting with the ERP, CRM, cybersecurity tools, or document bases.
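To make that orchestration role concrete, here is a minimal, purely illustrative sketch in Python. The tool names and the keyword-based routing are invented for illustration; a real agent would delegate routing to a model and call live enterprise APIs. The structural point is what matters: every tool call passes through, and is visible to, this single layer.

```python
# Minimal, purely illustrative sketch of an agent as an orchestration layer.
# All tools are stubs; names and routing logic are hypothetical.
from typing import Callable, Dict, List, Tuple


def read_document(doc_id: str) -> str:
    """Stub for a document-base lookup (e.g. a DMS or wiki)."""
    return f"contents of {doc_id}"


def call_crm_api(customer: str) -> dict:
    """Stub for a CRM call."""
    return {"customer": customer, "status": "active"}


# The agent layer owns the tool registry: it decides what can be called.
TOOLS: Dict[str, Callable] = {
    "read_document": read_document,
    "call_crm_api": call_crm_api,
}


def run_agent(task: str) -> List[Tuple[str, object]]:
    """Naive orchestration loop: route the task to tools, record a trace.

    Whoever controls this layer sees every data path and every triggered
    action - which is exactly the strategic point made in the article.
    """
    trace: List[Tuple[str, object]] = []
    if "document" in task:
        trace.append(("read_document", TOOLS["read_document"]("doc-42")))
    if "customer" in task:
        trace.append(("call_crm_api", TOOLS["call_crm_api"]("ACME")))
    return trace


if __name__ == "__main__":
    for step, result in run_agent("summarize the customer document"):
        print(step, "->", result)
```

Even in this toy form, the design choice is visible: the registry and the loop, not the individual tools, define what the agent may touch. Swapping the underlying model or stack leaves this control point in place, which is why the choice of stack matters more than the model of the moment.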
For a CIO, the choice will therefore not revolve around the “best model” of the moment. It will be about which stack they are willing to connect their business processes to. Because whoever controls this layer controls not just an AI-generated answer, but also the paths data takes, the way actions are triggered, how decisions are prepared, and eventually how part of the work gets done.
That is why DeepSeek-V4 goes beyond the duel between Beijing and Washington. It is a reminder that the AI battle will not play out only in labs or financial markets. It will also unfold within enterprise architectures, where CIOs will have to decide, very concretely, whether to build their future agents on an American, Chinese, or European stack, or on a combination controlled tightly enough to avoid becoming captive to a standard they didn’t choose.