DeepSeek-V4 has just burst into the global AI race. But the announcement goes beyond a duel of models: this new generation of DeepSeek is adapted to Huawei Ascend chips. Beijing does not yet claim to be completely free of Nvidia. It is showing something else: China is assembling its own AI stack, from semiconductors to open models, with the ambition of creating a competing standard. For the United States, the risk is no longer just technological; it is becoming economic. For Europe, it could be existential.
On April 24, 2026, DeepSeek broke cover. The Chinese start-up, already known for having shaken Wall Street with its R1 model, published the preview version of DeepSeek-V4. On paper, the announcement looks at first like one more step in the race for large models: a V4-Pro version for complex tasks, a faster and less expensive V4-Flash version, a claimed context of one million tokens, and an architecture supposed to reduce compute and memory costs. Reuters specifies that V4-Pro targets agentic coding and competitive programming in particular, while remaining behind the best closed models from Google or OpenAI.
But the main point was not the spec sheet. It lay in the second, almost simultaneous announcement. Huawei said its Ascend 950 clusters support DeepSeek-V4. The model has thus been adapted to Huawei Ascend chips, and those chips reportedly contributed to part of the training of V4-Flash. One clarification is missing, however: DeepSeek does not clearly say whether V4-Pro was trained entirely without Nvidia. Chinese autonomy is therefore not proven. But it has ceased to be a mere incantation.
A few days earlier, Jensen Huang had provided the key to reading it. On the Dwarkesh Podcast, Nvidia's boss warned: "DeepSeek is not a negligible advancement. The day DeepSeek is first released on Huawei, it will be a horrible result for our country." The formula is not aimed only at Nvidia's income statement. It targets American power itself.
Because the question is not simply whether DeepSeek-V4 competes with GPT, Gemini or Claude. That would already be a lot. But it would still mean looking at the scene through the keyhole. The real question is broader: what happens if China stops running behind the American stack and starts building its own?
The American lock wasn’t just the chip
Since 2022, Washington has sought to slow China down by controlling access to advanced semiconductors. The reasoning is simple: fewer high-end GPUs, less compute; less compute, fewer large models; fewer large models, less strategic power. This logic has produced effects. It has complicated Chinese access to the best accelerators and kept Nvidia at the center of the global AI economy.
But a technological economy is not held together by metal alone. It is held together by gestures, tools and habits. Nvidia doesn't just sell chips. Nvidia sells an obligatory passage: GPUs, interconnects, CUDA, software libraries, partner clouds, skills, documentation, developer communities. This is what we call an "AI stack".
It is precisely this obligatory passage that China seeks to circumvent. Huawei isn't just pushing its Ascend chips. It seeks to create a world where Chinese models run correctly, then naturally, on Chinese infrastructure. And in computing, "naturally" is often another name for dependency.
Huawei remains technologically behind Nvidia, and diverting developers from the Nvidia ecosystem remains difficult. The story to tell is therefore not that of a completed revolution but that of a journey. China does not need to produce the world's best chip right away. It needs a chip that is powerful enough, available locally, integrated into its clouds, optimized for its models and supported by its developers.
The economic risk: cracking the Nvidia toll
To understand why Jensen Huang speaks of a "horrible result", we must look at Nvidia not as a simple component manufacturer but as a global AI toll. In October 2025, the company became the first to reach $5 trillion in market capitalization. Matt Britzman, an analyst at Hargreaves Lansdown, observed: "Nvidia reaching $5 trillion in capitalization is more than a milestone; it's a statement, as Nvidia has evolved from a chip maker to an industry creator."
This is the heart of the risk. If Nvidia is an industry creator, DeepSeek-V4 does not merely threaten some sales in China. It threatens the narrative that supports part of Nvidia's valuation: the idea that any advanced AI will, in the long run, need to go through Nvidia.
The precedent already exists. In January 2025, the arrival of DeepSeek-R1 was enough to cause a historic shock. Nvidia lost around 17% during the session, nearly $593 billion in capitalization, the largest daily loss ever recorded for a listed company. At the same time, stocks exposed to semiconductors, energy and AI infrastructure collectively shed more than $1 trillion.
The market had not waited for definitive proof. It had reacted to the hypothesis. If less expensive models produce comparable results, the entire economics of AI comes into question: less compute? fewer data centers? fewer GPUs? less Nvidia rent? DeepSeek-V4 adds another, even more political question: what if part of this demand could shift to a Chinese stack?
In a company valued at several trillion dollars, a single crack in the narrative can be worth hundreds of billions. Financial empires do not always fall when walls give way. They often falter before, when investors stop believing that the walls are eternal.
CUDA, this invisible lock
Nvidia’s power isn’t just about its GPUs. It rests on CUDA. To extend the metaphor, CUDA is to Nvidia's chips what rails are to the train: you can build another locomotive, but you still need the network, the switches, the stations, the mechanics and the timetables.
Huawei is trying to respond with CANN, for Compute Architecture for Neural Networks, its software environment intended for Ascend chips. In September 2025, the group announced the opening of several key components of this ecosystem, including CANN operators and specialized libraries. The objective is clear: to give developers the tools they need to run their models on Ascend, and no longer depend on CUDA, Nvidia's environment.
That sentence sounds technical. It is political. Whoever controls developers' tools often controls the rest of the story. Standards are not always born in standardization bodies. They are born in notebooks, code repositories, libraries that are reused without thinking, the training that engineers follow, the budgets that IT departments renew.
DeepSeek-V4 therefore makes a broader strategy visible. It's no longer just about having a model. It's no longer just about having a chip. It's about connecting the model, the chip, the software, the cloud, the developers and the home market. In IT, a brick impresses. A stack transforms.
American sanctions, an involuntary accelerator
The American export control strategy is not absurd. It slowed down China. It made access to the most advanced accelerators more difficult. It also kept Nvidia at the center of global competition.
But sanctions have a side effect. They are not just an obstacle. They direct the opposing effort. By closing a door, Washington forced Beijing to build a corridor.
DeepSeek-V4 illustrates Chinese efforts to reduce dependence on foreign technologies and build a more self-sufficient AI ecosystem. But this autonomy remains incomplete. Huawei remains behind Nvidia, and the costs of the Pro version will only fall if China has more large computing clusters equipped with Ascend chips, capable of running these models at scale.
The formula fits in one line: American sanctions have not made China autonomous; they gave it a compelling reason to become so.
Openness, that slow weapon
China is not just playing the chip card. It is playing that of open models. Here again, we must be precise. In generative AI, the expression "open source" is often used too quickly. Many so-called open models are rather open weight models: they can be downloaded, adapted and deployed, but their training data and the entire manufacturing process are not necessarily published.
For businesses, this nuance is not academic. An open weight model does not automatically guarantee auditability, compliance or sovereignty. But it gives more leverage than a closed API. We can test it, distill it, host it, optimize it, connect it to internal applications. Above all, it circulates.
It is this circulation that can shift standards. The Stanford HAI-DigiChina report published at the end of 2025 highlights that between August 2024 and August 2025, Chinese developers accounted for 17.1% of open model downloads on Hugging Face, slightly ahead of American developers at 15.8%. The report also states that as of September 2025, Alibaba's Qwen family had surpassed Llama to become the most downloaded large language model family on Hugging Face.
This is perhaps the most underestimated point. The United States can keep the most powerful closed models. China can win elsewhere: in the models that developers manipulate, adapt, fine-tune and ship. OpenAI sells access to a very high-level black box. DeepSeek or Qwen distribute bricks that others can use. It’s not the same economy. Above all, it’s not the same influence strategy.
The possible earthquake for the American economy
The American AI economy rests today on a simple hypothesis: models will require ever more compute, therefore ever more chips, therefore ever more data centers, therefore ever more energy, therefore ever more infrastructure spending. Nvidia is the beating heart of this chain.
This is why the hypothesis of a credible Chinese stack carries more weight than a simple model launch. If DeepSeek, Huawei, CANN, Qwen and the Chinese clouds end up forming a coherent ecosystem, the risk is not that Nvidia will disappear. The risk is that the market will start applying a discount to the toll. Part of Chinese demand could shift to Huawei. Some emerging countries could prefer open models, less costly and less dependent on Washington's arbitrations. Some companies could choose not the most brilliant model, but the most economically and politically manageable stack.
Europe and the takeover of Aleph Alpha by the Canadian Cohere
That leaves Europe. It would be unfair to call it immobile. It has already established its regulatory framework with the AI Act, whose obligations applicable to general-purpose AI models came into force on August 2, 2025. The Commission also published, on July 10, 2025, a code of practice intended to help providers of general-purpose models demonstrate their compliance. In other words, Europe knows how to produce rules, and even a grammar of trust.
It is also trying to build the computing power it lacks. The European "AI Continent" plan calls for 200 billion euros to stimulate the development of AI, including 20 billion to finance up to five gigafactories. These infrastructures are meant to be up to four times more powerful than the existing AI Factories and to be used to train complex models. Alongside them, the European Union has established 19 AI Factories and 13 branches intended to provide free computing power and support to SMEs and start-ups.
Europe is therefore not without bricks. It has supercomputers via EuroHPC, cloud players, laboratories, manufacturers, public funding, and models like those of Mistral AI. The French start-up raised 1.7 billion euros in September 2025, with ASML as lead investor, for a post-money valuation of 11.7 billion euros. Mistral presents this funding as a means of pushing scientific research and responding to the technological challenges of strategic industries.
But a sum of parts does not make a stack. This is where the problem lies. Europe has scattered elements: the rules, the compute, a few models, clouds, researchers, manufacturers. It has not yet assembled an ecosystem as legible as Nvidia-CUDA-OpenAI-Microsoft on the American side, nor a trajectory as integrated as Huawei-Ascend-CANN-DeepSeek-Qwen on the Chinese side. In this battle, power does not come merely from the existence of each brick. It comes from their alignment.
The announcement of the takeover of Aleph Alpha by the Canadian Cohere, on April 24, gives this fragility a concrete face. Aleph Alpha had long been presented as one of Germany's hopes in generative AI. Reuters reports that Cohere would own around 90% of the combined entity, compared with 10% for Aleph Alpha shareholders, while the Schwarz Group, parent company of Lidl and Kaufland, would invest $600 million in Cohere's next funding round. The operation aims to build a sovereign AI offering for regulated sectors (administration, finance, health, energy, telecommunications, defense), but it also recalls an uncomfortable truth: when a European champion has to change scale, its center of gravity can quickly leave Europe.
The European paradox is there. Europe wants trustworthy AI, but the necessary technical stacks are being built elsewhere. It finances infrastructure, but the markets, developers and dominant models remain driven by the United States or, now, by a China advancing its own standards. It talks about sovereignty, but the Aleph Alpha and Cohere deal shows that industrial sovereignty cannot be decreed: it is financed, industrialized, distributed, hosted and maintained over time.
Justin Hotard, the CEO of Nokia, summed it up bluntly: "The problem today is that Europe does not have the infrastructure." He warns that without robust infrastructure, companies and developers will go where it already exists, primarily to the United States or China.
Agentic AI is the challenge for CIOs
This is where the subject leaves geopolitics and returns to information systems. Agentic AI will not be just another module placed on top of applications. It is meant to become an orchestration layer: reading documents, calling APIs, generating code, triggering workflows, communicating with the ERP, the CRM, cybersecurity tools or document databases.
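The orchestration role described above can be sketched as a minimal agent loop. This is an illustrative sketch only: the tool names, the plan format and the connector stubs are hypothetical stand-ins for real IS integrations, not any vendor's API.

```python
# Minimal sketch of an agentic orchestration loop (illustrative only).
# The model plans; the orchestrator executes tool calls against the IS.
from typing import Callable, Dict, List, Tuple

# Registry of "tools" the agent may call -- hypothetical stand-ins for
# real connectors (document store, CRM API, workflow engine, etc.).
TOOLS: Dict[str, Callable[[str], str]] = {
    "read_document":    lambda arg: f"contents of {arg}",
    "call_api":         lambda arg: f"response from {arg}",
    "trigger_workflow": lambda arg: f"workflow {arg} started",
}

def run_agent(plan: List[Tuple[str, str]]) -> List[str]:
    """Execute a model-produced plan: a list of (tool, argument) steps.
    In a real stack, the plan would be generated by the LLM itself."""
    trace = []
    for tool, arg in plan:
        if tool not in TOOLS:
            # The orchestrator, not the model, decides what is allowed.
            trace.append(f"refused: unknown tool {tool}")
            continue
        trace.append(TOOLS[tool](arg))
    return trace

# A hypothetical three-step plan touching documents, an API and a workflow.
trace = run_agent([
    ("read_document", "contract_2026.pdf"),
    ("call_api", "crm/customers/42"),
    ("trigger_workflow", "invoice_approval"),
])
```

The point of the sketch is the control question the article raises: whoever owns this loop, not the model behind it, decides which tools exist, which calls are refused, and where the trace of every action lives.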
For a CIO, the choice will therefore not only concern the "best model" of the moment. It will concern the stack to which the company agrees to connect its business processes. Because whoever controls this layer does not only control a response generated by an AI. It controls the paths through which data flows, actions are triggered, decisions are prepared and, tomorrow, part of the work is executed.
This is why DeepSeek-V4 goes beyond the duel between Beijing and Washington. It reminds us that the battle for AI will not be fought only in laboratories or on financial markets. It will also play out in enterprise architectures, where IT departments will have to decide, very concretely, whether they build their future agents on an American, Chinese or European stack, or on a combination mastered well enough not to become captive to a standard they did not choose.
Open source or open weight?
In generative AI, the term “open source” is often used too broadly. A fully open source model should publish enough elements to allow its reproduction: code, architecture, training method, data or detailed documentation.
Many so-called open models are rather open weight models. Their parameters are available, which allows them to be used, adapted or deployed. But the training data and the entire manufacturing process often remain closed.
For CIOs, the difference is important. An open weight model can reduce reliance on a proprietary API. It does not automatically guarantee complete transparency, complete auditability, compliance or legal sovereignty.
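The distinction can be made concrete with a small, purely illustrative classifier. The artifact names and the three labels below are an assumption chosen for the example, not an official taxonomy.

```python
# Illustrative sketch (not an official taxonomy): classify a model
# release by which artifacts are actually published alongside it.

def openness(published: set) -> str:
    """Return a rough openness label for a set of published artifacts."""
    full = {"weights", "code", "training_data", "training_method"}
    if full <= published:
        return "open source"   # reproducible end to end
    if "weights" in published:
        return "open weight"   # usable and adaptable, not reproducible
    return "closed"            # API access only

print(openness({"weights", "code"}))        # open weight
print(openness({"weights", "code",
                "training_data",
                "training_method"}))        # open source
print(openness(set()))                      # closed
```

In this framing, most of the Chinese models discussed in the article would land in the middle category: enough to host, fine-tune and ship, not enough to audit end to end.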