Industry · 9 min read · CNBC Technology

Column: Jensen Huang doesn’t need a new chip. He needs a new moat.

Pixelift editorial team

Jensen Huang is building a new fortress around Nvidia, and he is reaching for open-source tools not out of philanthropy but out of necessity. The chipmaker's chief faces growing competition from tech giants developing their own AI processors: Meta, Google, and Amazon are already investing billions in alternative silicon, threatening Nvidia's near-monopoly in the AI accelerator market. Huang's strategy is to democratize access to AI tools through open-source projects. This seemingly altruistic move has a concrete business goal: building an ecosystem in which Nvidia remains an indispensable piece of the infrastructure. The more companies and developers build on Nvidia's tools, the harder it becomes for them to switch to competing solutions. It is a sensible response to shifting market dynamics. Nvidia's traditional protective wall, advanced chip technology, is no longer sufficient. Huang knows that the future belongs to companies that do not merely sell hardware but build entire ecosystems in which they are indispensable. Open source is his new weapon in the fight for dominance.

Jensen Huang, CEO of Nvidia, faces a problem that no other chip manufacturer has had to solve at this scale. Over the past two years, his company has earned billions of dollars selling processors for training AI models controlled by other companies: OpenAI, Google, Anthropic. But now that position is starting to look dangerously shaky. Not because he lacks new chips; Huang has plenty of those. The problem runs deeper: the world is beginning to understand that a monopoly on hardware is not the same as a monopoly on the future. And Nvidia discovered this too late.

Huang's strategy in recent months, promoting open AI models and supporting open-source alternatives to ChatGPT or Gemini, looks like generosity. In reality, it is a desperate defensive maneuver. Nvidia is not building a new moat for itself so much as trying to delay the construction of moats by its competitors. The difference is crucial for understanding where the AI industry is heading in the coming years.

Why hardware stopped being enough

For a decade, Nvidia ruled the graphics processor market without serious competition. When the era of deep learning arrived, its GPUs were the only sensible choice for scientists and corporations. This position brought the company staggering revenue: over $60 billion in the last fiscal year. But success always carries the seed of its own downfall.

When OpenAI released ChatGPT, and then GPT-4, it became clear that the real value lies not in chips but in models. Nvidia was selling shovels in a gold rush, while the gold itself, the data, the algorithms, the user interfaces, belonged to others. Google has a huge pool of training data and its own TPUs. Meta has similar capabilities. Even startups like Anthropic and Elon Musk's xAI are quickly raising capital and getting as many chips as they need. Nvidia remains a supplier, and suppliers always earn less than the owners of the real value.

The key moment came when companies began investing in their own AI processors. Google has its TPUs, Amazon has Trainium and Inferentia, Microsoft is working on Maia, and Meta on its own hardware. These solutions do not threaten Nvidia yet, but they send a clear signal: any player with sufficient capital will try to escape Huang's control. Nvidia can sell GPUs for $40,000 apiece, but if a competitor builds a chip that is 30 percent cheaper and only 20 percent slower, for many companies that is a perfect deal. Freedom from Nvidia is worth more than marginal performance.
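The arithmetic behind that trade-off can be sketched in a few lines. The $40,000 figure comes from the column; everything else is an illustrative assumption, not a real chip price or benchmark.

```python
# Cost per unit of throughput for the column's hypothetical trade-off.
# The $40,000 GPU price is the figure cited above; all other numbers
# are illustrative assumptions, not real prices or benchmarks.

nvidia_price = 40_000.0        # dollars per GPU (figure cited in the column)
nvidia_perf = 1.0              # normalized throughput baseline

rival_price = nvidia_price * (1 - 0.30)   # "30 percent cheaper"
rival_perf = nvidia_perf * (1 - 0.20)     # "20 percent slower"

nvidia_cost_per_perf = nvidia_price / nvidia_perf
rival_cost_per_perf = rival_price / rival_perf

print(f"Nvidia: ${nvidia_cost_per_perf:,.0f} per unit of throughput")
print(f"Rival:  ${rival_cost_per_perf:,.0f} per unit of throughput")
```

Even on raw price/performance, before counting the strategic value of independence, the hypothetical rival comes out ahead: roughly $35,000 per unit of throughput against Nvidia's $40,000.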

Open models as a strategic weapon

This is where the genius — and desperation — of Huang's strategy appears. Nvidia began intensively promoting open AI models, supporting projects like Meta's Llama or Mistral. On the surface, this looks like a magnanimous decision by a tech giant. In reality, it is a business calculation.

Open models require enormous computing power for training and fine-tuning. If Nvidia ensures that the ecosystem is dominated by open, free models that anyone can run on their own hardware, then everyone will need a lot of chips. Competition between OpenAI and Google favors Nvidia, as each builds its own supercomputers. But competition between hundreds of startups and small companies that download Llama and fine-tune it on their own hardware? That is a goldmine for Nvidia. Each of those startups will buy GPUs, because it will be the cheapest option.

Additionally, by promoting open models, Nvidia weakens the position of private AI companies that build closed systems. OpenAI must compete not only with Google but with a free Llama. That pressure pushes OpenAI to invest even more in its own chips, which in turn means Nvidia sells it fewer GPUs. But this risk is smaller than the alternative: a world where OpenAI, Google, and Meta have fully independent chip ecosystems.

Polish perspective: what it means for local players

For Polish companies involved in AI, Huang's strategy has concrete consequences. A startup that wants to build a language model or AI-based recommendation system faces a choice: pay OpenAI for an API, or invest in Nvidia GPUs and train its own model based on Llama or another open solution.
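That choice can be framed as a simple break-even calculation. Every number below, the API rate, the cluster cost, the amortization period, is a hypothetical placeholder for illustration, not a quote from any provider.

```python
# Hypothetical break-even: paying per API token vs. amortizing owned GPUs.
# Every figure here is an invented assumption for illustration only.

api_price_per_million_tokens = 10.0   # dollars, assumed API rate
gpu_capex = 200_000.0                 # dollars, assumed small GPU cluster
gpu_monthly_opex = 5_000.0            # power, hosting, maintenance (assumed)
amortization_months = 36              # write the hardware off over 3 years

monthly_gpu_cost = gpu_capex / amortization_months + gpu_monthly_opex

# Monthly token volume at which owning hardware becomes cheaper than the API:
breakeven_tokens_millions = monthly_gpu_cost / api_price_per_million_tokens

print(f"Own-hardware cost: ${monthly_gpu_cost:,.0f}/month")
print(f"Break-even volume: {breakeven_tokens_millions:,.0f}M tokens/month")
```

Under these invented numbers, a startup processing more than about a billion tokens a month would come out ahead owning hardware; below that, the API wins. The real decision also hinges on engineering headcount and the lock-in discussed next, which no spreadsheet captures.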

The second option seems cheaper, but it requires capital for hardware. Here another layer of Nvidia's strategy appears. The company offers CUDA, a programming ecosystem that is effectively the industry standard. Every Polish developer learning AI learns CUDA. Everyone who wants to build something serious uses CUDA. This creates lock-in not at the hardware level, where competition is growing, but at the level of software and skills. Even if an Nvidia competitor builds a better chip, the code will have to be rewritten.

Polish universities, research institutes, and technology companies are deeply embedded in the CUDA ecosystem. This means that regardless of how the chip landscape changes, Nvidia will remain relevant. This is the true moat — not hardware, but entire ecosystems of people and code.

Playing on two fronts: supporting open-source and closing doors

An interesting paradox of Nvidia's strategy is that the company simultaneously supports open models and invests in closed technologies. Nvidia offers CUDA to everyone, but at the same time builds proprietary optimization libraries available only for its GPUs. It supports Llama, but also invests in startups building closed models. This is not an inconsistency — it is a precise calculation of risk.

Huang knows he can no longer control all the value in the chain. But he can control the bottlenecks. If everyone trains models, everyone needs GPUs. If every serious GPU workload depends on CUDA, everyone needs Nvidia. The game has shifted from "who has the best chip" to "who has the best ecosystem." And here Nvidia has a huge advantage: a 15-year technological head start, millions of hours of code, an entire world of programmers trained on its tools.

This does not mean competition will not grow. AMD, Intel, and especially the in-house chips of the big tech companies will get better. But before they become better, they will be more expensive to adopt. And Huang is betting that by the time the competition is ready, the whole world will be so deeply embedded in Nvidia's ecosystem that switching will be too costly.

Why open-source is a defensive game, not an offensive one

Many articles describe Nvidia's strategy in promoting open models as a manifestation of corporate generosity or long-term thinking about the ecosystem. This is a mistake. This strategy is fully defensive and stems from a specific threat.

The threat is a scenario in which a few tech giants (OpenAI, Google, Meta, Microsoft) build completely independent ecosystems. In such a world, Nvidia would be one supplier among many, not the king. By promoting open models, Huang disperses his rivals' investments. Instead of pouring everything into one supercomputer cluster for training GPT-5, OpenAI must fund that cluster while simultaneously competing with a free Llama. That forces larger, less efficient capital expenditures, which is exactly what suits Nvidia.

The best scenario for Huang is a world where hundreds of companies train hundreds of small models on Nvidia GPUs. Each model consumes fewer chips than GPT-4, but together they consume more. Additionally, dispersed competition means no single player will have enough capital to build an alternative to CUDA. This is a race against time: Huang must maintain dominance long enough for the software-level lock-in to become unbreakable.
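The aggregate-demand argument is simple arithmetic. The GPU counts below are invented for illustration; they are not real cluster sizes.

```python
# Hypothetical: one frontier training run vs. many small fine-tuning shops.
# All GPU counts are invented for illustration, not real cluster sizes.

frontier_model_gpus = 25_000       # GPUs for one flagship training run (assumed)

small_model_gpus = 500             # GPUs per small fine-tuning shop (assumed)
number_of_small_players = 100      # dispersed competitors (assumed)

dispersed_demand = small_model_gpus * number_of_small_players

print(f"Frontier demand:  {frontier_model_gpus:,} GPUs")
print(f"Dispersed demand: {dispersed_demand:,} GPUs")
```

Each small shop needs a tiny fraction of a frontier cluster, yet a hundred of them together outbuy the single flagship run. That is the shape of the market Huang is trying to bring about.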

The real moat: ecosystem, not silicon

If you look at the history of technology, the winner is rarely the manufacturer of the best hardware; it is the company that controls the ecosystem around it. Intel beat AMD not because it always had better processors, but because the entire software stack, from BIOS firmware to Windows itself, was built around its architecture. Apple beat Samsung not because it has the best screens, but because it has iOS and the App Store. Nvidia beats AMD not because it always has the best GPUs, as sometimes AMD has better specs, but because it has CUDA, TensorRT, and an entire library of tools that are the industry standard.

Huang understands this. That's why the strategy of open models makes sense. It doesn't make sense to fight over who has the best chip, because this battle will always be a draw — competition is too strong. It makes sense to fight over who has the best ecosystem. And here Nvidia has such an advantage that competition may not be able to break it for the next 10 years.

For Polish companies, this means that regardless of what technical decisions they make, they will be forced to work within Nvidia's ecosystem. This is not always bad — the ecosystem is really good. But it means Huang has already won, even if he doesn't know it.

Shift of power: from hardware to software

The most fundamental shift in Huang's strategy is the recognition that the future does not belong to the chip manufacturer, but to the software and ecosystem manufacturer. Nvidia is no longer building a moat around hardware — that was yesterday. It is building a moat around CUDA, NVIDIA AI Enterprise, the entire infrastructure that makes switching to something else too costly.

This also has implications for the competition. OpenAI, Google, Meta: all of these companies are investing in their own chips, but none of them has a software ecosystem like Nvidia's. They can build a better chip, but they cannot build a better CUDA in five years; the investment is simply too large. That is why Huang can afford to support open models. He knows that regardless of which models get trained, they will be trained on his hardware and his software.

This is the genius of the strategy. It is no longer about whether Nvidia sells more chips than AMD. It is about whether the world is so deeply embedded in Nvidia's ecosystem that alternatives become irrelevant. And when it comes to CUDA, the answer is clear: it already is.
