
Nvidia is quietly building a multibillion-dollar behemoth to rival its chips business

Pixelift editorial team

Akio Kon / Bloomberg / Getty Images

Nvidia is building a multibillion-dollar empire in the shadows, one that may soon rival its chip revenues. The company's networking business, dedicated to connecting data centers, has become its second-largest source of revenue in just a few years, right behind the computing segment. In the last quarter it generated 11 billion dollars in revenue, a year-over-year increase of 267 percent, and exceeded 31 billion for the full year. A strategic move from 2020, a decisive bet on network infrastructure through acquisition, proved brilliant, though it received far less media attention than Nvidia's GPU successes. Jensen Huang, the company's CEO, had previously demonstrated an ability to predict trends: in 2010 he began experimenting with chips dedicated to AI, a decade before the current boom. The rapid growth of the networking segment shows that Nvidia's revenues may be more diversified than commonly believed, and that server interconnect infrastructure is becoming a critical bottleneck in the era of artificial intelligence.

While everyone is looking at Nvidia's graphics processors powering the artificial intelligence revolution, somewhere in the shadows a business is developing that is earning almost as much money. The Networking division of the company generated 11 billion dollars in revenue in the last quarter — a growth of 267 percent year-over-year — and still remains largely out of the media spotlight. This is a story about how Nvidia is building the second leg of its empire, and most of the industry hasn't even noticed it's happening.

The story begins with a decision Jensen Huang made many years ago. In 2010, when the world wasn't yet thinking about AI, Nvidia's CEO saw the future and began investing in specialized processors for machine learning. Then, in 2020, he repeated the maneuver, this time not in chips but in the infrastructure that connects them. A strategic acquisition aimed at datacenter networking opened the door to a business that is now growing faster than the graphics card business itself.

How Nvidia took over datacenter networks

The breakthrough moment was the acquisition of Mellanox Technologies in 2020 for 6.9 billion dollars. For most industry observers, it was an interesting, but not spectacular transaction — another acquisition in the tech giant's portfolio. However, for Huang and his team, it was a game-changing move. Mellanox specialized in high-performance networking technologies, particularly InfiniBand solutions, which are crucial for connecting servers in massive datacenters.

This was not a random investment. Huang saw that as the computational power of AI chips grew, a new problem emerged — how to connect thousands of these processors in a way that would allow them to communicate efficiently with each other. Traditional ethernet networks, which dominated in datacenters, were not sufficient. Infrastructure was needed that could transmit enormous amounts of data with minimal latency and maximum throughput.
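A rough back-of-the-envelope sketch makes the scale of that problem concrete. The model size and numeric format below are illustrative assumptions, not figures from the article: on every training step, each GPU has to exchange a volume of gradient data comparable to the model itself.

```python
def allreduce_bytes_per_gpu(param_count: int, bytes_per_param: int = 2) -> int:
    """Approximate bytes each GPU moves in one ring all-reduce of the full
    gradient: 2 * (N-1)/N * model_size, which approaches 2 * model_size
    once the cluster is large."""
    model_bytes = param_count * bytes_per_param
    return 2 * model_bytes  # large-cluster approximation

# Hypothetical 70-billion-parameter model with fp16 (2-byte) gradients:
# roughly 280 GB of traffic per GPU on every single training step.
print(allreduce_bytes_per_gpu(70_000_000_000) / 1e9)
```

Moving hundreds of gigabytes per GPU per step, thousands of steps per run, is exactly the workload that ordinary datacenter networks were never designed for.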

Mellanox brought to Nvidia exactly what was missing. The InfiniBand technology that the company had developed over the years proved ideal for the AI ecosystem. When other companies were building datacenters full of Nvidia chips, they also needed a way to connect them. Nvidia had the solution — and it was practically without competition.

Networking as the new frontier of growth

11 billion dollars in a single quarter is a number that deserves a moment of reflection. It is still less than chip revenue, which continues to dominate, but it is no longer a marginal business; it is a world-class one. And it is growing at an astounding pace: 267 percent year-over-year is not an anomaly but a trend that shows every sign of continuing.

Why is networking growing so fast? The answer is simple: everyone who buys AI processors from Nvidia also has to buy networking infrastructure to connect them. It is not optional. If Amazon Web Services builds a new datacenter with thousands of Nvidia GPUs, it must simultaneously invest in a network that will connect them. The same applies to Google, Meta, OpenAI, and every other organization building AI infrastructure.

Nvidia is well aware of this dynamic. The Networking division is not an independent business; it is part of the ecosystem. Selling chips pulls through network sales, and selling networks opens the door to selling more chips. This is a classic lock-in play, where each element of the system drives the purchase of the next.

Why aren't the media talking about this?

The lack of media interest in Nvidia's networking business is surprising, but has a logical explanation. GPUs are sexy — everyone can understand that a faster processor is better. Networking is abstract. How do you sell a story about infrastructure that connects things that connect? It doesn't have the same drama as the story of the AI revolution powered by graphics processors.

Additionally, the Nvidia brand name is inextricably linked to GPUs. When people think of Nvidia, they think of graphics cards, CUDA, dominance in gaming and AI. The Networking division operates under the Nvidia Networking brand (formerly Mellanox), which doesn't have the same recognition. It is an unknown branch of a tree that everyone knows for its fruit — chips.

There is also a technical aspect. Networking is a field understood mainly by engineers and infrastructure architects. There is no story here for the mass audience. You can't easily explain why 400 gigabits per second throughput is better than 200 gigabits if you don't work in datacenters. GPUs? It's simple — faster, more CUDA cores, better performance. Networking? It's for specialists.

Technology that changes infrastructure

To understand why Nvidia's networking division is growing so fast, you need to look at the technology. The key product line is Nvidia Quantum, a family of InfiniBand switches and network adapters designed specifically for AI datacenters. These are not ordinary networking devices; they are machines built to move enormous amounts of data with minimal latency.

Quantum uses the InfiniBand technology that Nvidia inherited from Mellanox, but has significantly improved it. The current generation, Quantum-2, offers throughput of up to 400 gigabits per second, roughly ten times what the Ethernet deployments it typically replaces can deliver. For datacenters training AI models with billions of parameters, that difference means hours saved on every training run.
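To see where those saved hours come from, here is a minimal sketch; the payload size and the Ethernet baseline are illustrative assumptions, not article figures. The time to move a fixed gradient payload scales inversely with link speed.

```python
def transfer_seconds(payload_bytes: float, link_gbps: float) -> float:
    """Seconds to move payload_bytes over a link rated at link_gbps
    gigabits per second (ignores protocol overhead and congestion)."""
    return payload_bytes * 8 / (link_gbps * 1e9)

payload = 140e9  # assumed: fp16 gradients of a 70B-parameter model

print(transfer_seconds(payload, 400))  # ~2.8 s on a 400 Gb/s link
print(transfer_seconds(payload, 40))   # ~28 s on a 40 Gb/s Ethernet link
```

A 25-second difference per synchronization, repeated thousands of times per run, is how interconnect speed turns into hours of wall-clock training time.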

But it's not just throughput. Nvidia Quantum also offers low latency, meaning each message between processors arrives sooner. For AI applications where communication between GPUs is critical, every microsecond counts. When you train a model distributed across thousands of chips, communication latency can cut the performance of the entire system by tens of percent.
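The latency point can be made concrete with the classic alpha-beta cost model for a ring all-reduce. All numbers below are illustrative assumptions, not Quantum specifications: the total time splits into a per-hop latency term that grows with cluster size and a bandwidth term, so on large clusters exchanging small messages, the latency term dominates.

```python
def ring_allreduce_seconds(n_gpus: int, msg_bytes: float,
                           latency_s: float, bw_bytes_per_s: float) -> float:
    """Alpha-beta model of a ring all-reduce: 2*(N-1) latency hops,
    plus roughly 2x the message crossing the link."""
    alpha_term = 2 * (n_gpus - 1) * latency_s
    beta_term = 2 * (n_gpus - 1) / n_gpus * msg_bytes / bw_bytes_per_s
    return alpha_term + beta_term

# Same 1 MB message on 1024 GPUs: cutting per-hop latency
# from 10 microseconds to 1 microsecond shrinks the step time ~10x,
# because the latency term swamps the bandwidth term.
slow = ring_allreduce_seconds(1024, 1e6, 10e-6, 50e9)
fast = ring_allreduce_seconds(1024, 1e6, 1e-6, 50e9)
print(slow, fast)
```

This is why switch and NIC latency, not just raw throughput, decides how well a training cluster scales.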

The lock-in game in all its glory

Here is something that should interest regulators and Nvidia's competitors. The company is not only selling processors — it is building an ecosystem where each element forces the purchase of the next element. If you are Google and want to build a datacenter with Nvidia GPUs, you also have to buy Nvidia networking. If you want that network to work optimally, you have to buy Nvidia software — CUDA, cuDNN and other tools.

This is a strategy that Nvidia has been using for years, but the networking division makes it even more powerful. Previously, when competitors wanted to challenge Nvidia in chips, they could potentially fall back on traditional Ethernet networks. Now? Even if you buy processors from someone else, optimal performance still pushes you toward Nvidia networking. That is the genius of the move.

AMD, Intel and other processor companies are trying to compete with Nvidia in chips, but none of them have comparable networking. This gives Nvidia an advantage that is hard to challenge. Even if AMD created a chip as good as Nvidia's GPU, datacenters that buy it will have to invest in networking infrastructure — and the best option is Nvidia.

Polish companies and the global trend

For Polish technology companies and startups working with AI, this trend has concrete implications. If you are building a cloud service or datacenter, you can no longer think of networking infrastructure as a secondary issue. Networking has become a critical component, and Nvidia has a practical monopoly on the highest performance.

This means that Polish enterprises that want to compete globally in the AI space must invest in networking infrastructure at the Nvidia level. Alternatively, they can look for solutions from other vendors, but they will have to accept lower performance or higher costs. This is the reality of modern AI business.

Revenue, margins and the future

The numbers speak for themselves. 31 billion dollars in annual revenue from the networking division is more than the annual revenue of most technology companies. And margins? Networking has significantly higher margins than chips — Nvidia can sell network switches with 60-70 percent margins, while chips have 40-50 percent margins.

This means that the networking division is not only growing faster, but is also more profitable. For investors looking at revenue, this is less visible — chips still dominate. But for those looking at profits, networking is the star. If the trend continues, in a few years networking could generate more profit than chips, even if revenue is lower.

The future? Nvidia will continue to invest in networking. The company has already announced Quantum-3, which will offer throughput up to 800 gigabits per second. This will further strengthen Nvidia's position in datacenter infrastructure. While everyone is talking about GPUs, Nvidia will already be making billions on the networks that connect them.

Source: TechCrunch AI