Peter Sayer
Executive Editor, News

How Nvidia became a trillion-dollar company

News Analysis
Sep 01, 2023 | 10 mins
Artificial Intelligence | C Language | Cryptocurrency

Nvidia’s chips have evolved beyond their video game niche to power enterprise AI models, the industrial metaverse, and self-driving cars. Now the company seeks to seize the generative AI opportunity in the cloud.

Nvidia’s transformation from an accelerator of video games to an enabler of artificial intelligence (AI) and the industrial metaverse didn’t happen overnight — but the leap in its stock market value to over a trillion dollars did.

It was when Nvidia reported strong results for the three months to April 30, 2023, and forecast its sales could jump by 50% in the following fiscal quarter, that its stock market valuation soared, catapulting it into the exclusive trillion-dollar club alongside well-known tech giants Alphabet, Amazon, Apple, and Microsoft. The once-niche chipmaker, now a Wall Street darling, was becoming a household name.

Investor exuberance waned later that week, however, dropping the chip designer out of the trillion-dollar club in short order, just as it had former members Meta and Tesla before it. But it was soon back in, and in mid-June, investment bank Morgan Stanley forecast that Nvidia’s value could rise another 15% before the year was out.

By late August, Nvidia had more than justified its earlier optimism, reporting a quarter-on-quarter revenue increase of 88% for the three months to July 30, driven by record data center sales of over $10 billion, with strong demand from AWS, Google, Meta, Microsoft, and Oracle. Its stock price, too, continued to climb, bumping up against the $500 level Morgan Stanley forecast.

Unlike most of its trillion-dollar peers, Nvidia has little consumer brand recognition to trade on, making its Wall Street leap more mysterious to Main Street. How Nvidia got here and where it’s going next sheds light on how the company achieved that valuation: a story that owes a lot to the rising importance of specialty chips in business, and to accelerating interest in the promise of generative AI.

Graphics driver

Nvidia started out in 1993 as a fabless semiconductor firm designing graphics accelerator chips for PCs. Its founders spotted that generating 3D graphics in video games (then a fast-growing market) placed highly repetitive, math-intensive demands on PC central processing units (CPUs). They realized those calculations could be performed more rapidly in parallel by a dedicated chip than in series by the CPU, an insight that led to the creation of the first Nvidia GeForce graphics cards.
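To see the founders’ insight concretely, consider the math a 3D game performs: every vertex of a model is multiplied by the same 4×4 transformation matrix. Here is a minimal sketch of that workload in C (purely illustrative; the function and type names are hypothetical, not Nvidia’s code):

typedef struct { float x, y, z, w; } Vec4;

/* Transform n vertices by the same 4x4 row-major matrix m, one at a time. */
void transform_vertices(const float m[16], const Vec4 *in, Vec4 *out, int n)
{
    for (int i = 0; i < n; i++) {   /* serial: the CPU handles one vertex per pass */
        Vec4 v = in[i];
        out[i].x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
        out[i].y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
        out[i].z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
        out[i].w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    }
}

Each pass through the loop is independent of every other, which is exactly what lets a chip with thousands of small arithmetic units process thousands of vertices simultaneously instead of one at a time.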

For many years, graphics drove Nvidia’s business; even 30 years on, its sales of graphics cards for gaming, including the GeForce line, still make it the biggest vendor of discrete graphics cards in the world. (Intel makes more graphics chips, though, because most of its CPUs ship with the company’s own integrated graphics silicon.)

Over the years, other uses emerged for the parallel-processing capabilities of Nvidia’s graphics processing units (GPUs), solving problems with a matrix arithmetic structure similar to that of 3D graphics modeling.

Still, software developers seeking to leverage graphics chips for non-graphical applications had to wrangle their calculations into a form that could be sent to the GPU as a series of instructions for either Microsoft’s DirectX graphics API or the open-source OpenGL (Open Graphics Library).

Then in 2006 Nvidia introduced a new GPU computing architecture, CUDA (Compute Unified Device Architecture), which could be programmed directly in C to accelerate mathematical processing, simplifying its use in parallel computing. One of the first applications for CUDA was in oil and gas exploration, processing the mountains of data from geological surveys.
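The difference shows up in the code itself. Here is the vertex transform from the earlier sketch rewritten as a CUDA C kernel (again illustrative, not Nvidia’s own code): the serial loop disappears, and each GPU thread handles a single vertex.

typedef struct { float x, y, z, w; } Vec4;

// One GPU thread computes one vertex; the hardware runs many threads at once.
__global__ void transform_vertices_gpu(const float *m,   // 4x4 matrix, row-major
                                       const Vec4 *in, Vec4 *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's vertex index
    if (i < n) {
        Vec4 v = in[i];
        out[i].x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
        out[i].y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
        out[i].z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
        out[i].w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    }
}

// The host launches enough 256-thread blocks to cover all n vertices:
// transform_vertices_gpu<<<(n + 255) / 256, 256>>>(d_matrix, d_in, d_out, n);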

The market for using GPUs as general-purpose processors (GPGPUs) really opened up in 2009, when OpenGL publisher Khronos Group released Open Computing Language (OpenCL), an open standard that let similar code run on hardware from any vendor.

Soon, hyperscalers such as AWS added GPUs to some of their compute instances, making scalable GPGPU capacity available on demand and lowering the barrier to entry for compute-intensive workloads for enterprises everywhere.

AI, crypto mining, and the metaverse

One of the biggest drivers of demand for Nvidia’s chips in recent years has been AI, or, more specifically, the need to perform trillions of repetitive calculations to train machine learning (ML) models. Some of those models are truly gargantuan: OpenAI’s GPT-4 is said to have over 1 trillion parameters. Nvidia was an early supporter of OpenAI, even building a special compute module based on its H100 processors to accelerate the training of the large language models (LLMs) the company was developing.
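A rough sketch of why GPUs suit this work: most of the arithmetic in training a neural network reduces to large matrix multiplications, and every element of the output can be computed independently. The naive CUDA kernel below is purely illustrative (production systems use heavily tuned libraries such as Nvidia’s cuBLAS), but the structure is the same.

// Naive matrix multiply C = A * B for n-by-n row-major matrices.
// One thread computes one element of C, independently of all the others.
__global__ void matmul(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; k++)
            sum += A[row * n + k] * B[k * n + col];   // dot product of row and column
        C[row * n + col] = sum;
    }
}

Training a trillion-parameter model means repeating operations like this at enormous scale for weeks or months, which is why model builders buy accelerators by the tens of thousands.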

Another unexpected source of demand for the company’s chips has been cryptocurrency mining, whose calculations can be performed faster and more energy-efficiently on a GPU than on a CPU. That demand kept graphics cards in short supply for years, putting GPU makers like Nvidia in the position of pickaxe sellers during the California Gold Rush.

Although Nvidia’s first chips were used to enhance 3D gaming, the manufacturing industry is also interested in 3D simulations, and its pockets are deeper. Going beyond the basic rendering and acceleration libraries of OpenGL and OpenCL, Nvidia has developed a software platform called Omniverse, a metaverse for industry used to create and view digital twins of products or even entire production lines in real time. The resulting imagery can be used for marketing or for collaborating on new designs and manufacturing processes.

Efforts to stay in the trillion-dollar club

Nvidia is driving forward on many fronts. On the hardware side, it continues to sell GPUs for PCs and some gaming consoles; supplies computational accelerators to server manufacturers, hyperscalers, and supercomputer manufacturers; and makes chips for self-driving cars. It’s also in the service business, operating its own cloud infrastructure for pharmaceutical firms, the manufacturing industry, and others. And it’s a software vendor, developing generic libraries of code that anyone can use to accelerate calculations on Nvidia hardware, as well as more specific tools such as its cuLitho package to optimize the lithography stage in semiconductor manufacturing.

But interest in the latest AI tools, chief among them ChatGPT (developed on Nvidia hardware), is driving a new wave of demand for Nvidia hardware and prompting the company to develop new software to help enterprises build and train the LLMs on which generative AI is based.

In the last few months the company has also partnered with software vendors including Adobe, Snowflake, ServiceNow, Hugging Face, and VMware to ensure the AI elements of their enterprise software are optimized for its chips.

“Because of our scale and velocity, we’re able to sustain this really complex stack of software and hardware, networking and compute across all these different usage models and computing environments,” CEO Jensen Huang said during a call on August 23 to discuss the latest earnings.

Nvidia is also pitching AI Foundations, its cloud-based generative AI service, as a one-stop shop for enterprises that might lack resources to build, tune, and run custom LLMs trained on their own data to perform tasks specific to their industry. The move, announced in March, may be a savvy one, given rising business interest in generative AI, and it pits the company in direct competition with hyperscalers that also rely on Nvidia’s chips.

Nvidia AI Foundations models include NeMo, a cloud-native enterprise framework; Picasso, an AI capable of generating images, video, and 3D applications; and BioNeMo, which deals in molecular structures. That last makes generative AI particularly interesting for drug development, where it can take up to 15 years to bring a new drug to market; Nvidia says its hardware, software, and services can cut early-stage drug discovery from months to weeks. Amgen and AstraZeneca are among the pharmaceutical firms testing the waters, and with US pharmaceutical firms alone spending over $100 billion a year on R&D, more than three times Nvidia’s annual revenue, the potential upside is clear.

Pharmaceutical development is moving faster, but the road toward widespread adoption in another of Nvidia’s target markets is less clear: self-driving cars have been “just around the corner” for years, but testing them and winning approval for use on the open road is proving even more complex than bringing a new drug to market.

Nvidia gets two bites at this market. One is building and running the virtual worlds in which self-driving algorithms are tested without putting anyone at risk. The other is the cars themselves. If the algorithms make it out of the virtual world and onto the roads, cars will need chips from Nvidia and others to process real-time imagery and perform myriad calculations needed to keep them on course. This is the smallest market segment Nvidia breaks out in its quarterly results: just $253 million, or 2% of overall sales, in the three months to July 30, 2023. But it’s a segment that’s been more than doubling each year.

When it reported its results for the three months to April 30, Nvidia made an ambitious forecast: that its revenue for the following fiscal quarter, ending July 30, would be over 50% higher. It went on to beat that figure by a wide margin, reporting revenue of $13.5 billion. Gaming hardware sales were also up 22% year on year and 11% quarter on quarter, growth that would be impressive for most consumer electronics companies but lags far behind that of Nvidia’s biggest market: data centers. The proportion of its overall revenue coming from gaming has shrunk from over one-third in the three months to April 30 to just under one-fifth in the period to July 30. Nevertheless, Nvidia still sees opportunity ahead, as less than half of its installed base has upgraded to graphics cards with the GeForce RTX technology it introduced in 2018, CFO Colette Kress said during the call.

Huang and Kress both talked up how clearly Nvidia can see future demand for its consumer and data center products, well into next year.

“The world is transitioning from general-purpose computing to accelerated computing,” Huang said. With around $250 billion in capital expenditure on data centers every year, according to Huang, the potential market for Nvidia is enormous as that transition plays out.

“Demand is tremendous,” he said, adding that the company is significantly expanding its production capacity to boost supply for the rest of this year and into next.

Nevertheless, Kress was more reserved in her projections for the three months to October 30, saying she expects revenue of between $15.7 billion and $16.3 billion, or quarter-on-quarter growth between 16% and 21%.

All eyes will be on the company’s next earnings announcement, on November 21.