Nvidia is on a tear, and it doesn’t seem to have an expiration date.
Nvidia makes the graphics processors, or GPUs, needed to build AI applications such as ChatGPT. In particular, the cutting-edge AI chip, the H100, is in great demand among technology companies right now.
The company announced Wednesday that its total sales grew 101% year-over-year to $13.51 billion in its fiscal second quarter, which ended July 30. Not only is it selling a lot more AI chips, it’s also more profitable: The company’s gross margin expanded by more than 25 percentage points versus the same quarter a year ago, to 71.2% — incredible for a physical product.
In addition, Nvidia said it expects demand to remain high through next year and that it has secured increased supply, enabling it to boost the number of chips it has available to sell in the coming months.
The company’s stock rose more than 6% in extended trading after the news was released, adding to its remarkable gain of more than 200% so far this year.
It’s clear from Wednesday’s report that Nvidia is benefiting more from the AI boom than any other company.
Nvidia reported a staggering $6.7 billion in net income this quarter, an increase of 422% over the same period last year.
“I thought it was top of mind for next year when this report came out, but my numbers have to go up a lot,” Chaim Siegel, an analyst at Elazar Advisors, wrote in a note after the report. He raised his price target to $1,600, calling it “a 3x move from here,” and added, “I still think my numbers are very conservative.”
That target, he said, works out to about 13 times his estimate of 2024 earnings per share.
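As a back-of-the-envelope check, the multiple and the price target pin down the earnings estimate they imply. The sketch below is purely illustrative — the implied EPS figure is derived from the two numbers above, not taken from the analyst’s note:

```python
# Illustrative: what 2024 EPS estimate does a $1,600 target at 13x imply?
price_target = 1_600   # analyst price target, dollars per share
forward_pe = 13        # stated multiple of estimated 2024 EPS

implied_eps = price_target / forward_pe
print(round(implied_eps, 2))  # → 123.08
```

In other words, the $1,600 target prices the stock at roughly $123 of estimated 2024 earnings per share.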
Nvidia’s massive cash flow contrasts with that of its top customers, which are spending heavily on AI hardware and building multimillion-dollar AI models but have yet to see significant revenue from the technology.
About half of Nvidia’s data center revenue comes from cloud service providers, followed by large internet companies. Growth in Nvidia’s data center business came from “compute,” or AI chips, which grew 195% during the quarter, faster than the segment’s overall 171% growth.
Microsoft, which has been a huge customer of Nvidia’s H100 GPUs both for its Azure cloud and its partnership with OpenAI, has been increasing its capital expenditures to build out its AI servers, and doesn’t expect a positive “revenue signal” until next year.
On the consumer internet side, Meta said it expects to spend as much as $30 billion this year on data centers, and possibly more next year, as it builds out its AI efforts. Nvidia said Wednesday that Meta was already seeing returns in the form of increased engagement.
Some startups have even gone into debt to buy Nvidia GPUs in hopes of leasing them out for a profit in the coming months.
On the earnings call with analysts, Nvidia officials offered some perspective on why its data center chips are so profitable.
Nvidia said its software contributes to its margins and that it is selling more complicated products than mere silicon. Analysts point to Nvidia’s AI software platform, CUDA, as the main reason customers can’t easily switch to competitors like AMD.
“Our data center products include a significant amount of software and complexity which also helps drive gross margins,” Colette Kress, Nvidia’s chief financial officer, said on a call with analysts.
Nvidia also bundles its technology into expensive and complicated systems like its HGX box, which combines eight H100 GPUs into a single computer. Nvidia boasted Wednesday that building one of these boxes uses a supply chain of 35,000 parts. HGX boxes can reportedly cost around $299,999, versus a volume price of $25,000 to $30,000 for a single H100, according to a recent Raymond James estimate.
Nvidia said that when it ships the coveted H100 GPU to cloud service providers, they often opt for the more complete system.
“We call it H100, as if it’s a chip that comes off a fab, but H100s go out, really, as HGX to the world’s hyperscalers, and they’re really quite large system components,” Nvidia CEO Jensen Huang said on the call with analysts.