Corporate News | NVIDIA 2023 Q2 Earnings Call
GPU leader NVIDIA announced its financial results early this morning, posting record revenue of $13.51 billion, beating the market expectation of $11.2 billion and rising 88% quarter-over-quarter and 101% year-over-year. The Q2 results easily exceeded expectations, suggesting that the AI trend is set to continue.
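As a sanity check, the reported growth rates can be turned back into implied prior-period revenues with simple arithmetic. The implied figures below are back-calculated from the reported $13.51 billion and the 88%/101% growth rates; they are illustrative, not official disclosures.

```python
# Back-calculate implied prior-period revenue from reported growth rates.
# Figures in billions of USD; growth rates as reported (88% QoQ, 101% YoY).
q2_revenue = 13.51

def implied_prior(current: float, growth_pct: float) -> float:
    """Revenue one period ago, given current revenue and percent growth."""
    return current / (1 + growth_pct / 100)

prev_quarter = implied_prior(q2_revenue, 88)   # implied prior quarter, ~7.19
year_ago = implied_prior(q2_revenue, 101)      # implied year-ago quarter, ~6.72

print(f"Implied prior quarter: ${prev_quarter:.2f}B")
print(f"Implied year-ago quarter: ${year_ago:.2f}B")
```

The back-calculated prior quarter of roughly $7.2 billion is consistent with the scale of the data-center ramp described below.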
Data Center Business Benefits from LLM and Generative AI Development
Data center compute revenue grew 157% quarter-over-quarter and 195% year-over-year, reflecting strong uptake of the Hopper-based HGX platform by Amazon Web Services, Microsoft Azure, and regional cloud service providers. Networking revenue grew 85% sequentially and 94% year-over-year, reflecting strong growth in the InfiniBand infrastructure that supports the HGX platform.
Other Business Aspects and Prospects
- Gaming revenue was $2.49 billion, up 22% year-over-year.
- The professional visualization business was weaker, with revenue of $379 million, down 24%.
- The automotive chip business fell 15% quarter-over-quarter on a decline in overall demand, especially in China, but rose 15% year-over-year on growth in self-driving platform revenue.
Looking ahead to Q3, NVIDIA guides single-quarter consolidated revenue to $16 billion, plus or minus 2%, well above the market estimate of $12 billion to $14 billion, with Q3 gross margin estimated at 72.5%, a further increase from Q2. The key driver is global customers' heavy investment in AI servers and large language models (LLMs), which should allow NVIDIA to keep increasing orders for TSMC's advanced packaging process, CoWoS.
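The "plus or minus 2%" guidance band translates into a concrete dollar range. A minimal sketch, where the $16 billion midpoint and 2% tolerance come from the guidance above and the band arithmetic is ours:

```python
# Convert NVIDIA's Q3 guidance (midpoint ± tolerance) into a dollar range.
midpoint = 16.0    # billions USD, per the guidance midpoint
tolerance = 0.02   # plus or minus 2%

low = midpoint * (1 - tolerance)
high = midpoint * (1 + tolerance)

print(f"Q3 revenue guidance: ${low:.2f}B to ${high:.2f}B")  # $15.68B to $16.32B
```

Even the bottom of that band ($15.68 billion) sits comfortably above the top of the market's $12–14 billion estimate range, which is why the guidance itself was read as bullish.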
NVIDIA's A100/H100/A800 Chips Are All Exclusively Manufactured by TSMC
NVIDIA's AI GPUs have so far been in severe shortage, with demand far outstripping supply and both volume and price rising sharply for the A100, the H100, and the A800 created for the Chinese market. NVIDIA's A100/H100/A800 chips are all exclusively manufactured by TSMC, lifting TSMC's 2H23 N7/N5 capacity utilization. The bottleneck behind the current shortage is TSMC's limited CoWoS production capacity. TSMC is now moving to expand that capacity, other players in the semiconductor supply chain are also actively accelerating related expansions, and CoWoS capacity is expected to increase quarter by quarter through 2024.
Hyperscaler (Large-Scale Cloud Service Provider) Demand Boosted by Accelerated Computing and Generative AI
Demand for LLM inference is rising rapidly. Large and small language models relate like teacher and student, with the smaller model learning from the larger one: both start from an LLM, which is then scaled down into small models. Accelerated computing and generative AI are two developing trends that represent long-term structural change. NVIDIA's visibility is excellent and market demand keeps rising, and CEO Jen-Hsun Huang emphasizes that demand is being driven not by a single application but by a worldwide transition to new computing platforms.
The world is catching on to the generative AI craze
NVIDIA CEO Jen-Hsun Huang said, "The world is catching up with the generative AI boom, and NVIDIA is the biggest beneficiary of this new computing era." NVIDIA's data center chip division, which centers on the A100 and H100 AI chips, reported second-quarter revenue of $10.3 billion, more than double that of the same period last year. Randall Yates, an analyst at investment firm Hargreaves Lansdown, said: "Bearish people usually think that a hot stock will hit its ceiling at some point in time, but fortunately for Fidelity, there are not many bears in the market as far as I can see at the moment."
Ji-Pu's View
Demand driven by AI is expected to keep growing over the next one to two years, and many leading IT system and software vendors have established partnerships with NVIDIA to bring NVIDIA AI into various industries. Market demand is huge, order visibility extends into next year, supply is rising quarter by quarter, and large language models will sit at the front end of almost everything in the future. NVIDIA's introduction of standardized AI server architectures such as HGX and MGX will also likely intensify competition among ODMs and other industry players vying for orders from cloud service providers.