
At Nvidia's annual GTC developer conference, CEO Jensen Huang said that global computing demand has grown at an unprecedented pace over the past two years, increasing by a factor of one million. He projected that Nvidia's two generations of AI chips, Blackwell and Rubin, could generate at least one trillion dollars in cumulative revenue by the end of 2027, underscoring the enormous market potential created by the ongoing AI boom.
Previously, Nvidia had forecast that these chips would generate approximately 500 billion dollars in sales by the end of 2026. The latest projection not only raises the expected revenue to the trillion-dollar level but also extends the timeline by one year. At the GTC conference, Nvidia also unveiled several new products and technology updates to further strengthen its leadership in the AI infrastructure sector.
The company announced that it will integrate technology from AI chip startup Groq into its own product ecosystem and introduced the Groq 3 LPU. This specialized chip is primarily designed for inference tasks in large language models and can significantly accelerate the speed at which AI systems generate text and respond to requests. Nvidia plans to deploy it as a co-processor alongside its existing AI accelerators to enhance overall system performance. These chips will be manufactured by South Korean electronics giant Samsung, with systems based on the technology expected to launch in the second half of this year.
Meanwhile, Nvidia also introduced a new general-purpose CPU architecture called “Vera,” marking the company’s deeper expansion into the traditional data center processor market. Jensen Huang stated that the CPU business could represent a multi-billion-dollar opportunity. As AI data center architectures become increasingly complex, general-purpose CPUs that coordinate different computing tasks are becoming more critical.
According to Nvidia, the Vera processor combines advantages from data center processors, gaming PCs, and laptop chips: it can ingest large volumes of data and perform complex calculations quickly while keeping energy consumption low. The company also plans to introduce server systems composed entirely of CPUs, creating a new product category. These systems will be able to work alongside other Nvidia platforms or operate independently.
As artificial intelligence software continues to mature, some companies are exploring the use of lower-cost, lower-power CPUs to run trained AI models. This trend creates new strategic opportunities for Nvidia to expand its CPU business. The company has already reached a cooperation agreement with social media giant Meta, suggesting that its processors may eventually be marketed as standalone products.
Market Interpretation
Nvidia is gradually transforming from a company known primarily for graphics processing units into a technology provider offering a complete artificial intelligence computing platform. Its product ecosystem now spans processors, networking equipment, software platforms, and AI models. By continuously expanding its technology ecosystem and product portfolio, Nvidia is aiming to establish deeper industry barriers in the rapidly growing AI infrastructure market.








