Nvidia Shares Keep Sliding as Google and “Big Short” Investor Michael Burry Challenge Its Leadership – Google’s TPU Shakes Up the AI Chip Market
Google’s share price has been climbing, and its Tensor Processing Unit (TPU) has become the hottest star in the AI market. At the same time, the famed “Big Short” investor has launched a fresh attack on Nvidia. Trapped in a cycle of self-justification, Nvidia’s stock has been sliding steadily.

The artificial intelligence chip market is undergoing a major shock. According to The Information, Meta is considering using Google’s TPUs (Tensor Processing Units) in its data centers starting in 2027 in a potential multibillion-dollar deal. At the same time, well-known short seller Michael Burry published a hard-hitting piece calling out the AI bubble and taking direct aim at Nvidia. Nvidia urgently sent a detailed memo to Wall Street analysts, rebutting Burry’s accusations point by point in an attempt to calm market jitters.

Google Further Encroaches on Nvidia’s Turf With New AI Chip Push



The AI Chip Landscape Is Shifting – and Nvidia Is Rattled

Tech giant Meta is considering using Google’s TPUs in its data centers, a strategic shift that could reshape the entire AI chip market. As The Information reports, Meta may start deploying Google TPUs in 2027 and could begin renting TPU compute from Google Cloud as early as next year.

The news triggered an immediate market reaction: Nvidia’s stock fell more than 6.5%, AMD dropped over 9%, while Alphabet, Google’s parent company, gained more than 6% on Monday and continued to climb by nearly 2% on Tuesday. Google Cloud executives estimate that if TPUs gain broader adoption, Google could capture at least 10% of the tens of billions of dollars in annual revenue that currently flow to Nvidia.

This potential deal is seen as a landmark endorsement of Google’s decade-long investment in TPUs. Publicly known TPU customers already include Salesforce, Safe Superintelligence, Midjourney and Anthropic. Particularly eye-catching is AI startup Anthropic, which recently announced a partnership with Google to deploy up to 1 million Google TPU chips to train its large language model Claude – an expansion plan worth tens of billions of dollars.

Adding fuel to the fire, Michael Burry – the investor whose story inspired the film The Big Short – published a heavyweight essay titled “Supply-Side Gluttony”, effectively declaring war on the AI frenzy and targeting Nvidia head-on. In the piece, Burry refutes the mainstream view that “there is no bubble because big tech companies are highly profitable,” and bluntly states that “Nvidia is the Cisco of today” – a reference to Cisco’s share price collapsing more than 75% after the dot-com bubble burst.


Read Nvidia’s rebuttal to Michael Burry’s criticism that the AI chip titan has hurt shareholder value

Burry’s core argument is that the key problem with today’s AI boom is “catastrophic oversupply and woefully insufficient demand.” Tech giants, he says, are engaged in an unsustainable capital-expenditure binge. From an accounting standpoint, he accuses them of inflating profits by extending the depreciation lives of AI chips. Burry estimates that from 2026 to 2028, large tech companies will overstate profits by around 176 billion US dollars due to understated depreciation. He claims Oracle’s profits could be overstated by 26.9%, and Meta’s by 20.8%.
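Burry’s accounting point can be made concrete with straight-line depreciation: stretching the assumed useful life of an asset lowers the annual expense, and every dollar of expense deferred shows up as an extra dollar of reported pre-tax profit. A minimal sketch with hypothetical figures (the capex amount and the useful lives are illustrative, not taken from Burry’s essay):

```python
def annual_depreciation(cost_usd, useful_life_years):
    """Straight-line depreciation: expense the asset evenly over its life."""
    return cost_usd / useful_life_years

chip_spend = 120e9  # hypothetical capex on AI chips, in USD

# Expensing the chips over 3 years vs. stretching the assumed life to 6.
short_life = annual_depreciation(chip_spend, 3)  # 40e9 per year
long_life = annual_depreciation(chip_spend, 6)   # 20e9 per year

# The longer the assumed life, the smaller the annual expense; the gap
# flows straight into reported pre-tax profit in the early years.
profit_boost = short_life - long_life            # 20e9 per year
```

In this toy case, doubling the assumed chip life adds $20 billion a year to reported profit while cash spending is unchanged, which is the mechanism behind Burry’s overstated-profit estimates.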

He also stresses that, just like in 1999, the rally isn’t driven by unprofitable internet startups; back then, the Nasdaq’s strength was led by highly profitable large-cap stocks. In his view, history is repeating itself.


Nvidia Trapped in a Cycle of Self-Justification

Confronted with Burry’s aggressive critique, Nvidia responded both quickly and comprehensively. The company distributed a detailed memo to Wall Street analysts, rebutting Burry’s arguments line by line.

Paradoxically, Nvidia’s forceful response only sparked more debate in the market. Some commentators argue that the AI chip giant may now be caught in a classic “self-justification trap.”


Over the weekend it privately circulated a letter refuting the “Big Short” point by point, and by Tuesday was posting public commentary to “prove” that it still leads Google – has Nvidia started to panic?

Nvidia’s approach stands in stark contrast to how old-guard tech giants like Apple and Microsoft typically respond to criticism. True industry leaders understand that not every critique deserves a response; core value should ultimately be demonstrated through product iteration and financial performance.

As a Morgan Stanley analyst once put it in a client memo about similar situations: “The vulnerability of giants often lies less in their financials and more in market psychology.” Nvidia’s swift response was intended to reassure investors, but instead has inadvertently validated concerns that growing competition is a very real threat.


TPUs Could Reshape the Industry Landscape

Google’s Tensor Processing Units (TPUs) are fundamentally reshaping the competitive landscape of the AI compute market. With significant cost advantages, superior energy efficiency, and rapidly rising market penetration, TPUs have become a formidable rival that Nvidia’s GPUs can no longer ignore.


1. Cost Advantage

One of the TPU’s core strengths is its markedly lower cost structure, primarily because Google’s in-house ASIC design allows it to sidestep the hefty “Nvidia tax.”

  • Breaking free from monopoly pricing:
    In the AI era, cloud providers that buy Nvidia GPUs see up to 75% gross margin captured by Nvidia, effectively paying an “Nvidia tax.” This has driven AI business margins at cloud providers down from a traditional 50–70% range to just 20–35%. By owning full-stack TPU design (front-end RTL designed in-house, with Broadcom only handling back-end physical implementation), Google successfully avoids this cost burden.

  • Significantly better performance per dollar:
    Real-world comparisons from one customer show that for the same task, using a Google v5e Pod delivers higher performance per dollar than a setup with eight H100 GPUs. As Google releases new TPU generations, previous versions become extremely affordable. The customer notes that if they are willing to extend training times slightly, total costs can drop to as low as one-fifth of the original.

  • Tight supply-chain cost control:
Broadcom, which co-designs TPUs with Google, operates at significantly lower margins than Nvidia. This allows Google to push compute costs down further.
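The training-time trade-off described above is simple arithmetic: an older, cheaper accelerator that runs a job more slowly can still win decisively on total cost. A hypothetical illustration (all hourly rates and job durations below are made up, not the customer’s actual figures):

```python
def total_cost(hourly_rate_usd, job_hours):
    """Total cost of a training job = rental rate x wall-clock hours."""
    return hourly_rate_usd * job_hours

# Baseline: current-generation accelerator (hypothetical numbers).
baseline = total_cost(hourly_rate_usd=10.0, job_hours=100)  # $1,000

# Previous generation: 1/10 the hourly price, but the job takes 2x as long.
prev_gen = total_cost(hourly_rate_usd=1.0, job_hours=200)   # $200

# Accepting the longer run cuts total cost to one-fifth, as in the
# customer's example.
assert prev_gen == baseline / 5
```

The point is that cost per unit of work, not raw speed, drives the purchasing decision for workloads that can tolerate longer wall-clock times.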


2. Energy Efficiency

TPUs deliver what many see as a decisive advantage over GPUs in energy efficiency, thanks largely to a fundamentally different architecture.

  • Architectural advantage at the root:
    Unlike GPUs, which were originally designed for general-purpose graphics processing, TPUs use a unique “systolic array” architecture. This design allows data to flow through the chip like blood through veins, drastically reducing the number of high-bandwidth memory (HBM) reads and writes. As a result, a much larger share of power is used for actual computation instead of data movement. From the ground up, this tackles the “von Neumann bottleneck” that has long constrained GPUs.

  • Generational leaps in efficiency:
    According to data presented at the Hot Chips 2025 conference, the seventh-generation TPU “Ironwood” doubles performance per watt (FLOPS/watt) versus the sixth-generation “Trillium.” Ironwood delivers 4.2 TFLOPS per watt – already very close to Nvidia’s flagship B200 GPU at 4.5 TFLOPS per watt – underscoring its competitiveness in energy efficiency.

  • Huge advantages in specific scenarios:
    A former Google executive has pointed out that for certain applications, TPUs can deliver 1.4 times higher performance per dollar than GPUs. For dynamic model training workloads such as search-related tasks, TPUs can even achieve speeds up to five times faster than GPUs.
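The systolic-array dataflow described in the first bullet can be illustrated with a toy simulation: operands are streamed into the edges of a grid of processing elements in a staggered (skewed) order, each element performs one multiply-accumulate per cycle as the data flows past, and a full matrix product completes without repeatedly re-reading operands from memory. A minimal Python sketch of this idealized model (not the TPU’s actual microarchitecture):

```python
def systolic_matmul(A, B):
    """Simulate an n x n systolic array computing C = A @ B.

    Each processing element PE(i, j) holds one accumulator. Per cycle it
    multiplies the A-value flowing in from the left by the B-value flowing
    in from the top, adds the product to its accumulator, and passes both
    operands on. Inputs enter the edge PEs skewed by one cycle per row or
    column, so the matching pair (A[i][k], B[k][j]) reaches PE(i, j)
    exactly at cycle t = i + j + k.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    # 3n - 2 cycles for every wavefront to traverse the whole array.
    for t in range(3 * n - 2):
        for i in range(n):
            for j in range(n):
                k = t - i - j  # which product term arrives at PE(i, j) now
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

Because each operand is loaded from memory once and then reused as it flows through the grid, the energy spent on data movement is amortized across many multiply-accumulates, which is the efficiency argument made above.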


3. Market Penetration

Riding on its technical and cost advantages, the TPU is rapidly evolving from an in-house chip into a broadly adopted solution, with its growing penetration validated by orders from top-tier customers and a steadily expanding ecosystem.

  • Key customer breakthroughs:

• Meta: Plans to rent TPU compute from Google Cloud as early as next year and launch multibillion-dollar TPU hardware purchases for its data centers starting in 2027. If this deal goes through, TPUs could make up a significant share of compute in Meta’s data centers and potentially take at least 10% of the tens of billions in annual revenue that currently go to Nvidia.

    • Anthropic: The leading AI startup has announced plans to deploy up to 1 million Google TPU chips to power its Claude models, in an expansion said to be worth tens of billions of dollars.

  • Market scale and ecosystem impact:
    Analysts at JPMorgan note that TPUs and other custom AI chips are “rapidly closing the performance gap with leading GPUs,” prompting hyperscale cloud providers to ramp up investment in custom ASIC projects.

    Google Cloud signed more cloud service contracts in the first nine months of 2025 than in the previous two years combined, with TPU-related deals as a core growth engine. Google Cloud is also offering on-premises TPU deployment options to enterprise clients, successfully attracting financial institutions, high-frequency trading firms and other industries with stringent data compliance requirements.

The chip made for the AI inference era – the Google TPU | Dwight Pond


Taken together, Google’s TPUs – through strong performance in cost, energy efficiency and market penetration – have proven both their commercial and technical value as dedicated AI accelerators. They also signal that the AI compute market is rapidly moving away from an Nvidia-dominated world toward a more diversified and fiercely competitive new era.


Nvidia’s GPUs Remain Hard to Dislodge in the Short Term

Even so, Nvidia’s GPUs are unlikely to lose their dominant position overnight. Gartner analyst Gaurav Gupta notes that although Google has its own chips, it still remains one of Nvidia’s largest customers because it needs to preserve flexibility for its clients. Compared with TPUs, GPUs can handle a broader range of workloads and are better suited to adapt to changes in customers’ algorithms or models.


Conclusion

The competitive landscape of the AI compute market is being redrawn. As Google’s TPUs win over giants like Meta, tech companies are shifting from being mere hardware buyers to becoming direct competitors to Nvidia.

In his essay, Burry closes with a quote from Charlie Munger: “If you go around popping people’s balloons, you’re not going to be the most popular person in the room.” Today, Nvidia – and indeed the entire AI industry – is being tested by exactly these “balloon-poppers.”
