Nvidia Inks Licensing Deal With Groq, Signals New Phase in AI Chip Rivalry With Google

Nvidia’s licensing deal with AI chip startup Groq brings founder Jonathan Ross, a key architect of Google’s Tensor chips, onboard as the battle for AI chip and inference dominance heats up.

Nvidia has struck a strategic licensing agreement with AI chip startup Groq, a move that sharpens the battle for dominance in the fast-evolving artificial intelligence hardware market—where competition from Google’s Tensor chips is becoming increasingly intense.

Under the deal, Groq founder and CEO Jonathan Ross, along with several senior executives, will join Nvidia to help advance and scale licensed AI chip technology, the startup confirmed in a statement. Despite the leadership shift, Groq will continue to operate as an independent company, and its cloud-based AI services will remain active.

A Key Talent Shift in the AI Chip War

Jonathan Ross is a pivotal figure in the AI hardware ecosystem. He is one of the original architects of Google’s Tensor Processing Unit (TPU)—custom AI accelerators designed to reduce dependence on Nvidia’s premium GPUs. His move to Nvidia is being seen as a significant talent acquisition at a time when chipmakers are racing to control the infrastructure powering generative AI.

While financial details of the licensing deal were not disclosed, it follows earlier media reports suggesting Nvidia was considering acquiring Groq in a $20 billion all-cash deal. Although that acquisition did not go through, the licensing agreement highlights Nvidia’s intent to absorb cutting-edge inference expertise without fully buying the company.

Nvidia’s Expanding AI Empire

Nvidia has emerged as the world’s most valuable company, driven by explosive demand for its GPUs used to train and deploy large language models. Led by CEO Jensen Huang, the Santa Clara-based firm is rapidly investing across the AI ecosystem to maintain its lead—especially in AI inference, the process of running trained models efficiently at scale.

Key indicators of Nvidia’s strategy:

  • Planned investment of up to $100 billion in OpenAI
  • OpenAI’s commitment to deploy at least 10 gigawatts (GW) of Nvidia hardware
  • Growing focus on inference chips as AI adoption expands beyond training labs to real-world applications

Industry analysts expect AI inference to become one of the largest revenue drivers in the AI semiconductor market over the next decade.

Groq’s Rapid Rise

Founded in 2016, Groq has built a reputation for designing high-performance AI inference chips optimized for pre-trained large language models. Its technology positions it as a direct challenger to both Nvidia’s GPUs and Google’s TPUs.

In September 2025, Groq raised $750 million from investors including Samsung, Cisco, Altimeter, Disruptive, and 1789 Capital, where Donald Trump Jr. is a partner. The funding more than doubled Groq’s valuation to $6.9 billion, up from $2.8 billion in August 2024, reflecting strong confidence in alternative AI chip architectures.

Bigger Stakes Than Just Chips

The Nvidia-Groq deal comes amid an escalating rivalry between OpenAI’s ChatGPT and Google’s Gemini, underscoring how control over AI hardware is becoming as crucial as leadership in AI software.

By bringing in one of the original minds behind Google’s Tensor chips, Nvidia is sending a clear signal:
The future of AI dominance will be decided not just by models and data—but by who controls the silicon that runs them.