Saturday, March 14, 2026

After saying ‘we are fine’ for months, Nvidia seemingly ‘accepts’ that Google and Meta are coming for it

The competitive landscape in AI hardware shifted dramatically recently when it was reported that Facebook parent Meta is considering using Google-designed AI chips. The report wiped hundreds of billions of dollars off Nvidia’s market value, because Meta is currently one of Nvidia’s most significant chip customers. It specifically stated that Meta could begin renting Google’s Tensor Processing Units (TPUs) and potentially incorporate the chips into its own data centres by 2027.

Responding to the report, Nvidia issued a statement defending its market position. “We’re delighted by Google’s success — they’ve made great advances in AI, and we continue to supply to Google,” Nvidia wrote, before pivoting to assert its superiority. “Nvidia is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done.”

However, Nvidia, which initially brushed off concerns by publicly declaring “we are fine” after losing billions in market value tied to the Google deal, now appears to be acknowledging the threat.

Nvidia preparing to launch a new chip

According to a report by the Financial Times, Nvidia is now preparing to launch a new chip specifically designed for AI inference tasks, meaning running models rather than training them. This marks a break from CEO Jensen Huang’s longstanding mantra that one GPU could handle all workloads. The new product, expected to debut at next week’s GTC developer conference, will be the first to emerge from Nvidia’s $20 billion acquisition of Groq’s talent and technology.

The yet-to-launch chip, described as a language processing unit (LPU), will use SRAM instead of the high-bandwidth memory (HBM) that powers Nvidia’s flagship GPUs. SRAM is cheaper, more readily available, and better suited to speeding up AI “reasoning” tasks. Analysts estimate that by 2030, inference will account for 75% of AI data center spending, up from 50% last year, making Nvidia’s pivot critical to maintaining relevance.

The rising competition

Meta’s announcement of four inference-focused processors and Google’s aggressive chip development highlight a new phase in AI computing. “We are entering an interesting phase that is not ‘Nvidia dominant’,” one Silicon Valley investor told the Financial Times. Nvidia’s $4.5 trillion market capitalization has been built on GPUs powering generative AI models like ChatGPT, but the rise of specialized chips threatens that position.


