Google plans to fund cloud providers that use its accelerators.
Google is trying to take its AI accelerators to the next level
OpenAI is actively promoting its "ring" model of collaboration, and Google has apparently taken notice. In response, the company has decided to use financial incentives to attract new customers for its own artificial intelligence (AI) accelerators. Competitors view the initiative with skepticism, and component shortages complicate its implementation.
What is known
The *Wall Street Journal* reports that Google intends to stimulate demand for its own chips. The largest cloud providers rely mainly on Nvidia accelerators, so promoting Google's TPU (Tensor Processing Unit) neural processors faces challenges. To change the situation, the company plans to work with new players in the cloud market, so-called "neocloud" providers, and to encourage them to buy its accelerators by taking financial stakes in their businesses.
- Investment in Fluidstack
According to a source, Google is negotiating an investment of $100 million in the startup Fluidstack. Under the deal, reported at a $7.5 million valuation, Fluidstack would use Google's GPU and TPU infrastructure.
- Support for former mining projects
It is also reported that Google plans to fund several projects that previously engaged in crypto‑mining but are now pivoting toward building data centers (DCs).
- Structural independence of the TPU division
Inside the company, there is discussion of giving the TPU development unit greater autonomy, which would allow it to raise external capital for chip development. However, no official statements have been made yet.
Current status of TPU
Since 2018, Google has offered cloud customers access to computing power based on TPUs. Nevertheless, most of Google's infrastructure still relies on Nvidia accelerators. According to some reports, the company already sells TPUs to external customers who are building their own compute capacity.
The head of TPU development, Amin Vahdat, was recently promoted to chief technology officer for AI infrastructure and now reports directly to CEO Sundar Pichai.
Growth constraints
The key obstacle to expanding the TPU business remains limited manufacturing capacity at TSMC, which prioritizes orders from Google's competitor Nvidia. In addition, memory shortages are slowing the build-out of TPU-based infrastructure.
Despite this, large companies are showing interest in Google's accelerators; potential customers include Meta Platforms and Anthropic. Amazon (AWS), however, sees Google only as a competitor and is in no hurry to adopt TPUs, since it is developing its own AI chips such as Trainium. The same applies to Microsoft and its Azure cloud service.
In short, Google is taking a series of financial and organizational steps to make its AI accelerators more competitive in the rapidly evolving cloud segment.