Google rolls out $750M fund to expand partner-led AI deployments

Alphabet Inc.’s Google Cloud has introduced a $750 million fund to help consulting firms and enterprise partners bring agentic artificial intelligence into real-world use cases.

Summary

  • Google Cloud launches a $750 million fund at Cloud Next 2026 to support consulting partners in developing and deploying agentic AI solutions, including funding for tools, training, and embedded engineering teams.
  • The initiative expands access to Gemini Enterprise capabilities, early model testing, and enterprise-ready AI agents, with major firms like Accenture, Deloitte, and McKinsey involved.
  • Separately, Google is exploring new AI chip designs with Marvell to improve model performance and compete with Nvidia, with a memory-focused processor and next-gen TPU in development.

The initiative, unveiled at the Cloud Next 2026 conference in Las Vegas, is designed to strengthen Google’s partner ecosystem by offering financial support, technical resources, and dedicated engineering expertise. The funding will be available to global consulting firms, systems integrators, software vendors, and channel partners working to deploy AI solutions at scale.

Google Cloud noted that the program will support a range of activities, including identifying AI use cases, building and testing agentic systems, deploying AI agents, and training teams. It will also fund the placement of Google’s forward-deployed engineers within partner organizations to assist with complex implementations.

“Google Cloud’s partners are already leaders in agentic AI development and deployment, and have become important channels for distributing AI technologies,” said Kevin Ichhpurani, president of Google Cloud’s global partner ecosystem. “With this expanded funding, we will be able to dedicate new resources and technology to support our partners as they accelerate our mutual customers’ agentic AI journeys.”

Partner ecosystem to gain tools, engineering support

Google Cloud said the fund will extend existing partner capabilities by enabling deeper involvement in AI assessment, prototyping, and integration across enterprise environments.

A portion of the investment will go toward new tools and structured programs, including AI value assessments, Gemini proofs-of-concept, and the development of Gemini Enterprise practices. Partners will also gain access to agentic AI prototyping frameworks, deployment support, Wiz-based security assessments, and usage incentives aimed at speeding up adoption.

The company will embed forward-deployed engineers alongside firms such as Accenture, Capgemini, Cognizant, Deloitte, Devoteam, HCLTech, and TCS. These teams are expected to work directly on customer deployments and help address technical challenges tied to large-scale AI rollouts.

In parallel, several AI-focused service providers are preparing to launch dedicated Gemini Enterprise practices. Among them is Sydney-based Quantium, which will receive sandbox credits, technical training, and referral support to build and test AI-driven solutions.

Select consulting partners, including Accenture, Bain & Company, BCG, Deloitte, and McKinsey, will also receive early access to Gemini models. Their input is expected to play a role in refining these systems before wider release.

The investment further supports the rollout of enterprise-ready agents within Gemini Enterprise. These agents, built on the Gemini Enterprise Agent Platform, are designed to meet governance and security requirements while allowing businesses to deploy pre-validated solutions. Offerings from companies such as Adobe, Atlassian, Oracle, Palo Alto Networks, Salesforce, ServiceNow, and Workday are expected to be part of this ecosystem.

Speaking ahead of the event, Google Cloud CEO Thomas Kurian said the announcements build on more than a year of internal work to prepare an “agent-ready” technology stack.

“The theme for Next is that the evolution of AI models is now changing what people are trying to do with them,” Kurian said. “In the past, models were primarily used to answer questions… Increasingly, however, models are now being asked to do tasks… All of that has been called the shift towards agents.”

Chip development plans signal deeper AI push

Alongside its partner-focused initiatives, Google is also advancing its in-house hardware strategy as competition in AI infrastructure intensifies.

The company is reportedly in discussions with Marvell Technology to develop two new chips aimed at improving how AI models are executed. One of the proposed designs is a memory-focused processor intended to complement Google’s tensor processing units, while the other is a next-generation TPU optimized for AI workloads.

The effort forms part of Google’s plan to position its custom chips as an alternative to widely used GPUs from Nvidia. Growing adoption of TPUs has already contributed to Google Cloud’s revenue, as the company works to demonstrate returns on its AI infrastructure investments.

According to the reports, Google expects to complete the design of the memory-centric chip by next year before moving into test production. The company has also been expanding partnerships with semiconductor firms such as Intel and Broadcom to support the rising demand for AI compute capacity.
