Driven by rising consumer demand for ingredient transparency and new INCI compliance requirements across Europe and ...
General Compute today announced its inference cloud platform built for AI agents, working with early partners now ahead ...
Google has raised the stakes in the contest to develop the world’s fastest and most efficient artificial-intelligence chips.
Google has announced new Tensor processors for its data centers, claiming they help reduce energy usage significantly.
Google LLC introduced two new custom silicon chips for artificial intelligence today at Google Cloud Next 2026, unveiling ...
Google announced the first versions of its AI chips that are specialized for training and inference, expanding its competition with Nvidia. The search giant unveiled the new versions of its tensor ...
Google has unveiled the eighth generation of its Tensor Processing Units (TPUs), consisting of two chips dedicated to AI ...
As demand for open-source AI infrastructure grows, Novita AI is establishing itself as the inference provider for developers and engineering teams that need fast and affordable inference for ...
CoreWeave and Google Cloud are collaborating to enable customers to run AI workloads across both cloud infrastructures. To ...
The Chosun Ilbo on MSN
Google unveils 8th-gen TPUs for training, inference to challenge NVIDIA
Google has unveiled its new in-house artificial intelligence (AI) chip, the "8th-Generation Tensor Processing Unit (TPU)." ...
Eliminating infrastructure overhead of legacy designs, I/ONX debuts a scaled AI inference and fine-tuning stack that cuts power by up to 30kW per rack and reduces cost of rack-scale deployments by ...