Block to deploy the NVIDIA DGX SuperPOD and Cerebras expands AI inference data centers
In this regular update, RCR Wireless News highlights the top news and developments impacting the booming AI infrastructure sector.
ASUS showcases AI solutions at CloudFest 2025
ASUS unveiled a comprehensive lineup of AI infrastructure solutions at CloudFest 2025, integrating Intel Xeon 6 processors, NVIDIA GPUs and AMD EPYC chips. The company launched RS700-E12, RS720Q-E12 and ESC8000-E12P-series servers, optimized for scalable AI training and inference. ASUS also debuted the Intel Gaudi 3 AI accelerator PCIe card, designed for efficient generative AI inferencing.
Block deploys NVIDIA DGX SuperPOD for open-source AI
Block said it will be the first North American company to deploy the NVIDIA DGX SuperPOD with DGX GB200 systems, hosted at an Equinix AI-ready data center. The high-performance infrastructure will be dedicated to open-source AI model research and training, focusing on generative AI innovations in underexplored fields. Block's AI research team said it aims to push AI boundaries while maintaining a commitment to open-source development. The deployment will leverage Lambda 1-Click Clusters, offering rapid access to interconnected NVIDIA GPUs for efficient large-scale AI experimentation and innovation, the firm added.
Cerebras expands AI inference data centers
Cerebras Systems announced the launch of six new AI inference data centers across North America and Europe, equipped with thousands of Cerebras CS-3 systems. These facilities will deliver over 40 million Llama 70B tokens per second, making Cerebras the largest provider of high-speed AI inference globally, according to the firm. New locations include Minneapolis, Oklahoma City and Montreal, with additional sites in the Midwest, on the East Coast and in Europe slated for the final quarter of 2025. The Oklahoma City facility will feature 300+ CS-3 systems in a Tier 3+ data center, optimized with custom water-cooling solutions for high-efficiency AI processing, ensuring global access to sovereign AI infrastructure, the firm added.
What is a GPU cluster?
In another article, RCR Wireless News defines a GPU cluster and its role in AI infrastructure. A GPU cluster consists of interconnected computing nodes, each equipped with GPUs, CPUs, memory and storage. These nodes communicate via high-speed networking, enabling efficient data distribution and processing. GPU clusters are at the core of modern AI infrastructure, providing the computational power necessary for deep learning, NLP and advanced AI-driven applications. As AI continues to redefine industries, organizations that invest in scalable, efficient GPU clusters will be well-positioned to capitalize on the full potential of artificial intelligence.
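To make the node/cluster relationship concrete, here is a minimal, purely illustrative Python sketch. The `Node` and `GPUCluster` classes and all the hardware figures are assumptions for illustration, not any vendor's API or a real cluster configuration:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One computing node: GPUs plus supporting CPU, memory and storage."""
    hostname: str
    gpus: int          # number of GPUs in this node
    gpu_mem_gb: int    # memory per GPU, in GB
    cpus: int          # CPU cores in this node

@dataclass
class GPUCluster:
    """A cluster is a set of interconnected nodes; capacity is aggregated."""
    nodes: list

    def total_gpus(self) -> int:
        return sum(n.gpus for n in self.nodes)

    def total_gpu_memory_gb(self) -> int:
        return sum(n.gpus * n.gpu_mem_gb for n in self.nodes)

# Hypothetical example: four 8-GPU nodes with 80 GB per GPU
cluster = GPUCluster(
    [Node(f"node{i}", gpus=8, gpu_mem_gb=80, cpus=96) for i in range(4)]
)
print(cluster.total_gpus())           # 32
print(cluster.total_gpu_memory_gb())  # 2560
```

The aggregate numbers are the point: training jobs are scheduled against the cluster's pooled GPU count and memory, while the high-speed interconnect (omitted here) is what lets the nodes behave as one machine.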
Why these announcements matter
These developments highlight the rapid expansion of AI infrastructure, driven by growing enterprise demand for scalable, high-performance AI solutions. With increasing AI complexity, companies are investing in specialized AI hardware, liquid-cooled data centers and hyperscale computing to accelerate AI research, optimize model training and drive next-generation innovation.
