
KDDI, HPE partner to launch advanced AI data center in Japan


KDDI said the new Osaka Sakai data center will be powered by a rack-scale system featuring the Nvidia GB200 NVL72 platform

In sum, what to know:

New AI hub in Osaka by 2026 – KDDI and HPE are building an advanced AI data center powered by Nvidia Blackwell-based infrastructure and liquid cooling to serve Japanese and global AI markets.

Focus on performance and sustainability – HPE’s rack-scale system brings energy-efficient high-performance computing, combining Nvidia hardware and advanced cooling to reduce environmental impact.

AI services for startups and enterprises – KDDI plans to deliver cloud-based AI compute via its WAKONX platform, enabling customers to build LLMs and scale AI applications with low latency.

Japanese operator KDDI Corporation and Hewlett Packard Enterprise (HPE) announced a strategic collaboration aimed at launching a next-generation AI data center in Sakai City, Osaka Prefecture, with operations scheduled to begin in early 2026.

In a release, the Japanese company noted that the new AI data center will support startups, enterprises and research institutions in developing AI-powered applications and training large language models (LLMs), leveraging Nvidia’s Blackwell architecture and HPE’s infrastructure and cooling expertise.

The company noted that the new Osaka Sakai data center will be powered by a rack-scale system featuring the Nvidia GB200 NVL72 platform, developed and integrated by HPE. The system is optimized for high-performance computing and incorporates advanced direct liquid cooling to significantly reduce its environmental footprint, KDDI said.

As AI workloads grow in scale and complexity, demand for low-latency inferencing and energy-efficient infrastructure is increasing. KDDI’s new AI data center in Osaka aims to meet this challenge by offering cloud-based AI compute services via its WAKONX platform, which is designed for Japan’s AI-driven digital economy.

The Nvidia GB200 NVL72 by HPE is a rack-scale system designed to enable large and complex AI clusters that are optimized for energy efficiency and performance through advanced direct liquid cooling.

Equipped with Nvidia-accelerated networking, including Nvidia Quantum-2 InfiniBand, Nvidia Spectrum-X Ethernet and Nvidia BlueField-3 DPUs, the system delivers high-performance network connectivity for diverse AI workloads. Customers will also be able to run the Nvidia AI Enterprise platform on the KDDI infrastructure to accelerate development and deployment, the company said.

Antonio Neri, president and CEO of HPE, said: “Our collaboration with KDDI marks a pivotal milestone in supporting Japan’s AI innovation, delivering powerful computing capabilities that will enable smarter solutions.”

Looking ahead, the two companies will continue to strengthen their collaboration to advance AI infrastructure and deliver innovative services while enhancing energy efficiency.

HPE and Nvidia recently unveiled a suite of new AI factory offerings aimed at accelerating enterprise adoption of artificial intelligence across industries.

The expanded portfolio, announced at HPE Discover 2025 in Las Vegas, introduces a range of modular infrastructure and turnkey platforms, including HPE’s new AI-ready RTX PRO Servers and the next generation of the company’s AI platform, HPE Private Cloud AI. These offerings are designed to provide enterprises with the building blocks to develop, deploy and scale generative, agentic and industrial AI workloads.

Branded as Nvidia AI Computing by HPE, the integrated suite combines the chipmaker’s latest technologies, including Blackwell accelerated computing, Spectrum-X Ethernet and BlueField-3 networking, with HPE’s server, storage, software and services ecosystem.

The key component of the launch is the revamped HPE Private Cloud AI, co-developed with the chipmaker and fully validated under the Nvidia Enterprise AI Factory framework. The platform delivers a full-stack solution for enterprises seeking to harness the power of generative and agentic AI.
