In a strategic move to address the rising demand for advanced AI infrastructure, GMI Cloud, a Silicon Valley-based GPU cloud provider, has raised $82 million in Series A funding. Led by Headline Asia and supported by notable partners including Banpu Next and Wistron Corporation, this round brings GMI's total capital to over $93 million. The funds will enable GMI Cloud to open a new data center in Colorado, enhancing its capacity to serve North America and solidifying its position as a leading AI-native cloud provider.
Founded to democratize access to advanced AI infrastructure, GMI Cloud's mission is to simplify AI deployment worldwide. The company offers a vertically integrated platform combining top-tier hardware with robust software solutions, ensuring businesses can build, deploy, and scale AI with efficiency and ease.
A High-Performance, AI-Ready Cloud Platform
GMI Cloud's platform provides a complete ecosystem for AI projects, integrating advanced GPU infrastructure, a proprietary resource orchestration system, and tools to manage and deploy models. This comprehensive solution eliminates many traditional infrastructure challenges:
- GPU Instances: With rapid access to NVIDIA GPUs, GMI allows users to deploy GPU resources instantly. Options include on-demand or private cloud instances, accommodating everything from small projects to enterprise-level ML workloads.
- Cluster Engine: Powered by Kubernetes, this proprietary software enables seamless management and optimization of GPU resources. It offers multi-cluster capabilities for flexible scaling, ensuring projects can adapt to evolving AI demands.
- Application Platform: Designed for AI development, the platform provides a customizable environment that integrates with APIs, SDKs, and Jupyter notebooks, offering high-performance support for model training, inference, and customization.
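To make the Kubernetes-based approach concrete: on any Kubernetes cluster with NVIDIA's device plugin installed, GPU workloads are scheduled through standard resource requests. The following minimal pod spec is an illustrative sketch, not GMI Cloud documentation; the image, entrypoint, and pod name are assumptions.

```yaml
# Illustrative only: image, command, and names are hypothetical examples.
# Assumes the NVIDIA device plugin is running on the cluster, which is what
# exposes the standard "nvidia.com/gpu" resource to the scheduler.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3   # example NGC container image
      command: ["python", "train.py"]           # hypothetical training script
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU via the device-plugin resource
```

The `nvidia.com/gpu` resource name is the standard interface provided by NVIDIA's Kubernetes device plugin; a managed multi-cluster layer like the one described above would sit on top of scheduling primitives of this kind.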
Expanding Global Reach with a Colorado Data Center
GMI Cloud's Colorado data center represents a critical step in its expansion, providing low-latency, high-availability infrastructure to meet the growing demands of North American clients. This new hub complements GMI's existing global data centers, which have established a strong presence in Taiwan and other key regions, allowing for rapid deployment across markets.
Powering AI with NVIDIA Technology
GMI Cloud, a member of the NVIDIA Partner Network, integrates NVIDIA's cutting-edge GPUs, including the NVIDIA H100. This collaboration ensures clients have access to powerful computing capabilities tailored to handle complex AI and ML workloads, maximizing performance and security for high-demand applications.
The NVIDIA H100 Tensor Core GPU, built on the NVIDIA Hopper architecture, delivers top-tier performance, scalability, and security across diverse workloads. It is optimized for AI applications, accelerating large language models (LLMs) by up to 30 times. Additionally, the H100 features a dedicated Transformer Engine, specifically designed to handle trillion-parameter models, making it ideal for conversational AI and other intensive machine learning tasks.
Building for an AGI Future
With an eye on the future, GMI Cloud is establishing itself as a foundational platform for Artificial General Intelligence (AGI). By providing early access to advanced GPUs and seamless orchestration tools, GMI Cloud empowers businesses of all sizes to deploy scalable AI solutions quickly. This focus on accessibility and innovation is central to GMI's mission of supporting a rapidly evolving AI landscape, ensuring that businesses worldwide can adopt and scale AI technology efficiently.
Backed by a team with deep expertise in AI, machine learning, and cloud infrastructure, GMI Cloud is creating an accessible pathway for companies looking to leverage AI for transformative growth. With its robust infrastructure, strategic partnerships, and commitment to driving AI innovation, GMI Cloud is well-positioned to shape the future of AI infrastructure on a global scale.