Showcasing the complete portfolio built on NVIDIA Blackwell architecture for scalable supercomputing and AI enablement
ST. LOUIS, Nov. 18, 2025 /PRNewswire/ — ASUS announced a showcase of its comprehensive AI infrastructure portfolio accelerated by NVIDIA® Blackwell and Blackwell Ultra architectures at Supercomputing 2025 (SC25). Driven by its ‘Ubiquitous AI. Incredible Possibilities.’ strategy, ASUS is going all in on AI infrastructure, aiming to accelerate clients’ time to market with superior computing performance. ASUS stands as a total infrastructure solution provider, offering robust, scalable cloud and on-premises solutions for diverse AI workloads. This is achieved through the seamless integration of the latest NVIDIA compute platforms with advanced cooling, network orchestration, and large-scale deployment capabilities. By providing a full spectrum of AI solutions, from personal workstations to national supercomputing systems, ASUS is committed to democratizing access to powerful AI technologies for everyone.
Rack-scale AI infrastructure with NVIDIA GB300 NVL72
At the top of the portfolio, the XA GB721-E2, an ASUS AI POD built on NVIDIA GB300 NVL72, embodies a field-proven rack-scale AI architecture accelerated by 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs. Combining a 100% liquid-cooling system with integrated switch trays, the system delivers deployment-ready scalability and energy-efficient performance, offering 10-petaflops-class computing power for enterprise AI and national-cloud workloads.
Building on this strong foundation, the ESC NM2N721-E1, accelerated by NVIDIA GB200 NVL72, marks a major step forward in the sovereign-AI domain, underscoring ASUS’s capability to support national-scale AI platforms through fully integrated storage and compute architectures. ASUS is also thrilled to announce the successful deployment of the ESC NM2N721-E1 built on the NVIDIA GB200 NVL72 platform, reaffirming its strength in delivering NVIDIA Blackwell architecture for large-scale, industry-backed AI initiatives.
All-new ASUS ESC8000A-E13X system with NVIDIA RTX PRO Server supporting NVIDIA ConnectX-8 SuperNIC
Making its debut at SC25, the ASUS ESC8000A-E13X is a 4U NVIDIA RTX PRO™ Server based on the NVIDIA MGX architecture, built to accelerate enterprise AI and industrial HPC workloads. Powered by two AMD EPYC™ 9005 series processors and accelerated by eight NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, it delivers exceptional compute density and efficiency for large-model training and inference. Integrated with the NVIDIA ConnectX-8 SuperNIC, the system offers ultra-fast 400G NVIDIA InfiniBand/Ethernet connectivity per QSFP port and supports up to eight NVMe drives for high-speed storage. Its optimized 4U air-cooled design ensures sustained performance and reliability, even under the most demanding AI workloads.
From data center to personal AI development
ASUS also showcases a comprehensive lineup accelerated by NVIDIA technologies that spans every scale of AI computing — from large-scale data centers to personal systems.
At the core, the XA NB3I-E12, based on the NVIDIA HGX™ B300 system, delivers breakthrough performance and air-cooled efficiency for high-density AI training and HPC workloads. Designed as a foundation for AI Factory deployment, it supports large-model training and multi-GPU scale-up through NVIDIA NVLink, and enables organizations to accelerate AI development pipelines from simulation and model training to full production. Complementing it, the ESC NB8-E11, based on the NVIDIA HGX B200 system, brings enterprise-grade flexibility to inference and cluster-level AI applications, providing the compute density and scalability required for AI Factory edge and production environments.
Extending the ecosystem to creators and developers, the ExpertCenter Pro ET900N G3 supercomputer, accelerated by the NVIDIA Grace Blackwell Ultra Superchip, brings powerful AI computing to professional creators. Meanwhile, the compact ASUS Ascent GX10 personal AI supercomputer, accelerated by the NVIDIA GB10 Grace Blackwell Superchip, enables next-generation AI exploration in a desktop form factor.
ASUS Professional Services: Uniting deployment and excellence
ASUS maintains a strong ecosystem of partners that supports every stage of the data center lifecycle — from design and validation to deployment and management. At SC25, ASUS and Vertiv are partnering to accelerate rack-scale AI for the NVIDIA GB300 NVL72 platform. Vertiv augments ASUS platforms with the industry’s most complete power and cooling portfolio, including advanced coolant distribution unit (CDU) solutions. With Vertiv as a first mover offering validated grid-to-chip reference architectures for the NVIDIA GB300 NVL72 platform, 800 VDC power architecture, and the gigawatt-scale NVIDIA Omniverse DSX Blueprint, the collaboration de-risks thermal and power integration. This partnership improves energy efficiency, enables industrial-scale deployment, and accelerates time-to-first-token for AI factories.
By combining NVIDIA Blackwell and Blackwell Ultra architectures, advanced cooling, storage, networking, and world-class hardware-software integration, ASUS continues to advance AI and supercomputing infrastructure that empowers innovation across enterprise, research, and sovereign applications. Through ASUS Professional Services, customers gain access to expert engineering, validation, and deployment support, while the ASUS Infrastructure Deployment Center (AIDC) and ASUS Control Center (ACC) streamline rollout, monitoring, and lifecycle management across multi-node and multi-rack clusters, ensuring reliability, scalability, and rapid time-to-value for every AI deployment. Meet us at SC25 Booth #3732 to experience the future of AI infrastructure!