ASUS AI POD built on the NVIDIA GB300 NVL72 platform and the latest XA NB3I-E12 AI servers accelerated by the NVIDIA HGX B300 system now shipping for enterprise AI
TAIPEI, Oct. 14, 2025 /PRNewswire/ — ASUS today announced its participation in the 2025 OCP Global Summit, being held from October 13–16 at the San Jose Convention Center, booth #C15. At the event, ASUS unveiled its XA NB3I-E12 series AI servers, based on the NVIDIA® HGX B300 system and integrated with NVIDIA ConnectX-8 InfiniBand SuperNICs, five PCIe® expansion slots, 32 DIMM slots, and 10 NVMe drive bays. Designed for enterprises and cloud service providers (CSPs) managing intensive AI workloads, these servers deliver outstanding performance and stability, unlocking the full potential of AI.
Starting this September, the ASUS AI POD built on NVIDIA GB300 NVL72 and the XA NB3I-E12 servers based on NVIDIA HGX B300 have begun shipping, giving enterprises and cloud service providers early access to cutting-edge AI performance and reliability.
Driving AI transformation with ASUS AI Factory
ASUS is also showcasing the ASUS AI Factory built on NVIDIA Blackwell architecture. Featured products include ASUS AI POD built on the NVIDIA GB300 NVL72 platform, and the XA NB3I-E12 servers accelerated by the NVIDIA HGX B300 system. These solutions serve as foundational building blocks for enterprise AI factories.
The ASUS AI Factory is a comprehensive, end-to-end approach that integrates cutting-edge hardware, optimized software platforms, and professional services to accelerate enterprise AI adoption. It enables organizations to deploy AI workloads from edge devices to large-scale AI supercomputing environments, supporting diverse applications such as generative AI, natural language processing, and predictive analytics. By combining ASUS servers, rack-scale ASUS AI PODs, and high-serviceability designs, the AI Factory reduces deployment complexity, improves operational efficiency, and maximizes computing resources.
All these products are on display on-site, giving attendees a firsthand look at how the AI Factory empowers enterprises and cloud service providers to innovate faster, scale reliably, and unlock the full potential of AI across industries, from manufacturing automation to smart city initiatives. This holistic ecosystem ensures seamless integration, flexible deployment, and the scalability required for the rapidly evolving AI landscape.
Furthermore, as part of its powerful AI-inference solutions, the ASUS Ascent GX10, a compact personal AI supercomputer accelerated by the NVIDIA GB10 Grace Blackwell Superchip, will be available from October 15. Delivering up to 1 petaFLOP of AI performance for demanding workloads and equipped with an NVIDIA Blackwell GPU, a 20-core Arm CPU, and 128GB of memory, the GX10 supports AI models of up to 200 billion parameters, bringing petaflop-scale inferencing directly to developers’ desktops.
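One way to read the 200-billion-parameter figure against the 128GB of memory is as a rough weight-storage budget. A minimal back-of-envelope sketch, assuming 4-bit (FP4) quantization of the model weights (an illustrative assumption on our part, not a stated product spec):

```python
# Back-of-envelope check (illustrative assumption, not an official spec):
# at FP4 precision each parameter occupies 4 bits (0.5 bytes), so the
# weights of a 200-billion-parameter model fit within 128 GB of memory.
params = 200e9            # 200 billion parameters
bytes_per_param = 0.5     # FP4: 4 bits = 0.5 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"Approx. weight footprint: {weights_gb:.0f} GB")  # → 100 GB
assert weights_gb < 128   # leaves headroom for activations and KV cache
```

At higher precisions (e.g., 8- or 16-bit weights) the same model would exceed the available memory, which is why low-precision inference formats are central to desktop-scale deployments like this.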
Optimizing AI workloads with AMD EPYC 9005 processors
In addition, ASUS is showcasing server solutions powered by AMD EPYC™ 9005 processors, offering high performance and density for AI-driven, mission-critical data center workloads. The ASUS ESC8000A-E13X accelerates generative AI and LLM applications and is fully compatible with the NVIDIA RTX PRO 6000 Blackwell Server Edition, while an embedded NVIDIA ConnectX-8 SuperNIC supports 400G InfiniBand/Ethernet per QSFP port for ultra-low latency and high-bandwidth connectivity, enabling unmatched scale-out performance with NVIDIA Quantum InfiniBand or Spectrum-X Ethernet networking platforms.
The RS520QA-E13 series servers are high-performance multi-node systems optimized for HPC, EDA, and cloud computing, supporting up to 20 DIMM slots per node with advanced CXL memory expansion, PCIe 5.0, and OCP 3.0 to maximize efficiency for demanding workloads.
Join the ASUS 2025 OCP Global Summit session
Don’t miss the 15-minute ASUS session, Infrastructure for Every Scale—from Edge to Trillion-Token AI, on the Expo Hall Stage on October 15 from 16:25 to 16:40. During this presentation we will share how ASUS helps customers build future-ready AI data centers. Learn how our servers, rack-scale ASUS AI PODs with NVIDIA GB200/GB300 NVL72, and high-serviceability designs address diverse AI workloads and deployment challenges.