SINGAPORE, July 30, 2025 /PRNewswire/ — Super X AI Technology Limited (Nasdaq: SUPX) (“Company” or “SuperX”) today announced the launch of its latest flagship product, the SuperX XN9160-B200 AI Server. Powered by NVIDIA B200 GPUs built on the Blackwell architecture, this next-generation AI server is engineered to meet the rising demand for scalable, high-performance compute across AI training, machine learning (ML), and high-performance computing (HPC) workloads.
The XN9160-B200 AI Server is purpose-built to accelerate large-scale distributed AI training and AI inference workloads. It is optimized for intensive, GPU-accelerated tasks, particularly training and inference of foundation models using reinforcement learning (RL) and distillation techniques, multimodal model training and inference, and HPC applications such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling. Its performance rivals that of a traditional supercomputer, offering enterprise-grade capabilities in a compact form.
The launch of the SuperX XN9160-B200 AI Server marks a significant milestone in SuperX’s AI infrastructure roadmap, delivering powerful GPU instances and compute capabilities to accelerate global AI innovation.
XN9160-B200 AI Server
The all-new XN9160-B200 features 8 NVIDIA Blackwell B200 GPUs, fifth-generation NVLink technology, 1440 GB of high-bandwidth memory (HBM3E), and 6th Gen Intel® Xeon® processors, unleashing extreme AI compute performance within a 10U chassis.
Built for AI: Cutting-Edge Training Performance
At its core, the SuperX XN9160-B200 pairs 8 NVIDIA Blackwell B200 GPUs with fifth-generation NVLink technology, providing ultra-high inter-GPU bandwidth of up to 1.8 TB/s. This significantly accelerates large-scale AI model training, delivering up to a 3x speed improvement over the previous generation and drastically shortening the R&D cycle for tasks such as pre-training and fine-tuning trillion-parameter models. Inference performance takes an even larger leap: with 1440 GB of high-performance HBM3E memory running at FP8 precision, the system achieves a throughput of 58 tokens per second per card on the GPT-MoE 1.8T model, compared with 3.5 tokens per second on the previous-generation H100 platform, a performance increase of up to 15x.
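As a rough back-of-the-envelope illustration only, not a published benchmark, the quoted per-card figure can be projected to a full 8-GPU node under an idealized linear-scaling assumption:

    # Back-of-the-envelope illustration only, not a benchmark result.
    # Assumes the quoted 58 tokens/s per card scales linearly across all
    # 8 B200 GPUs in a single XN9160-B200 node.
    PER_CARD_TOKENS_PER_S = 58    # quoted FP8 throughput on GPT-MoE 1.8T
    GPUS_PER_NODE = 8

    node_tokens_per_s = PER_CARD_TOKENS_PER_S * GPUS_PER_NODE
    print(f"Idealized node throughput: {node_tokens_per_s} tokens/s")  # 464

Real-world throughput depends on batch size, sequence length, parallelism strategy, and interconnect utilization, so this idealized figure is an upper bound rather than a guaranteed result.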
The inclusion of 6th Gen Intel® Xeon® processors, in tandem with DDR5 memory running at 5600-8000 MT/s and all-flash NVMe storage, provides key platform support for the system. These components accelerate data pre-processing, ensure smooth operation in high-load virtualization environments, and improve the efficiency of complex parallel computing, enabling AI model training and inference tasks to complete stably and efficiently.
To ensure exceptional operational reliability, the XN9160-B200 uses an advanced multi-path power redundancy design. It is equipped with 1+1 redundant 12V power supplies and 4+4 redundant 54V GPU power supplies, mitigating the risk of single points of failure and allowing the system to keep running stably under unexpected conditions, providing uninterrupted power for critical AI missions.
The SuperX XN9160-B200 has a built-in AST2600 baseboard management controller (BMC) that supports convenient remote monitoring and management. Each server undergoes over 48 hours of full-load stress testing, cold- and hot-boot validation, and high/low-temperature aging screening, combined with multiple production quality control processes to ensure reliable delivery. We also provide a three-year warranty and professional technical support, offering a full-lifecycle service guarantee to help enterprises navigate the AI wave and lead the future.
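As an illustrative sketch only, BMCs in the AST2600 class commonly expose an industry-standard DMTF Redfish REST interface for this kind of remote monitoring. The BMC address, credentials, and exact firmware behavior below are placeholders and assumptions, not published SuperX interfaces:

    # Minimal sketch, assuming the AST2600 firmware exposes a standard
    # DMTF Redfish REST API (common for this controller family). The BMC
    # address and credentials below are placeholders, not vendor-confirmed.
    import requests

    BMC_URL = "https://192.0.2.10"        # hypothetical BMC address
    AUTH = ("admin", "change-me")         # placeholder credentials

    # The Systems collection is a standard Redfish entry point for host health.
    systems = requests.get(f"{BMC_URL}/redfish/v1/Systems",
                           auth=AUTH, verify=False, timeout=10)
    systems.raise_for_status()

    for member in systems.json().get("Members", []):
        node = requests.get(f"{BMC_URL}{member['@odata.id']}",
                            auth=AUTH, verify=False, timeout=10).json()
        print(node.get("Id"),
              node.get("PowerState"),
              node.get("Status", {}).get("Health"))

In practice the same interface can also be consumed by standard data-center management tools rather than custom scripts.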
Technical Specifications:
- CPU: 2× Intel® Xeon® 6710E processors (64 cores, 2.40 GHz, 205 W)
- GPU: 8× NVIDIA B200
- Memory: 32× 96 GB DDR5 5600 RDIMM
- System Disk: 1× 960 GB SSD
- Storage Disk: 3.84 TB NVMe U.2
- Network: 8× CX7 MCX75310 IB cards (400G OSFP); 1× BCM957608-P2200G (dual 200G QSFP56); 1× BCM957412A4120AC (dual 10G SFP+)
- Dimensions: 440 mm (H) × 448 mm (W) × 900 mm (D)
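As a minimal sketch of how the GPU portion of the configuration above might be sanity-checked after deployment, the snippet below assumes an NVIDIA driver and a CUDA-enabled PyTorch build are installed on the host; the software stack is an assumption and not part of the published specification:

    # Minimal sketch, assuming an NVIDIA driver and a CUDA-enabled PyTorch
    # build are installed (the software stack is an assumption, not part of
    # the specification above).
    import torch

    count = torch.cuda.device_count()
    print(f"Visible GPUs: {count}")        # a fully populated node reports 8

    total_gb = 0.0
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        total_gb += mem_gb
        print(f"GPU {i}: {props.name}, {mem_gb:.0f} GB")

    # Across 8x B200 this should total roughly the 1440 GB of HBM3E cited above.
    print(f"Aggregate GPU memory: {total_gb:.0f} GB")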
Market Positioning
The XN9160-B200 is designed for global enterprises and research institutions with demanding compute needs, especially:
- Large Tech Companies: For training and deploying foundation models and generative AI applications
- Academic & Research Institutions: For complex scientific simulations and modeling
- Finance & Insurance: For risk modeling and real-time analytics
- Pharmaceutical & Healthcare: For drug screening and bioinformatics
- Government & Meteorological Agencies: For climate modeling and disaster prediction
Purchase & Contact Information
For product inquiries, sales, and detailed specifications, please contact our product sales team at sales@superx.sg
About Super X AI Technology Limited (SUPX)
Super X AI Technology Limited is an AI infrastructure solutions provider, and through its wholly-owned subsidiaries in Singapore, SuperX Industries Pte. Ltd. and SuperX AI Pte. Ltd., offers a comprehensive portfolio of proprietary hardware, advanced software, and end-to-end services for AI data centers. The Company’s services include advanced solution design and planning, cost-effective infrastructure product integration, and end-to-end operations and maintenance. Its core products include high-performance AI servers, High-Voltage Direct Current (HVDC) solutions, high-density liquid cooling solutions, as well as AI cloud and AI agents. Headquartered in Singapore, the Company serves institutional clients globally, including enterprises, research institutions, and cloud and edge computing deployments. For more information, please visit www.superx.sg
Contact Information
Product Inquiries: sales@superx.sg
Investor Relations: ir@superx.sg
Follow our social media:
X.com: https://x.com/SUPERX_AI_
LinkedIn: https://www.linkedin.com/company/superx-ai
Safe Harbor Statement
This press release contains forward-looking statements. In addition, from time to time, we or our representatives may make forward-looking statements orally or in writing. We base these forward-looking statements on our expectations and projections about future events, which we derive from the information currently available to us. You can identify forward-looking statements by those that are not historical in nature, particularly those that use terminology such as “may,” “should,” “expects,” “anticipates,” “contemplates,” “estimates,” “believes,” “plans,” “projected,” “predicts,” “potential,” or “hopes” or the negative of these or similar terms. In evaluating these forward-looking statements, you should consider various factors, including: our ability to change the direction of the Company; our ability to keep pace with new technology and changing market needs; and the competitive environment of our business. These and other factors may cause our actual results to differ materially from any forward-looking statement.
Forward-looking statements are only predictions. The reader is cautioned not to rely on these forward-looking statements. The forward-looking events discussed in this press release and other statements made from time to time by us or our representatives may not occur, and actual events and results may differ materially and are subject to risks, uncertainties, and assumptions about us. We are not obligated to publicly update or revise any forward-looking statements, whether as a result of new information, future events, or otherwise. As a result of these risks, uncertainties, and assumptions, the forward-looking events discussed in this press release and other statements made from time to time by us or our representatives might not occur.