NVIDIA Quantum-2 QM9701 - 64-Port NDR 400Gb/s InfiniBand DGX Switch, 1U, C2P Airflow
NVIDIA Quantum-2 QM9701 is a high-performance 1U NDR 400Gb/s InfiniBand switch purpose-built for AI supercomputing, large-scale HPC clusters, and next-generation data-centre fabrics. Powered by the Quantum-2 ASIC, it delivers 64 NDR 400Gb/s InfiniBand ports through 32 OSFP cages, enabling a massive 25.6 Tb/s switching capacity and more than 66.5 billion packets per second of packet processing. This performance makes the QM9701 an ideal backbone for GPU-dense infrastructures, distributed training environments, and systems where ultra-low latency and deterministic throughput are mission-critical.
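The headline figures above are internally consistent, which is worth a quick sanity check: the aggregate capacity follows directly from the port count and per-port rate, and the OSFP cage count from the dual-port cage design. A back-of-the-envelope sketch:

```python
# Sanity-check the QM9701's headline numbers from first principles.
PORTS = 64                # NDR InfiniBand ports
PORT_RATE_GBPS = 400      # Gb/s per NDR port
OSFP_CAGES = 32           # each OSFP cage carries two NDR ports

aggregate_tbps = PORTS * PORT_RATE_GBPS / 1000   # total switching capacity
ports_per_cage = PORTS / OSFP_CAGES              # NDR ports per OSFP cage

print(f"Aggregate capacity: {aggregate_tbps} Tb/s")   # 25.6 Tb/s
print(f"Ports per OSFP cage: {ports_per_cage:.0f}")   # 2
```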
Designed as a fully managed platform, the QM9701 includes an onboard management CPU, hot-swappable cooling modules, and a built-in Subnet Manager capable of orchestrating InfiniBand deployments up to 2,000 nodes. With a compact 1U form factor, advanced congestion control, and NDR-ready fabrics, the switch ensures efficient, lossless communication across AI workloads, storage clusters, and multi-rack HPC systems. Its robust design and predictable behaviour allow seamless scaling of modern AI fabrics and high-performance compute infrastructures.
KEY FEATURES:
- 64 × NDR 400Gb/s InfiniBand ports for high-density AI and HPC fabrics
- Quantum-2 ASIC delivering 25.6 Tb/s switching capacity and 66.5 Bpps processing
- 32 × OSFP cages supporting dual-port NDR optics for compact deployment
- Ultra-low latency forwarding tuned for GPU-to-GPU communication
- Integrated management CPU with onboard Subnet Manager (up to 2,000 nodes)
- Flexible speed support: NDR 400G, NDR200 200G, HDR, EDR backward compatibility
- Advanced congestion control with adaptive routing and lossless packet handling
- SHARPv3 in-network acceleration to speed up collective AI and HPC operations
- 1U high-density form factor designed for scalable data-centre racks
- Redundant cooling and hot-swappable fans for continuous, enterprise-grade reliability
TECHNICAL SPECIFICATION:
HARDWARE:
| SPECIFICATION | DETAILS |
| --- | --- |
| Model | NVIDIA Quantum-2 QM9701 |
| Chassis Form Factor | 1U high-density InfiniBand switch |
| Total Ports | 64 × NDR 400Gb/s InfiniBand ports |
| Cage Type | 32 × OSFP cages (each supporting 2 NDR ports) |
| ASIC | NVIDIA Quantum-2 switch ASIC |
| Switching Capacity | 25.6 Tb/s bidirectional |
| Packet Forwarding Rate | 66.5 billion packets per second (Bpps) |
| Supported Link Speeds | NDR 400G, NDR200 200G, HDR/EDR backward compatibility |
| Management Processor | Integrated CPU for system control & Subnet Manager |
| Cooling | Hot-swappable fan modules |
| Airflow | C2P (back-to-front) airflow |
| Power Input | DC busbar input (48V DC) |
| LED Indicators | Port status, system health, fan & power indicators |
| Mounting Options | Tool-less rail kit included |
| System OS | MLNX-OS (InfiniBand switch operating system) |
PERFORMANCE:
| SPECIFICATION | DETAILS |
| --- | --- |
| Switching Capacity | 25.6 Tb/s bidirectional switching fabric |
| Forwarding Rate | 66.5 billion packets per second (Bpps) |
| Port Bandwidth | 400Gb/s per NDR InfiniBand port |
| Latency | Ultra-low, deterministic InfiniBand-class forwarding |
| Supported Speeds | NDR (400G), NDR200 (200G), HDR, EDR compatible |
| Buffer Architecture | Optimised deep buffers for lossless transport |
| Adaptive Routing | Dynamic congestion-aware routing across paths |
| Traffic Types Supported | AI, HPC, storage, RDMA, collective operations |
| In-Network Acceleration | SHARPv3-accelerated reductions/collectives |
| Maximum Fabric Size | Up to 2,000 nodes with built-in Subnet Manager |
| Packet Types | Native InfiniBand packet processing, RDMA support |
| Performance Mode | Line-rate, non-blocking architecture |
SECURITY:
| SPECIFICATION | DETAILS |
| --- | --- |
| Secure Boot Support | Ensures only signed and trusted firmware images can run on the system |
| Firmware Integrity Protection | Cryptographic validation of system software to prevent tampering |
| Management Access Security | Role-based access control (RBAC), protected CLI/management interfaces |
| Encrypted Management Channels | Secure SSH-based remote access for administrative operations |
| SM (Subnet Manager) Protection | Controlled access to fabric management functions to prevent unauthorised changes |
| Control Plane Protection | Hardware-enforced isolation of management traffic from data-plane operations |
| System Logging & Alerts | Event logging, error reporting, and real-time system health monitoring |
| Physical Security | Secured chassis, tamper-resistant design, and monitored power/fan modules |
| Configuration Safeguards | Secure, persistent storage for system configuration and OS parameters |
| Network Isolation Capabilities | Partitioning support within the InfiniBand fabric for traffic segmentation |
MANAGEMENT:
| SPECIFICATION | DETAILS |
| --- | --- |
| Operating System | MLNX-OS (InfiniBand switch operating system) |
| Management Processor | Integrated onboard CPU handling all system control functions |
| Console Access | Standard console port for local management and configuration |
| Remote Management | Secure SSH-based management for remote administration |
| Subnet Manager (SM) | Built-in SM supporting fabric sizes up to 2,000 nodes |
| Monitoring & Diagnostics | Real-time system health, event logs, port status, diagnostics tools |
| Firmware & Software Updates | Secure update mechanism with image verification |
| Telemetry Support | Fabric counters, link metrics, and performance monitoring |
| System Logging | Event logging with export to external syslog servers |
| Configuration Management | Persistent configuration storage with backup/restore options |
| Automation Support | Scriptable CLI to integrate with provisioning workflows |
PHYSICAL & ENVIRONMENTAL:
| SPECIFICATION | DETAILS |
| --- | --- |
| Form Factor | 1U high-density InfiniBand switch chassis |
| Dimensions (H × W × D) | 1.7 in × 17.2 in × 33.62 in (1U, standard 19-inch rack width) |
| Weight | 16.88 kg |
| Cooling System | Hot-swappable fan modules for continuous operation |
| Airflow Direction | C2P (back-to-front) airflow |
| Power Input | 48V DC busbar power input (telco-grade DC feed) |
| Power Consumption | Varies with configuration and optics; designed for high-efficiency NDR environments |
| Operating Temperature | 0°C to 45°C |
| Storage Temperature | –40°C to 70°C |
| Relative Humidity | 10% to 90% non-condensing |
| Acoustic Noise | Dependent on fan speed; optimised for data-centre racks |
| Mounting Options | Includes tool-less rail kit for standard 19-inch racks |
| Compliance & Certifications | Meets data-centre safety, EMC, and environmental standards |
ADVANCED FEATURES:
Adaptive Routing for Congestion-Free Fabric:
The QM9701 incorporates an advanced adaptive routing engine that continuously monitors real-time link utilisation, congestion levels, and flow distribution across the entire InfiniBand fabric. By dynamically redirecting traffic onto the least congested paths, it prevents hotspot formation and ensures consistent, low-latency performance even during peak AI training loads, collective communication bursts, and large-scale HPC simulations. This intelligent routing capability is essential for maintaining stable throughput and maximising GPU utilisation in multi-node, multi-rack compute environments.
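Conceptually, congestion-aware path selection chooses among the valid output ports for a destination based on current load. The real Quantum-2 adaptive routing runs in switch hardware; the sketch below is only a toy model of the idea, with illustrative names:

```python
# Toy model of congestion-aware adaptive routing: among the candidate
# output ports that reach a destination, forward on the one with the
# lowest current queue occupancy. Names and data are illustrative only;
# the actual mechanism is implemented in the Quantum-2 ASIC.
def pick_port(candidates: list[int], queue_depth: dict[int, int]) -> int:
    return min(candidates, key=lambda p: queue_depth[p])

# Three equal-cost ports toward the destination, with differing load:
queue_depth = {0: 12, 1: 3, 2: 7}
print(pick_port([0, 1, 2], queue_depth))  # port 1, the least congested
```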
SHARPv3 In-Network Acceleration:
NVIDIA’s SHARPv3 technology significantly upgrades traditional fabric performance by executing collective operations such as all-reduce, all-gather, reduce-scatter, and broadcast directly within the switch hardware instead of on the GPU or CPU. By offloading these synchronisation-heavy tasks to the QM9701’s fabric engine, large distributed training jobs achieve dramatically lower latency and improved scaling efficiency. This results in faster convergence times for massive LLMs, enhanced throughput for scientific computations, and reduced overhead in tightly coupled HPC applications that rely on frequent collective operations.
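The traffic saving from in-network reduction can be illustrated with a simplified cost model: a textbook ring all-reduce makes each host inject roughly 2(N−1)/N × S bytes for an S-byte buffer across N hosts, while a SHARP-style reduction lets each host send its buffer once and receive the reduced result back from the switch tree. This is an illustrative model with hypothetical function names, not NVIDIA's published performance data:

```python
# Bytes each host injects for an all-reduce of `payload` bytes across
# `n_hosts` hosts, under two simplified models.
def ring_allreduce_bytes_sent(n_hosts: int, payload: int) -> float:
    # Classic ring all-reduce: reduce-scatter plus all-gather,
    # costing 2*(N-1)/N * S bytes sent per host.
    return 2 * (n_hosts - 1) / n_hosts * payload

def in_network_bytes_sent(n_hosts: int, payload: int) -> float:
    # SHARP-style in-network reduction: each host sends its buffer once;
    # the switch tree reduces and multicasts the result back.
    return float(payload)

S = 1 << 30  # a 1 GiB gradient buffer
for n in (8, 64, 512):
    ring = ring_allreduce_bytes_sent(n, S)
    sharp = in_network_bytes_sent(n, S)
    print(f"N={n:4d}: ring {ring / 2**30:.2f} GiB/host, "
          f"in-network {sharp / 2**30:.2f} GiB/host")
```

For large N the ring cost approaches 2S per host, so in-network reduction roughly halves the injected bytes; the bigger practical win is removing the O(N) step count of the ring from the latency path.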
End-to-End NDR 400Gb/s Transport:
Supporting full-speed NDR 400Gb/s InfiniBand links, the QM9701 delivers exceptional bandwidth per port, enabling AI and HPC clusters to exchange massive datasets with minimal delay. The switch maintains complete backward compatibility with NDR200, HDR, and EDR optics and cables, giving organisations the flexibility to expand existing fabrics or build new NDR-ready environments without disruptive upgrades. This provides a future-proof networking foundation capable of supporting next-generation GPU systems and data-intensive workloads for years to come.
Lossless, Deterministic Performance:
Built on Quantum-2’s precision-engineered architecture, the QM9701 ensures fully lossless data transport through a combination of deep packet buffers, hardware-enforced flow control, and congestion-aware traffic scheduling. This deterministic behaviour eliminates packet drops, minimises jitter, and maintains stable latency across all workloads even during extreme burst traffic or large I/O operations. Such stability is essential for RDMA-dependent workloads, large-scale storage fabrics, and tightly synchronised AI pipelines that demand absolute consistency across hundreds or thousands of nodes.
Integrated Subnet Manager for Large Fabrics:
With a powerful onboard management processor, the QM9701 includes a built-in Subnet Manager capable of autonomously orchestrating InfiniBand fabrics of up to 2,000 nodes. This eliminates the need for external management servers and simplifies fabric bring-up, topology discovery, and routing configuration. The integrated SM ensures faster convergence, automatic recovery from link failures, and smooth scaling across multi-rack deployments, making it ideal for complex AI factories and HPC clusters requiring predictable fabric behaviour.
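The ~2,000-node figure is the onboard SM's stated capacity, and it lines up with what a 64-port radix reaches in a standard two-tier (leaf/spine) fat tree, where k-port switches support at most k²/2 hosts non-blocking. A quick sizing sketch, assuming the textbook topology rather than any specific NVIDIA reference design:

```python
# Two-tier fat-tree sizing with k-port switches: each leaf dedicates
# k/2 ports to hosts and k/2 uplinks to spines, and each spine connects
# once to every leaf, giving a non-blocking maximum of k^2/2 hosts.
def fat_tree_hosts(radix: int) -> int:
    return radix * radix // 2

def fat_tree_switches(radix: int) -> int:
    # k leaf switches plus k/2 spine switches.
    return radix + radix // 2

print(fat_tree_hosts(64))     # 2048 hosts from 64-port switches
print(fat_tree_switches(64))  # 96 switches in the full topology
```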
Advanced Telemetry and Real-Time Diagnostics:
The QM9701 provides deep fabric visibility through detailed counters, link performance analytics, congestion metrics, and instantaneous error reporting. Its real-time diagnostics allow administrators to identify microbursts, congestion points, and performance anomalies before they affect application-level performance. Combined with modern tools such as NVIDIA UFM and NEO, the switch supports automated issue detection and proactive optimisation, ensuring the entire network fabric remains healthy and responsive in large-scale production installations.
High Availability and Hot-Swappable Components:
Engineered for continuous operation, the QM9701 includes hot-swappable fans, redundant cooling paths, and a robust DC power design to maximise uptime in mission-critical environments. These high-availability features allow maintenance to be performed without service disruption, enabling 24×7 operation for large GPU clusters, parallel file systems, and HPC workloads. The system’s resilient thermal and power architecture ensures reliable performance even under sustained high-load conditions typical of modern AI training jobs.
Compact 1U NDR Platform:
Despite offering 64 full NDR 400Gb/s ports, the QM9701 is housed in an efficient 1U chassis optimised for density-constrained environments. This compact design enables massive compute expansion within standard 19-inch racks, maximising port density per RU and reducing overall data-centre footprint. Its streamlined thermal layout and OSFP-based port design make it ideal for high-density AI deployments, hyperscale HPC clusters, and enterprise data centres seeking the highest possible bandwidth in the smallest possible space.
ORDERING INFORMATION:
| MODEL NAME | DESCRIPTION |
| --- | --- |
| MQM9701-NS2R | NVIDIA Quantum-2 based NDR InfiniBand DGX switch featuring 64 NDR 400Gb/s InfiniBand ports, 32 OSFP cages, 48V DC busbar input, standard-depth 1U chassis, fully managed system, and C2P (back-to-front) airflow; includes the rail kit for easy rack installation. |
WARRANTY & SUPPORT:
Warranties are provided by the manufacturer and depend on the purchase date and the warranty validity period. At Grabnpay, we make sure the products we sell are delivered on time and in their original condition. Whether it's physical damage or an operational issue, our team will help you find a solution. We also help you install, configure, and manage your devices, and will guide you through every step of filing a warranty claim.
All listed prices reflect standard market rates. Final commercial terms, including volume-based discounts, will be provided based on purchase volume, MOQ, and other applicable criteria. For more information about switches & other accessories, contact our technical support team.