The GX4 inherits the pioneering design of the Inspur SR AI full-rack solution, decoupling CPU and GPU resources. A single GX4 box supports 4 GPUs and pairs with a dual-socket server as the head node, ensuring efficient cross-node GPU communication while reducing I/O redundancy and system purchase cost. It is well suited to deep learning model training, scientific computing, engineering computation and research, among other fields.
In 2U of space, the GX4 supports 4 GPUs; four GX4 platforms can be paired with any dual-socket server to compose a 16-GPU system, delivering highly efficient parallel processing capacity.
CPU/GPU Decoupling, Resource Pooling
The GPU topology can be adjusted flexibly to match different applications:
Balanced: suitable for public cloud services and small-scale model training
Common: appropriate for deep learning model training
Cascaded: appropriate for deep learning model training and P2P performance optimization
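The difference between these modes comes down to how the four GPUs hang off the PCIe switches. As an illustration only (the switch layouts below are assumptions for the sketch, not Inspur's published schematics), a short Python sketch can count P2P hops under each layout and show why the cascaded mode favors peer-to-peer traffic:

```python
from collections import deque

# Hypothetical PCIe layouts for the three GX4 topology modes.
# These adjacencies are illustrative assumptions, not vendor schematics.
TOPOLOGIES = {
    # Balanced: two switches on separate root ports, two GPUs each.
    "balanced": {"root": ["sw0", "sw1"],
                 "sw0": ["gpu0", "gpu1"], "sw1": ["gpu2", "gpu3"]},
    # Common: all four GPUs share a single switch.
    "common": {"root": ["sw0"],
               "sw0": ["gpu0", "gpu1", "gpu2", "gpu3"]},
    # Cascaded: the second switch hangs off the first, so GPU-to-GPU
    # traffic never has to climb up to the root complex.
    "cascaded": {"root": ["sw0"],
                 "sw0": ["gpu0", "gpu1", "sw1"], "sw1": ["gpu2", "gpu3"]},
}

def _undirected(adj):
    """Expand the parent->children adjacency into an undirected graph."""
    graph = {}
    for u, vs in adj.items():
        for v in vs:
            graph.setdefault(u, set()).add(v)
            graph.setdefault(v, set()).add(u)
    return graph

def hops(adj, a, b, exclude=()):
    """BFS hop count from a to b, skipping excluded nodes; None if cut off."""
    graph = _undirected(adj)
    seen, queue = {a, *exclude}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in graph.get(node, set()) - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    return None

def p2p_via_root(adj, a, b):
    """True if P2P traffic between a and b must traverse the root complex."""
    return hops(adj, a, b, exclude=("root",)) is None

for name, adj in TOPOLOGIES.items():
    print(f"{name}: gpu0<->gpu2 hops={hops(adj, 'gpu0', 'gpu2')}, "
          f"via root={p2p_via_root(adj, 'gpu0', 'gpu2')}")
```

Under these assumed layouts, only the balanced mode forces cross-pair P2P traffic up through the root complex; the common and cascaded modes keep it on the switches, which is consistent with cascaded mode being recommended for P2P optimization.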
Minimum Latency, Maximum Communication
In a 16-GPU system composed of four GX4 platforms, model training can exchange data without requiring a network protocol: the longest communication path crosses at most one QPI link, reducing latency by up to 50%.
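The one-QPI-hop bound can be checked with a small sketch. It assumes (this mapping is an assumption for illustration, not a documented cabling plan) that each of the four GX4 boxes is cabled to one of the head node's two CPU sockets, two boxes per socket, with GPUs numbered consecutively by box:

```python
# Assumed cabling: box index -> head-node CPU socket (two boxes per socket).
SOCKET_OF_BOX = {0: 0, 1: 0, 2: 1, 3: 1}

def qpi_crossings(gpu_a, gpu_b):
    """QPI links crossed when two GPUs communicate (4 GPUs per GX4 box)."""
    sock_a = SOCKET_OF_BOX[gpu_a // 4]
    sock_b = SOCKET_OF_BOX[gpu_b // 4]
    return 0 if sock_a == sock_b else 1

# Worst case over every GPU pair in the 16-GPU system.
worst = max(qpi_crossings(a, b) for a in range(16) for b in range(16))
print("worst-case QPI crossings:", worst)
```

Because the only inter-socket link in a dual-socket head node is the single QPI connection, no GPU pair can be more than one QPI crossing apart under this assumption, matching the claim above.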
NVIDIA-certified; supports 4 GPU accelerators such as the V100, P100, P40, M40, K80, M60 and M10.
16 2.5-inch U.2 drives
I/O expansion slots
1 PCIe 3.0 expansion slot; 4 mini-PCIe x4 (4-lane) links