
Moore Threads MCCX D800 X2 640GB
The Moore Threads MCCX D800 X2 is a flagship AI server specifically engineered for training massive models with trillions of parameters.
Moving beyond standard PCIe form factors, it utilizes a self-developed OAM (Open Accelerator Module) design and a high-speed, fully interconnected architecture to maximize data exchange between GPUs.
Based on the third-generation MUSA architecture, it provides a full-stack solution compatible with CUDA and PyTorch, supporting distributed training frameworks like Megatron-LM and DeepSpeed.
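The core operation behind the data-parallel training that frameworks such as Megatron-LM and DeepSpeed run on hardware like this is an all-reduce that averages gradients across GPU replicas. A minimal pure-Python sketch of that averaging step (illustrative only; the real frameworks use collective communication over the GPU interconnect, not this function):

```python
# Illustrative stand-in for the gradient all-reduce used in
# data-parallel training. Each replica computes a gradient vector
# for the same parameters; the all-reduce averages them element-wise
# so every replica applies an identical update.

def all_reduce_mean(replica_grads):
    """Average per-replica gradient vectors element-wise."""
    n = len(replica_grads)
    return [sum(g[i] for g in replica_grads) / n
            for i in range(len(replica_grads[0]))]

# Four hypothetical replicas, two parameters each.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(all_reduce_mean(grads))  # → [4.0, 5.0]
```

The fully interconnected GPU topology described above exists precisely to make this exchange fast: all-reduce bandwidth between accelerators is often the bottleneck at trillion-parameter scale.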
This "cloud-to-terminal" platform targets multimodal AI and embodied intelligence, achieving a 91% linear speedup ratio in large-scale computing clusters.
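To make the 91% figure concrete, a linear speedup ratio (scaling efficiency) of 0.91 means an N-GPU cluster delivers roughly 0.91 × N times the throughput of a single GPU. A small sketch of that arithmetic (the GPU counts below are illustrative, not vendor benchmarks):

```python
# Convert a linear speedup ratio into expected cluster speedup,
# assuming speedup ≈ efficiency × GPU count.

def effective_speedup(num_gpus, efficiency=0.91):
    """Expected speedup over one GPU at a given scaling efficiency."""
    return efficiency * num_gpus

print(round(effective_speedup(8), 2))    # one 8-GPU node: ~7.28x
print(round(effective_speedup(256), 2))  # a 256-GPU cluster: ~232.96x
```

Put differently, at 91% efficiency only about 9% of aggregate compute is lost to communication and synchronization overhead as the cluster grows, under this simple linear model.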
Application scenarios
LLM, AIGC, and multimodal model training.
GPU-accelerated big data pipelines.
Multi-tenant platforms with dynamic GPU slicing.