
Moore Threads MTT S5000 80GB
The Moore Threads MTT S5000 is a fourth-generation AI accelerator built on the MUSA 'Pinghu' architecture. Equipped with 80 GB of HBM3e memory delivering 1600 GB/s of bandwidth, it offers up to 1,000 TFLOPS of compute, enabling work with language models containing trillions of parameters.
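The published figures above (80 GB memory, 1600 GB/s bandwidth, 1,000 TFLOPS) allow some back-of-envelope sizing. The sketch below computes how large a model fits in on-card memory and the card's roofline ridge point; the FP16 (2 bytes per parameter) storage assumption is ours, since the datasheet does not state a precision.

```python
# Back-of-envelope sizing from the MTT S5000's published specs.
# The FP16 storage assumption is illustrative, not vendor-stated.

MEM_BYTES = 80e9          # 80 GB HBM3e
BW_BYTES_PER_S = 1600e9   # 1600 GB/s memory bandwidth
PEAK_FLOPS = 1000e12      # 1,000 TFLOPS (precision unspecified)

# Largest model whose weights alone fit on one card at FP16 (2 bytes/param).
fp16_params = MEM_BYTES / 2
print(f"FP16 weights that fit in 80 GB: {fp16_params / 1e9:.0f}B parameters")

# Roofline ridge point: the arithmetic intensity (FLOP/byte) above which a
# kernel on this card is compute-bound rather than memory-bound.
ridge = PEAK_FLOPS / BW_BYTES_PER_S
print(f"Roofline ridge point: {ridge:.0f} FLOP/byte")
```

The ridge point of 625 FLOP/byte indicates that memory-bandwidth-light workloads such as single-stream LLM decoding would run far below peak compute, which is typical for accelerators in this class.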
The OAM (Open Accelerator Module) form factor supports integration into OCP-compliant high-density server chassis and horizontal scaling to thousands of accelerators via MTLink interconnect.
The Musify/MUSA software stack provides zero-cost code migration from NVIDIA CUDA, with support for PyTorch, PaddlePaddle, and optimized LLM libraries.
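Tools like Musify work largely by source-level translation: CUDA runtime calls and headers are rewritten to their MUSA counterparts. The sketch below illustrates that idea with a minimal substitution pass; the mapping table is a simplified assumption for illustration, not Musify's actual rule set.

```python
# Illustrative sketch of source-level CUDA-to-MUSA API renaming.
# The mapping below is an assumption for demonstration purposes only;
# it does not reproduce Musify's real translation rules.
import re

CUDA_TO_MUSA = {
    "cuda_runtime.h": "musa_runtime.h",
    "cudaMalloc": "musaMalloc",
    "cudaMemcpy": "musaMemcpy",
    "cudaFree": "musaFree",
}

def translate(source: str) -> str:
    """Rewrite known CUDA runtime identifiers to their MUSA counterparts."""
    pattern = re.compile("|".join(re.escape(k) for k in CUDA_TO_MUSA))
    return pattern.sub(lambda m: CUDA_TO_MUSA[m.group(0)], source)

cuda_src = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, n); cudaFree(d);"
print(translate(cuda_src))
```

A real migration tool also handles kernel launch syntax, library calls, and build configuration, which is why vendors pair the translator with a full runtime and framework stack.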
The MTT S5000 serves as the cornerstone of Moore Threads' KUAE Smart Computing Center — a turnkey AI data center deployment stack.
Beyond LLM workloads, the accelerator is used for HPC simulation, autonomous driving development, and digital twin creation.
Application scenarios
Training LLMs with trillions of parameters within KUAE clusters.
High-performance computing (HPC) and scientific simulation.
Autonomous driving simulation and self-driving system development.
Digital twin creation and generative AI.
Scalable OCP clusters with MTLink interconnect.