Our Partners
Direct partnerships with leading GPU manufacturers to power your AI workloads with advanced computing solutions.

Alibaba Cloud
Alibaba Cloud is the cloud computing division of Alibaba Group and one of the world’s leading providers of cloud infrastructure and artificial intelligence platforms. Founded in 2009, the company delivers a wide range of services including computing, storage, networking, databases, analytics, and AI development tools. Alibaba Cloud enables organizations to build and scale digital services through a global infrastructure of data centers and cloud regions. Its platform is widely used across industries such as e-commerce, finance, telecommunications, media, logistics, and research. Artificial intelligence infrastructure is a key focus area for Alibaba Cloud. Through its semiconductor division T-Head, the company develops proprietary processors and AI chips while also providing cloud platforms for machine learning, large language models, and high-performance data processing. Today, Alibaba Cloud is the largest cloud service provider in China and one of the leading cloud platforms in the Asia-Pacific region.

Baidu Kunlun (Kunlunxin)
Baidu Kunlun (Kunlunxin) is a company focused on developing AI accelerators (NPU chips) based on its proprietary XPU architecture. The Kunlun project originated within Baidu in 2011 to build hardware for AI tasks such as training and inference. In 2018, Baidu unveiled the first full-fledged Kunlun chip, the 818-300, manufactured on Samsung's 14 nm process and delivering up to 260 TOPS. In 2021, the Kunlun division spun off from Baidu into the independent company Kunlunxin, which continues to develop the Kunlun chip line. These accelerators emphasize energy efficiency, integration with frameworks such as PaddlePaddle, and AI workloads in data centers, competing with Nvidia and Huawei solutions.

Enflame Technology
Enflame is a Chinese technology company focused on developing high-performance computing solutions for artificial intelligence and cloud data centers. Founded in 2018 and headquartered in Shanghai, the company designs AI processors, accelerator cards, server platforms, and software tools for deep learning training and inference workloads. Enflame aims to deliver a full-stack AI computing platform, including specialized chips for large-scale model training, inference accelerators, integrated computing systems, and software frameworks compatible with mainstream machine learning ecosystems. Its solutions are used by cloud service providers, internet companies, research institutions, and large computing centers. Backed by major investors such as Tencent and state-supported semiconductor funds, Enflame has become one of the notable players in China’s emerging AI accelerator market and contributes to the development of a domestic AI computing ecosystem.

H3C Technologies
H3C (H3C Technologies Co., Ltd.) is a leading Chinese provider of digital infrastructure and AI-driven IT solutions. The company develops a broad portfolio of technologies including networking equipment, servers, storage systems, cybersecurity solutions, cloud platforms, and AI-based digital infrastructure. H3C delivers integrated solutions designed to support enterprise digital transformation, covering cloud computing, big data, artificial intelligence, edge computing, and intelligent connectivity. Its technologies are widely deployed in data centers, telecom networks, government systems, and industrial infrastructure worldwide. Founded in 2003 as a joint venture between Huawei and 3Com, the company is headquartered in Hangzhou, China. H3C invests heavily in research and development, with more than half of its workforce dedicated to R&D and thousands of patents filed globally. The company’s products and solutions are deployed in over 180 countries and regions.

Huawei Technologies
Huawei is a global technology company that develops telecom infrastructure, cloud platforms, and AI computing solutions, with Atlas positioned as its flagship AI accelerator family. The Atlas portfolio includes PCIe accelerator cards, edge devices, AI servers, and large clusters built on Ascend NPUs, designed to deliver high performance and energy efficiency for deep learning workloads, computer vision, and large language models in real time. Atlas products cover the full AI lifecycle, from compact edge modules like the Atlas 200 and inference cards such as the Atlas 300 and Atlas 300I Duo, to the powerful Atlas 800 training platforms and large-scale clusters like the Atlas 900 and Atlas 950 for scientific computing and large-scale model training. By combining specialized Da Vinci tensor compute, high memory bandwidth, and integrated video codecs, Atlas accelerators enable dense, low-latency deployment of AI services in finance, telecom, smart city, and industrial automation scenarios.

Iluvatar CoreX
Iluvatar CoreX is a Shanghai-based technology company specializing in the development and production of general-purpose graphics processing units (GPGPUs) and comprehensive AI solutions. Its product lineup includes high-performance chips, accelerator cards, servers, and clusters that integrate proprietary hardware with a software stack optimized for neural network training and inference tasks. Iluvatar CoreX solutions are deployed across finance, healthcare, transportation, and other industries, enabling customers to build scalable, high-performance AI infrastructures. Founded in 2015, the company has pioneered mass production of GPGPUs in China, with ongoing R&D in high-performance computing. The Tiangai series GPUs and specialized inference solutions help reduce total cost of ownership while minimizing reliance on imported technologies.

Inspur Group
Inspur is a Chinese technology company and a major global provider of IT infrastructure for data centers. The company designs and manufactures servers, storage systems, cloud computing platforms, high-performance computing (HPC) solutions, and infrastructure for artificial intelligence workloads. Inspur’s portfolio includes server platforms for enterprise data centers, cloud service providers, telecommunications operators, and research institutions. The company actively develops AI infrastructure solutions, including GPU servers and systems optimized for training and inference of large-scale AI models. As one of the world’s largest server vendors by shipment volume, Inspur plays a significant role in the global data center ecosystem. Its solutions are widely used across cloud platforms, government infrastructure projects, financial institutions, telecommunications networks, and scientific computing environments.

Lenovo Group
Lenovo is a multinational technology company founded in 1984 and headquartered in Beijing, China, and Morrisville, North Carolina, USA. The company designs and manufactures a wide range of hardware and software products, including personal computers, laptops, workstations, smartphones, tablets, servers, storage systems, and data center infrastructure. Lenovo is consistently ranked among the world's leading PC vendors and has significantly expanded its enterprise infrastructure portfolio through its Infrastructure Solutions Group (ISG). This division focuses on servers, storage, high-performance computing (HPC), artificial intelligence infrastructure, and cloud platforms. The company provides solutions for enterprises, research institutions, and cloud providers, supporting workloads ranging from traditional IT environments to large-scale AI and high-performance computing deployments.

MetaX Integrated Circuits
MetaX was founded in Shanghai in September 2020 and has established wholly owned subsidiaries and R&D centers in Beijing, Nanjing, Chengdu, Hangzhou, Shenzhen, Wuhan, and Changsha. The company brings together a team with extensive experience in technology, design, and industrialization for large-scale production and delivery. The core team members have an average of nearly 20 years of end-to-end R&D experience in high-performance GPU products and have led the development of more than ten world-class GPUs, covering GPU architecture definition, GPU IP design, GPU SoC design, and GPU system solutions.

MetaX is dedicated to delivering full-stack GPU chips and solutions for heterogeneous computing, applicable to cutting-edge fields such as intelligent computing, smart cities, cloud computing, autonomous vehicles, digital twins, and the metaverse, providing strong computing power to drive the development of the digital economy. The company is preparing to launch three product lines:
• MetaX N-series for inference computing
• MetaX C-series for general-purpose computing (GPGPU)
• MetaX G-series for graphics rendering
These solutions are designed to meet the demand for highly efficient and versatile computing power.

All MetaX products are based on proprietary GPU IP, with completely independent intellectual property rights for the instruction set and architecture, alongside a full software stack (MXMACA) that is compatible with the mainstream GPU ecosystem. This combination provides natural advantages in both efficiency and flexibility, enabling MetaX to build comprehensive hardware-software ecosystem solutions for its customers. These technologies form the foundation of computing power that promotes the development of the digital economy and the intelligent transformation and upgrading of industries under the "dual carbon" strategy.

Moore Threads
Moore Threads is a Chinese technology company focused on designing and manufacturing universal graphics processing units (GPUs) and end-to-end accelerated computing solutions. Founded in 2020 in Beijing by former Nvidia China vice president Zhang Jianzhong, the company has quickly become one of the most prominent domestic GPU players in the Chinese semiconductor industry. Its portfolio covers GPUs for data centers, cloud infrastructure, artificial intelligence, visualization, gaming, and professional content creation, complemented by a software stack and developer tools. Moore Threads is building a full visual computing and AI ecosystem around its Huagang architecture and next-generation product lines for graphics (such as Lushan) and AI (such as Huashan), targeting high performance and large-scale deployment in GPU clusters. Positioning itself as a future global leader in GPUs and accelerated computing infrastructure, Moore Threads supports customers' digital transformation across industries ranging from cloud providers and industrial enterprises to media and gaming companies. Leveraging a domestic supply chain and close alignment with China's AI ecosystem, its solutions play an increasingly important role in national efforts toward semiconductor self-reliance and large-scale AI deployment.
Rental Providers
Providers of GPU server rental services.
YH
YH is a Chinese manufacturer of specialized AI chips based on the RISC-V architecture, focused on cloud computing and energy-efficient accelerators for LLMs. YH's next-generation AI chip is designed as a foundation for cloud computing and large language models (LLMs). It is not just an accelerator but a full architectural platform focused on matrix efficiency, scalability, and flexibility for custom AI workloads.

At its core is a hybrid instruction set approach: RISC-V (base) with the RVV vector extension, enhanced by custom matrix instructions and a proprietary Virtual Instruction Set Architecture (VISA). This provides a key advantage: the ability to finely tune execution for specific models and algorithms, unlike the fixed instruction sets found in traditional GPUs.

From a compute perspective, the chip follows a TPU-like architecture. It features dual systolic-array matrix engines optimized for the dense linear algebra operations typical of LLMs and deep learning. Complementing this is a high-performance 4D DMA engine, addressing one of the main bottlenecks in modern accelerators: data movement. As a result, the design achieves high efficiency in both computation and memory transfer.

A strong emphasis is placed on optimization for large models, particularly architectures similar to DeepSeek. The chip supports Blocked FP8 precision, enabling significant reductions in memory usage and increased throughput without critical accuracy loss, which is especially important for both training and inference at scale.

For scalability, the chip uses a proprietary ELink interconnect. Positioned as an alternative to NVIDIA NVLink, it is designed for building large-scale clusters and supports advanced features such as In-Network Computing, which allows certain operations to be executed directly within the network, reducing latency and offloading compute from the chips themselves.
Overall, this is a data center-class AI processor tailored for:
• large language models (LLMs)
• distributed training
• high-throughput inference
• scalable AI cluster deployments
The core idea is a shift away from general-purpose GPU architectures toward deep vertical optimization for AI workloads, where not only FLOPS matter, but also efficiency in memory access, interconnect, and custom data formats.
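The general idea behind blocked FP8 quantization can be sketched in plain Python: values are grouped into fixed-size blocks, and each block stores one shared scale plus compact low-precision codes, which cuts memory use compared with FP16/FP32 while keeping per-block accuracy. This is a minimal illustrative sketch, not YH's actual format or API; the names (quantize_blocked_fp8, block_size) are assumptions, and integer rounding here is a crude stand-in for true FP8 (E4M3) mantissa rounding.

```python
# Illustrative sketch of blocked FP8-style quantization (not YH's real API).
# Each block of values shares a single scale; codes stay within the E4M3 range.

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3


def quantize_blocked_fp8(values, block_size=4):
    """Return (scales, codes): one float scale per block, integer codes per value."""
    blocks = [values[i:i + block_size] for i in range(0, len(values), block_size)]
    scales, codes = [], []
    for block in blocks:
        amax = max(abs(v) for v in block) or 1.0  # avoid division by zero
        scale = amax / FP8_E4M3_MAX               # map the block onto the FP8 range
        scales.append(scale)
        # Crude stand-in for FP8 rounding: round to integer multiples of the scale.
        codes.append([round(v / scale) for v in block])
    return scales, codes


def dequantize_blocked_fp8(scales, codes):
    """Reconstruct approximate values from per-block scales and codes."""
    out = []
    for scale, block in zip(scales, codes):
        out.extend(c * scale for c in block)
    return out
```

Because each block has its own scale, a block of small values (activations near zero) and a block of large values can both be represented accurately, which is the property that makes the format attractive for mixed-magnitude LLM tensors.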