As a leader in liquid-cooling solutions, Ingrasys provides a wide range of innovative technologies, from server level to rack level, aimed at boosting energy efficiency and performance in modern data centers. At the heart of its showcase is a liquid-cooled server platform built on the NVIDIA HGX B200, designed to deliver exceptional AI training performance. It integrates seamlessly with OCP ORv3 racks, making it easy to deploy in customer data centers.
Ingrasys will also highlight its cutting-edge liquid-to-liquid coolant distribution unit (CDU) solution, featuring the NVIDIA GB200 NVL72. This liquid-cooled rack, built specifically for training trillion-parameter large language models (LLMs) and for real-time inference, operates as a single massive GPU powered by 72 NVLink-connected NVIDIA Blackwell Tensor Core GPUs. This enables organizations to fully harness AI capabilities and accelerate innovation across a range of applications. The CDU solution delivers substantial cooling capacity, supporting up to 10 GB200 NVL72 racks and setting the stage for next-generation AI data centers.
Additionally, Ingrasys will present a model of a future-ready supercomputing center, offering insight into next-level AI infrastructure. The actual facility houses GB200 NVL72 pods scaling to more than 1,000 GPUs and is designed to handle tomorrow’s most demanding AI workloads. The center emphasizes AI performance tuning and optimization, underscoring Ingrasys’ dedication to pushing the boundaries of AI advancement.