| Tier | Approx. Density | Key Infrastructure | Best Fit | What to Verify |
| --- | --- | --- | --- | --- |
| High-Density Air-Cooled | ≈10–15 kW / rack | Hot aisle containment, higher airflow, improved floor plenum management. | Dense CPU racks, storage-heavy infrastructure, SaaS platforms with predictable workloads. | “10–15 kW” may refer to peak rather than sustained power. Confirm usable kW and overage terms. |
| Enhanced Air / In-Row Cooling | ≈15–30 kW / rack | Containment with in-row cooling, stricter rack layouts, engineered airflow management. | Mixed CPU/GPU environments, higher utilization workloads, consolidation projects. | Install lead times, rack placement rules, blanking panel and cable management requirements. |
| RDHx / Heat-Exchanger Assisted | ≈30–60 kW / rack | Rear-door heat exchangers, higher chilled-water capacity, specialized racks or cabinets. | Hotter GPU racks, HPC workloads, teams moving beyond standard air cooling. | Hardware compatibility, maintenance responsibility, and upgrade/expansion constraints. |
| Liquid-Ready Colocation | Path to liquid (not always day-1 liquid) | Infrastructure prepared for liquid cooling: CDU placement, leak detection, operational procedures, plumbing access. | Teams planning future GPU upgrades or deployments starting around 20–30 kW and scaling higher. | “Liquid-ready” varies widely by provider—verify what infrastructure actually exists and what costs extra. |
| Liquid-Cooled High Density | ≈60–100 kW+ / rack | Direct-to-chip or liquid cooling systems, CDU support, coolant management, strict operational controls. | Large GPU clusters, AI training infrastructure, high-performance computing deployments. | Higher MRC and more operational line items (cooling service, installation, maintenance). |
| High-Power Small Footprint | High kW in 1–12U or partial rack | High power density in small deployments without committing to a full cabinet. | Early-stage GPU teams, edge deployments, “test then scale” infrastructure strategies. | Some facilities prioritize full racks despite advertising no minimums. |
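The table above amounts to a decision mapping from sustained per-rack power draw to a cooling tier. A minimal sketch of that mapping, assuming the approximate thresholds shown in the table (real provider limits vary, and the function name here is illustrative):

```python
# Illustrative sketch: map a sustained kW/rack figure to the cooling tiers
# in the table above. Thresholds mirror the table's approximate ranges;
# actual provider limits and overage terms differ, so verify before sizing.

def cooling_tier(kw_per_rack: float) -> str:
    """Return the tier whose density range roughly covers kw_per_rack."""
    if kw_per_rack <= 15:
        return "High-Density Air-Cooled"
    if kw_per_rack <= 30:
        return "Enhanced Air / In-Row Cooling"
    if kw_per_rack <= 60:
        return "RDHx / Heat-Exchanger Assisted"
    return "Liquid-Cooled High Density"

print(cooling_tier(12))  # High-Density Air-Cooled
print(cooling_tier(45))  # RDHx / Heat-Exchanger Assisted
```

Note that "Liquid-Ready Colocation" and "High-Power Small Footprint" are omitted because they are defined by deployment plans and footprint, not by a kW threshold alone.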