Liquid Cooled Colocation: Blackwell‑Ready Racks You Won’t Find on Google

Liquid cooled colocation for Blackwell, Hopper and other high‑TDP servers. We know which data centers already run real rear‑door, direct‑to‑chip and immersion cooling.
Tell us your kW per rack and cooling type, and we’ll match you with 3–5 liquid cooled colocation providers that actually have capacity.

Liquid Cooled Colocation Pricing by Market (2026)

 

| Market / Region | Typical Liquid Method | Liquid Cooled Premium vs Air | Effective Range (USD per kW/mo, incl. premium) |
| --- | --- | --- | --- |
| Northern Virginia (VA) | RDHx + direct‑to‑chip | +10–25% on 20–30kW racks | ~$200–$350 per kW |
| Dallas / TX | RDHx, some immersion pods | +10–20% on 20–40kW racks | ~$160–$280 per kW |
| Phoenix / AZ | Direct‑to‑chip, RDHx | +10–20% | ~$150–$260 per kW |
| Pacific Northwest (OR/WA) | Immersion + direct‑to‑chip | +5–15% (cheap hydro) | ~$130–$230 per kW |
| Silicon Valley / Bay Area | Direct‑to‑chip only, scarce | +15–30% | ~$230–$400 per kW |
| North / South Dakota | Immersion‑friendly sites | +10–15% | ~$110–$200 per kW |

* Liquid cooling adds a premium on top of already high‑density pricing, but usually lowers TCO above ~30kW per rack.

At Blackwell‑class densities (>30kW per rack), liquid cooling reaches cost parity with air; above ~50kW per rack it delivers a 20–35% TCO advantage thanks to lower PUE and higher density.
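To make that parity claim concrete, here is a toy Python model of effective monthly cost per rack. All figures are illustrative assumptions, not provider quotes, and it simplifies by scaling the $/kW rate with PUE (real colo rates usually bake facility overhead into the quoted rate):

```python
def monthly_cost_per_rack(it_kw, rate_per_kw, pue, premium=0.0):
    """Toy model: billed IT load times the $/kW rate, scaled by PUE
    (facility overhead) and any liquid-cooling premium."""
    return it_kw * rate_per_kw * pue * (1 + premium)

# Hypothetical figures: a 40 kW rack at a $200/kW base rate.
air = monthly_cost_per_rack(40, 200, pue=1.4)                    # high-density air
liquid = monthly_cost_per_rack(40, 200, pue=1.1, premium=0.15)   # D2C, +15% premium
savings = 1 - liquid / air   # ~10% before counting density/space gains
```

Even with a 15% liquid premium, the lower PUE alone claws back roughly 10% at 40kW in this sketch; the larger 20–35% advantages come once higher per‑rack density also shrinks the footprint you pay for.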

What do these prices typically include?

Here’s what is usually included:

  • Committed power: 10–80kW per rack, billed at 80–100% utilization; A/B 3‑phase 208–415V feeds.
  • Cabinet or cage: 42–52U, with appropriate depth and weight rating for GPU/HPC chassis.
  • Standard cooling:
  1. Air / hot‑aisle containment up to ~20–30kW per rack.
  2. Rear‑door heat exchangers in some facilities to push to ~30–40kW.
  • Network port: 1–10Gbps included; sometimes a 10G commit or burstable option.

What’s not included (but will show up later):

  • Cross‑connects: $100–$400 per month per x‑connect (to cloud on‑ramps, carriers, IX).
  • Remote hands: $150–$300 per hour for GPU swaps, cable tracing, etc.
  • Install & turn‑up: $500–$3,000 for rack/stack, cabling, and initial testing.
  • HD cooling upgrades: RDHx or “liquid‑ready” manifolds adding $500–$2,000 per rack per month at 40kW+.
  • Burst bandwidth / 95th percentile: Overage charges if you go beyond your commit on heavy data movement.

In practice, the actual monthly run‑rate often lands 20–60% higher than the “marketing” quote, depending on cross‑connect count, support usage, and cooling path.
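A quick sanity check on that gap, using a hypothetical deployment priced at the midpoints of the ranges listed above (these are assumed numbers, not quotes):

```python
def real_run_rate(base_monthly, cross_connects=0, xc_price=250,
                  remote_hands_hrs=0, rh_rate=225, burst_overage=0.0):
    """Rough monthly run-rate: the headline colo quote plus typical
    extras (cross-connects, remote hands, burst bandwidth overages)."""
    return (base_monthly
            + cross_connects * xc_price
            + remote_hands_hrs * rh_rate
            + burst_overage)

# Assumed example: $6,000/mo quote, 4 cross-connects,
# 3 hours of remote hands, $400 of burst overage in a month.
actual = real_run_rate(6000, cross_connects=4, remote_hands_hrs=3,
                       burst_overage=400)
uplift = actual / 6000 - 1   # ~35% above the headline quote
```

Four cross‑connects and a few hours of remote hands are enough to push a $6,000 quote to roughly $8,000, squarely inside the 20–60% range.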

Why Go Through a Broker like Us? (Spoiler: It’s Faster)

Option A: Google / GPT + vendor sites

“Liquid cooled colocation” searches surface mostly coolant vendors and hyperscale pilots, not colo for 20–100kW. No visibility into live CDU capacity.

Option B: Traditional colo sales

“HD” pages cap at 15–20kW air; “liquid ready” often means no actual hardware. You cycle through sales teams with no real liquid experience.

Option C: Coolant / hardware vendors

They push their ecosystem but miss optimal colo pricing, markets and operators for your specific kW and network needs.

Option D: QuoteColo (liquid broker)

Share kW, cooling type and markets. We check 500+ sites for deployed liquid and send 3–5 options with $/kW, design details and lead times. Free to you; typically 10% savings.

How It Works

1. Submit Your Request

1 rack, 10–25kW, e.g., “8 H100, 15kW peak, Dallas, air OK.”

2. Get Quotes, Fast

We source across 500+ providers, including regional operators, high-density facilities, and “unlisted” providers that accept small deployments.

3. Choose Your Best Option

  • Ship equipment
  • You go live

Why Choose Us

  • Access to 500+ Hosting Colocation Facilities
  • 10% Avg. Annual Savings
  • Trusted service since 2004

Get Free Quotes From Providers

Describe your needs and we’ll email you 3–5 options with pricing and terms from providers that match. Free.

    Case studies

    Helped 750+ companies in 20+ years

    From startups colocating their first servers to companies deploying multi-rack, high-density GPU and AI colocation infrastructure, businesses trust QuoteColo to find the right data center faster.

    See how we helped teams secure colocation with the right power, pricing, and providers.

    500+ Colocation Providers in Our Network worldwide

    From global brands to highly competitive regional datacenters that rarely show up in ChatGPT and Google searches. We help you compare both – and often uncover better pricing and faster availability.

    Popular Client Requests

    “Blackwell‑ready racks, 40kW each, Dallas or Phoenix”
    “Immersion tank hosting for 500kW analytics cluster in Pacific Northwest”
    “Retrofit: 20kW racks today, 30–40kW with RDHx over next refresh”

    Who Actually Uses GPU & AI Colocation

    GPU / AI training teams (>30kW per rack): Blackwell/Hopper racks needing direct‑to‑chip for stable PUE and clocks.

    Immersion HPC & analytics: Custom fluid‑native builds targeting cheap power in OR/WA or Dakotas.

    Cloud repatriation projects: Enterprises shifting heavy workloads to liquid for 20–35% TCO wins above 50kW per rack.

    ESG / sustainability teams: PUE 1.05–1.15 targets via immersion or D2C, versus 1.4+ for high‑density air.

    Research & defense labs: Controlled thermal envelopes for high‑density, low‑noise workloads.

    Liquid Cooling Methods Explained

    Rear Door Heat Exchangers (RDHx)

    RDHx replaces the rear rack door with a liquid‑cooled heat exchanger that removes heat directly from server exhaust air. Modern active RDHx systems can handle 15–200+ kW per rack, while passive variants are ideal up to ~20kW with no extra fan power.

    • Best for: 20–40kW racks where you want to keep standard air‑cooled servers.
    • Pros: Minimal changes to IT gear, retrofits into existing rows, good stepping stone to full liquid.
    • Cons: Still air‑dependent; very high densities or uneven airflow can be tricky.

    Direct‑to‑Chip (D2C) Liquid Cooling

    Direct‑to‑chip runs coolant through cold plates attached to CPUs/GPUs, with a CDU managing a closed loop and rejecting heat to facility water. As rack densities exceed 30kW, D2C hits TCO parity with air and becomes 20–35% cheaper than optimized air above 50kW per rack when you factor in PUE and space.

    • Best for: 30–60kW racks with standardized server platforms (e.g., HGX reference designs).
    • Pros: High thermal efficiency, good fit for modern AI racks, straightforward PUE gains.
    • Cons: Requires compatible hardware, CDUs, and careful leak detection and maintenance.
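For intuition on the plumbing side of D2C, a first‑order heat‑balance sketch shows the loop flow a CDU must sustain. This assumes pure‑water coolant properties; real loops often run glycol mixes with slightly different specific heat:

```python
def coolant_flow_lpm(heat_kw, delta_t_c, cp=4186.0, density=1000.0):
    """Litres per minute of coolant needed to carry `heat_kw` away at a
    given supply/return temperature rise, from Q = m_dot * c_p * dT.
    Assumes pure-water properties (cp in J/kg*K, density in kg/m^3)."""
    mass_flow_kg_s = heat_kw * 1000.0 / (cp * delta_t_c)
    return mass_flow_kg_s / density * 1000.0 * 60.0

flow = coolant_flow_lpm(40, delta_t_c=10)   # ~57 L/min for a 40 kW rack
```

Doubling rack power or halving the allowed temperature rise doubles the required flow, which is why CDU sizing and supply temperature are central to any D2C design review.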

    Immersion Cooling

    Immersion places entire servers in a dielectric fluid, eliminating most server fans and enabling extremely high rack‑equivalent densities with lower CAPEX and OPEX. One 10MW TCO study found immersion cooling cut 10‑year total cost by ~39% vs air, with 30–40% savings in annual OPEX alone.

     

    • Best for: Very high density clusters and greenfield designs where you control hardware and firmware.
    • Pros: Highest density, lower mechanical complexity, large energy savings, quiet.
    • Cons: Hardware compatibility, operational learning curve, and fewer colos offering production immersion today.

    Market Trends: Why Liquid Cooled Colocation is Exploding

    • The global market for liquid cooling in AI data centers is growing at a 30%+ CAGR, driven by AI and accelerator‑heavy deployments.
    • Above 30kW per rack, liquid cooling matches or beats air TCO; above 50kW, it can be 20–35% cheaper than air on a full rack basis due to lower PUE and higher density.
    • Immersion cooling on 10MW‑scale builds has shown ~40% TCO savings over 10 years compared to air, including both CAPEX and OPEX.
    • Colocation operators are racing to add liquid capabilities, but not all projects are live or open to smaller footprints—many early capacities are reserved for hyperscale.


      FAQs – Liquid Cooled Colocation

      Is liquid cooled colocation only for hyperscale or can smaller clusters use it?

      No. While hyperscalers drive most volume, more colos now expose smaller liquid‑cooling slices for 20–200kW clusters, including 30–60kW rack‑equivalent deployments. We focus on providers that will actually accept “sub‑hyperscale” liquid footprints, not just 5MW+ cloud deals.

      When does liquid cooling make financial sense vs air?

      Industry data shows D2C liquid hits TCO parity with air at roughly 30kW per rack, and delivers a 20–35% advantage above 50kW thanks to lower PUE and higher density. Immersion can push savings close to 40% over a 10‑year horizon in 10MW‑class builds. If your roadmap goes beyond 30kW per rack or you’re planning Blackwell‑class racks, liquid should be on the table.

      Which markets are best for liquid cooled colocation?

      Dallas, Phoenix and the Pacific Northwest combine relatively low power costs with operators actively rolling out D2C and immersion systems. Ashburn and Silicon Valley have strong ecosystems but tighter power and higher $/kW. For price‑sensitive deployments, power‑rich regions like OR/WA or the Dakotas often win if latency requirements are flexible.

      How do I choose between RDHx, direct‑to‑chip and immersion?

      If you want to keep standard servers and are targeting 20–30kW per rack, RDHx is often the least disruptive option. For 30–60kW Blackwell/HGX racks, direct‑to‑chip is usually the best balance of performance and operational risk. Immersion makes sense when you control hardware design and are comfortable with a different operational model, especially for very high density or greenfield builds. We usually model 3–5 options across these paths for your actual hardware and growth targets.

      Are many colocation providers really ready for immersion today?

      Only a subset have production immersion pods in colocation (as opposed to pilots or on‑paper capabilities), and even fewer will open that capacity to non‑hyperscale customers. That’s exactly why a broker helps. We track where immersion is deployed, which vendors are in use, and whether there’s room for your footprint at your target kW and term.
