To co-locate or not to co-locate?

03 March 2026

As UK enterprises rethink where their infrastructure should live, the old build-versus-buy debate is giving way to a far more complex set of choices around power, regulation, sustainability and flexibility.

For years, the question facing UK IT leaders seemed deceptively simple: build your own data centre or rent space in someone else’s. In 2026, that binary choice has quietly collapsed.

Rising energy costs, sustainability pressure, power constraints and the normalisation of hybrid cloud have reshaped the conversation. Co-location is no longer just a cheaper alternative to bricks and mortar. Nor is on-premise infrastructure the stubborn relic it was once painted to be.

Instead, UK organisations are navigating a far more nuanced decision: where should each workload live, and why?

As James Sturrock, Director of Systems Engineering at Nutanix, puts it, “this is less about a binary choice and more about flexibility and outcomes.”

Rethinking the build-versus-buy question

Traditionally, owning a data centre offered control, predictability and long-term financial logic — particularly for large enterprises with stable demand. But that equation is changing fast.

“Facilities have become far more sophisticated in terms of compute density, efficiency and scalability,” says David Watkins, solutions director at VIRTUS Data Centres. “The question is no longer just cost, but performance, control and long-term strategic advantage.”

Building a private data centre now requires not just capital, but deep technical commitment: specialist cooling, high-density power design, resilience engineering and continuous upgrades. For many organisations, that burden is increasingly hard to justify.

Stewart Laing, CEO of Asanti Data Centres, notes another critical factor: “Businesses need to think more about skills, OPEX and location, asking themselves where best they can access power and network, the latter of which can be extremely expensive depending on proximity to the fibre backbone. For many, owning the hardware and putting it in a colocation data centre makes more sense, helping to shift location, power and connectivity risk to the data centre operator instead.”

Co-location, by contrast, offers enterprise-grade infrastructure without the operational overhead. Providers invest heavily in R&D, certified staff and multi-site estates, allowing customers to scale, migrate and rebalance deployments across locations as needs evolve.

For Sturrock, the real differentiators are “cost predictability, operational complexity, resilience, and the ability to optimise IT resources while avoiding vendor lock-in.”

It’s not about where your servers sit, but about how intelligently you can move them.

Power, price and pressure: energy reshapes the decision

If there is one force tipping the scales towards co-location, it is energy.

With electricity prices volatile and grid capacity tightening in parts of the UK, running a private facility has become an increasingly specialist pursuit. Traditional architectures are power-hungry, and sustainability targets now sit squarely on the CIO’s desk.

“Energy procurement is a multi-faceted, specialist process,” says Watkins. “Data centre providers have dedicated teams securing long-term supply, often through Power Purchase Agreements directly with renewable generators.”

These PPAs not only stabilise pricing, but actively increase renewable capacity on the market — something few single enterprises can realistically achieve alone.

Sturrock notes that modern hyperconverged platforms also play a role. “HCI-based architectures can significantly reduce energy demand and carbon footprint, which is why many organisations are re-evaluating where and how their infrastructure is run.”

Laing highlights the broader challenge: “Tightening sustainability targets mean you can’t stand still — you either fund upgrades every year or you lean on a colo that is already optimising power usage effectiveness (PUE) and renewables. What truly needs to be addressed for consumers and businesses alike is the cost of power. From a colocation provider perspective, this is as simple as being able to compete with our European counterparts. The uncomfortable reality is that if UK colocation facilities are outpriced by cheaper EU markets, it’s bad news for our digital economy.”

For organisations still operating legacy on-premise facilities, the challenge is stark: match hyperscale-grade efficiency, or accept growing operational and environmental penalties. (Spoiler: very few manage the former.)
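For readers unfamiliar with the PUE metric Laing mentions, the arithmetic behind it is simple: total facility energy divided by the energy consumed by IT equipment alone, with 1.0 as the theoretical ideal. A minimal sketch, using invented sample figures purely for illustration:

```python
# Illustrative PUE (Power Usage Effectiveness) calculation.
# PUE = total facility energy / IT equipment energy.
# The kWh figures below are made up for illustration only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A legacy server room with heavy cooling overhead.
legacy = pue(total_facility_kwh=180_000, it_equipment_kwh=100_000)

# A modern colocation hall with optimised cooling and power delivery.
colo = pue(total_facility_kwh=125_000, it_equipment_kwh=100_000)

print(f"legacy PUE: {legacy:.2f}, colo PUE: {colo:.2f}")
```

The gap between those two ratios is overhead: every kilowatt-hour above 1.0 is spent on cooling, power conversion and lighting rather than computation, which is why operators compete so hard on the number.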

Compliance without contortions

In regulated UK sectors — finance, healthcare, government — infrastructure decisions are inseparable from sovereignty and compliance.

Co-location can be a powerful ally. “It allows organisations to retain control, guarantee the physical location of data, and enforce residency policies,” says Sturrock. “That’s why we see strong demand from highly regulated sectors.”

Laing concurs: “Colocation simplifies the model: the data centre provider owns the building and is responsible for the uptime of the infrastructure; the organisation owns the hardware, data, platforms and controls. Keeping workloads in UK data centres allows businesses to comply with sovereignty and residency regulations, which can be more difficult in hyperscale public clouds, and ultimately gives them greater control.”

Modern platforms now enable encryption, granular access controls and location-based policy enforcement regardless of physical site — offering compliance with far greater operational flexibility.

But co-location is not a magic wand.

“Complexity increases when governance isn’t applied consistently across environments,” Sturrock warns. Hybrid estates without unified policy often create more audit risk, not less.

And some workloads still resist migration. Highly sensitive data, ultra-tight security models and bespoke legacy systems often remain better suited to private environments.

Yet organisations often force them into co-location anyway, “due to legacy decisions, existing contracts, or risk-averse culture” rather than any technical merit.

Laing adds nuance on workload suitability: “Very few workloads are fundamentally wrong for colo; the real constraint is whether you have the in-house capability or partners to run your own kit properly. Early-stage startups with no IT team often find it easier and cheaper to begin in public cloud, but as part of their business planning they should keep an eye on costs and eventually plan to move in-house by way of deploying into a colocation data centre.”

Hybrid by default

Co-location is no longer a stepping stone on the way to “the cloud”. Nor is on-premise staging a comeback. Instead, hybrid has become the default operating model.

“Hybrid multicloud is no longer a transition phase,” says Sturrock. “It’s becoming the standard operating model.”

Watkins agrees. “Few organisations rely on a single infrastructure model today. Businesses are blending on-prem, colocation and hyperscale cloud to optimise resilience, cost and performance.”

Laing frames the new role of colo: “Colo is now the anchor, not just a stepping stone. Organisations are increasingly buying their own hardware, installing it in colocation data centres and tapping into public cloud for burst capacity and R&D rather than as the default home for everything. It’s less about cloud ‘maturing’ and more about customers becoming wise to cost and vendor lock-in and using colo to regain control.”

Increasingly, architectures span core, regional and edge facilities — with workloads placed where latency, regulation, cost and sustainability align best. Looking ahead five years, none of the three sees a single dominant model emerging.

Laing adds perspective on long-term trends: “In-house builds are likely to remain the preserve of a small number of operators running operation-critical, high-value infrastructure like major banks and trading houses. A broad on-prem comeback looks unlikely given the cost and complexity of build, plus the need for continual reinvestment.”

“What will define success is flexibility,” says Watkins. “Interoperability will shape AI infrastructure strategies. The world has gone hybrid — and for AI, it’s going multi-hybrid.”

Which leaves CIOs with a deceptively simple mandate: stop asking where your data centre should be, and start asking where each workload truly belongs. After all, the future of infrastructure is not about buildings. It’s about choices.