
Feature 15 June 2022 (ComputerWeekly)

Multi-year datacentre planning is more constrained than ever, with equipment and construction lead times lengthening against a background of rising requirements around storage, power and compute. With hyperscalers sitting on resources in a global game of musical chairs, how can players plan to get ahead before the music stops?

Jinender Jain, UK and Ireland sales head at IT consultancy Tech Mahindra, notes that power, cooling and space parameters should be assessed as a whole – and says it’s a “no-brainer” to adjust datacentre capacity based on business needs, demands and dynamics and allow for spare capacity that can quickly come on-stream.

However, many datacentres designed for 200W per square foot are still operating at half that wattage or less, with rack power effectively stranded. “As any datacentre manager knows, capacity planning is as much art as it is science,” says Jain.
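The stranded-power point can be illustrated with some back-of-the-envelope arithmetic. A minimal sketch, using the 200W-per-square-foot design figure above; the floor size is a hypothetical assumption, not from any specific facility:

```python
# Illustrative stranded-power calculation: a floor designed for 200W/sq ft
# that actually draws half that leaves half its provisioned power unused.
DESIGN_W_PER_SQFT = 200   # design power density (figure from the article)
ACTUAL_W_PER_SQFT = 100   # "half that wattage" in practice
FLOOR_SQFT = 10_000       # hypothetical data hall size

design_kw = DESIGN_W_PER_SQFT * FLOOR_SQFT / 1000
actual_kw = ACTUAL_W_PER_SQFT * FLOOR_SQFT / 1000
stranded_kw = design_kw - actual_kw

print(f"Provisioned: {design_kw:.0f} kW, drawn: {actual_kw:.0f} kW, "
      f"stranded: {stranded_kw:.0f} kW ({stranded_kw / design_kw:.0%})")
# → Provisioned: 2000 kW, drawn: 1000 kW, stranded: 1000 kW (50%)
```

On these assumed numbers, a full megawatt of provisioned power sits idle, which is the capacity Jain suggests planners should be matching to actual demand.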

Return on investment on those material costs is far from assured, either. Uptime Institute’s 2022 Outage Analysis highlights that power and networking issues still cause multiple outages globally, with nearly 30% of major public outages in 2021 lasting more than 24 hours, compared with 8% in 2017.

“Operators still struggle to meet high standards that customers expect and service-level agreements demand – despite improving technologies and strong resiliency and downtime prevention investments,” says Andy Lawrence, Uptime Institute Intelligence executive director.

Steve Wright, chief operating officer at colocation and cloud provider 4D Data Centres, says concerns and risk factors should link to any multi-year plan – from data sovereignty, skills and systems to the quantity and type of cloud or datacentre environment required.

Newer cloud-first deployments running big data analytics or artificial intelligence (AI) may need testing first, not least because multi-megawatt ramp-ups can cause “astronomical” cost blow-outs. Those that deal in “bigger” data may also need to replace 1,000 servers every two or three years. Some customers might be shrinking while others grow – and it is easier to expand capacity than to shrink it.

Yet for many customers, beyond about 12 months out – or past the next budget cycle – things are “quite fluffy”, says Wright. “Six months before inception, they then say ‘we need to get this nailed down’,” he adds.

4D plans 15-20 years ahead around the lifespans of mechanical and electrical equipment in its own datacentres, matching requirements against the age and state of a location. The right size of land is needed – ideally near a high-voltage connection point with capacity available – along with dense fibre connectivity, access to a suitable workforce, and flexibility “designed in” to accommodate technological change, says Wright.

“With our Gatwick facility we thought about high-density cooling, tweaking the cooling system to enable that to happen,” he says. “Last year, we deployed immersion cooling for a customer; the year before we went with high-density, rear-door cooling on racks to support high-performance computing-type environments where a standard 7kW rack just won’t cut it.”
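Wright’s point about standard racks running out of headroom can be sketched with some rough arithmetic. Only the 7kW “standard” rack figure comes from his quote; the HPC load and the higher per-rack densities below are illustrative assumptions, not vendor or 4D figures:

```python
import math

# How many racks would a hypothetical 350kW HPC deployment need at
# different per-rack power densities? 7kW is the "standard" rack from
# the article; 30kW (rear-door cooling) and 100kW (immersion) are
# assumed densities for illustration only.
HPC_LOAD_KW = 350

for label, rack_kw in [("standard", 7),
                       ("rear-door cooled", 30),
                       ("immersion", 100)]:
    racks = math.ceil(HPC_LOAD_KW / rack_kw)
    print(f"{label:>16}: {rack_kw:>3} kW/rack -> {racks} racks")
```

On these assumed numbers, the same workload needs 50 standard racks but only a dozen rear-door-cooled ones – floor space and cooling design, not just total power, drive the choice.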

Supply chain constraints

Wright says large facilities may aim to plan as far ahead as 2050, but customers may have a relatively short-term view. That is on top of supply chain constraints, particularly on networking equipment, with lead times of 275 days from Cisco or Juniper.

“And if you put in for a power connection request for a datacentre right now and it’s in London, you’re probably looking at 2025 before you get your power allocated,” he adds. “Redesigns and networking are having to happen a bit more on the fly.”

Lewis White, enterprise infrastructure vice-president for Europe at CommScope, agrees that today’s capacity conversations, centred on power and network access, carry more pressure than before.

“Lane speeds have risen from 40Gbps to 100Gbps, even 400Gbps in larger enterprise and cloud datacentres,” he says. “Operators are now deploying optical fibre infrastructures that can support 800Gbps and beyond – going all-in on fibre investment.”

Simon Riggs, Postgres fellow at EDB, points out that squaring monthly performance targets or annual recurring revenue with a demand for multi-year plans might not sit comfortably with an agility mantra. Also, accountants rarely tie their calculations to the actual costs of specific IT solutions and how they are managed.

“I think it’s a little bit cheeky to talk in terms of long durations,” says Riggs. “The original USP in the cloud was that you had flexibility. If you really can predict it years in advance, then why not simply go back to the old datacentre? And it’s happening when people are questioning huge cloud costs.”

Capacity requirements depend on actual volumes of business – and in the past, no one was as worried about the cost of energy. That is why technical problems often occur, such as when a demand burst arrives sooner than expected and people are left running to keep up, says Riggs, who suggests taking another look at consumption and technology efficiency.

“Really, too much inventory is out there and people aren’t properly tracking what they’re actually doing,” he adds.

Mark Pestridge, senior director of customer experience at colocation provider Telehouse, points out that acquiring or building new sites takes years – even just to secure planning permission.

“You have to really build almost floor by floor, suite by suite,” he says. “You’ve just got to continue evaluating what your clients are trying to do and piece it together. It’s like building a jigsaw without all the pieces to start with.”


