
Feature 7 December 2022 (ComputerWeekly)

It is easy to argue that cloud repatriation happens because the wrong workloads were migrated or poorly managed – yet requirements such as skills, security and costs are not always predictable.

Some repatriations are inevitable – providers can fail, regulations can change, lower latency may be needed and, of course, some kit has to be on-premise somewhere.

Paul Flack, director of solutions sales at public sector-focused Stone Group, says a hybrid model seems best overall for many, especially when customers worry about loss of “physical” control too. At the same time, risk in the cloud varies, depending partly on the provider.

The repatriations Stone sees are typically down to cost, security or skills issues.

“They try to get everything in the cloud, and it doesn’t quite meet their expectations and they end up dragging things back in,” says Flack. “When it turns out expensively, the first thing is, they go: ‘help, we need that back on-site’.”

A lack of visibility can be disconcerting, and teams may not know how to manage cloud workloads. But recruiting cloud engineers and architects is expensive, adding to the cost.

“The cost of that person and that team goes up with it, as well as potentially the different types of cost that you manage on a consumption basis versus a capex model,” says Flack.
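The consumption-versus-capex comparison Flack describes can be sketched as a back-of-envelope calculation. All figures below are illustrative assumptions, not numbers from Stone or anyone quoted here:

```python
# Hypothetical comparison of a capex (on-premise) model versus a
# consumption-based cloud model. Every figure is an illustrative
# assumption; real comparisons need an organisation's own numbers.

def on_prem_monthly_cost(hardware_capex, lifetime_months,
                         staff_monthly, power_monthly):
    """Amortise up-front hardware spend and add fixed running costs."""
    return hardware_capex / lifetime_months + staff_monthly + power_monthly

def cloud_monthly_cost(hourly_rate, hours_per_month, cloud_staff_monthly):
    """Consumption billing plus the (often pricier) cloud-skilled staff
    Flack mentions."""
    return hourly_rate * hours_per_month + cloud_staff_monthly

onprem = on_prem_monthly_cost(hardware_capex=120_000, lifetime_months=48,
                              staff_monthly=6_000, power_monthly=1_500)
cloud = cloud_monthly_cost(hourly_rate=14.0, hours_per_month=730,
                           cloud_staff_monthly=8_000)

print(f"on-prem ~ {onprem:,.0f}/month, cloud ~ {cloud:,.0f}/month")
```

With these made-up inputs the always-on cloud workload comes out more expensive per month; the point is only that staffing and consumption rates belong in the same calculation as amortised hardware.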

Bharat Mistry, technical director at security supplier Trend Micro, agrees, pointing out that some customers assume cyber security will all be taken care of by the cloud provider.

“The reality is, it depends on a number of things and your appetite for responsibilities,” says Mistry. “Often the dividing line isn’t clearly understood.”

People taking infrastructure as a service (IaaS) may assume everything is protected, but typically there is a limit, for instance when it comes to patching and data responsibilities.
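The dividing line Mistry says is often misunderstood is usually framed as a shared responsibility model. A minimal sketch of how that split might be encoded – the categories and assignments here are a simplified illustration, not any provider's official matrix:

```python
# Illustrative shared-responsibility split under IaaS. Categories and
# owners are a simplified sketch, not any cloud provider's official matrix.

IAAS_RESPONSIBILITY = {
    "physical datacentre security": "provider",
    "hypervisor and host patching": "provider",
    "guest OS patching": "customer",
    "application security": "customer",
    "data classification and backups": "customer",
    "identity and access management": "customer",
}

def customer_owned(matrix):
    """Return the items the customer is still on the hook for."""
    return [item for item, owner in matrix.items() if owner == "customer"]

for item in customer_owned(IAAS_RESPONSIBILITY):
    print("still the customer's job:", item)
```

Even in this toy version, most of the security-relevant rows above the hypervisor stay with the customer – which is exactly the due-diligence gap Mistry warns about.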

“The provider may have things like firewall services that you can use – but are they equivalent to the kind of firewall that you may have had on-premise?” says Mistry. “Quite often, it’s rudimentary.

“If you haven’t done your due diligence and homework properly, you have to bolster it with something else on top.”

Traditionally, organisations would have done a full risk assessment and penetration testing, or gone on-site to explore checks and balances. But you cannot easily walk inside Microsoft Azure or Amazon Web Services (AWS) datacentres.

Sold on massive cost savings

Jeff Denworth, co-founder of storage supplier Vast Data, notes that executives can easily be sold on the idea of massive cost savings. This is not entirely their fault – huge marketing efforts over the past 10 years have positioned the cost benefits of cloud up front.

“Everybody at C-level loves the idea, like ‘oh, we want to save trillions of dollars. Thank you, Amazon, for saving us, blah, blah, blah’,” says Denworth. “Then the IT team start rationalising how it can be executed – and have to go and refactor all their code.”

An overall “lift and shift” of virtual machines, storage, networking and so on into public cloud “may be about five times what they were spending” previously, while the likes of Basecamp went cloud native before things like Kubernetes on-prem were an option, says Denworth...

