Prioritising workloads in a hyper-converged infrastructure migration
A cost-effective approach to combining virtualisation and unified management is likely to mean deploying hyper-converged infrastructure workload by workload, experts suggest.
A hyper-converged infrastructure (HCI) should be greater than the sum of its parts. Analysts note that migrating workload by workload suits most organisations, because it lets operators base deployment decisions on the peculiarities of each workload while aiming for end-to-end management, visibility, resilience and integration.
Todd Traver, vice-president for resiliency at the Uptime Institute, says an organisation that is ready to make the move to HCI – as most will want to – can start by selecting an initial digestible chunk and move on from there. However, significant preparation and a view of the route ahead across all applications are needed for success.
“It requires you to rethink a lot of things you have already implemented, and the need to change organisationally to support it,” he says. “You can use HCI to deploy a private cloud or a hybrid cloud on your premises, which is great, but it brings similar challenges to moving to the cloud, because it is a form of cloud.”
Traver says this means examining every application in detail. Probably about 20% will be virtualised already, and a further 60-65% can be virtualised “with reasonable effort”.
Some 20% of applications can probably be sent straight to the back of the queue because of their architecture. These may be legacy applications that can be migrated once they reach end-of-life, and some may never be suitable for a move to HCI at all, he adds.
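As a rough illustration of that triage, the short Python sketch below simply applies Traver’s percentages to a notional estate of 500 applications – the estate size and bucket labels are assumptions made for the example, not figures from the Uptime Institute.

```python
# Hypothetical triage arithmetic: Traver's rough percentages applied to a
# made-up estate of 500 applications. Labels and estate size are assumptions.
ESTATE_SIZE = 500

triage_shares = {
    "already virtualised": 0.20,                    # ready to move largely as-is
    "virtualisable with reasonable effort": 0.60,   # lower bound of the 60-65%
    "back of the queue (legacy/unsuitable)": 0.20,
}

for bucket, share in triage_shares.items():
    print(f"{bucket}: ~{round(ESTATE_SIZE * share)} applications")
```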
Incorporating legacy issues
Examples might include COBOL or old banking-type applications running on an IBM Z-series mainframe that may simply not port over. Or an application might have multiple dependencies on legacy databases, which means it cannot easily be put into a virtual machine (VM) or container and migrated.
“Also, for an application that is completely CPU-bound, you would be kind of wasting your money putting it on an HCI platform. Hardware is not inexpensive,” says Traver.
HCI migrations will require staff with relevant skills who are not already fully utilised. This, too, is easier if more applications are already virtualised rather than running on bare-metal servers. Staff need to understand the applications, how they are built, their interdependencies, and the workloads to be migrated. If not, it might be time to bring in a consultant or consider outsourcing.
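One way to make those interdependencies concrete is to capture them as a graph and migrate in dependency order. The sketch below is a minimal illustration, assuming a hand-maintained map of which application depends on which; the application names are invented, and a real inventory would come from discovery tooling or a CMDB.

```python
# Minimal sketch of dependency-aware migration ordering. The application
# names and the depends-on map are invented for illustration.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

depends_on = {
    "web-frontend": {"order-service"},
    "order-service": {"billing-db", "legacy-middleware"},
    "billing-db": set(),
    "legacy-middleware": set(),
}

# static_order() yields dependencies before their dependants, so each
# application moves only after the services it relies on have moved.
for app in TopologicalSorter(depends_on).static_order():
    print(app)
```

Moving dependencies first is only one heuristic – tightly coupled applications may need to move together in a single wave – but even a simple ordering like this exposes the clusters that cannot be migrated piecemeal.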
Traver warns that it would be a mistake to simply let an environment run down ahead of a migration by putting maintenance on the back-burner. The entire infrastructure needs to be ready to migrate, and the hardware must be suitable. Unfortunately, he says, a lot of organisations carry a growing “technical debt” – in other words, they have not invested enough in their IT.
If the operation is short-handed, the skills are not available and the environment is a mess – with servers in need of replacement, code in dire need of rewrites or maintenance long overdue – it will almost certainly be better to delay an HCI migration until these issues are fixed.
“Doing an HCI migration is still possible, but it’s going to take a lot of work,” says Traver. “There needs to be buy-in from the top as well, because managing the environment going forward will be different.”
Doing it right will not only hide the “complications of the hardware”, he says, but will also deliver a single control plane and a different approach to developing applications, one that enables testing and monitoring them in real time. Hardware and data costs can be massively reduced, depending on the organisation’s starting point.
“Depending on who you talk to, there can be a 10:1 reduction in hardware in some cases,” says Traver. “In data, I have seen numbers indicating 50:1, due to things like deduplication and compression.”
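To put those ratios into concrete terms, a purely illustrative calculation follows – the starting figures are invented, and real savings depend entirely on the workload mix and how well it deduplicates and compresses.

```python
# Illustrative arithmetic only: what the quoted best-case ratios would mean
# for an invented starting footprint.
servers_before = 200   # assumed current server count
raw_data_tb = 500      # assumed data footprint before reduction

hardware_ratio = 10    # "10:1 reduction in hardware in some cases"
data_ratio = 50        # "50:1 ... due to deduplication and compression"

print(f"Servers: {servers_before} -> ~{servers_before // hardware_ratio}")
print(f"Data: {raw_data_tb} TB -> ~{raw_data_tb / data_ratio:.0f} TB stored")
```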
The application process
Dominic Maidment, technology architect at B2B gas and electricity supplier Total Gas & Power, has seen the dust begin to settle on a part-complete HCI migration that began five years ago and covers some 100 applications.
Nutanix enterprise cloud software was layered on top of newly purchased hardware to achieve, among other things, ease of management through a “single pane of glass”. Mostly, this has been managed successfully in-house with key supplier support, says Maidment.
“It’s by no means finished,” he adds. “We still have workflows on the legacy infrastructure. But we are in a staged approach across workloads onto a converged infrastructure. Some things – such as the Tru64 servers – had to be emulated, the really difficult stuff, because they didn’t move so well. But it was nice really, because in fact the industry had an answer.”
The new platform will support digital transformation and multicloud operations (including AWS and Microsoft Azure integration), as well as continuing to support legacy applications where needed, combining VMware and AHV hypervisors with the Prism management console.
The firm wanted to jettison its old Sun SPARC boxes and x86 systems and, if possible, the SAN storage “to bring everything in”, but it is a “complicated puzzle”, says Maidment. Legacy assets include Oracle databases, middleware, and bespoke apps written on top of them.
Some databases have very specific licensing stipulations that make it tricky to simply migrate them to virtualised infrastructure. Other legacy systems are out of support and so would be difficult to rebuild if required.
Maidment says the company typically opts to extract the apps, recode them and retest everything before migrating them onto modern infrastructure where possible, but this approach will not work for every organisation.
Total Gas & Power has yet to decide the ultimate fate of its database automation. It first considered a move to public cloud for scaling and redesign, but in the end opted to refactor it in readiness for an eventual migration, with the destination yet to be decided.
“The general virtualisation stuff, however, has been an absolute no-brainer,” says Maidment. “That was done pretty much in two weeks, moving 100 machines. First, just your generalised workstations, certain servers, Windows and Linux things, or domain controllers.
“The only things left are those that require a lot of testing, and with bespoke code that again would need to be emulated.”
Workloads to drive requirements
HCI is typically about abstracting away from the hardware to align with the workload or application, instead of being dictated to by the back end, as Lee Dilworth, VMware’s chief technologist of cloud platforms for Europe, the Middle East and Africa (EMEA), puts it.
“It’s ensuring you can deliver the right service to the right application at the right place and time,” he says. “You can start with a very small environment, then scale it out east to west, across the datacentre, and even within the nodes themselves. You can start off half-populated and add devices as you are growing.”
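A back-of-the-envelope model shows what “start half-populated and grow” can mean for usable capacity. The node counts, drive sizes and two-way replication factor below are assumptions made for the sketch, not figures from VMware or any other vendor.

```python
# Illustrative HCI capacity model. All figures are invented assumptions;
# real designs must account for the vendor's resilience scheme and overheads.
def usable_tb(nodes: int, drives_per_node: int, drive_tb: float,
              replication_factor: int = 2) -> float:
    """Raw capacity divided by the number of copies kept for resilience."""
    return nodes * drives_per_node * drive_tb / replication_factor

# Day one: four nodes, each half-populated with six 4 TB drives.
print(usable_tb(nodes=4, drives_per_node=6, drive_tb=4.0))    # 48.0 TB usable

# Later: fill the empty bays (12 drives per node) and add two more nodes.
print(usable_tb(nodes=6, drives_per_node=12, drive_tb=4.0))   # 144.0 TB usable
```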
Organisations should ask what their platform should look like for the next five to 10 years, says Dilworth, and they should think about their infrastructure from top to bottom, starting from the application layer instead of building up from the storage layer.
...