
Feature 4 July 2023 (Computer Weekly)





A service mesh is a layer of IT infrastructure that controls service-to-service communication over a network to enable separate parts of an application to communicate with each other. It is often used as an elegant approach to deliver communications between containers in a cloud-native architecture.


But while the optimisation of data communications between applications in containerised, hybrid environments can help master complexity, and even cost, this doesn’t necessarily mandate a service mesh architecture.


Steve Judd, solutions architect at machine identity management supplier and Kubernetes consultant Venafi, warns that using a service mesh to separate application logic from the networking side won’t necessarily reduce complex IT management headaches for every organisation. But it does provide some security benefits.


Kubernetes containers need to talk to each other over some kind of network. But having a flat communications architecture, where any container can “talk” to any other container in a cluster, is problematic, he tells Computer Weekly.


“For security reasons, you probably want to isolate some of those workloads, so they’re not free to talk to whatever they like,” adds Judd. “You might want different containers to be able to effectively authenticate with each other through mutual TLS [transport layer security].”
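
For illustration, the kind of workload isolation Judd describes can be expressed with Kubernetes’ own NetworkPolicy resource, even before a mesh is introduced. The short Python sketch below generates such a policy; the namespace, labels and port are assumptions made for the example, not details from the article.

    # Minimal sketch: a Kubernetes NetworkPolicy that stops a hypothetical
    # "payments" workload being reachable from every other container, allowing
    # ingress only from pods labelled app=orders. All names are illustrative.
    import yaml  # PyYAML

    network_policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "payments-allow-orders", "namespace": "shop"},
        "spec": {
            # The pods this policy protects.
            "podSelector": {"matchLabels": {"app": "payments"}},
            "policyTypes": ["Ingress"],
            # Only app=orders pods may connect, and only on port 8443.
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": "orders"}}}],
                "ports": [{"protocol": "TCP", "port": 8443}],
            }],
        },
    }

    print(yaml.safe_dump(network_policy, sort_keys=False))

A service mesh then layers workload identity and mutual TLS on top of this kind of network-level control.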


Malicious actors may target cloud-native architectures, so security and compliance considerations remain key when thinking about network optimisation and management. Chief information security officers (CISOs) may require mutual TLS. Service mesh by default can help manage those security certificates in a self-contained way.


Major service mesh options, including Linkerd and Istio, provide mutual TLS support out of the box, with Istio being more of a “kitchen sink” approach than Linkerd, says Judd.
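
As a concrete illustration, Istio exposes this through its PeerAuthentication resource: applying a “STRICT” policy in Istio’s root namespace (istio-system by default) requires mutual TLS for service-to-service traffic across the mesh. The Python sketch below generates that manifest; it is a minimal example rather than a production configuration.

    # Minimal sketch: mesh-wide strict mutual TLS in Istio, expressed as the
    # PeerAuthentication resource Istio documents. Placing it in the root
    # namespace (istio-system by default) applies it to every workload.
    import yaml

    peer_authentication = {
        "apiVersion": "security.istio.io/v1beta1",
        "kind": "PeerAuthentication",
        "metadata": {"name": "default", "namespace": "istio-system"},
        "spec": {"mtls": {"mode": "STRICT"}},
    }

    print(yaml.safe_dump(peer_authentication, sort_keys=False))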


Complexity plus – or minus for some

For now, highly regulated industries and sectors such as banks and other financial services providers are likely to be the biggest adopters of service mesh technologies. This is because a service mesh offers a way to meet certain regulatory compliance requirements in a Kubernetes environment.


Judd explains: “You don’t have to touch the applications running in the microservices. They don’t need to care that you’ve got mutual TLS and encrypted traffic, because that’s done externally to those microservices. All they know is that they can talk to this kind of port and the thing at the other end responds.”


By capturing every single request that goes between two containers or workloads in a cluster, such a network architecture also delivers strong observability.


“You can do things like traceability, you can get much better metrics for performance in terms of latency and volume and bandwidth and so on, in addition to kinds of network policies for traffic management and control,” says Judd.
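
Those metrics typically land in Prometheus, which Istio’s sidecar proxies feed with standard request counters and latency histograms. As a rough sketch of what Judd describes, the Python snippet below queries request rate and 99th-percentile latency for one workload; the Prometheus address and the “payments” service name are assumptions for the example.

    # Minimal sketch: pull request rate and p99 latency for one workload from
    # the Prometheus instance that commonly scrapes an Istio mesh.
    # istio_requests_total and istio_request_duration_milliseconds are the
    # standard Istio proxy metrics; the URL and service name are assumed.
    import requests

    PROMETHEUS = "http://prometheus.istio-system:9090"  # assumed address

    queries = {
        "request_rate": 'sum(rate(istio_requests_total{destination_service_name="payments"}[5m]))',
        "p99_latency_ms": (
            'histogram_quantile(0.99, sum(rate('
            'istio_request_duration_milliseconds_bucket{destination_service_name="payments"}[5m]'
            ')) by (le))'
        ),
    }

    for name, promql in queries.items():
        resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql}, timeout=5)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        value = result[0]["value"][1] if result else "no data"
        print(f"{name}: {value}")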


Such traceability offers IT departments the ability to implement “circuit breaker” patterns for greater resiliency, for example enabling a shift to a different workload if the first one fails to respond within a specified number of seconds.
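
In Istio’s terms, that pattern is usually a mix of timeouts, retries and outlier detection. The Python sketch below generates a hypothetical DestinationRule that ejects a backend returning repeated errors, plus a VirtualService that retries a request elsewhere if it gets no answer within a couple of seconds; the host name and thresholds are illustrative assumptions.

    # Minimal sketch of a "circuit breaker" for a hypothetical "payments"
    # service: outlier detection ejects an unhealthy endpoint, while the
    # VirtualService caps each attempt at 2s and retries up to three times.
    import yaml

    destination_rule = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "DestinationRule",
        "metadata": {"name": "payments-circuit-breaker"},
        "spec": {
            "host": "payments",
            "trafficPolicy": {
                # Eject an endpoint after five consecutive 5xx errors,
                # keeping it out of the load-balancing pool for 30 seconds.
                "outlierDetection": {
                    "consecutive5xxErrors": 5,
                    "interval": "10s",
                    "baseEjectionTime": "30s",
                    "maxEjectionPercent": 50,
                },
            },
        },
    }

    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "payments-timeouts"},
        "spec": {
            "hosts": ["payments"],
            "http": [{
                "route": [{"destination": {"host": "payments"}}],
                # Give up on an individual attempt after 2s and try again.
                "timeout": "6s",
                "retries": {"attempts": 3, "perTryTimeout": "2s"},
            }],
        },
    }

    print(yaml.safe_dump_all([destination_rule, virtual_service], sort_keys=False))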


According to Judd, a service mesh also offers more options when it comes to upgrading microservices.


Suppose an application is made up of five “version one” microservices, with all the network traffic happily going between them. If you want to upgrade one of those microservices to version two without committing to send all the traffic in the first instance, you can send a tranche of traffic, according to a specified metric.


And because a service mesh “understands” network traffic, such gradual roll-outs of microservices can be written into the mesh configuration while the new version is being tested.


“That’s called a ‘canary release’, like the canary in a coalmine to see if the canary dies,” Judd says. “You just say, ‘Send 5% of all my traffic to this new service’, and monitor it. If it looks like it’s performing, you can say, ‘Now send 50%’.”
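
In Istio, that 5% split is typically declared in a VirtualService, with a DestinationRule naming the two versions as subsets. The sketch below generates both for a hypothetical “payments” service; the service name, labels and weights are assumptions chosen to match Judd’s example.

    # Minimal sketch of a 5% canary: a DestinationRule defining v1/v2 subsets
    # and a VirtualService weighting traffic 95/5 between them.
    import yaml

    destination_rule = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "DestinationRule",
        "metadata": {"name": "payments-versions"},
        "spec": {
            "host": "payments",
            "subsets": [
                {"name": "v1", "labels": {"version": "v1"}},
                {"name": "v2", "labels": {"version": "v2"}},
            ],
        },
    }

    virtual_service = {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": "payments-canary"},
        "spec": {
            "hosts": ["payments"],
            "http": [{
                "route": [
                    # Keep 95% of traffic on the existing version...
                    {"destination": {"host": "payments", "subset": "v1"}, "weight": 95},
                    # ...and send 5% to the canary; raise this as confidence grows.
                    {"destination": {"host": "payments", "subset": "v2"}, "weight": 5},
                ],
            }],
        },
    }

    print(yaml.safe_dump_all([destination_rule, virtual_service], sort_keys=False))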


But, as Judd points out, Istio’s more “kitchen sink” offering can be quite complicated, requiring a huge amount of configuration and related knowledge to get right. “Then you have to spend ages troubleshooting, maybe. So that’s kind of a downside,” he says.


Linkerd offers a much slimmer mesh out of the box, yet has integrations with various other technologies. For canary deployments, you’ll integrate with a different tool to provide that aspect as part of the mesh, says Judd.
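
One common pairing here is a progressive-delivery tool such as Flagger managing a Service Mesh Interface (SMI) TrafficSplit that Linkerd then enforces. The sketch below generates such a resource by hand; the service names, namespace and exact SMI API version are assumptions for illustration.

    # Minimal sketch of an SMI TrafficSplit of the kind a tool like Flagger
    # manages for a Linkerd canary: the apex "payments" service fans out
    # 95/5 across two backend services. All names are illustrative.
    import yaml

    traffic_split = {
        "apiVersion": "split.smi-spec.io/v1alpha2",
        "kind": "TrafficSplit",
        "metadata": {"name": "payments-split", "namespace": "shop"},
        "spec": {
            # The apex service that clients call...
            "service": "payments",
            # ...and how its traffic is divided between the two deployments.
            "backends": [
                {"service": "payments-v1", "weight": 95},
                {"service": "payments-v2", "weight": 5},
            ],
        },
    }

    print(yaml.safe_dump(traffic_split, sort_keys=False))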


However, if an organisation doesn’t specifically need a service mesh, those requirements can be met some other way. Probably only about 15% of all the Kubernetes clusters in the world today run a service mesh. Of that 15%, the majority run an Istio-based service mesh, with the remainder largely Linkerd, says Judd.


Outsourcing remains an option

When it comes to getting started with a service mesh, Kai Waehner, field chief technology officer (CTO) at Confluent, notes that IT departments have a number of choices: work with Istio and Envoy, use Linkerd with its sidecar proxy, or target some service mesh capabilities via software-as-a-service (SaaS) provision “under the hood”.


“Our customers have told us, ‘We just want to use your service and you should optimise that internally’. The software providers can do that for you, so you don’t have to care,” he says.


A core engine can manage networking and routing optimisations, such as terminating connections internally, rate-limiting them, and linking or routing physical cluster traffic internally, for a lower total cost. Providers such as Confluent implement service mesh patterns internally, Waehner points out.


With optimised network connections and transfers, and the right configuration, cost can be reduced. That’s a top use case for cellular networking, beyond “big contract cloud”, he says.


Manish Mangal, global head of 5G and network services business at Tech Mahindra, says the company uses a service mesh when migrating telco networks’ cloud-native platforms towards 5G for optimised communication and multicloud deployments.


“We recommend service mesh architectures like Istio to our customers to secure, connect and monitor services at scale and across geographies with few or no service code changes,” says Mangal.


He says a service mesh can assist network performance across distributed systems with efficient traffic routing, load balancing, traffic encryption and security, observability and monitoring, service resilience, and service discovery, helping reduce downtime, and potentially cost as well. ...



