Network monitoring in a hybrid world

06 May 2021

In 2021, we face more network vulnerabilities, security attacks and data breaches than at any point in the history of IT and communications. This has driven demand for stronger network performance and for better tools to manage the network. Given this, it’s imperative that businesses accommodate all of these systems and networks with the right network monitoring and security tools to protect their corporate assets and - of course - the bottom line.

Neil Collier, co-founder and technical director of GCH Test & Computer Services, explains how internet usage has changed significantly since March 2020, when the Covid-19 pandemic spread across the globe.

“Millions of people in most market sectors were used to working centrally but have been relegated to working from home,” he says. “One of the consequences of this shift was that instead of internet traffic primarily being LAN-based, concentrated in the data centre, connections became WAN-based. This shift brought new challenges; laptops and desktop computers in homes replaced the in-house LAN environment. Every home worker became dependent on broadband and network administrators had to handle increasing calls from staff who may not know how to fix connection problems.”

It’s also important to know that network monitoring isn’t an Orwellian exercise - so before you scramble to delete your internet search history, it’s not “that” kind of monitoring, according to Mathias Hein, consultant at Allegro Packets.

“Network monitoring may sound like ‘Big Brother is watching you’,” he says. “In fact, it is not like this at all. Networks are complex - the internet, a massive number of interconnected networks, is even more so; efficient tools are needed to ensure they function as designed, so problems can be detected and fixed as quickly as possible.”

Hein adds that monitoring does not interfere with the flow of data across networks and that it displays and often stores some or all voice, video and other data traffic generated by devices. “It is relatively easy to locate network faults that, for example, may drop a connection for a long period,” Hein continues. “It is more challenging when faults occur sporadically and in short time spans. That is when efficient, easy-to-read network monitoring equipment is a vital asset to an organisation. It is even more valuable if the monitoring device can store back-in-time data, particularly when searching for a transient problem.”
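Hein’s point about “back-in-time data” can be sketched in a few lines of code. The following is purely illustrative - not any vendor’s implementation - showing how a bounded ring buffer of traffic samples lets an administrator search history after the fact for the short-lived error bursts a live dashboard would miss.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float   # seconds since epoch
    bytes_seen: int    # traffic volume observed in this interval
    errors: int        # e.g. retransmissions or dropped packets

class BackInTimeBuffer:
    """Keep a bounded history of traffic samples so transient
    faults can be investigated after the fact."""

    def __init__(self, max_samples=86_400):  # e.g. one day at 1s resolution
        self._samples = deque(maxlen=max_samples)

    def record(self, sample: Sample) -> None:
        self._samples.append(sample)  # oldest sample drops off automatically

    def window(self, start: float, end: float) -> list:
        """Return all stored samples whose timestamp falls in [start, end]."""
        return [s for s in self._samples if start <= s.timestamp <= end]

    def spikes(self, error_threshold: int) -> list:
        """Find short-lived error bursts that live monitoring may have missed."""
        return [s for s in self._samples if s.errors >= error_threshold]
```

The key design choice is the fixed-size deque: storage is bounded and predictable, at the cost of losing the oldest history once the buffer is full.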

Time now, then, to take a look at how things have changed in the past year. After all, most UK office workers were forced to connect to the company network from home. Did that mean network monitoring had to evolve at short notice?

Adrian Rowley, senior director EMEA at Gigamon, says the dramatic shift to remote working caused networks to turn inside out and become significantly more complex. “Unsurprisingly, with the increase in personal and unsecured devices connecting to a company intranet from around the world, visibility has been clouded and network vulnerabilities have been exacerbated,” he says. “In fact, according to a recent survey, businesses cite remote worker endpoints as their biggest current security risk. With this move to remote working, traffic paths have changed and organisations need to ensure their existing tools are still working effectively and efficiently with the new traffic flows in their infrastructure.”

“With the vast majority of employees working from home, IT teams had to change the way they monitored network traffic,” says Chris Bihary, CEO and co-founder of Garland Technology. “Did employees still have access to all of their resources on cloud-based applications, or were they using a VPN to access the corporate resources behind a firewall? Software defined perimeter technology became prevalent with users only having access to authorised applications to help reduce threats from compromised devices.”
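The software-defined perimeter idea Bihary describes - users can only see and reach the applications they are authorised for - reduces to a default-deny policy check. A minimal sketch (the names and policy shape here are illustrative assumptions, not any product’s API) might look like this:

```python
# Illustrative software-defined-perimeter style policy check.
# A user is only ever offered the applications they are authorised
# for; everything else is invisible, shrinking the blast radius of
# a compromised device.

AUTHORISED = {
    "alice": {"crm", "wiki"},
    "bob": {"wiki"},
}

def visible_apps(user: str) -> set:
    """Return only the applications this user may even see."""
    return AUTHORISED.get(user, set())

def allow_connection(user: str, app: str) -> bool:
    # Default-deny: anything not explicitly authorised is refused.
    return app in visible_apps(user)
```

The important property is the default: an unknown user or unlisted application is denied without needing a rule to say so.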

Chris Labac, vice president of global sales engineering at Viavi Solutions, adds that in today’s remote working world, where many IT departments are limited to remote access to users and their endpoints, the need for comprehensive infrastructure monitoring and insights into the remote end-user has never been greater. “An integral part of this is the ability to flexibly instrument and maintain service health where users are located or applications are hosted,” adds Labac. “The State of the Network 2020 study uncovered that last year, during the start of the remote working crisis, the surge in remote users prompted 58% of IT professionals to seek more network visibility in order to manage bandwidth load, monitor application performance and avoid VPN oversubscription.”

Labac added that this year, “we are seeing an increased investment in IT technology”, with more than 70% adoption of emerging technologies such as SASE, SD-WAN, 5G, and IoT. “In the post-remote workforce world, the need to have visibility into these now mainstream technologies and ensure that the end-user’s experience is maintained before and after deployment has become critical,” he says.

Now that the UK has a so-called ‘clear roadmap’ to cautiously ease lockdown, many, if not most, of the British workforce will see a ‘new normal’ in the form of hybrid working. With that in mind, Bihary opines that ‘SASE is here to stay’, as it gives IT teams greater flexibility to deal with the changing needs of remote workers. “Using SASE, administrators combine security technologies such as Zero Trust Network Access and Firewall as a Service (FWaaS) with network technologies such as SD-WAN,” he says. “This produces a flexible network that can create secure connections between users, defended by a security implementation that’s both lightweight and powerful. Under SASE, security is consumed as a service and partly managed by vendors. Because users no longer need to route their connectivity through the data centre to access their tools, they can gain an advantage in terms of reduced latency and thus increased productivity.”

Hodgson says that Paessler doesn’t “see a fundamental change for network monitoring” due to hybrid working. “Nothing completely new happened, but the focus has changed. Cloud applications, collaboration tools and video conferencing have become vital for many companies and will not vanish once the pandemic becomes a thing of the past,” he continues. “But on-premises infrastructure will also remain part of our daily business; performance and security reasons make it indispensable for the foreseeable future. Network monitoring will need to improve at monitoring cloud applications and services, but it will still need all the on-premises features.”

Then, of course, there’s technology. To carry out network monitoring successfully, the right technology is required. Rowley says “the hero technology of the past year has been cloud”, which he says has been the bedrock for remote workers separated from their teams.

“However, many businesses did not have the time or resources to implement the correct infrastructure needed for the rapid digital transformation initiatives required by the pandemic and were instead forced to upgrade their solutions in a patchwork fashion,” he continues. “For these organisations, migration to the cloud was rushed, yet it continues to play a critical part in business agility. The issue now lies in the fact that legacy network monitoring systems struggle to cope with a hybrid cloud model.”

Rowley adds that “it’s impossible to sufficiently monitor the cloud with tools made for on-premise systems, while cloud tools themselves can often rely on application-level telemetry, and therefore lack visibility into the critical data moving across the network”. He continues: “For network monitoring in this new environment, NetOps teams must find visibility solutions that span the entirety of the hybrid cloud, thus eliminating blind spots and reducing security risks. Ultimately, the key technology here is a platform which allows complete visibility from the core to the cloud.”

It’s a view shared by Labac, who says the latest network monitoring tools implement automation and machine learning (ML) to automatically flag root cause of problems, making it easier for staff to address them.

“For example, the Viavi Observer platform uses automated workflows to provide an end-user experience score – a patent-pending technology that leverages ML to combine more than 30 key performance indicators (KPIs) into a single score that includes problem domain isolation,” he adds.
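The general idea behind such a composite score - not Viavi’s patent-pending method, which is not public, but a common baseline approach - is to normalise each KPI against good/bad reference values and take a weighted average. A hedged sketch, with all names, baselines and weights invented for illustration:

```python
def experience_score(kpis: dict, baselines: dict, weights: dict) -> float:
    """Combine several KPIs into one 0-100 end-user experience score.

    baselines maps each KPI name to (good, bad) reference values; a
    reading at 'good' scores 1.0, at 'bad' scores 0.0, clamped between.
    This also handles "lower is better" metrics such as latency, since
    the (good, bad) pair encodes the direction.
    """
    total, weight_sum = 0.0, 0.0
    for name, value in kpis.items():
        good, bad = baselines[name]
        norm = (value - bad) / (good - bad)   # 1.0 at 'good', 0.0 at 'bad'
        norm = max(0.0, min(1.0, norm))       # clamp out-of-range readings
        total += weights[name] * norm
        weight_sum += weights[name]
    return 100.0 * total / weight_sum
```

For example, scoring a user with 40ms latency (good=20, bad=200) and 0.2% packet loss (good=0, bad=5), weighting latency twice as heavily, yields a score of roughly 91. A production system would replace the static baselines with learned ones, which is where the ML comes in.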

Even with all the right kit in place, problems can still occur - so what can be done about outages, alert fatigue and excess tools? Hein explains that “outages can be disastrous” and that “we are all dependent on computers for our work life”; depending on the organisation affected, outages can create financial meltdown or even be life-threatening.

“While security mechanisms and software are an essential component of networks, the same is true for monitoring solutions,” Hein adds. “Powerful monitoring technology that is easy to use and manage is not an option; it should be recognised as a mandatory component of a network. Alert fatigue can result from having to watch too many tasks at the same time due to insufficient or inadequate fault-reporting tools and poor user interface design. As for the tools themselves, well-crafted, intuitive user interface design can simplify the administrator’s task and help locate and fix problems in a timely and efficient manner.”
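One common way to blunt the alert fatigue Hein describes is deduplication: suppressing repeat alerts for the same fault within a cooldown window, so operators see one actionable alert instead of a flood. A minimal sketch (illustrative only, not any vendor’s product):

```python
import time
from typing import Optional

class AlertDeduplicator:
    """Suppress repeat alerts for the same fault within a cooldown
    window, so operators see one alert instead of a flood."""

    def __init__(self, cooldown_seconds: float = 300.0):
        self.cooldown = cooldown_seconds
        self._last_fired = {}  # fault key -> time it last alerted

    def should_fire(self, fault_key: str, now: Optional[float] = None) -> bool:
        """Return True if this fault should alert now, False if it is
        still within the cooldown of its previous alert."""
        now = time.time() if now is None else now
        last = self._last_fired.get(fault_key)
        if last is not None and now - last < self.cooldown:
            return False  # still in cooldown: swallow the duplicate
        self._last_fired[fault_key] = now
        return True
```

The fault key (e.g. `"link-down:eth0"`) determines what counts as “the same” alert; choosing it well - per interface, per site, per service - is most of the design work.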

Hodgson explains that many monitoring tools take a very in-depth approach, delivering deep analysis even when it isn’t necessary for avoiding failures. That, he explains, leads to many specialised tools, each with its own alerting functionality. “You could then use alert management tools to handle all those specialties, but you will have to pay to maintain all those single tools,” Hodgson continues. “The other approach is to use one solution with a broader scope that can replace two, three or even more of those specialised tools, and combine it with specialised tools for vital areas like advanced traffic analysis.”

Future-proofing is key for all businesses and that is also true when it comes to network monitoring. After all, in the modern era, no network means no business. That means companies will find it incumbent upon them to review their existing set-up.

“It may be time to re-evaluate your monitoring tools or platform when you are upgrading your network speeds - say from 1G to 10G or even 100G - if you regularly experience network degradation, slowdowns or a lack of capacity, or if you find it difficult to monitor all of the different areas of your network: on-prem in the data centre, in remote sites and in the cloud,” says Bihary.

“After you experience a breach or hack, you may also want to look at the monitoring tools you have in place to see how you can mitigate the effects of that breach, and work to prevent future attacks.”

For Hodgson, the “key indicator” for companies to re-evaluate is when they realise that the number of monitoring tools is constantly rising.

“Usually you don’t replace an existing tool as long as it works, but more likely add another one for additional tasks,” he adds. “Depending on the size of the company, this can add up to five, ten or even more separate monitoring tools. At a certain point, managing those tools will take more and more effort, which should be the prompt for you to start thinking about replacing some of them with broader, more efficient tools.”

As far as Labac is concerned, there are many signs that it may be time to re-evaluate a monitoring tool, but the “canary in the coal mine” indicator will be declining customer and stakeholder satisfaction with poor IT user experience. “Others include frustrating interactions with service and support teams. However, the most critical issue may simply be that the tool is not easy to implement,” he continues. “As the skills gap is expanding and IT resources are already spread thin, ease of use and accessibility is the single most important indicator that you’re using the right tool. If your team finds itself overwhelmed with KPI overload, or unable to easily drill down into forensic-level data, then it’s a clear sign that you may want to seek a different network monitoring platform.”

The future of the network is far more complex than we could have imagined. The consumerisation of IT, mobile devices, big data, virtualisation and cloud computing are just some examples of today’s rising trends. But these trends also bring their own new complexities and challenges for the average enterprise.

Rowley says that as IT budgets remain tight following the financial uncertainty of the previous year, many IT teams are also facing the challenge of doing more with less. “In order to ensure efficient network monitoring is possible, it is important to consider where legacy, on-premise tools can be optimised and where existing technology can be upgraded to improve visibility,” he continues. “By leveraging visibility to minimise the traffic being shared with each network tool, they can be optimised and budgets are more likely to be signed off by an organisation’s financial decision-makers, as a cheaper and less dramatic overhaul of current infrastructure is possible.”

Regardless of what we do for a living, we are all dependent on computers and networks and, as Collier points out, even the smallest fault can become costly to locate and rectify. He argues that this task becomes more complex when, instead of a centralised topology, internet traffic is pushed to the edge - a paradigm shift from normal network configuration. “Not every home worker benefits from high-speed broadband or the latest computer/internal infrastructure, so increasing pressure is put on network administrators,” he says. “Two important software applications have escalated as a result: virtual private networks (VPNs) have become standard for many, helping improve end-to-end security, and VoIP coupled with online conferencing has come of age. However, along with these applications, end-user misunderstanding, misconfiguration and network complexity mean that powerful, easy-to-use network analysis equipment is crucial.”

Collier argues that while many have prophesied that the conventional workplace is history, “perhaps such statements may prove to be inaccurate.” He continues: “People are social creatures and need to be together; online conferencing, useful as it is, cannot take the place of real face-to-face gatherings. Even so, for some, home working and dependency on reliable data connectivity is here to stay, and so is the need for smart network monitoring and analysis technology.”

What we do know is that a strong security posture is only possible when there’s pervasive visibility across the entire network, and this is why solutions vendors are focused on providing today’s enterprises with advanced network monitoring and security solutions that deliver intelligent, real-time visibility into the network.

Network outages and other problems can be very costly for businesses to address, not to mention very infuriating. To help enterprises operate as healthily as possible, network monitoring has long been an invaluable service – as old as the networks themselves – in keeping businesses going in the face of adversity. It will also change with the times.