06 December 2023
Jon Fielding, managing director EMEA, Apricorn
The ability to quickly restore backed-up company data following an incident such as a network breach or technical failure is pivotal to business continuity. In the wake of a ransomware attack that involves the theft of information, for example, backups enable an organisation to recover quickly from a clean data set and resume its activities.
Every business should have a comprehensive backup regimen in place. However, almost two thirds of UK companies have lost data due to failed backups, according to a recent survey of security leaders in large enterprises carried out by Apricorn.
The majority of respondents (90%) said that their company had been forced to recover data from a backup in the last year. However, just over a quarter (27%) were able to recover everything – a drop from 45% in 2022. It appears we may actually have taken a step backwards in terms of the efficacy of our backup strategies and – as a result – business resilience.
‘Head in the sand’ isn’t the issue
Almost a third of respondents attributed the unsuccessful recovery to inadequate backup processes, while 22% admitted ‘we don’t have sufficiently robust backups in place to allow rapid recovery from any attack.’
This acknowledgement suggests a healthy level of awareness exists around the limitations of current backup strategies. So, if denial or lack of understanding is not the problem, why are fewer companies able to successfully restore all their data when they need to? And what are the possible flaws in the approaches that have been put in place?
Other findings from Apricorn’s research point to the impact the increasing decentralisation of IT could be having on the issue.
The case for automation
One clue to the worsening situation is an increase in backups being carried out manually, in parallel with a marked drop in automated backups. Of the companies surveyed, 50% automate their backups, while 48% rely on manual backups as policy. This corresponds with a rise in employees making local backups of the data they create and handle, for example to personal storage devices.
The drivers for this change are logical: giving users more autonomy and control over routine tasks reduces the workload that falls to already overburdened IT teams. Having a local copy of data also acts as a failsafe, particularly when an employee is working away from the office, allowing them to restore their information fast if something goes wrong. However, this strategy is likely to expose organisations to human error.
Implementing automation in addition to requiring local backups will mitigate the risk of people forgetting to execute the process or doing it incorrectly. It's also essential to ensure that all data is backed up to a central repository, in addition to a personal one, because relying on any single form of backup creates a single point of failure.
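To illustrate the idea, an automated job along these lines can be sketched in a few lines of Python: copy each file to both a local and a central destination in one pass, and record a checksum manifest so every copy can be verified later. This is a minimal sketch for illustration only – the function name, paths and layout are assumptions, not a description of any specific product or process:

```python
import hashlib
import shutil
from pathlib import Path

def back_up(source: Path, destinations: list[Path]) -> dict:
    """Copy every file under `source` to each destination directory,
    returning a SHA-256 manifest keyed by relative path so each copy
    can be verified during a restore drill."""
    manifest = {}
    for f in source.rglob("*"):
        if not f.is_file():
            continue
        rel = f.relative_to(source)
        manifest[str(rel)] = hashlib.sha256(f.read_bytes()).hexdigest()
        for dest in destinations:
            target = dest / rel
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
    return manifest
```

Run on a schedule (cron, Task Scheduler, or a backup agent), a routine like this removes the dependency on each employee remembering to copy their own files, while still producing both the personal and the central copy.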
Back up the backups
The time-honoured advice is to stick to the ‘3-2-1 rule’: have at least three copies of data, stored on at least two different media, at least one of which should be offsite. A multi-layered solution ensures that if one copy is compromised, lost, damaged or stolen, at least one other will be intact. This enables information to be quickly and fully recovered following any disruption.
Ideally there should be more than one offsite storage location: one online, in the cloud, and one offline. A straightforward way to fulfil this role is with an encrypted removable hard drive or USB which can be disconnected from the network to create an ‘air gap’ between the data and the threat.
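The 3-2-1 rule lends itself to an automated check: given an inventory of backup copies, confirm there are at least three copies, on at least two media, with at least one offsite. A minimal sketch, with a hypothetical data model assumed purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "HQ NAS", "cloud bucket", "encrypted USB drive"
    medium: str     # e.g. "nas", "cloud", "usb"
    offsite: bool   # stored away from the primary site?

def satisfies_3_2_1(copies: list) -> bool:
    """True if the inventory meets the 3-2-1 rule: >= 3 copies,
    >= 2 distinct media, >= 1 offsite copy."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )
```

A check like this could run as part of routine backup monitoring, flagging the moment an inventory drifts out of compliance – for instance when an offsite USB copy is retired without a replacement.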
Crucially, wherever data is stored, and wherever it flows, it should always be encrypted. While this won't prevent information from being accessed or stolen, it will render it unreadable to anyone without the decryption key, keeping it confidential even if it falls into the wrong hands.
Don’t just ‘fit and forget’
Backup processes should be regularly rehearsed and tested to make sure they remain fit for purpose, and continually tweaked and improved where gaps are identified. Testing can easily be incorporated into the organisation’s overall incident response testing plans and scenarios; for example through simulated exercises such as those carried out during red teaming.
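One concrete restore drill is to recover files to a scratch location and compare every one against the checksums recorded at backup time. A minimal sketch, assuming a manifest of SHA-256 hashes keyed by relative path (such as one produced when the backup was taken):

```python
import hashlib
from pathlib import Path

def verify_restore(restored: Path, manifest: dict) -> list:
    """Return the relative paths whose restored contents are missing
    or do not match the checksum recorded at backup time."""
    failures = []
    for rel, expected in manifest.items():
        f = restored / rel
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != expected:
            failures.append(rel)
    return failures
```

An empty result means the drill succeeded; anything else pinpoints exactly which files the backup regimen failed to protect, turning a vague "backups exist" into evidence that recovery actually works.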
Process is just as important as product for businesses seeking to bolster their incident response capabilities. When it comes to responding effectively to an incident that disrupts critical data, investing in appropriate technology tools and solutions is only half the battle. These need to be wrapped in a multi-layered backup plan, built on proper procedures and policies that cover all bases and are rigorously tested. Failing to pay enough attention to process will lead to more data being lost, and more recoveries being delayed.