Optimising your storage strategy to thrive in the digital economy

14 April 2022

Florian Malecki, executive vice president, marketing, Arcserve

Data is the new gold. We hear this statement a lot. Social media, the Internet of Things, cloud services, cybersecurity, and many other relatively new sectors are generating oceans of data, far beyond anything we have seen before.

Companies are under incredible pressure to store, manage and protect that data as a business-critical requirement. They need to take new and creative approaches to storage to transform their operations and thrive in the digital economy.
 
IDC has predicted that global data creation will increase from 33 zettabytes (ZB) in 2018 to 175 ZB by 2025. Adapting to this jump in data creation has pushed companies into updating their storage strategies with some urgency. But it's not just about having enough storage space. With ransomware threats growing and the rise in home and hybrid working driven by the pandemic, it's more important than ever to ensure that data is both secure and accessible.
 
Put data front and centre of your storage strategy
 
Not all data is created equal. Some of it is business critical, but the vast majority is less important. Organisations need to establish which pieces of data are more critical to their success than others; doing so puts them in a better position to manage their storage and leverage their data.
 
With so much data, and without careful and comprehensive management, companies will likely end up inadvertently putting critical data on less critical servers. When this happens, it's a real problem: slower, secondary machines take longer to access, which delays the ability to leverage that critical data. This lack of speed and agility has a detrimental impact on the business.
 
This is the core of the issue: organisations typically take a server-based approach to their data backup and recovery deployments. Their focus is on backing up the most critical machines, not the most critical data.
 
The change sounds simple, and in many ways, it is. Don't base your backup and recovery policies on the criticality of your servers; base them on the criticality of your data, and match your most important data to your most capable infrastructure. Make the actual content of your data the key decision driver from a backup point of view.
 
You need to implement storage and data management policies based on the value of your data – not your server hierarchy. 
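To make this concrete, here is a minimal sketch in Python of what a data-value-driven policy could look like. The tier names, classification rules, paths, and schedules below are illustrative assumptions, not a prescription; the point is simply that the policy is keyed to what the data is, not which server holds it.

```python
# A minimal sketch of data-value-driven backup policy, rather than server-driven.
# Tier names, classification rules, and schedules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    frequency_minutes: int    # how often to take a backup
    retention_days: int       # how long to keep copies
    offsite_copy: bool        # whether to keep an off-site/immutable copy

# Policies are keyed by the value of the data itself, not by which server holds it.
POLICIES = {
    "critical":  BackupPolicy(frequency_minutes=15,   retention_days=365, offsite_copy=True),
    "important": BackupPolicy(frequency_minutes=240,  retention_days=90,  offsite_copy=True),
    "routine":   BackupPolicy(frequency_minutes=1440, retention_days=30,  offsite_copy=False),
}

def classify(path: str) -> str:
    """Classify a dataset by its business value (toy rules for illustration)."""
    if "finance" in path or "customers" in path:
        return "critical"
    if "projects" in path:
        return "important"
    return "routine"

def policy_for(path: str) -> BackupPolicy:
    return POLICIES[classify(path)]

# The same policy follows the data wherever it lives, even on a "secondary" server.
print(policy_for("/nas02/finance/ledger.db"))   # critical data -> tightest schedule
print(policy_for("/nas01/archive/old-logs/"))   # routine data -> relaxed schedule
```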
 
Is the cloud helping with data?
 
Rapidly becoming a perfectly acceptable and standard platform for data storage, the cloud is here to stay. However, many companies have quickly realised that moving to the cloud is not as cost-effective, secure, or scalable as they initially thought, and that they need to return at least some of their core data and applications to their on-premises data centres for maximum benefit and security.
 
The fact is, data volumes in the cloud have become unwieldy. Organisations are discovering that storing data in the cloud is more expensive than they thought. It's also hard to access that data expeditiously due to the cloud's inherent latency.
 
As a result, it can be more beneficial in terms of cost, security, and performance to move at least some company data back on-premises.
 
With the realisation that the cloud is not the panacea they all thought it would be, organisations are embracing the notion of cloud data repatriation. They're deploying a hybrid infrastructure where some data and applications remain in the cloud while more critical data and applications come back home to on-premises storage infrastructure.
 
We need more storage!
 
As well as repatriated cloud data, there is also the continuing rise of technologies like IoT, artificial intelligence, and 5G. Where will all this data be stored? In many cases, traditional disk storage isn't up to the task. Disk drives are like your family's old estate car: reliable but dull, slow, and unable to turn on a sixpence. But we're increasingly operating in a highly digital world where data has to be available the instant it's needed, not the day after. In this world, every company needs high-performance storage to run its business effectively, not just the biggest and wealthiest.
 
All of this drives the need for ever-greater high-performance storage, such as flash. Flash by name, flash by nature! Flash storage is somewhat like a (flashy) high-performance car: it's cool and sexy and delivers impressive performance, but the price has traditionally put it out of reach for most.
 
As the cost of flash storage drops, storage vendors can increasingly bring all-flash arrays to the mid-market. Quite simply, more organisations are now able to afford this high-performance solution. This price democratisation enables more and more businesses to benefit from the technology.
 
Looking to scale out
 
Today, traditional data storage methods present several challenges: there is little flexibility when a company needs to add more storage, and the hardware is expensive. Managing storage is time-consuming, and since conventional storage often lacks deduplication and compression, data isn't stored efficiently. Migrating data is a massive undertaking when it's time to upgrade, and adding backup and disaster recovery into the equation only compounds the challenge.

Many organisations are adopting scale-out storage solutions that eliminate many of the issues afflicting traditional data storage methods. Scale-out storage is network-attached storage (NAS) in which disk space can be expanded by adding more drives to individual storage clusters, and more clusters as needed. It builds on clustering by adding features like data deduplication and compression, simplified remote management, and built-in backup and disaster recovery options. Ultimately, scale-out isn't just another way to store data; it's a better way to manage, protect, and recover it. A business taking this approach will, in most cases, see time savings, increased efficiencies, and reduced downtime.
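To illustrate the deduplication idea mentioned above, here is a minimal sketch of block-level deduplication: identical chunks are stored once and referenced by their content hash. The fixed chunk size and in-memory store are simplifying assumptions; production scale-out systems use variable-size chunking, compression, and distributed metadata on top of this basic principle.

```python
# Minimal sketch of block-level deduplication: each unique chunk is stored once,
# keyed by its content hash. Fixed-size chunks and an in-memory dict are
# simplifying assumptions for illustration only.
import hashlib

CHUNK_SIZE = 4096
store = {}  # content hash -> chunk bytes

def dedup_write(data: bytes) -> list:
    """Split data into chunks; store only chunks not seen before."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks are stored once
        recipe.append(digest)
    return recipe  # the "file" is now just an ordered list of chunk hashes

def dedup_read(recipe: list) -> bytes:
    return b"".join(store[d] for d in recipe)

a = dedup_write(b"A" * 8192)                 # two identical chunks -> one stored
b = dedup_write(b"A" * 4096 + b"B" * 4096)   # reuses the existing "A" chunk
print(len(store))                            # 2 unique chunks back 4 chunk references
assert dedup_read(a) == b"A" * 8192
```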

Traditional storage isn't practical anymore. With the explosive growth of data, legacy systems quickly hit their limits. That leaves businesses with a few options. Should they move data to the cloud and put their trust in a third party? Or should they continue supporting their infrastructure through expensive upgrades? For many, the answer is making the transition to scale-out storage. Specifically, with object-based scale-out storage, businesses can future-proof their storage infrastructure. Instead of having storage scattered across locations and hardware, object-based storage lets companies treat all storage as one global pool. When it's time to upgrade, companies can add nodes and drives to the storage cluster.
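As a rough illustration of that "one global pool" idea, the sketch below uses an S3-compatible API, which many object-based scale-out systems expose. The endpoint, credentials, and bucket name are placeholder assumptions; the takeaway is that applications address objects by key alone, while the cluster decides which nodes and drives hold the bytes.

```python
# Minimal sketch: with object-based scale-out storage, applications talk to one
# global pool through an S3-compatible API. Endpoint, credentials, and bucket
# below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal:9000",  # hypothetical cluster endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write and read by key alone: no server paths, no per-node mount points.
s3.put_object(Bucket="company-data", Key="finance/2022/q1.csv", Body=b"...")
obj = s3.get_object(Bucket="company-data", Key="finance/2022/q1.csv")
print(obj["Body"].read())

# When capacity runs low, operators add nodes and drives to the cluster;
# the namespace, and this code, stay unchanged.
```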

Scale-out storage also offers simpler management, which matters because managing data isn't easy: data is growing at pace, silos are multiplying, and different data types vary in sensitivity. By centralising the entire data infrastructure, organisations can be more efficient, create uniform policies, and even run backups and recoveries from one place, saving IT admins time and effort.

Keep ahead of the hackers 

Of course, more storage and more data mean a greater risk of cyberattack. Ransomware, in particular, has become a considerable scourge for companies of all sizes. Hackers didn't take long to realise that data stored on network-attached storage devices is extremely valuable, so their attacks are becoming increasingly sophisticated and targeted.

Backup data is the last line of defence, and having it breached is a very serious problem. Hackers will exploit every possible avenue, including attacking backup and unstructured data directly. They do this because, if both the primary and the secondary (backup) data are encrypted, victims have to pay an additional ransom to get their data back.
 
Without an immutable recovery plan, organisations will have to pay one or more ransoms to regain control over their data.
 
With the threat this pervasive, it is a question not of if but when an organisation will need to recover from a 'successful' ransomware attack. It is therefore more important than ever to protect this data with immutable object storage and continuous data protection.
 
Organisations should look for a storage solution that protects information continuously by taking immutable snapshots as frequently as possible (e.g., every 90 seconds). Even when data is overwritten, the older objects remain as part of the snapshot, preserving the original data. Because the object store is immutable, even if ransomware infects primary storage, there will always be another, untouched copy of the original objects that make up the company's data, and it can be recovered quickly, even if it runs to hundreds of terabytes.
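A minimal sketch of what that recovery looks like against a versioned, S3-style object store follows. The bucket and key names are assumptions, and a real deployment would pair versioning with object lock so versions cannot be deleted or altered; the point is that a ransomware overwrite only adds a new version, leaving the original recoverable.

```python
# Minimal sketch of recovery from a versioned object store: an "encrypted"
# overwrite becomes merely the newest version, and the original bytes stay
# retrievable. Bucket/key names are placeholder assumptions; real deployments
# would also apply object lock so old versions are truly immutable.
import boto3

s3 = boto3.client("s3")  # assumes credentials/endpoint are configured elsewhere
BUCKET, KEY = "backup-data", "db/orders.snapshot"

s3.put_bucket_versioning(
    Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"}
)

s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"good data")      # original snapshot
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"ENCRYPTED!!!")   # ransomware overwrite

# Recovery: list the object's versions and read back the pre-attack copy.
versions = s3.list_object_versions(Bucket=BUCKET, Prefix=KEY)["Versions"]
oldest = sorted(versions, key=lambda v: v["LastModified"])[0]
original = s3.get_object(Bucket=BUCKET, Key=KEY, VersionId=oldest["VersionId"])
print(original["Body"].read())  # b"good data"
```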
 
Green storage

This is 2022. We can no longer keep kicking the plastic bottle of sustainability down the road. Global data centres consume staggering amounts of energy, and this must be a key consideration in any data management strategy. 
 
Data centres now eat up around three percent of the world's electricity supply and are responsible for approximately two percent of global greenhouse gas emissions. These numbers put the carbon footprint of data centres on par with the entire airline industry.
 
Most companies develop supply-chain-wide strategies to reduce their carbon footprint and be good corporate citizens. As part of this effort, they are increasingly looking for more environmentally friendly storage solutions that deliver the highest performance and capacity at the lowest possible power consumption.
 
Data is not going to stop being the new gold any time soon. More likely, it will continue to increase in volume and value. Organisations need to work hard to get the most from the data they create and store. By leveraging the latest technology and adopting a modern approach to data storage, companies can reduce energy consumption and benefit from increased efficiency, tighter security, and the ability to thrive in the digital economy.