Navigating cloud costs and security challenges? Learn how to optimize your cloud environment, improve data portability, and protect against threats like ransomware.
As businesses look to optimise their costs to weather economic downturns, ramping up cloud spend can cause some headaches. While there are many options to mitigate this, from moving workloads to a more cost-effective environment (or even back on-premises) to re-architecting to save costs, organisations often lack the technical agility to make the most of them.
With so much data to carry, legacy or homegrown applications that don't allow for transfer, and cloud lock-in all to contend with, it can quickly feel like trying to fit a thousand square pegs through a thousand round holes. All of this plays out against a backdrop of cyber threats like ransomware, so the right balance between cost and security needs to be found for every workload. To stay ahead of this, IT teams are increasingly designing and adjusting their environments with portability in mind, but there are some questions to ask first.
Why move data at all?
To state the obvious, modern enterprise IT environments are vastly complex. They can be monolithic yet highly dispersed, and the growing data gravity of some environments is turning many companies into "digital hoarders." This is problematic: holding on to data you don't need exposes you to unnecessary cybersecurity and compliance risks, while data bloat in the cloud also brings severe financial consequences and the dreaded "bill shock" when the invoice lands.
So, even though many companies moved to the cloud in the first place to optimise costs, the flexibility the cloud gives businesses can be something of a double-edged sword. The attraction is that you only pay for what you use; the flip side is that there is no "spending cap," so costs can easily spiral. Better data hygiene helps, but for the data you do need, it's about picking the right platform for the workload, which may mean re-platforming or re-architecting. This is where data governance and hygiene come in: before moving data or improving processes, you need to know exactly what data you have and where it lives.
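To make "picking the right platform for the workload" concrete, the tier decision can be sketched as a simple cost model. All the prices and access figures below are hypothetical, illustrative numbers, not any provider's real rates:

```python
# Illustrative only: per-GB storage and retrieval prices are made up,
# not real cloud provider rates.
TIERS = {
    # tier: (storage $/GB/month, retrieval $/GB)
    "hot": (0.023, 0.00),
    "cool": (0.010, 0.01),
    "archive": (0.002, 0.05),
}

def monthly_cost(tier: str, stored_gb: float, retrieved_gb: float) -> float:
    """Estimate one workload's monthly bill on a given storage tier."""
    storage_rate, retrieval_rate = TIERS[tier]
    return stored_gb * storage_rate + retrieved_gb * retrieval_rate

def cheapest_tier(stored_gb: float, retrieved_gb: float) -> str:
    """Pick the tier with the lowest estimated monthly cost."""
    return min(TIERS, key=lambda t: monthly_cost(t, stored_gb, retrieved_gb))

# Rarely-read backups suit archive; frequently-read data suits hot storage.
print(cheapest_tier(stored_gb=1000, retrieved_gb=1))   # archive
print(cheapest_tier(stored_gb=100, retrieved_gb=500))  # hot
```

The point of the sketch is that the "right" platform depends on the access pattern, not just the volume stored, which is exactly why you need to know what data you have before moving it.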
What data can we move?
So, once you've established what data you should think about moving, whether to a different environment, server, or storage tier, the next, more difficult question is what data you can move. Unfortunately, this is where many organisations hit challenges. Data portability is crucial both for moving things around as needed and for maintaining data hygiene in the long term, yet several factors can make transferring workloads from one location to another difficult. The first is "technical debt": the extra work and maintenance required to bring older or scratch-built applications to a point where they are transferable and compatible with other environments. These issues might stem from shortcuts, mistakes, or simply not following standard procedures during software development, but leaving the debt unfixed makes it impossible to optimise environments and can cause additional problems with things like backup and recovery.
The other, perhaps more infamous, issue affecting data portability is cloud lock-in. It is well known at this point that businesses can easily become locked into specific cloud providers. This can be due to dependencies such as integrations with services and APIs that can't be replicated elsewhere, the sheer "data gravity" built up in a single cloud, or a simple knowledge gap: teams know how to use their current cloud but lack the expertise to work with a different provider. Lock-in only affects moving workloads out of a cloud, so it's still possible to build for better portability, giving you more storage options and promoting better data hygiene. Essentially, businesses need to create some standardisation across their environments, making data more uniform and portable, and mapping and categorising it so they know what they have and what it's for.
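A first step towards that mapping and categorising can be as simple as a machine-readable inventory of datasets, which makes stale or unowned data easy to flag for migration or deletion. The fields and the one-year staleness threshold here are hypothetical choices for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Dataset:
    name: str
    location: str        # e.g. "cloud-a/bucket-1" or "on-prem/nas-2"
    owner: str
    purpose: str
    last_accessed: date

def flag_for_review(inventory: list[Dataset], today: date,
                    stale_days: int = 365) -> list[str]:
    """Return datasets untouched for over `stale_days`: candidates
    to move to colder storage, a cheaper environment, or delete."""
    return [d.name for d in inventory
            if (today - d.last_accessed).days > stale_days]

inventory = [
    Dataset("sales-2019", "cloud-a/bucket-1", "finance", "archive",
            date(2021, 3, 1)),
    Dataset("orders-live", "cloud-a/db-1", "ops", "production",
            date(2024, 6, 1)),
]
print(flag_for_review(inventory, today=date(2024, 6, 2)))  # ['sales-2019']
```

Even a lightweight inventory like this answers the two questions that matter for portability: what do we have, and what is it for.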
The (constant) security question
Finally, it's crucial that security is built in when creating and capitalising on data portability. Of course, improving security can (and should) be a motive for moving workloads in the first place, but if you're migrating workloads to optimise costs, this must be balanced against security considerations. Security needs to be part of the data hygiene process, so teams must ask: "What do we have?", "What do we no longer need?" and "What are the critical workloads we absolutely cannot afford to lose?" Beyond this, continue to patch servers, and when moving data to colder storage, remove internet access where it's not needed.
Having backup and recovery processes in place is also key when moving workloads. To come full circle, easy data portability is also important for disaster recovery. In a critical event like a ransomware attack, the original environment, whether a cloud or an on-premises server, is often unavailable for recovering damaged workloads from a backup: it is typically cordoned off as a crime scene and may still be compromised. To recover quickly and avoid costly downtime, workloads sometimes need to be restored in a temporary environment, such as a different cloud.
As organisations strive to manage their IT environments and avoid financial and cybersecurity surprises, it's important to constantly assess your data and applications and where they are kept. To manage this and adjust as needed, businesses must build with portability in mind. By doing so, they can create a more agile and cost-effective cloud environment and will find it easier to bounce back from disasters like ransomware.