News stories abound these days of the latest organization hit by a ransomware attack and paying tens of thousands, if not millions, of dollars to get its data back. The most recent high-profile victim was Colonial Pipeline, which paid nearly $5 million to recover its stolen data. That payment is far from unique: in just the past year, US travel services company CWT paid $4.5 million, travel insurance provider Travelex paid $2.3 million, and chemical distribution company Brenntag paid $4.4 million in data ransom payments.
That context is what makes Fujifilm’s refusal to pay a ransom, while suffering only limited operational disruption, so remarkable. The company’s backup processes, and its ability to keep “sufficient backups in place as part of its normal operation procedures,” meant it was able to get operations up and running in days without paying a cent in ransom.
Fujifilm represents an exceptional test case for what those of us in data protection work towards every day: developing the right backup policies, executed consistently and correctly, to effectively safeguard critical data.
Backup Procedure Best Practices
Any organization can protect itself against ransomware attacks the way Fujifilm did, so long as it implements key best practices in its backup operations.
1. Develop Department or Vertical-Specific Backup Schedules
Backup policies typically call for regularly scheduled full backups, supplemented by more frequent backups in between. These protocols balance the need for data protection, and compliance with government-mandated regulations, against organizational pressure to control IT resource costs such as storage usage. However, a one-size-fits-all approach across all backup assets may not be effective.
Evaluate how quickly critical data is created in different parts of your organization, and the financial and time-related cost of losing a given day’s worth of data within each of those parts. The greater the cost, the more frequently backups should be performed. While this may add some incremental storage and systems cost in the short term, in the long term it will better protect you against ransomware attacks.
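The cost-to-frequency mapping above can be sketched in a few lines. The thresholds, cost figures, and department names below are hypothetical, purely to illustrate the idea of tiering backup intervals by the cost of a lost day of data:

```python
# Sketch: derive a backup frequency per department from the estimated
# cost of losing one day of its data. All thresholds and figures here
# are hypothetical; tune them to your own loss estimates.

def backup_interval_hours(daily_loss_cost_usd: float) -> int:
    """Map the cost of losing a day's data to a backup interval in hours."""
    if daily_loss_cost_usd >= 100_000:
        return 1    # near-continuous protection for the most critical data
    if daily_loss_cost_usd >= 10_000:
        return 6
    if daily_loss_cost_usd >= 1_000:
        return 12
    return 24       # a single daily backup suffices for low-impact data

departments = {
    "finance": 250_000,     # hypothetical cost of one lost day, in USD
    "engineering": 40_000,
    "marketing": 2_500,
    "archive": 100,
}

for name, cost in departments.items():
    print(f"{name}: back up every {backup_interval_hours(cost)}h")
```

The point of encoding the policy this way is that the schedule becomes an explicit, reviewable function of business impact rather than a default applied uniformly.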
2. Encrypt & Store Backups In Networks Outside Your Organization
A backup is essentially a copy of valuable data in your organization. If any particular backup is easy to find, attackers can access and hold ransom not just the original file but its backup too, leaving the backup neither accessible nor restorable.
This is why encrypting your backups and storing them in separate networks and locations are proven ways to ensure backups remain accessible when needed. Encrypting a backup at the client level, before saving it to its final destination, ensures that only designated personnel holding the cipher keys can access it and that third parties cannot tamper with it. Further, storing these files in separate networks, such as cloud destinations, ensures that even if your organization’s servers are attacked, these outside networks, and the data housed in them, remain secure.
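Client-side encryption before upload can be sketched as follows. This example assumes the third-party Python `cryptography` package and its Fernet scheme (authenticated symmetric encryption); any equivalent authenticated encryption works. The key handling shown is illustrative, not a key-management recommendation:

```python
# Sketch: encrypt a backup at the client before it leaves the machine.
# Assumes the third-party `cryptography` package; Fernet provides
# authenticated encryption, so tampering is detected at decrypt time.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store in a secrets manager, never alongside the backup
fernet = Fernet(key)

backup_bytes = b"...contents of the backup archive..."
ciphertext = fernet.encrypt(backup_bytes)   # now safe to ship to an off-site network

# At restore time, the same key both decrypts and authenticates the archive.
restored = fernet.decrypt(ciphertext)
assert restored == backup_bytes
```

Because encryption happens before the data reaches its destination, a compromised storage network yields the attacker only ciphertext.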
3. Institute Backup Failure Escalation Protocols
Data backups will inevitably fail at some point. What matters is how critical any given piece of data is to the organization, and how critical a particular type of failure is to data protection. With this understood, you can put escalation protocols in place so your team triages the most critical failures first and less critical failures later.
For instance, a backup that fails due to a locked file error is likely no cause for concern: it doesn’t reflect a broader system issue and will likely succeed on an automatic retry without intervention. However, a media error, in which backups fail to write to a specific medium, is concerning. It likely reflects a bigger system issue that needs attention as soon as possible. Understanding which failure types warrant immediate attention, and having automated alerts in place to identify them, is one example of an escalation protocol that gets ahead of systemic failure issues.
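A minimal triage rule along these lines can be expressed as a lookup from failure type to action. The error names and severity buckets below are hypothetical; the point is that only systemic errors produce an alert:

```python
# Sketch: classify backup failures by type so only systemic errors raise
# an alert. Error names and bucket membership here are assumptions.

RETRYABLE = {"locked_file", "transient_network"}   # safe to retry silently
SYSTEMIC = {"media_write_error", "auth_failure"}   # need immediate attention

def triage(error_type: str) -> str:
    """Return the action to take for a given failure type."""
    if error_type in SYSTEMIC:
        return "alert"    # page the team right away
    if error_type in RETRYABLE:
        return "retry"    # let the scheduler retry without intervention
    return "review"       # unknown errors go to a periodic review queue

print(triage("locked_file"))        # retry
print(triage("media_write_error"))  # alert
```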
A separate approach is to automate the identification of repetitive failures. As mentioned above, a locked file error will likely correct itself with automated retries; more serious underlying issues, however, result in consecutive failures. Determining a consecutive-failure threshold (i.e., the number of failed backup jobs in a row) and creating alerts around that threshold is another way to efficiently manage failures and escalate only the most concerning issues to the team for faster remediation.
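The consecutive-failure threshold can be sketched as a small piece of state per job. The threshold of 3 is an assumption to be tuned per job criticality:

```python
# Sketch: escalate a backup job only after N consecutive failures, so a
# one-off locked-file error never pages anyone. THRESHOLD is an assumption.
from collections import defaultdict

THRESHOLD = 3
consecutive_failures = defaultdict(int)

def record_result(job: str, succeeded: bool) -> bool:
    """Record one backup outcome; return True when the job should escalate."""
    if succeeded:
        consecutive_failures[job] = 0   # any success resets the streak
        return False
    consecutive_failures[job] += 1
    return consecutive_failures[job] >= THRESHOLD

# A transient failure that succeeds on retry never escalates...
print(record_result("hr-db", False), record_result("hr-db", True))  # False False

# ...but three failures in a row cross the threshold.
outcomes = [record_result("sales-db", ok) for ok in (False, False, False)]
print(outcomes)  # [False, False, True]
```

Because a success resets the counter, flapping jobs still surface eventually, while genuinely transient errors stay quiet.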
4. Automate Critical Backup Failure Ticketing
It isn’t enough to just have critical backup failure identification processes in place. You’ll also want a process for resolving the most critical failures in your environment as soon as possible. This is where automatically generating incidents comes into play.
Using your escalation protocols, which might key off the type of failure, the number of consecutive failures, or even the department or application the backup belongs to, you can automatically flag items that require immediate attention. With failures correctly tagged, you have a trigger to automate the creation of a backup-failure ticket. Rather than waiting hours, if not days, for an incident to be raised, your system now files a failure ticket in seconds.
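The trigger-to-ticket step can be sketched as below. The ticket fields, the `file_ticket` helper, and the priority rule are all hypothetical; in practice this function would call your ITSM tool’s API (ServiceNow, Jira Service Management, etc.):

```python
# Sketch: turn an escalated backup failure into an incident ticket the
# moment it is tagged. Field names and the priority rule are assumptions.
import datetime

def file_ticket(failure: dict) -> dict:
    """Build a ticket record from a tagged critical backup failure."""
    return {
        "title": f"Backup failure: {failure['job']} ({failure['error_type']})",
        # Hypothetical rule: finance backups are highest priority.
        "priority": "P1" if failure["department"] == "finance" else "P2",
        "opened_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "details": failure,
    }

failure = {
    "job": "finance-db-nightly",
    "error_type": "media_write_error",
    "department": "finance",
    "consecutive_failures": 3,
}
ticket = file_ticket(failure)
print(ticket["priority"], "-", ticket["title"])
```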
With automated notifications, your team can address critical failures faster. This ensures that in the event of a cyberattack, your most essential backups will be available to restore normal operations.
What To Expect When Implementing This Approach
No business continuity plan is going to safeguard 100% of your data in the event of a ransomware attack. This is especially true since research shows the average time to identify that a data breach has taken place is 196 days.
As a result, organizations must take a page out of Fujifilm’s book and ask what it takes to keep operational disruption limited in the event of a ransomware attack. Reviewing your backup assets, assessing their relative importance to your organization, and developing automated ways to identify and fix impediments to their backup schedules is a time-tested path to business resilience against these events.