Big data is all around us and getting bigger by the minute. As enterprises and organizations, we produce big data ourselves through internal system logs, online sales sites, and “digital exhaust” of different kinds, although our individual output is only a drop in the ocean compared to all the big data out there.

But does it make sense to plan for the disaster recovery of big data, given its sheer size and variety, and the competing need to ensure DR for critical operational systems outside big data, such as production, finance, and payroll? The answer comes down to one question.

To decide whether your big data merits disaster recovery planning and management, ask yourself whether it is mission-critical for your organization.

Companies moving towards big data but shying away from DR preparations are likely to see their big data as non-mission-critical, less important than operational systems and data, too big to back up anyway, or some combination of these things.

However, big data will become mission-critical for an increasing number of businesses, if only to fight back against competitors using big data and analytics to win more market share, steal customers, and apply price pressure.

Backup options then range from data replication (copying big data as it is acquired) and snapshots at sufficiently frequent intervals to classical database backups to disk or tape. Running big data operations in the cloud may simplify these options.
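As a rough illustration of the snapshot option, the sketch below copies a data directory into a timestamped snapshot folder and prunes snapshots older than a retention window. The paths, interval, and retention period are hypothetical placeholders; a real deployment would normally rely on the snapshot or replication features of the storage platform itself (a distributed file system, a cloud object store, or the database) rather than plain file copies.

```python
import shutil
import time
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical locations and schedule; substitute the values used in your environment.
DATA_DIR = Path("/data/bigdata")           # live data to protect
SNAPSHOT_ROOT = Path("/backup/snapshots")  # where snapshots accumulate
RETENTION = timedelta(days=7)              # keep a week of snapshots

def take_snapshot() -> Path:
    """Copy the data directory into a new timestamped snapshot folder."""
    stamp = datetime.now().strftime("%Y%m%dT%H%M%S")
    target = SNAPSHOT_ROOT / f"snapshot-{stamp}"
    shutil.copytree(DATA_DIR, target)
    return target

def prune_old_snapshots() -> None:
    """Delete snapshots that fall outside the retention window."""
    cutoff = datetime.now() - RETENTION
    for snap in SNAPSHOT_ROOT.glob("snapshot-*"):
        stamp = datetime.strptime(snap.name.removeprefix("snapshot-"), "%Y%m%dT%H%M%S")
        if stamp < cutoff:
            shutil.rmtree(snap)

if __name__ == "__main__":
    SNAPSHOT_ROOT.mkdir(parents=True, exist_ok=True)
    while True:
        take_snapshot()
        prune_old_snapshots()
        time.sleep(6 * 3600)  # snapshot every six hours; tune to your RPO
```

The snapshot frequency chosen here directly determines how much data could be lost in the worst case, which is exactly the RPO question taken up next.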

Disaster recovery managers will then have to figure out what DR objectives such as the recovery time objective (RTO) and recovery point objective (RPO) should be, whether big data can be separated into critical and non-critical parts, and whether backup and DR procedures comply with any applicable legislation.
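To make those objectives concrete, here is a minimal, hypothetical calculation of the data-loss and downtime windows implied by a backup schedule. The figures (snapshot interval, off-site copy time, restore time) are illustrative assumptions, not benchmarks; the point is that the snapshot cadence bounds the worst-case RPO, while restore and validation time drives the RTO.

```python
from datetime import timedelta

# Illustrative assumptions about the backup schedule and restore process.
snapshot_interval = timedelta(hours=6)      # how often big data is snapshotted
copy_to_offsite = timedelta(minutes=45)     # time to ship a snapshot off-site
restore_and_validate = timedelta(hours=4)   # time to restore and check the data

# Worst-case data loss: disaster strikes just before the next snapshot,
# and the latest completed snapshot may not yet have reached the off-site copy.
worst_case_rpo = snapshot_interval + copy_to_offsite

# Worst-case downtime for this dataset is dominated by restore and validation.
worst_case_rto = restore_and_validate

rpo_target = timedelta(hours=8)
rto_target = timedelta(hours=6)

print(f"Worst-case RPO: {worst_case_rpo} (target {rpo_target}) -> "
      f"{'OK' if worst_case_rpo <= rpo_target else 'snapshot more often'}")
print(f"Worst-case RTO: {worst_case_rto} (target {rto_target}) -> "
      f"{'OK' if worst_case_rto <= rto_target else 'speed up restore'}")
```

In this scenario the six-hour snapshot cadence comfortably meets an eight-hour RPO target, whereas tightening the RPO to, say, one hour would push the design towards continuous replication instead.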

One thing, at least, will not change: testing the DR plan for mission-critical big data will be just as essential as it is today for “ordinary” data.