As systems and technologies become more complicated, they also become more vulnerable to failure. The failure may be big or it may be little, but when you have 100 moving parts where before you had only 10, your chances of a breakdown increase. On the other hand, some technologies aim specifically to prevent such failure, and others, by good fortune, have resistance to failure built in. IT applications, for example, can now run in the cloud, where virtualisation and data replication increase overall reliability and resilience dramatically. “Phew”, you say to yourself, “at last, technology that won’t put my business at risk”. Unfortunately, there is an even bigger risk that technology cannot help you with.

According to a recent disaster recovery survey, human error is still the biggest danger to IT system availability. It has been that way since computing began. Remember when you accidentally clobbered a system floppy disk (a long, long time ago) on your departmental minicomputer and had to run over to your friend in the neighbouring department to get a new one made? Things have not got better since then; they have got worse. Now, one careless configuration error in storage area network backup routines can wipe out terabytes of data, not just kilobytes.

The survey, compiled by CloudEndure, rated human error at 8.1 out of a possible 10 as a threat to system availability, while network failures scored 7.2, cloud downtime 6.9 and external threats 6.7. It seems there is nothing as effective as a human being for messing up a system, and possibly a business. Or rather, there is no bigger threat than a human being combined with a lack of availability awareness, proper training and the basic process checks and balances that ward off errors in the making. Technology (which is also made by humans) will continue to advance, but the people who use it must advance with it. The future is not “people or machines”, it is “people and machines”, and people will need to keep honing their skills and know-how continually.
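
As a small illustration of what “process checks and balances” can mean in practice, here is a minimal sketch of a pre-flight guard for a destructive cleanup job. Everything in it is an assumption for illustration (the script, the protected paths, the confirmation flow); it is not drawn from the survey or any particular product. The idea is simply that a dry-run default and a typed confirmation give a careless operator two chances to catch an error before terabytes disappear.

```python
#!/usr/bin/env python3
"""Hypothetical pre-flight guard for a destructive cleanup job (illustrative only)."""
import argparse
import sys
from pathlib import Path

# Assumed examples of paths the script should never touch.
PROTECTED_ROOTS = {Path("/"), Path("/data"), Path("/backups")}

def main() -> int:
    parser = argparse.ArgumentParser(description="Purge old backup sets")
    parser.add_argument("target", type=Path, help="directory whose contents will be removed")
    parser.add_argument("--execute", action="store_true",
                        help="actually delete files (default is a dry run)")
    args = parser.parse_args()

    target = args.target.resolve()

    # Check #1: refuse obviously dangerous targets outright.
    if target in PROTECTED_ROOTS:
        print(f"Refusing to operate on protected path: {target}", file=sys.stderr)
        return 1

    victims = sorted(p for p in target.rglob("*") if p.is_file())

    # Check #2: default to a dry run that only reports what would happen.
    if not args.execute:
        print(f"[dry run] would delete {len(victims)} files under {target}")
        return 0

    # Check #3: require the operator to type the target path back exactly.
    typed = input(f"Type the full path '{target}' to confirm deletion: ")
    if typed.strip() != str(target):
        print("Confirmation did not match; aborting.", file=sys.stderr)
        return 1

    for path in victims:
        path.unlink()
    print(f"Deleted {len(victims)} files under {target}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

None of this makes the human error impossible; it just puts a process in the way of the mistake, which is exactly the kind of balance the survey suggests is missing.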