Computers are as old as the hills if you count early technologies like the abacus. Short of picking an abacus up and shaking it, it must have been hard to make one crash. When the first gigantic computers arrived, with their valves and switches, so did the first crashes from bugs, some of which were literal insects flying into the electrics and causing problems.

When the Internet's precursor, the ARPANET, came online (all four nodes of it), its first crash came when a user typed the letter “g” in “login”. Now crashes can happen anywhere and everywhere, leaving IT managers nervously biting their fingernails over the data and uptime they stand to lose. But what if a crash-proof computer became an affordable reality?

A new type of computer system was announced over a year ago by the MIT Computer Science and Artificial Intelligence Laboratory. Strictly speaking, it is not so much crash-proof as data-loss-proof.

Its file system has been redesigned for complete reliability and zero data loss, with the guarantee proven mathematically, whether the computer is functioning normally, crashes, suffers a power failure, hits hardware errors or software bugs, or malfunctions in any other way. That should be music to the ears of administrators who, even today, are still uncovering errors left in their file systems by past crash recoveries.

Even in this age of cloud computing, shadow storage, and redundant virtual machines on standby, guaranteeing zero data loss in the event of an accident remains elusive. There is always some computer instruction or disk operation that can lose or corrupt data if an interruption hits at just the wrong moment.
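To make the timing hazard concrete, here is a minimal sketch (in Python, and not the MIT mechanism itself) of the classic problem and the classic workaround: overwriting a file in place can leave it truncated if a crash lands mid-write, whereas writing to a temporary file and renaming it over the original is atomic on POSIX systems, so a crash leaves either the old contents or the new, never a mix. The file names are hypothetical.

```python
import os
import tempfile

def unsafe_update(path, data):
    # Overwrites in place: open(..., "w") truncates the file first,
    # so a crash before write() completes leaves it empty or partial.
    with open(path, "w") as f:
        f.write(data)

def atomic_update(path, data):
    # Write to a temporary file in the same directory, flush it to
    # stable storage, then rename over the original. The rename is
    # atomic, so readers see either the old file or the new one.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to disk
        os.replace(tmp_path, path)  # atomic replacement
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_update("demo.txt", "version 2")
```

Even this pattern only protects a single file; the point of the MIT work is to extend that kind of guarantee, with a machine-checked proof, to the whole file system.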

Granted, many database systems have built-in ACID properties (atomicity, consistency, isolation, durability) to prevent such loss within database transactions, but other parts of the computer system enjoy no such protection. The MIT design could change this, yielding more reliable, more efficient, and equally affordable computers, if technologists, vendors, and customers integrate it into their designs, production, and daily use.
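The ACID guarantee mentioned above can be seen in a few lines using SQLite, which ships with Python. In this illustrative sketch (the account names and the simulated failure are invented for the example), two balance updates are wrapped in one transaction; when a simulated failure strikes between commit points, the database rolls both updates back, so the data is never left half-changed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # "with conn" opens a transaction: it commits on success
        # and rolls back automatically if an exception escapes.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
            raise RuntimeError("simulated power failure mid-transaction")
    except RuntimeError:
        pass  # the rollback has already undone both updates

transfer(conn, "alice", "bob", 50)
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # both balances are unchanged: alice 100, bob 0
```

A file system offers no equivalent umbrella for ordinary files, which is exactly the gap the verified MIT design aims to close.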