There is something attractive about containers (the software variety) when it comes to resilience and disaster recovery. The idea is to package up an application with everything it needs to run, except for the basic operating system.
A container therefore holds the application executable, configuration files and scripts, system libraries, and anything else that needs to be nailed down to avoid being caught out by niggling OS differences between one server and another. Once this is done, a container can be moved from one platform (an on-premises server, for example) to another (a cloud server, for instance) and run again immediately. But is this enough?
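Before coming to that question, it helps to see what “packaging everything up” looks like in practice. The sketch below is a minimal container build file for a hypothetical Python application (app.py, with its libraries listed in requirements.txt and its settings in config.yaml); the names and details are assumptions and will vary from one application to the next.

    # Dockerfile: a minimal sketch, assuming a hypothetical Python application
    # made up of app.py, libraries in requirements.txt and settings in config.yaml

    # Slim base image supplying the basic operating system layer and language runtime
    FROM python:3.12-slim
    WORKDIR /app

    # Bake the application's library dependencies into the image
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Add the executable code and its configuration
    COPY app.py config.yaml ./

    # The command the container runs when it starts
    CMD ["python", "app.py"]

Built once, the resulting image runs unchanged on an on-premises server or a cloud host, which is precisely the portability described above.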
A common complaint is that not all business-critical applications lend themselves to containerisation. The answer may then be to run those applications as virtual machines instead. The drawback is that each virtual machine carries its own complete operating system with it. This makes virtual machines hungrier for system resources, and they also take longer to “spin up”.
For accountancy firms with good data backup policies, the extra time may not be an issue. For online commerce systems, on the other hand, even short outages can be highly visible to customers, who then relay their reactions over social networks and other channels.
While there is no point in being too ambitious to start with, the answer may be to reconsider the organisation's IT architecture. New applications can be designed to be containerisable from the outset, especially if they follow a microservices architecture in which the app is really a collection of mini-apps working together to deliver the overall service.
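To make the microservices idea concrete, the sketch below shows a hypothetical compose file in which a storefront, an order-handling service and a database run as separate containers started together. The service names, images and credentials are invented purely for illustration.

    # docker-compose.yml: a sketch of one app built as cooperating mini-apps
    services:
      storefront:                        # customer-facing web front end
        image: example/storefront:1.0    # hypothetical image name
        ports:
          - "80:8080"
        depends_on:
          - orders
      orders:                            # order-handling mini-app
        image: example/orders:1.0        # hypothetical image name
        environment:
          DATABASE_URL: postgres://orders:secret@db:5432/orders
        depends_on:
          - db
      db:                                # each mini-app can own its own data store
        image: postgres:16
        environment:
          POSTGRES_USER: orders
          POSTGRES_PASSWORD: secret
          POSTGRES_DB: orders

Each mini-app can then be updated, scaled or restored independently, which is where the resilience benefit comes from.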
Legacy apps may resist efforts to containerise them. However, by definition, they belong to the past and will need to be replaced sooner or later, possibly via a gradual process of building microservice apps that replicate individual functionalities one by one and wean the organisation off the monolithic legacy version.
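One common way of doing this gradually (sometimes called the “strangler” approach) is to put a reverse proxy in front of the legacy system and peel off one function at a time. The fragment below is a hypothetical nginx configuration in which only the invoicing path has so far moved to a new containerised microservice; the host names, ports and paths are assumptions for illustration, not a finished design.

    # nginx fragment: a sketch of gradual migration away from a monolith
    upstream legacy_monolith { server legacy-app:8080; }   # the existing system
    upstream invoicing_svc   { server invoicing:8000; }    # first new microservice

    server {
        listen 80;

        # Functionality already rebuilt as a containerised microservice
        location /invoices/ {
            proxy_pass http://invoicing_svc;
        }

        # Everything else stays on the legacy application for now
        location / {
            proxy_pass http://legacy_monolith;
        }
    }

As more functions are rebuilt, more paths point at new services, until the monolithic legacy version can finally be retired.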