It hardly needs repeating that data volumes are increasing. Indeed, they are growing at a prodigious rate: estimates suggest that data volumes are doubling every eighteen months. This is partly because there is simply more data that we can usefully collect and use, and partly because governance and compliance requirements mean that information must be retained for longer periods than was previously the case. At the same time, while disk capacities are getting bigger, disk drives themselves are not getting much faster. This means that, all other things being equal, performance will deteriorate unacceptably. Moreover, service level agreements are becoming ever more stringent, thereby demanding performance improvements rather than merely tolerating the status quo.
One remedy to this problem is to throw more hardware at it. However, at the same time that data volumes are growing and performance demands are increasing, data centres are overflowing, with escalating power and cooling costs; indeed, in some data centres there is simply no more power available. An alternative approach, which avoids adding hardware altogether, is to use data archival software: by moving infrequently accessed data out of production systems, you reduce the volume of data that must be processed and stored on primary storage.
IBM Optim provides facilities that ensure that archived data does not "break" — that is, that it remains complete and usable — and it offers a variety of mechanisms for accessing that data. In this white paper we will discuss data breakage and access in more detail, and then briefly consider how IBM Optim works, particularly with respect to System z, though Optim's capabilities are equally applicable to distributed systems.