Thin provisioning: storage when required

Content Copyright © 2007 Bloor. All Rights Reserved.

‘Thin provisioning’ is growing in popularity with enterprise storage managers, who find it eases their management burden, and with internal and external users, who are charged for the amount of storage they actually use. It also sits comfortably with capacity-on-demand initiatives because, unlike conventional storage virtualisation, storage space is allocated only when it is actually used.

As an example of the trend, EqualLogic, which specialises in enterprise-class iSCSI SAN solutions, has now added thin provisioning to its PS Series of storage arrays through a no-cost firmware upgrade designed to improve utilisation, scalability and ease-of-use.

Thin provisioning holds an obvious attraction for ISPs and ASPs, who may provide hosted storage from a large bank of capacity and then charge for usage. The contentious issue of users paying for unused capacity allocated to them should melt away with this approach. With this market obviously in mind, EqualLogic has also introduced role-based management, which can restrict particular pools of storage to authorised personnel and give each customer a view limited to its own resources.

In a sense, thin provisioning is a logical step on from storage virtualisation, now a well-established part of storage provision, which helps both to simplify storage management and to drive up percentage utilisation. The virtualisation process presents logical disk ‘volumes’ to the user which are mapped to physical devices behind the scenes; typically a few logical volumes span many physical disks, which may include a mix of disk capacities and types. Overall utilisation is driven up because spare space for expansion does not need to be allocated on each and every physical device.
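As a rough illustration of that mapping, the sketch below (Python, with made-up names and a fixed extent size chosen purely for illustration, not any vendor's actual design) shows a conventionally provisioned pool in which every extent of a logical volume is reserved on some physical disk at creation time:

```python
# Toy sketch of conventional (fully provisioned) virtualisation: names,
# extent size and structure are assumptions for illustration only.

EXTENT_MB = 64  # assumed fixed extent size

class PhysicalDisk:
    def __init__(self, name, capacity_mb):
        self.name = name
        self.free_extents = capacity_mb // EXTENT_MB

class VirtualisedPool:
    """Maps logical-volume extents onto whichever physical disk has space."""

    def __init__(self, disks):
        self.disks = disks
        self.map = {}  # (volume, logical_extent_no) -> physical disk name

    def create_volume(self, volume, size_mb):
        # Fully provisioned: every extent is reserved up front, whether or
        # not the user ever writes data to it.
        for extent_no in range(size_mb // EXTENT_MB):
            disk = next((d for d in self.disks if d.free_extents > 0), None)
            if disk is None:
                raise RuntimeError("pool exhausted")
            disk.free_extents -= 1
            self.map[(volume, extent_no)] = disk.name
```

The point to note is that the reservation happens at creation time, so the physical space is tied up from day one.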

However, decisions about logical-to-physical mapping have to be made ahead of time, before new data begins to populate the allocated disk space. Then, if space is running out, further capacity needs to be added and/or the data needs to be moved around, which means the IT staff have further decisions to make and implement.

Thin provisioning looks at things differently. Virtualisation still applies and a total storage pool still needs to be designated. However, the mapping and allocation of space occur only when data actually arrives to be written out to disk. So space allocation becomes an automated, real-time process. As long as sufficient overall capacity is present, no manual intervention is needed.
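A minimal sketch of that allocate-on-write behaviour follows, again in Python with assumed names and structure rather than any vendor's actual implementation: creating a volume merely records the size promised to the user, and a backing extent is claimed only the first time a given address is written.

```python
# Toy sketch of thin provisioning: extents are claimed on first write, not at
# volume creation. Names and extent size are assumptions for illustration.

EXTENT_MB = 64  # assumed fixed extent size

class ThinPool:
    def __init__(self, physical_capacity_mb):
        self.free_extents = physical_capacity_mb // EXTENT_MB
        self.volumes = {}      # volume name -> promised (virtual) size in MB
        self.map = {}          # (volume, logical_extent_no) -> physical extent id
        self.next_extent = 0

    def create_volume(self, volume, virtual_size_mb):
        # Only the promised size is recorded; no physical space is reserved yet.
        self.volumes[volume] = virtual_size_mb

    def write(self, volume, logical_extent_no):
        # Claim a backing extent the first time this logical address is written.
        key = (volume, logical_extent_no)
        if key not in self.map:
            if self.free_extents == 0:
                raise RuntimeError("pool exhausted: add physical capacity")
            self.free_extents -= 1
            self.map[key] = self.next_extent
            self.next_extent += 1
        return self.map[key]   # the data would now be written to this extent
```

Provided the pool as a whole has headroom, the only manual task left is monitoring that headroom and adding disks before it runs out.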

Percentage utilisation can rise further than with standard virtualisation, since the different logical pools draw on overall capacity only when they need it, so only one ‘buffer’ of spare capacity is required. Meanwhile, overall capacity forecasting also becomes easier. EqualLogic’s firmware upgrade assists this in a small way by also offering historical performance (IOPS and latency) measurement and trend analysis.
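To see why, take some hypothetical figures (mine, not EqualLogic's): ten volumes, each promised 1 TB but actually holding 400 GB of data.

```python
# Hypothetical figures (not vendor data) to illustrate the utilisation point.
volumes = 10
used_per_volume_gb = 400            # data actually written per volume
promised_per_volume_gb = 1000       # capacity promised to each user
shared_buffer_gb = 1000             # one pool-wide buffer of spare capacity

fully_provisioned_gb = volumes * promised_per_volume_gb          # 10,000 GB bought
thin_pool_gb = volumes * used_per_volume_gb + shared_buffer_gb   #  5,000 GB bought

data_gb = volumes * used_per_volume_gb
print(f"full provisioning: {data_gb / fully_provisioned_gb:.0%} utilised")  # 40%
print(f"thin provisioning: {data_gb / thin_pool_gb:.0%} utilised")          # 80%
```

The exact numbers are invented, but the shape of the saving is the point: one shared buffer replaces ten private ones.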

However, as with virtualisation, hardware suppliers are very good at applying thin provisioning to their own equipment but not so good at including other suppliers' equipment in the mix, as many enterprises would like. If it comes, thin provisioning across a mix of manufacturers’ disk hardware will likewise be more attractive to large enterprises wanting to realise the full benefits of this approach. I suspect that is some way off.

Despite this, there is good reason for storage managers to consider the value they can gain in ease of administration and in better utilisation, which in turn leads to lower power and cooling costs.