Will we reach true storage agility in 2013?

I would be interested to hear from any organisation which believes it will have implemented a truly agile data storage and protection infrastructure by this time next year. I do not expect many (if any) positive responses, and there are good reasons why not.

In the past few years the barriers to quickly reconfiguring server usage have melted away. With VMware, for example, one can create a new virtual server, or remove one, in a matter of seconds – cloning the attributes of another server that is already up and running. Consolidation to maximise physical server usage is old hat, and physical capacity can be added to the existing pool without disrupting the workloads already running on it. If only the storage connected to the virtual machines could match this.

Had it been a practical option, I am sure VMware would have come up with complementary storage technology fit for purpose from the outset, so that configuring virtual machine (VM) storage would be part and parcel of creating and tearing down servers. Yet far too much has stood in the way of that.

Difficulties and solutions
An early discovery was that, when servers sharing storage were consolidated, storage access became much more random, so contention multiplied and performance plummeted. This initially helped keep mission-critical applications off consolidated servers. The same effect, often more pronounced, was also experienced when desktops were consolidated in a VDI environment. Some storage vendors, such as Virsto and Nimble, have found ways to alleviate the problem – for instance, by borrowing a database technique to buffer disk writes sequentially in a log file and then update the storage in the background. It is a neat fix, but it does not remove the root cause.
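To make that idea concrete, here is a minimal Python sketch of the general log-buffering technique just described. It is purely illustrative – the class and its names are my own invention, not Virsto's or Nimble's implementation – but it shows the principle: random writes are appended to a sequential log and applied to the real store later by a background task.

```python
import threading
import queue

class LogBufferedStore:
    """Illustrative log-structured write buffer: random writes are first
    appended sequentially to a log, then applied to the backing store by a
    background thread (roughly the database-style technique described above)."""

    def __init__(self, backing_store):
        self.backing_store = backing_store   # dict of block number -> data
        self.log = queue.Queue()             # stands in for a sequential on-disk log
        self._flusher = threading.Thread(target=self._flush_loop, daemon=True)
        self._flusher.start()

    def write(self, block_no, data):
        # Fast path: append to the sequential log and return immediately,
        # so the VM never waits on a random disk seek.
        self.log.put((block_no, data))

    def _flush_loop(self):
        # Background: drain the log and update the real (random-access) store.
        while True:
            block_no, data = self.log.get()
            self.backing_store[block_no] = data
            self.log.task_done()

store = {}
buffered = LogBufferedStore(store)
buffered.write(42, b"hello")   # returns at log-append speed
buffered.log.join()            # wait for the background flush (demo only)
print(store[42])
```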

The ever-burgeoning volume of stored data has never been matched by comprehensive solutions for extracting unwanted data (a problem not helped by compliance rules that make it easier simply to keep everything “in case”). The emphasis has instead been on finding more efficient ways of storing and backing up the data – virtualisation so that data silos on different hardware types appear to the VMs as one big data pool, thin provisioning to avoid carrying unused capacity, compression and de-duplication to reduce the data footprint, and WAN optimisation to reduce transmitted data for faster remote replication and disaster recovery (DR). All of these have helped manage an ongoing problem but, again, not solved it.
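As an illustration of one of those techniques, the sketch below shows block-level de-duplication by content hashing. It is a toy example with made-up function names, not any vendor's product: identical blocks are stored once and referenced many times, shrinking the footprint.

```python
import hashlib

def dedupe_blocks(data, block_size=4096):
    """Illustrative block-level de-duplication: identical blocks are stored
    once and referenced by their content hash."""
    store = {}    # hash -> block contents (stored once)
    recipe = []   # ordered list of hashes needed to rebuild the original data
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    return b"".join(store[digest] for digest in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096    # repeated content
store, recipe = dedupe_blocks(data)
print(len(recipe), "blocks referenced,", len(store), "blocks actually stored")
assert rebuild(store, recipe) == data
```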

The trend is also for systems to be ever more scale-out as well as scale-up, allowing non-disruptive addition of incremental storage “lego-blocks” and almost limitless expansion (as long as extra controllers are also added to maintain access performance). This is tailor-made for cloud environments and removes concerns about hitting a storage capacity ceiling. Yet these improvements are invariably proprietary; users of one vendor’s storage may struggle to switch in order to gain the benefits of another’s “breakthrough” technology – and commoditisation of the hardware is a long way off.
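For a feel of why adding a storage “lego-block” need not be disruptive, here is a hedged Python sketch of consistent hashing – one common scale-out placement scheme, not necessarily what any particular vendor uses – in which adding a node remaps only a fraction of the existing data placements rather than reshuffling everything.

```python
import bisect
import hashlib

def _h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Illustrative consistent-hash ring: adding a storage 'lego-block' only
    remaps the keys that land on the new node, so growth is incremental."""

    def __init__(self, nodes=(), vnodes=64):
        self.ring = []          # sorted list of (hash, node) virtual points
        self.vnodes = vnodes
        for node in nodes:
            self.add_node(node)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (_h(f"{node}#{i}"), node))

    def node_for(self, key):
        idx = bisect.bisect(self.ring, (_h(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["block-1", "block-2"])
before = {k: ring.node_for(k) for k in (f"lun{i}" for i in range(1000))}
ring.add_node("block-3")   # capacity added without touching most placements
moved = sum(1 for k, v in before.items() if ring.node_for(k) != v)
print(f"{moved} of 1000 placements changed after adding a block")
```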

There has been good progress in other areas too. Automated tiering based on a set of pre-defined rules can now keep the critical (tier 1 or tier 0) data to a minimum to maintain performance even as data volumes continue to rise. Tier 1/0 access performance has also been boosted by the greater prevalence of solid state disks (SSDs). Policy-based orchestration may be the best way forward for extending automation and making fast storage tweaks to complement rapid VM amendments – and we should see more of this in 2013.
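By way of illustration, a rules-driven tiering pass might look something like the following Python sketch. The tier names, thresholds and data structures are all hypothetical, not any product's policy engine; the point is simply that pre-defined rules decide promotions and demotions automatically.

```python
from dataclasses import dataclass

# Hypothetical pre-defined rules: hot data is promoted to tier 0 (SSD),
# lukewarm data sits on fast disk, everything else drops to bulk storage.
TIER_RULES = [
    ("tier-0-ssd",  lambda v: v.accesses_last_day >= 100),
    ("tier-1-fast", lambda v: v.accesses_last_day >= 10),
    ("tier-2-bulk", lambda v: True),   # catch-all for cold data
]

@dataclass
class VolumeStats:
    name: str
    accesses_last_day: int
    current_tier: str = "tier-2-bulk"

def apply_tiering(volumes):
    """Return the list of moves implied by the rules, updating each volume."""
    moves = []
    for vol in volumes:
        for tier, rule in TIER_RULES:
            if rule(vol):
                if tier != vol.current_tier:
                    moves.append((vol.name, vol.current_tier, tier))
                    vol.current_tier = tier
                break
    return moves

vols = [VolumeStats("crm-db", 450), VolumeStats("archive", 2), VolumeStats("web-logs", 35)]
for name, src, dst in apply_tiering(vols):
    print(f"move {name}: {src} -> {dst}")
```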

Innovation and commoditisation
Everything I have mentioned has helped improve the end-user experience of storage access, but there is a more fundamental problem. Under the covers are many layers of legacy complexity that keep equipment and software expensive and stand in the way of true innovation. The status quo also suits the major storage system suppliers with large client lists, who foresee dramatically dwindling revenue streams if things become greatly simplified and commoditised ‘too’ quickly.

Some promising technologies do exist. For instance, I have written this year on Fusion-io’s approach. Unlike other SSD vendors, which use flash as a disk drive substitute, it treats NAND flash as memory, which at a stroke removes at least four layers of storage access logic and obviates the need for an I/O scheduler and its software. Its ioDrive2 and ‘auto-commit memory system’ have been demonstrated executing a billion IOPS on a rack of eight servers – despite having to include software to ‘fool’ the system into thinking it is dealing with disk storage.

This is an elegant concept which mirrors the servers’ own memory-based data processing but, so far, it only replaces the fastest drives – so the legacy complexity has a good few years to run.
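As a rough analogy only – this is not Fusion-io’s actual auto-commit memory interface – the Python sketch below uses a memory-mapped file to show the difference in style: data is manipulated with plain memory operations rather than explicit read/write calls pushed down through a block I/O stack.

```python
import mmap
import os

# Rough analogy: memory-mapped access to persistent media, with no explicit
# read()/write() call (and no block-layer round trip) for each update.
path = "flashlike.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # pre-size the backing file

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"                  # plain memory store
    mem.flush()                          # make it durable
    print(bytes(mem[0:5]))
    mem.close()

os.remove(path)                          # tidy up the demo file
```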

I should add that NAND has several advantages over the faster DRAM: it holds 100 times the data in the same space, is more reliable (with data-loss protection), costs less per megabyte (MB) and, because no power is needed to retain the data, reduces running costs and heat output.

It will not be until SSD technology matches the price-performance of the slower drives (along with proven reliability) that an entirely memory-based on-line storage environment will happen. Then, it could process the data so fast that complex software to tier the storage could be eliminated. Not next year I fear – but it will surely come.

A complementary ‘cut through the complexity’ technology, which I have also written about, is ATA over Ethernet (AoE): it transports data in raw Ethernet frames instead of layering another transmission protocol on top (as FCoE [Fibre Channel over Ethernet] and iSCSI do, for instance). It is in the public domain and in the Linux kernel, but most vendors have extensive investments in the protocols it could obviate. (Coraid is so far the only company to exploit its speed and simplicity, using it to support petabyte-scale scale-out expansion.)
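To show how thin that layering is, here is a hedged Python sketch of an AoE frame as I read the openly published specification (the helper function and the field values are mine, and only the basic header is shown): the command rides directly in an Ethernet frame with EtherType 0x88A2 and a few bytes of header, with no TCP/IP or Fibre Channel encapsulation.

```python
import struct

AOE_ETHERTYPE = 0x88A2   # EtherType registered for ATA over Ethernet

def build_aoe_frame(dst_mac, src_mac, major, minor, command, tag, payload=b""):
    """Sketch of the basic AoE framing: Ethernet header plus a small AoE
    header (version/flags, error, shelf, slot, command, tag), then payload."""
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, AOE_ETHERTYPE)
    ver_flags = 0x10    # protocol version 1 in the high nibble, no flags
    aoe_header = struct.pack("!BBHBBI", ver_flags, 0, major, minor, command, tag)
    return eth_header + aoe_header + payload

# Command and tag values here are illustrative only.
frame = build_aoe_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                        major=1, minor=2, command=0, tag=0xDEADBEEF)
print(len(frame), "bytes of Ethernet + AoE framing ahead of the payload")
```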

Then, consider AoE for data transport with flash memory for storage. I can start to see on the horizon an underlying infrastructure perhaps suited to truly agile storage…but not in 2013 I fear.

Another part of the jigsaw is to have an end-to-end view of the heterogeneous storage capacity for forecasting, auditing, compliance – and to offer meaningful charge-back (as now demanded by some cost-conscious executives). It would, of course, be much easier if storage capacity were commoditised (so cheaper and common across most vendors), with capacity decisions then instantly implemented (I can dream, can’t I?).

Software of this type does already exist (apart from the instant implementation); it just has to handle the complexities of multi-layered, proprietary storage solutions from many vendors. One example comes from Aptare: it carries out agentless data collection from hosts and produces a logical and physical connection map showing data space that is allocated but unused, along with high- and low-volume usage. This helps management see a complex storage infrastructure as if it were simple; the pity is that it could actually be simple.
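Purely to illustrate the kind of roll-up such tools provide – this is not Aptare’s product or API, and the figures and rate are invented – a charge-back style view of allocated versus used capacity can be as simple as:

```python
from collections import defaultdict

# Invented per-host capacity figures: (host, allocated GB, used GB).
volumes = [
    ("erp-prod", 2048, 1536),
    ("erp-prod", 1024,  100),
    ("web-farm",  512,  480),
    ("dev-lab",  4096,  700),
]

def chargeback_report(volumes, cost_per_gb=0.08):
    """Aggregate allocated vs used capacity per host and price the allocation."""
    totals = defaultdict(lambda: {"allocated": 0, "used": 0})
    for host, allocated, used in volumes:
        totals[host]["allocated"] += allocated
        totals[host]["used"] += used
    for host, t in sorted(totals.items()):
        unused = t["allocated"] - t["used"]
        print(f"{host:10s} allocated {t['allocated']:6d} GB, "
              f"used {t['used']:6d} GB, unused {unused:6d} GB, "
              f"charge ${t['allocated'] * cost_per_gb:,.2f}")

chargeback_report(volumes)
```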

Nor is that it for 2013 and beyond
I have not even considered other important factors. I have only touched on data security, which ties into the privacy of personal data and retention for compliance purposes; this typically works against the desire to delete unwanted data quickly to free up disk space, and it means encryption and decryption are prerequisites for such systems. (I won’t go into the minefield of rules affecting what data multinationals can legally move across country boundaries.)
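As a minimal sketch of that prerequisite – using the third-party Python ‘cryptography’ package, with the record and key handling invented for illustration – data is encrypted before it ever reaches storage and decrypted on retrieval:

```python
from cryptography.fernet import Fernet   # third-party 'cryptography' package

# Minimal sketch: personal data is encrypted on the way to storage and
# decrypted on the way back, so key management becomes part of the workflow.
key = Fernet.generate_key()              # in practice, held in a key manager
cipher = Fernet(key)

record = b"customer: A. Smith, card ending 4242"
stored = cipher.encrypt(record)          # what actually lands on disk
print(stored[:20], b"...")               # unreadable without the key

assert cipher.decrypt(stored) == record  # retrieval path restores the record
```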

There is also a growing headache for companies in protecting data created and/or stored on mobile devices that travel with the workforce; this adds a whole new dimension to data security, backup and recovery – yet, arguably, it is inherently part of making an enterprise more agile in meeting changing demands.

Data continues to grow. The new name on the block is “Big Data”, which is concerned with mining the stored information for business advantage. This is by no means limited to large enterprises: even small companies may need to trawl through TBs or even PBs of stored data. The constant is that the more data there is, the worse the performance in locating required data or writing out new data – and most innovations are about countering that tendency for performance to degrade while also reducing storage capacity costs.

2013 will undoubtedly bring more innovation to assist agility, but I am pretty confident nobody will get to a truly agile storage environment by this time next year. (I would, of course, love to hear that I’m wrong.)