Analyst Coverage: Peter Williams
The Agile storage concept is that an enterprise’s storage, data access and update requirements must change in response to fast-changing business imperatives – quickly (in minutes), easily (using non-specialist IT or even business staff) and non-disruptively (with live production systems unaffected by background changes automatically applied as appropriate).
Ideally, this will work seamlessly in line with server and network infrastructure changes. Creating a virtual machine (VM) should automatically acquire storage, and removing one should release unused storage for reallocation; virtual networks should easily re-optimise throughput after connections are amended. The key is to automate as much as possible.
Agile storage is a goal, not a current reality, partly because organisations embarking on agility projects typically begin with an eclectic mix of storage hardware, software and network infrastructure that is both hard to modify and too valuable to jettison en bloc for a new solution (which will itself still fall short of the ideal).
The best approach pools all enterprise storage in one repository – in practice virtualised across multiple storage devices and accessed through a storage area network (SAN). All enterprise servers then connect to this pool (with suitable access security and firewalls), facilitating global and local data management, security and data protection policies, and changes to physical and virtual devices.
Barriers to rapid VM reconfiguration have melted away and virtual private networks (VPNs) and soft switching now assist rapid reconfiguring of network infrastructure – but storage is a tougher task.
Agile storage offers the most value to large enterprises that need to be lighter on their feet – for instance, to accommodate mergers where IT systems must integrate quickly. The ability to make quick “tweaks” to IT infrastructure enables rapid responses to changing business requirements, helping to gain competitive edge.
Every organisation, of whatever size, has burgeoning storage issues – so all will benefit from some of the developments.
To combat the data explosion, vendors focus on storage and backup efficiency and cost-savings – but innovation also helps in achieving greater agility.
Technologies include: virtualisation with thin provisioning (to shrink data silos into one pool and minimise unused capacity); de-duplication and compression (to reduce the data footprint); WAN optimisation (to reduce transmitted data, speeding remote replication and disaster recovery (DR)); and automated tiering using pre-defined rules (to minimise the data held on critical tier 1/0 storage and boost performance).
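De-duplication, one of the efficiency technologies listed above, can be illustrated with a minimal Python sketch: data is split into fixed-size chunks, and identical chunks (identified by a content hash) are stored only once. This is a toy model, not any vendor's implementation; class and method names are invented for illustration.

```python
import hashlib


class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # digest -> chunk bytes (each unique chunk stored once)
        self.files = {}    # filename -> ordered list of chunk digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicates collapse here
            digests.append(digest)
        self.files[name] = digests

    def read(self, name):
        """Reassemble a file from its chunk references."""
        return b"".join(self.chunks[d] for d in self.files[name])

    def physical_bytes(self):
        """Actual bytes stored, after de-duplication."""
        return sum(len(c) for c in self.chunks.values())
```

Writing a file containing repeated 4 KB blocks would consume only one block of physical capacity, however many times the block recurs – the essence of the reduced data footprint the vendors target.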
Hardware can “scale out” (across nodes) and “scale up” (more capacity per node) to expand storage non-disruptively in incremental “lego blocks” (minimising unused capacity), almost without limit; adding extra controllers as well maintains access performance. This can eliminate storage-expansion concerns, making it ideal for cloud environments.
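The scale-out versus scale-up distinction can be sketched as a simple capacity model, assuming (hypothetically) that each node contributes both capacity and controller throughput: scaling out grows both, while scaling up a single node grows capacity only. The classes and figures below are illustrative, not any product's specification.

```python
class StorageNode:
    """One hardware node: capacity plus its controller's throughput."""

    def __init__(self, capacity_tb, throughput_mbps):
        self.capacity_tb = capacity_tb
        self.throughput_mbps = throughput_mbps


class Cluster:
    """A pool of nodes expanded in incremental 'lego blocks'."""

    def __init__(self):
        self.nodes = []

    def scale_out(self, node):
        # Adding a node grows capacity AND aggregate throughput,
        # which is how access performance is maintained as the pool grows.
        self.nodes.append(node)

    def scale_up(self, node_index, extra_tb):
        # Growing one node adds capacity but no extra controller throughput.
        self.nodes[node_index].capacity_tb += extra_tb

    @property
    def capacity_tb(self):
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def throughput_mbps(self):
        return sum(n.throughput_mbps for n in self.nodes)
```

Starting from two 10 TB nodes, a scale-out step adds a third node (raising both totals), whereas a scale-up step on one node raises capacity alone – capturing why extra controllers are needed to keep performance in step with capacity.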
Emerging technologies include:
- Policy-based orchestration to extend automation, including automated tiering and fast virtual storage adjustments.
- SSD (Solid State Drive) storage, used as “memory” rather than as emulated disk, can remove four layers of storage access logic and achieve superfast performance with simpler access. (“Server-side cache” also refers to this approach.) [Note: This may use NAND rather than faster DRAM because NAND typically holds 100 times the data, is more reliable with data-loss protection, costs less per MB, and needs no power to retain data – thereby reducing running costs and heat output. This picture is rapidly changing as SSD capacities and speeds increase and the cost per MB falls.]
- Advanced automated tiering which moves granular blocks or pages (i.e. not whole files), using metadata. (This complements SSD for the top tier, 7,200rpm or slower spinning disk for other tiers.)
- Inline data compression at source (processed using SSD).
- ATA over Ethernet (AoE), which transports data in raw Ethernet packets, dispensing with a heavier protocol layer such as FCoE (Fibre Channel over Ethernet) or iSCSI (Internet Small Computer Systems Interface); it is simple and fast, with near-limitless scalability. (AoE is in the public domain and implemented in the Linux kernel.)
- End-to-end heterogeneous storage capacity management view, for forecasts, compliance, audits, meaningful charge-back (increasingly demanded) – with changes instantly applied.
- DR (Disaster Recovery) for virtual environments with non-disruptive testing; it is economical and ensures that DR works in practice.
[Note: There is a tension between emerging Big Data technologies and agile storage in that the optimum agile storage formats may well be very different to those for Big Data analytics.]
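The advanced automated tiering described above – moving granular blocks between SSD and spinning-disk tiers according to pre-defined rules and per-block metadata – can be sketched as a small rule engine. The tier names, thresholds, and function names below are purely illustrative assumptions, not any vendor's policy language.

```python
# Illustrative sub-file tiering: individual blocks (not whole files) are
# promoted or demoted based on access frequency held in per-block metadata.

TIERS = ["ssd", "sas_10k", "sata_7200"]  # fastest (tier 0) to slowest


def choose_tier(accesses_per_day):
    """Toy pre-defined rule mapping access frequency to a target tier."""
    if accesses_per_day >= 100:
        return "ssd"
    if accesses_per_day >= 10:
        return "sas_10k"
    return "sata_7200"


def retier(block_metadata):
    """block_metadata: {block_id: {"tier": str, "accesses_per_day": int}}.

    Returns the (block_id, from_tier, to_tier) moves a background
    process would then apply non-disruptively.
    """
    moves = []
    for block_id, meta in block_metadata.items():
        target = choose_tier(meta["accesses_per_day"])
        if target != meta["tier"]:
            moves.append((block_id, meta["tier"], target))
            meta["tier"] = target
    return moves
```

A hot block sitting on 7,200 rpm disk would be promoted to the SSD tier while a cold block on SSD is demoted, keeping the expensive top tier reserved for the most active data.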
Vendors promoting these emerging technologies include:
- Dell Compellent, EMC, Imation
- Nimble Storage
Major vendors often talk agile storage (“thought leadership”) but tend to offer point solutions that cover only parts of the bigger picture. Imation has now entered this space by acquiring Nexsan. Major storage vendors may also advance their capabilities by acquisition (as in the past).
Fujitsu has launched Fujitsu Agile Storage Foundation (ASF); its concept is to move away from project-based storage acquisition towards a scalable, flexible platform (connectivity, disk types) with layered functionality; its single platform and management layer are covered by one licence agreement.
Symantec’s Veritas Storage Foundation is a heterogeneous approach for building resilient, flexible and agile private clouds; this includes a set of integrated management tools from Veritas Operations Manager.
Imation, an old-style storage company, is attempting to revitalise its portfolio with the acquisition of Nexsan, an up-and-coming startup with a robust portfolio of disk-based and hybrid disk-and-solid-state storage systems, many customers and a strong partner programme (but no direct sales force). This probably represents a wider trend.