Storage with SSD: key questions to ask – now

Content Copyright © 2016 Bloor. All Rights Reserved.

The writing is on the wall for storage as we have known it over the past decades, and flash, in the form of the solid-state drive (SSD), is the key reason. Yet, to minimise risk and maximise benefit, we must think differently about data access.

The advent of SSDs did not immediately revolutionise storage and backup. It helped with some disk latency and access speed problems, but small capacities, high prices and reliability concerns slowed adoption. This was no bad thing, because inserting SSD as a straight replacement for spinning disk made poor use of the technology: it still carried all the legacy storage layers of abstraction.

This forced vendors to think hard about how and where best to use SSD technology. That only added to the confusion, because all sorts of flash formats then appeared. Now, though, SSD prices are falling, capacities are multiplying and reliability is surpassing that of spinning disk. This was emphasised by speaker after speaker at the first annual “Flash Forward” conference in London last month. So we might summarise storage’s direction by saying, “flash is the future”.

Yet that leaves organisations, not least those with large amounts of legacy storage, asking how to get from here to there. SSD covers a group of related technologies, all of which speed data access and all of which are becoming ever more mainstream; the challenge is how best to use them, and not only as a performance fix. (For example, they can do nothing for data at rest that is never accessed.)

3D NAND technology means 30TB SSD devices will arrive soon; they will begin to blow away disk, not least because of space, power and heat savings. 3D XPoint is also due for imminent release and, as reported, it offers 100 times the read and write speed of NAND, 10 times the density of DRAM and, most importantly, 1,000 times the endurance of NAND, potentially allowing a device to perform for 15 years (although further SSD advances may overtake that).

While big-capacity SSD prices may stay high for some time, a life-span of, say, 10 years combined with immediate power savings hugely alters the cost-of-ownership calculation. So every sizeable IT storage user should be making or updating its long-term storage plans now. Such a plan will allow for the write-down periods of existing equipment, and may need to include interim use of some flash technology to address short-term performance needs; but a fundamental systems architecture change is needed.
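To illustrate how the calculation shifts, here is a purely hypothetical back-of-the-envelope sketch in Python. The prices, wattages, lifespans and electricity rates are assumptions for illustration only, not vendor figures, and a real comparison would be done per terabyte and per workload.

```python
import math

# Illustrative total-cost-of-ownership sketch: every figure below is an
# assumption, not vendor pricing; substitute your own numbers.

def tco(purchase_price, watts, lifespan_years, horizon_years,
        electricity_per_kwh=0.15, cooling_overhead=0.5):
    """Rough per-device TCO over a planning horizon: hardware refreshes plus power and cooling."""
    refreshes = math.ceil(horizon_years / lifespan_years)
    hardware = refreshes * purchase_price
    kwh = watts * 24 * 365 * horizon_years / 1000
    power = kwh * electricity_per_kwh * (1 + cooling_overhead)
    return hardware + power

# Hypothetical 10-year comparison (assumed figures; the outcome depends
# entirely on the numbers you plug in):
hdd = tco(purchase_price=400, watts=9, lifespan_years=5, horizon_years=10)
ssd = tco(purchase_price=1200, watts=5, lifespan_years=10, horizon_years=10)
print(f"HDD 10-year TCO per device: ~${hdd:,.0f}")
print(f"SSD 10-year TCO per device: ~${ssd:,.0f}")
```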

One reason is that a more efficient use of SSD is as memory, as this bypasses existing storage approaches. Where access speed is critical, a short-term fix may be SSD as tier 1 (or an inserted tier zero); but, longer term, tiering can vanish alongside spinning disk (except for a deep archive tier using low-cost tape or long-lasting non-volatile flash). I also doubt that SANs will stay cost-effective. Users of in-memory databases should now see an opportunity to multiply their sizes massively, giving a boost to Big Data; and so on. There are many new issues to consider.

An obvious question to ask is: what are my primary aims (as these affect prioritisation)? The answers will depend on your organisation, and may vary even between different parts of it: is the goal operational cost savings, improved IOPS or reduced latency (SSDs can help with all of these)? Answering it means identifying the workloads likely to benefit most from flash, and this will form part of a broader evaluation of where to automate in order to improve productivity and reduce cost.
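By way of illustration only, a first-pass prioritisation might score workloads on crude metrics such as IOPS demand, latency sensitivity and the proportion of random I/O. The workloads, numbers and weights below are entirely hypothetical.

```python
# Hypothetical workload-scoring sketch for deciding what to move to flash first.

workloads = [
    # name,             IOPS demand, latency need (ms), % random I/O
    ("OLTP database",        50_000,   1.0,  90),
    ("Virtual desktops",     20_000,   5.0,  80),
    ("Nightly batch ETL",     5_000,  20.0,  20),
    ("Cold archive",            100, 500.0,   5),
]

def flash_benefit_score(iops, latency_ms, pct_random):
    """Crude heuristic: high IOPS, tight latency targets and random I/O all favour flash."""
    return (iops / 1000) * (pct_random / 100) / max(latency_ms, 0.1)

ranked = sorted(workloads, key=lambda w: flash_benefit_score(*w[1:]), reverse=True)
for name, iops, latency, rnd in ranked:
    print(f"{name:18s} score={flash_benefit_score(iops, latency, rnd):8.1f}")
```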

Remember also that, if storage access is a bottleneck that SSDs remove, this will inevitably expose the next infrastructure bottleneck, so the overall performance boost might not be as great as expected. In this regard, data takes time to travel across a network (a signal can travel only around 11 inches in a nanosecond). As data volumes multiply, this becomes significant. So one design criterion is to place data as close as possible to where it is processed. In turn, this may mean various processors across a network capturing data and processing it there and then.
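A little arithmetic makes the point. Assuming a signal speed in fibre of roughly 200,000 km per second (about two-thirds of the speed of light) and ignoring switching and protocol overheads, propagation delay alone looks like this:

```python
# Rough propagation-delay arithmetic illustrating why data placement matters.
# Order-of-magnitude only: assumes ~200,000 km/s in fibre and ignores
# switching, queuing and protocol overheads.

SIGNAL_KM_PER_S = 200_000

def round_trip_us(distance_km):
    """Round-trip propagation delay in microseconds for a given one-way distance."""
    return 2 * distance_km / SIGNAL_KM_PER_S * 1_000_000

for label, km in [("same rack", 0.005), ("same data centre", 0.5),
                  ("metro site", 50), ("remote region", 2000)]:
    print(f"{label:18s} ~{round_trip_us(km):12,.2f} µs round trip")

# For comparison, an NVMe SSD read is typically on the order of tens to
# hundreds of microseconds, so a 2,000 km round trip (~20,000 µs) dwarfs
# the device latency itself.
```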

Other factors include altering applications. For example, get away from time-wasting locking of data in transactions, replacing it with reversible transactions for those very rare occasions when two transactions try to update the same record at once (a minimal sketch of this idea follows below). An app at the edge may need only to process its tiny piece of data, discard all that is not needed, and then forward only the key information to the data centre (if such a thing still exists). How about backup? If the media is all SSD, are super-fast snapshots all that will be needed?
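For those unfamiliar with the idea, the following sketch shows the principle in its simplest form: update optimistically and retry only on the rare conflict, rather than lock up front. It is a single-threaded toy, not a database design; real systems implement this as optimistic concurrency control, where the version check and commit are a single atomic step.

```python
# Minimal, illustrative sketch of optimistic (reversible) updates: no lock is
# taken; a writer commits only if the record is unchanged since it was read,
# and otherwise retries. In a real system the version check and the write
# would be one atomic operation (e.g. compare-and-swap or MVCC).

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

def update(record, transform, max_retries=3):
    """Read, compute, then commit only if no other writer changed the record meanwhile."""
    for _ in range(max_retries):
        seen_version = record.version
        new_value = transform(record.value)
        if record.version == seen_version:   # no conflict: commit
            record.value = new_value
            record.version += 1
            return True
        # Conflict: another transaction got there first; retry from scratch.
    return False

account = Record(100)
update(account, lambda balance: balance - 30)
print(account.value, account.version)  # 70 1
```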

I am barely scratching the surface, of course. There is also one piece of potentially bad news in all of this. With demand for SSDs multiplying, there could be a shortage of fabrication (fab) capacity to produce the right flash types in the quantities the industry will need. That could mean delivery delays that slow the implementation of plans, and/or push prices up again as demand outstrips supply.

Yet, make no mistake. SSD technology needs to be central to your plans going forward.