Content Copyright © 2012 Bloor. All Rights Reserved.
Every now and then a technology advance allows us to rethink how we do things – and Fusion-io is building a business by taking a radically new approach to high-speed data access using NAND flash.
We are now used to solid state disks (SSDs) – some built on DRAM – providing extra-fast “tier zero” storage; an SSD pretends to the software that it is another spinning disk drive, so it can be integrated into existing systems fairly seamlessly. But Fusion-io – and especially co-founder Rick White, Steve Wozniak (now chief scientist) and CEO David Flynn – saw this approach as highly inefficient.
“We are different from everyone else,” Flynn told me. “They see NAND as a disk drive substitute whereas we see it as memory.” The background is that Wozniak once identified five layers of logic for storage access, then realised that four out of the five could be removed by treating flash chips as memory instead.
For instance, the I/O scheduler and its supporting software can be removed, and there is no SCSI or ATA interface. The result is less complexity, lower cost and greater reliability – and the ability to program the chip directly.
Conversely, there is new complexity because the system has to recognise and drive a different type of device; but the proof of the pudding has come in staggering throughput. Fusion-io’s ioDrive2 product, with its “auto-commit memory” system, was demonstrated early this year executing a billion input/output operations per second (IOPS) using a rack of just eight servers!
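To put that headline figure in per-server terms, here is a quick back-of-envelope sketch; the one-billion-IOPS and eight-server figures come from the demonstration above, and the division is simply mine:

```python
# Back-of-envelope: spread the demonstrated aggregate IOPS across the rack.
total_iops = 1_000_000_000  # one billion I/O operations per second, as demonstrated
servers = 8                 # the rack held just eight servers

per_server = total_iops // servers
print(f"{per_server:,} IOPS per server")
```

That works out at 125 million IOPS per server – orders of magnitude beyond what any array of spinning disks could deliver.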
So, you may then ask, why did they go for NAND flash when raw DRAM speed is actually greater? First, recent developments mean NAND now holds 100 times DRAM capacity in the same space (translating, for example, to 1.6TB versus 16GB), so it costs far less per gigabyte; secondly, it is non-volatile, so no power is needed to retain the data – reducing running costs and heat output.
A more complex debate is whether NAND also achieves greater reliability than DRAM, but Flynn in any case explained Fusion-io’s built-in adaptive flash-back data-loss protection, which compensates for chip failure by writing data redundantly across multiple chips.
Notice, however, that I did not say it directly replaces storage (even SSD). Flynn explained Fusion-io’s product positioning: of the two purposes of storage – i) retaining data through time (capacity and data management) and ii) delivering it to the microprocessor (access performance) – Fusion-io is firmly focused on the second.
The speed of server microprocessors nowadays means that waiting on disk (which achieves only tens of accesses per second) is an age, he said; NAND, by contrast, offers access times measured in millionths of a second. At such speeds even the operating system gets in the way and slows things down. So the Fusion-io boffins (including core Linux architects, for instance) made the NAND storage look like a non-volatile memory add-on while still mapping it as data. “Now we are restoring data supply speed to match processor speed,” Flynn said.
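The latency gap Flynn describes can be made concrete with some illustrative arithmetic. The figures below are representative assumptions on my part, not vendor measurements: roughly 10 milliseconds for a random disk access, tens of microseconds for a NAND access, and a third of a nanosecond per cycle for an assumed 3GHz core:

```python
# Illustrative latency comparison (assumed, representative figures).
disk_access_s = 10e-3   # ~10 ms per random access on a spinning disk (~100 IOPS)
nand_access_s = 50e-6   # ~50 microseconds per NAND flash access
cpu_cycle_s = 1 / 3e9   # one clock cycle of an assumed 3 GHz core

disk_cycles = disk_access_s / cpu_cycle_s
nand_cycles = nand_access_s / cpu_cycle_s
print(f"Disk access takes roughly {disk_cycles:,.0f} CPU cycles")
print(f"NAND access takes roughly {nand_cycles:,.0f} CPU cycles")
print(f"NAND is ~{disk_access_s / nand_access_s:.0f}x faster per access")
```

On these assumptions a processor waits some thirty million cycles for a disk access but only about 150,000 for NAND – which is exactly why, at such speeds, even operating-system overhead starts to dominate.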
This affects the way Fusion-io supplies its products. Typically, it supplies solutions built around ‘ioMemory’ (which underpins ioDrive2) and ‘ioCache’, plus software such as ‘ioTurbine’ (which drops into VMware environments), to OEMs. These solutions range from creating sizzling multi-terabyte database performance to enabling greater consolidation of virtual servers (because NAND can provide far more capacity in a physical server than DRAM caching can). Fusion-io is building a reseller channel, but the resellers will typically sell these pre-built solutions.
The barriers to market entry for potential competitors are high because of the very advanced technical know-how and skills required (not to mention some patents); Flynn explained that the technical problems being addressed were non-trivial and operating-system-related.
However, the potential market size is huge, and Flynn’s simplistic summary of the value proposition is: “We get two times the performance at two times less price.” When substantiated, that is pretty compelling and may be seen as a threat by some big server and storage vendors. The fact is, Fusion-io’s solutions can reduce the amount of equipment and the number of software licences used – what some call a “utility multiplier” – so don’t be surprised if those vendors throw in some marketing FUD.
May I be so bold as to suggest that this technological approach points the way forward for computing itself?