Tegile Systems’ special SSD-SAS mix optimises storage performance-capacity-price

Content Copyright © 2012 Bloor. All Rights Reserved.

Each of the many new storage start-ups has a different angle on performance and capacity plus virtualisation support – and on where in the food-chain to pitch its solutions – which can be confusing to potential buyers. Some approaches are smarter than others, and I think VC-backed start-up Tegile Systems deserves a mention.

Traditional storage vendors typically offer comparatively expensive SSDs to provide super-fast data access and leave spinning disk to handle high storage capacity. They also store the data and the metadata that describes it together (which tends to fragment as data is changed or moved). Tegile (think “technology” plus “agile”) organises things very differently for its Zebi storage arrays.

It has deeply embedded its SSDs to extend the cache to a massive 800–1,200GB (up to 1.2TB), while retaining 2TB SAS (7,200RPM) disk for capacity. Storage data is auto-tiered. An ASIC splits the metadata from the data so that it can be stored independently, while a RAM-based metadata engine makes data access much faster. The icing on the cake is that Tegile has developed technology to de-duplicate and (optionally) compress the data in-line as it is received; this cache-based process achieves a massively reduced storage footprint for live data without performance loss.
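To make the de-duplication idea concrete, here is a minimal sketch of inline, cache-based de-duplication with compression – a generic illustration of the technique rather than Tegile's implementation; the 4KB block size, SHA-256 fingerprinting and in-memory index are purely illustrative assumptions.

```python
# Generic sketch of inline de-duplication with compression.
# Block size, hash choice and index structure are illustrative assumptions,
# not a description of Tegile's design.
import hashlib
import zlib

BLOCK_SIZE = 4096          # assumed fixed block size
index = {}                 # fingerprint -> stored (compressed) block
layout = []                # logical order of fingerprints for the incoming stream

def ingest(data: bytes) -> None:
    """De-duplicate and compress incoming data block by block."""
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()   # content fingerprint
        if fp not in index:                      # only previously unseen content...
            index[fp] = zlib.compress(block)     # ...is compressed and stored
        layout.append(fp)                        # duplicates cost only a reference

def read_back() -> bytes:
    """Reassemble the logical data stream from the de-duplicated store."""
    return b"".join(zlib.decompress(index[fp]) for fp in layout)
```

The point is simply that duplicate blocks cost only a reference, which is how a large effective capacity can sit on a much smaller raw footprint.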

So Tegile claims it can achieve seven times the performance of more traditional storage solutions while reducing the storage footprint by 75% (also obviating the need for de-duplication during backup). However, this prompted my obvious question about the risk to data in the event of a system crash.
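For perspective, a 75% reduction in footprint equates to roughly a 4:1 data-reduction ratio – broadly consistent with the entry-level figures quoted at the end of this piece, where 10TB of raw capacity holds 30-50TB of de-duplicated data (3:1 to 5:1).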

The answer from Tegile Systems’ VP of marketing Rob Commins exposed further design features: “The system mirrors the data, through dual active-active controllers, and there is an asynchronous copy written to spinning disk; so there are always three copies of data.”

That part sounds a little expensive, but Commins said that the Zebi arrays were performance-competitive with high-end, low-latency, thin-provisioning arrays (from vendors such as HP 3PAR), yet typically cost about 10% as much. Referring to the “traditional NAS/SAN market” (competing with EMC, NetApp and HP, for instance) he said, “We are replacing them every day.”

There is also flexibility. The Zebi arrays support NAS and SAN together, block protocols on Fibre Channel and iSCSI, and NFS and CIFS file protocols in NAS environments. This enables Tegile to compete in several mid-market segments – and, not least, to assist organisations using a mix of these protocols that want to remove their storage silos.

Snapshot and remote replication functionality are also integrated, the latter transmitting only data changes to minimise WAN traffic; this obviates the need for separate backup software or backup windows. The combination of features also makes storage array management a doddle compared with many legacy solutions.
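As a rough sketch of how change-only replication keeps WAN traffic down, the following compares two snapshots and ships only the blocks that differ – again a generic illustration under assumed data structures, not a description of Tegile's wire protocol.

```python
# Sketch of snapshot-based, change-only replication: only blocks that differ
# between the last replicated snapshot and the current one cross the WAN.
# The snapshot representation and transport callback are illustrative assumptions.
from typing import Dict

Snapshot = Dict[int, bytes]   # block number -> block contents

def delta(prev: Snapshot, curr: Snapshot) -> Snapshot:
    """Blocks that are new or changed since the previous snapshot."""
    return {n: blk for n, blk in curr.items() if prev.get(n) != blk}

def replicate(prev: Snapshot, curr: Snapshot, send) -> None:
    """Ship only the changed blocks to the remote site."""
    for n, blk in delta(prev, curr).items():
        send(n, blk)

# Example: blocks 2 (changed) and 3 (new) are transmitted; block 1 is not.
prev = {1: b"aaaa", 2: b"bbbb"}
curr = {1: b"aaaa", 2: b"BBBB", 3: b"cccc"}
replicate(prev, curr, lambda n, blk: print(f"send block {n}"))
```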

Commins said Tegile Systems beat the new “all-flash” storage vendors on price and could be twice as fast as the solutions of smaller peer storage vendors, which typically provide only a passive second controller.

The Zebi arrays are now going through VMware vCenter approval, which Commins recognises is vitally important. “VMware on NFS is very expensive, but we can plug in as NFS now,” he said. He admitted that there was still work to do this year on snapshot integration with virtualised environments, primarily VMware (probably to be completed as part of the vCenter approval), and with MS Exchange and Oracle. On the other hand, the architecture will allow SAS to be replaced by SSD very easily if (or when) SSD pricing falls sufficiently.

I was less convinced by the company’s approach as it begins marketing internationally, including here in the UK, using a mix of direct and channel sales. However, I think I now understand why investors felt able to stump up $10m in extra VC funding just last month. I shall be watching Tegile with interest.

[There are three Zebi array packages: 1) entry level, usually used only as a test box (a single controller, so a single point of failure), with 10TB raw capacity but holding 30-50TB of data once de-duplicated; 2) 30TB raw, achieving 30-50K IOPS – used especially in server virtualisation to enhance performance; 3) 20TB raw but achieving 75K IOPS – used more for VDI, where higher performance is typically needed.]