Scale Computing: designed for SMEs, but perfect for the Edge computing market

It’s great to be genuinely surprised and delighted by something you thought was going to be mundane and a little boring. I certainly didn’t have high expectations recently as I dialled into a briefing from Scale Computing. Hyperconverged Infrastructure (HCI) isn’t exactly new and, with a somewhat jaundiced analyst’s view, I wasn’t expecting much.

An hour later my brain was buzzing, and I was asking for a further follow-up. The HCI story is well known but, for the context of what follows, it is worth re-stating something. For all the advantages of consolidating compute, storage and networking into a single integrated enclosure, HCI systems still come with the array of server, storage and network virtualisation software overlays that are common in three-tier systems.

Scale is a relative newcomer, having been founded in 2006. Rather than take existing concepts and implementations of HCI, the founders started with a clean sheet of paper and the stated objectives of providing “highly available, scalable compute and storage services while maintaining operational simplicity through highly intelligent software automation and architecture simplification.” They have achieved this by developing their own Linux-based operating system, HYPERCORE. It has an embedded KVM-based, bare-metal hypervisor that runs directly alongside SCRIBE, a very innovative clustered block-storage layer. The result is a cluster configuration with few of the performance overheads, and none of the additional licensing costs, of traditional VM-based HCI solutions.

So far, so good. Scale have an HCI system that is easy to manage and has an excellent price/performance advantage. Unsurprisingly, this has proven particularly successful in the small and mid-size enterprise market. But that wasn’t what got me buzzing.

The excitement of Alan Conboy, from Scale’s Office of the CTO, about the stateful nature of all Scale clusters was palpable. Because the systems have been designed from the ground up as “state machines”, i.e. each cluster knows both what state each node should be in and what state it is actually in, they not only report anomalies but can also remediate problems automatically, on the fly; the sketch below illustrates the idea.
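Scale didn’t share the internals of this mechanism in the briefing, so the following is purely illustrative: a minimal Python sketch of the desired-state versus observed-state reconciliation loop that “state machine” implies. Every name here (Node, reconcile, remediate) is hypothetical and is not Scale’s actual API.

```python
from dataclasses import dataclass

# Hypothetical node states; Scale's real state model is not public here.
DESIRED = "running"

@dataclass
class Node:
    name: str
    desired_state: str
    observed_state: str

def remediate(node: Node) -> None:
    # Placeholder for a real corrective action (restart a service,
    # re-mirror a block device, fail a VM over to a healthy node, ...).
    print(f"remediating {node.name}: {node.observed_state} -> {node.desired_state}")
    node.observed_state = node.desired_state

def reconcile(cluster: list[Node]) -> None:
    """One pass of the control loop: compare what each node *should*
    be doing with what it *is* doing, report drift, and correct it."""
    for node in cluster:
        if node.observed_state != node.desired_state:
            print(f"anomaly detected on {node.name}")
            remediate(node)

cluster = [
    Node("node-1", DESIRED, "running"),
    Node("node-2", DESIRED, "degraded"),  # drifted: will be remediated
]
reconcile(cluster)
```

The design point, as I understood it, is that the cluster itself, rather than an administrator, owns the comparison between intended and actual state, which is what allows remediation to happen unattended.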

A blog post like this isn’t the place to go into detail about all the ways compute, storage and network nodes reconfigure automatically, or the ease with which disaster recovery systems can be created and activated. But what got me buzzing was my growing sense that these systems would be ideal in the emerging world of the Internet of Things (IoT), 5th Generation mobile networks (5G) and the Edge.

This new Edge computing environment will put pressure on IT costs as organisations recognise the need for additional compute and storage capacity close to the customer and to the plethora of IoT devices capturing and processing sensor data. IT architectures will become more complex. The performance and availability of these systems will become a critical business issue. Yet skilled IT resource will remain in critically short supply.

Whether you are a telco operator faced with the need to install large numbers of new systems, potentially at the base of existing antennae, to take advantage of 5G; a manufacturer moving from older Operational Technology (OT) to newer Industrial IoT; or a retailer needing more store-based compute without the latency and cost of routing every transaction across the network, Edge computing will demand cost-effective, small-footprint, agile, resilient systems that can be run in “dark” micro-datacentres. I think that describes Scale’s systems perfectly, and I expect to see further developments in the coming year that place them firmly at the forefront of the Edge computing market.