Towards effective digital experience management - Devo masters the Data In(de)gestion challenge

The research I undertook a couple of months ago into vendor capabilities in AIOps and Hybrid Infrastructure Management, combined with follow-on discussions with those who did and didn't take part, has revealed interesting insights into what differentiates the various solutions in the market.

The focus of my research was to identify those solutions that could handle high volumes of transactions in complex, interconnected cloud and on-premises infrastructures, where the impact of performance degradation and outages on business outcomes is both immediate and severe. A key requirement here is to capture data in real time from a wide variety of sources and make it available immediately for queries and predictive analytics.

Devo, a company that claims to unlock the full value of machine data for the world's most instrumented enterprises by putting more data to work now, responded to the Bloor survey and scored very highly. But what stands out is not just that Devo can ingest huge amounts of data from right across the infrastructure stack in real time; it is that they do so in a way that minimises the volume of data that actually gets stored and used. Devo is very open and transparent about how its platform works. Basically, they don't index data at the moment of ingestion. Instead, an out-of-band tokenised micro-index is created from the raw data and written to disk asynchronously, and at the end of each day the micro-indexes for each tenant are made immutable. These non-traditional indexes radically speed up querying and enable the solution to maintain its query performance even as data volumes rise rapidly. You can find more detail on the Devo website.
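To make the mechanism concrete, here is a minimal sketch of the pattern as I understand it, written in Python with invented names throughout. It is emphatically not Devo's code, just an illustration of keeping the ingest path index-free while a tokenised micro-index is built out-of-band and frozen at the end of the day:

```python
import threading
import queue

class MicroIndexSketch:
    """Illustrative sketch only; not Devo's actual implementation."""

    def __init__(self):
        self.raw = []                  # stands in for the append-only raw store on disk
        self.index = {}                # token -> list of event offsets (the "micro-index")
        self.frozen = False            # end-of-day immutability flag
        self._pending = queue.Queue()  # hand-off from the ingest path to the indexer
        threading.Thread(target=self._index_worker, daemon=True).start()

    def ingest(self, event):
        """Hot path: append the raw event and return; no index work at ingestion."""
        if self.frozen:
            raise RuntimeError("today's store is immutable; open a new one")
        offset = len(self.raw)
        self.raw.append(event)
        self._pending.put((offset, event))  # indexed asynchronously, off the hot path

    def _index_worker(self):
        """Out-of-band: tokenise each event and record where it lives."""
        while True:
            offset, event = self._pending.get()
            for token in set(event.lower().split()):
                self.index.setdefault(token, []).append(offset)
            self._pending.task_done()

    def freeze(self):
        """End of day: drain the indexer, then mark this tenant's index immutable."""
        self._pending.join()
        self.frozen = True

    def query(self, token):
        """Read only the offsets the micro-index points at, not the whole store."""
        return [self.raw[i] for i in self.index.get(token.lower(), [])]

if __name__ == "__main__":
    store = MicroIndexSketch()
    store.ingest("2019-06-01T12:00:01 auth failure user=alice")
    store.ingest("2019-06-01T12:00:02 auth success user=bob")
    store.freeze()                 # end of day: index is now immutable
    print(store.query("failure"))  # -> ['2019-06-01T12:00:01 auth failure user=alice']
```

The key design point the sketch tries to capture is that ingestion does nothing but append raw data, so write throughput is never throttled by index maintenance.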

Devo's micro-indexes are also very small: the compressed raw data plus the indexes come to just 10% of the original raw data set. Devo claims that this storage efficiency, combined with much more efficient use of CPU resources, delivers a 75% reduction in CPU and storage compared with solutions that build traditional indexes at the point of ingestion. This gives them a significant cost advantage over those other solutions.
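A quick back-of-the-envelope check shows how those two figures hang together. The 1 TB/day input and the implied 40% footprint for a traditional index-at-ingest store below are my own assumptions for illustration; only the 10% figure and the 75% reduction come from Devo's claims:

```python
# Back-of-the-envelope check of the storage claims above.
# Inputs are assumptions; only the 10% and 75% figures come from the article.

raw_per_day_gb = 1000.0                        # assume 1 TB of raw machine data per day

devo_footprint = raw_per_day_gb * 0.10         # compressed raw + micro-indexes = 10% of raw
traditional_footprint = devo_footprint / 0.25  # implied by a 75% reduction: Devo is 25% of it

print(f"Devo footprint:        {devo_footprint:.0f} GB/day")
print(f"Traditional footprint: {traditional_footprint:.0f} GB/day "
      f"({traditional_footprint / raw_per_day_gb:.0%} of raw)")
print(f"Reduction: {1 - devo_footprint / traditional_footprint:.0%}")
```

On those assumptions a traditional store would hold about 400 GB/day (40% of raw) against Devo's 100 GB/day, which is exactly the claimed 75% reduction.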

But, ultimately, the real question is: how does this capability translate into effective real-time monitoring and management of complex, high-volume, customer-critical applications and services? During my recent briefing from Devo, I was given a great case study example. One of their customers is the Spanish telco, Telefonica. Streaming paid-for TV is a highly competitive market, and Telefonica wanted to improve the customer experience for their 3 million Movistar+ subscribers. While they could monitor individual components, they found it hard to get a unified end-to-end view, from the back-end loading of content, across the network, down to the picture quality on customers' set-top boxes. The Devo solution has not only enabled Telefonica to improve customer satisfaction and reduce churn; by correlating the huge amounts of data they collect, they can now start to proactively identify and resolve problems before customers even notice them.
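The pattern behind that proactive capability is straightforward to sketch. The toy example below, with invented field names, sample events and a 30-second window rather than anything from Telefonica's actual pipeline, correlates set-top-box picture-quality degradations with network events that occurred just before them; it is this kind of cross-layer join that is hard to do without a unified view of the data:

```python
from datetime import datetime, timedelta

# Invented sample events from two layers of the stack; in practice these
# would be queried out of the unified store, not hard-coded.
stb_events = [  # set-top-box telemetry: moments when picture quality dropped
    {"ts": datetime(2019, 6, 1, 20, 15, 12), "box": "stb-184",
     "metric": "frame_drops", "value": 42},
]
net_events = [  # network telemetry from the same period
    {"ts": datetime(2019, 6, 1, 20, 14, 58), "node": "edge-router-7",
     "metric": "packet_loss", "value": 0.08},
    {"ts": datetime(2019, 6, 1, 19, 2, 3), "node": "edge-router-2",
     "metric": "packet_loss", "value": 0.01},
]

WINDOW = timedelta(seconds=30)  # assume upstream causes surface within 30 seconds

def correlate(quality_events, network_events, window=WINDOW):
    """Pair each quality degradation with network events just before it."""
    for q in quality_events:
        suspects = [n for n in network_events
                    if q["ts"] - window <= n["ts"] <= q["ts"]]
        if suspects:
            yield q, suspects

for q, suspects in correlate(stb_events, net_events):
    print(f"{q['box']}: {q['metric']}={q['value']} at {q['ts']:%H:%M:%S}, "
          f"possible causes: {[s['node'] for s in suspects]}")
```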

Clearly, there is a lot more going on under the hood of the Devo solution than efficient data ingestion, storage and querying. They have a clear understanding of the importance of the network in delivering effective business applications and services, and they are building security capabilities around cloud-based, next-generation security information and event management (SIEM). That same competency extends to advanced visualisations and solutions designed for IT and DevOps teams, who are now responsible for delivering effective digital experience management across complex hybrid IT infrastructures. In my experience, when a company combines that network understanding with strong security capabilities, you can usually be sure that they know how to capture and deliver insights from the large amounts of data produced every second.