A quiet revolution

Content Copyright © 2016 Bloor. All Rights Reserved.

I’ve been thinking about this blog for some time. Mostly about how to describe it in a way that makes sense. I’m still not sure that I’ve got this right but here goes.

I first started to notice a shift in master data management (MDM) and then I realised that actually it is happening in data governance more broadly, and even in things like metadata management.

To what am I referring? I am referring to the fact that vendors in these areas are starting to put the sort of facilities you might expect in self-service analytic environments for business users into the products I've just mentioned. And that's pretty weird because, however much we may have extolled the fact that things like data quality are really business issues, it has usually been IT that has been fixing the problem. Which suggests that either IT is starting to eat its own dogfood, or that data governance and associated tools are being targeted more directly at business users. Or, of course, both.

Okay, that’s all very vague and if you are not familiar with the sorts of tools and products I am discussing then you may not have any idea of what I am talking about. So let me give you some specifics. Let me start with MDM. Traditionally that’s been about the consistency and accuracy of your data, which you might then use to support a single view of the customer or similar. What vendors (Reltio, Riversand and InfoTrellis are all good examples) are starting to realise is that you want things like sentiment analysis in that single view in order to get a better understanding of customers. So they are building analytics directly into their product offerings. Another common characteristic, shared with Pitney Bowes, is that a number of suppliers are using graph databases to help to understand the relationships between customers and products and influencers. Calculating recommendations (next best offer) becomes much easier once you understand those relationships.

Graph analytics is also appearing elsewhere. Diaku has used it in its data governance product for some time; Global IDs uses it in conjunction with data profiling to help (IT) users understand the relationships that exist between data elements across databases. Informatica uses graph visualisations to help users understand metadata relationships, and one of the graph database vendors has recently cited a number of its clients using its product for the same purpose.
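The metadata case works the same way. As a sketch (again with invented names, and nothing like any particular vendor's implementation), treat columns across databases as nodes and "derived from" relationships as edges; tracing lineage is then just a breadth-first walk:

```python
from collections import deque

# Hypothetical metadata graph: each column maps to the columns it is
# derived from, possibly in other databases.
derived_from = {
    "mart.revenue": ["warehouse.sales.amount"],
    "warehouse.sales.amount": ["crm.orders.total", "erp.invoices.total"],
    "crm.orders.total": [],
    "erp.invoices.total": [],
}

def lineage(column):
    """Breadth-first walk: every upstream column the given column
    ultimately depends on, across database boundaries."""
    seen, queue = set(), deque([column])
    while queue:
        node = queue.popleft()
        for parent in derived_from.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(lineage("mart.revenue")))
```

Impact analysis is the same query run in the opposite direction, which is precisely why relationship-heavy problems like metadata management reward a graph representation.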

I think this is going to become more widespread, both with respect to analytics and to graphs. Analytics will increasingly be built in wherever it makes sense, and especially where there is a case for business usage. As far as graphs are concerned, I expect to see these deployed much more widely: in version control systems, in test case management, for enterprise architecture, and a lot more within the data governance arena. I would go so far as to suggest that wherever there are complex relationships in play, then in the not too distant future graphs will be a tick-box item. As a general rule I don't like tick boxes in this sense, but graphs are such an obvious technology for understanding and exploring complex networks that I think they will become ubiquitous.