Re-thinking data security


It seems as if hardly a day goes by without news that someone or other's security has been breached and a batch of private data stolen.

One approach to this problem has been to implement SIEM (security information and event management) to identify attacks and block them. There are two technical problems with this: first, most SIEM solutions cannot identify attack types in real time, because most vendors (there are exceptions) do not implement complex event or stream processing for this purpose; and second, there is a time lag between identifying an attack and implementing remediation, because that final stage is essentially a manual process.
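
To make that first point concrete, here is a minimal, purely illustrative sketch (my own, in Python, and not drawn from any particular SIEM product) of the kind of rule a stream-processing approach would evaluate as each event arrives, rather than after a batch review of the logs. The event fields and thresholds are invented for the example.

```python
from collections import defaultdict, deque
from time import time

# Sliding-window rule: flag a burst of failed logins from one source IP
# as the events stream in. Window size and threshold are illustrative.
WINDOW_SECONDS = 60
FAILED_LOGIN_THRESHOLD = 20

_failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def on_event(event):
    """Process one log event as it arrives; return an alert or None."""
    if event.get("type") != "login_failure":
        return None
    now = event.get("timestamp", time())
    window = _failures[event["source_ip"]]
    window.append(now)
    # Drop failures that have fallen outside the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= FAILED_LOGIN_THRESHOLD:
        return {"alert": "possible brute-force attack",
                "source_ip": event["source_ip"],
                "failures_in_window": len(window)}
    return None
```

The point is simply that the decision is made per event, in memory, as data flows past; nothing here depends on waiting for a report to be run.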

Clearly, in so far as data privacy is concerned at least, this approach is not working. So it is time for a rethink. Informatica has done just that. Earlier this year it announced Secure@Source, an application built on top of the Informatica platform, although it will not be available until 2015. The fundamental principle is to ensure that sensitive data is masked or otherwise secured at source, before it is proliferated around the organisation, so that if it does get stolen it won't do the hackers any good: the data won't mean anything to them.
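
To illustrate the principle (this is my own sketch, not Informatica's implementation), masking at source means transforming sensitive fields before a record is ever copied to downstream systems, so that any stolen copy carries no usable values. The field names and masking rules below are invented for the example.

```python
import hashlib
import re

def mask_email(value: str) -> str:
    """Replace the local part with a stable token, keeping the domain."""
    local, _, domain = value.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"user_{token}@{domain}"

def mask_card_number(value: str) -> str:
    """Keep only the last four digits, as a till receipt would."""
    digits = re.sub(r"\D", "", value)
    return "*" * (len(digits) - 4) + digits[-4:]

def mask_record(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked at source."""
    masked = dict(record)
    if "email" in masked:
        masked["email"] = mask_email(masked["email"])
    if "card_number" in masked:
        masked["card_number"] = mask_card_number(masked["card_number"])
    return masked

print(mask_record({"customer_id": 42,
                   "email": "jane.doe@example.com",
                   "card_number": "4111 1111 1111 1111"}))
```

Downstream systems, and anyone who steals a copy of them, only ever see the masked values.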

This strikes me as a very sensible idea. Actually, I'll go further: I think it's a really, really good idea. It doesn't obviate the need for SIEM and network security, because you still want to prevent DDoS (distributed denial of service) attacks, discover low and slow attacks, use historical data for forensic analysis, and so on. Secure@Source would therefore be complementary to SIEM.

Of course, data masking has been around for a while, but the vendors in that space have tended to focus on a) development and testing environments and/or b) dynamic data masking to stand in for (typically missing) role-based access control. What Informatica has done is build additional capabilities around its existing data masking to make it more general purpose. For example, it has included heat maps that identify data risk so that you can prioritise what to mask first and monitor the results. Of course, there is more to it than that, but the product won't be generally available for a few months: I'll return to this subject with more detail closer to the release date.
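
By way of illustration only (the scoring logic and the data stores below are invented, not a description of how Secure@Source actually works), a data-risk heat map of this sort might rank data stores by something like sensitivity times exposure, so that the riskiest unmasked data is dealt with first.

```python
# Hypothetical inventory of data stores; names and figures are invented.
data_stores = [
    {"name": "crm_prod",      "sensitive_fields": 12, "users_with_access": 300, "masked": False},
    {"name": "test_copy",     "sensitive_fields": 12, "users_with_access": 45,  "masked": True},
    {"name": "marketing_dwh", "sensitive_fields": 5,  "users_with_access": 120, "masked": False},
]

def risk_score(store: dict) -> float:
    """Crude risk = sensitivity x exposure, discounted once data is masked."""
    score = store["sensitive_fields"] * store["users_with_access"]
    return score * (0.1 if store["masked"] else 1.0)

# Rank stores so the riskiest unmasked data is addressed first.
for store in sorted(data_stores, key=risk_score, reverse=True):
    print(f"{store['name']:15s} risk={risk_score(store):8.0f}")
```

Re-running a score like this after each round of masking is, in effect, the "monitor the results" part.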