Content Copyright © 2017 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt
At IBM’s Smarter Risk Summit last year, the company made the point that help with managing regulatory risk, in an ever more complex regulatory environment, is going to be one of the key deliverables from Cognitive Computing and Watson. AI is coming of age after a decade or so in the doldrums – though IBM says that AI now means “Augmented Intelligence” rather than “Artificial Intelligence” (IBM redefining acronyms on the fly; whoever would expect that). IBM included GDPR among the risks it was talking about – but the speaker emphasised that you have to start with people, with citizens, with people’s rights to privacy; not with compliance for its own sake, nor with the fines GDPR non-compliance can attract. Cognitive technology may be an enabler, but this is not, at bottom, a technology problem – setting up a “Citizen Interaction Office” may be the first step to take, before you buy any cognitive technology.
That isn’t to say that technology won’t be part of dealing with GDPR, of course. iland (a provider of secure cloud services) has just told me about a new release of its Secure Cloud Services (including support for Model Contract Clauses and the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks), which reminded me of the bigger context around GDPR technology.
Model Contract Clauses, as part of a cloud agreement, are a possible way (which may be recognised by GDPR) to ensure that liability for a breach is shared between the cloud provider and the customer; Privacy Shield is the replacement for Safe Harbour and should ensure that data privacy practices outside the EU are considered adequate for the protection of EU citizens’ PID. Both attempt to ensure that, if necessary, companies can store Personally Identifiable Information (PID) on EU citizens outside the EU (in particular, in the USA) without infringing GDPR. Model Contract Clauses, in particular, are probably “the coming thing” in GDPR compliance. They are inserted into the contract between the cloud provider and its customer, detailing behaviour during a breach, and ongoing reporting and audit requirements. Using them, you can probably avoid the “regulatory melee” and build your own legal framework for your behaviours, backed (if necessary) by court litigation rather than by argument with a regulatory body. This is, however, going to suit larger organisations – and it hasn’t really been tested in practice yet.
Once again, a reminder that this isn’t primarily a technology issue; it is a TRUST issue. If the EU data regulators don’t trust the US authorities to adhere, in all circumstances, to any commitments to maintain the privacy of EU PID that they’ve signed up to, then a company storing its PID on EU citizens in a database in America risks having its actions judged not “adequate”, if someone complains (under GDPR) that the privacy of their data was infringed. And the eye-watering fines that are possible (up to €20 million, or 4% of global annual turnover, whichever is greater) probably aren’t as big a problem as the loss of customer/partner trust that may ensue, if a GDPR infringement hits the press.
You may have noticed that the POTUS (President of the United States) is tweeting some quick-fire edicts these days, one of which (I think) may indeed destroy the TRUST that has to underlie Privacy Shield – I’ve blogged about this here. The issue is section 14 of an Executive Order, “Enhancing Public Safety in the Interior of the United States”: “Agencies [CIA, FBI, NSA etc.] shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information”. This doesn’t explicitly destroy Privacy Shield (note “consistent with applicable law”), but it could nevertheless destroy any trust between the EU GDPR regulators and the US authorities (not that there was all that much trust there in the first place). I thought I’d get a second opinion on my “Death of Privacy Shield has probably not been exaggerated” blog from Frank Krieger (Director of Compliance at iland), since iland’s GDPR-compliance technology, in part, prompted this article. According to Frank:
“I agree with David as well – the core tenets of Safe Harbor were never really recognized when it was in effect. Privacy Shield at least superficially addressed these through the creation of a grievance process for EU residents, but it too was really just window dressing for a poor replacement. With the court challenges in the EU and the uncertainty here in the United States, I don’t see it making it through this year. Either the courts in the EU will nullify it, or the US with its current administration will force the EU Article 29 Working Party into a situation that forces them to nullify it, due to loss of privacy for non-US citizens”.
So, while iland’s advanced cloud management functionality – with on-demand security and compliance reports, enhanced billing visibility and resource management, and support for Model Contract Clauses and Privacy Shield – is all going to be useful in managing the GDPR risk, it is far from addressing the whole problem. Scott Sparvero, CEO and co-founder of iland, says: “As cloud adoption continues to advance, organisations are faced with more and more complex cloud and compliance regulations and time-consuming audits on an international scale.” He continues by pointing out that iland can help its customers to address the issues (but doesn’t, and I’d point out can’t, promise to remove them). An iland partner agrees: “IT organisations need an experienced cloud provider to work through today’s regulatory and data security challenges,” according to Travis Ruiz (Director of Cloud Services and Support at MasterControl, a provider of enterprise quality management systems (EQMS)), who continues, “iland gives us the security features, reporting capabilities and visibility into cloud resources we need to ensure cloud compliance as well as take full advantage of the many benefits that cloud computing offers.”
There are a lot of companies offering to help with managing the GDPR risk, of course. Returning to IBM, I was particularly impressed by a recent IBM workshop called “Creating Value with GDPR: Practical Steps”. This featured IBM’s technology solutions because, apparently, the audience at a previous workshop complained that it concentrated too much on the issues and on what customers had to put into addressing them, quite apart from buying technology. I find this a little worrying because (as IBM’s Jessica Douglas – Executive Partner for GDPR, IBM UK & Ireland – freely points out) addressing the GDPR risk involves a lot more than just buying a technology fix. Address the issue, put the right people and process in place – and then buy the technology you know you’ll need.
That said, IBM does have some interesting GDPR technologies, for when you have worked out what GDPR actually means to you. I was especially taken by Sima Nadler’s talk on Data Policy and Consent Management, which I think goes to the nub of satisfying GDPR. Sima is Senior Program Manager, Privacy, and World Wide Retail Research Leader, and she has become fascinated with what “consent management” means in practice: it deals with the “purpose” a data subject has consented to, and “purpose” is not something most current IT systems address.
She is proposing, I think, an extra “purpose-based data access” layer [URL], which delivers data if and only if the request satisfies both the data usage policies currently in effect and the list of “consents” given by the data subject.
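A minimal sketch of such a gatekeeping layer might look like the following. All names here are illustrative assumptions for the sake of the example, not IBM’s actual API: the idea is simply that a record is released only when the requesting purpose is both permitted by current policy and covered by the subject’s consents.

```python
from dataclasses import dataclass, field

@dataclass
class DataSubject:
    """A data subject, the purposes they have consented to, and their record."""
    subject_id: str
    consents: set = field(default_factory=set)
    record: dict = field(default_factory=dict)

class PurposeBasedAccessLayer:
    """Releases data only if policy AND consent both cover the purpose."""
    def __init__(self, policy_purposes):
        self.policy_purposes = set(policy_purposes)  # purposes current policy allows

    def fetch(self, subject: DataSubject, purpose: str) -> dict:
        if purpose not in self.policy_purposes:
            raise PermissionError(f"policy does not allow purpose '{purpose}'")
        if purpose not in subject.consents:
            raise PermissionError(
                f"no consent from '{subject.subject_id}' for purpose '{purpose}'")
        return subject.record

alice = DataSubject("alice", consents={"billing"}, record={"email": "a@example.com"})
layer = PurposeBasedAccessLayer(policy_purposes={"billing", "marketing"})
print(layer.fetch(alice, "billing"))  # consented and policy-allowed: record returned
# layer.fetch(alice, "marketing")     # would raise PermissionError: no consent given
```

Note that the two checks are deliberately independent: “marketing” is policy-allowed here but fails on consent, which is exactly the distinction most current IT systems don’t make.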
When a new service is defined, the product manager obtains a “purpose certificate” from the “consent manager” app and produces a “purpose token” that identifies the purposes for which consent has been given, and which can be checked if things change. Sima says that the new consent management layer can be added to any application with a web access interface, with a manageable impact on performance (I suspect, only as long as you have enough RAM, but RAM is cheap).
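One way to picture the purpose token itself is as a signed list of purposes that can be re-verified whenever things change. The sketch below is a guess at the mechanics, not Sima’s implementation; the HMAC key handling is deliberately simplified, and the names are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the "consent manager" (simplified for the demo).
SECRET = b"consent-manager-demo-key"

def issue_purpose_token(service: str, purposes: list) -> dict:
    """Issue a token binding a service to the purposes consent was given for."""
    payload = json.dumps({"service": service, "purposes": sorted(purposes)})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_purpose(token: dict, purpose: str) -> bool:
    """Re-check a token later, e.g. when a service's purposes change."""
    expected = hmac.new(SECRET, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # token has been tampered with
    return purpose in json.loads(token["payload"])["purposes"]

token = issue_purpose_token("loyalty-app", ["billing", "analytics"])
print(verify_purpose(token, "billing"))    # True: this purpose was consented to
print(verify_purpose(token, "marketing"))  # False: purpose not in the token
```

The point of signing is that the check at line-of-business level can’t quietly widen the purposes beyond what the consent manager originally certified.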
However, adding the extra purpose-based data access layer won’t be trivial for legacy applications without a web interface, and some of these may be mission-critical for large enterprises (although consent management seems, to me, an ideal candidate for a service running on a specialised hardware co-processor on a mainframe). In any case, the devil may be in the detail: an audit trail will be needed, and the non-trivial question of what happens if consent is withdrawn after data has been acquired and used must be addressed. Sima suggests that a phased approach will be essential: first, make consent and purpose visible; then store consent and policy in a Trust platform; then, using this platform, audit access and consent asynchronously, for ad hoc reporting and violation alerts; and, finally, implement runtime enforcement.
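The “audit first, enforce later” phasing could be sketched like this (again, names and structures are my own illustrative assumptions): every access and consent change is logged, so accesses that post-date a revocation can be surfaced as violations for reporting, before any runtime blocking is switched on.

```python
import datetime

class ConsentAuditTrail:
    """Append-only log of consent changes and data accesses (phase-three audit)."""
    def __init__(self):
        self.events = []  # appended in time order

    def log(self, subject: str, action: str, purpose: str) -> None:
        self.events.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "subject": subject,
            "action": action,   # "access", "consent", or "revoke"
            "purpose": purpose,
        })

    def accesses_after_revocation(self, subject: str, purpose: str) -> list:
        """Report accesses that happened after the subject revoked this purpose."""
        revoked = False
        violations = []
        for e in self.events:
            if e["subject"] != subject or e["purpose"] != purpose:
                continue
            if e["action"] == "revoke":
                revoked = True
            elif e["action"] == "access" and revoked:
                violations.append(e)
        return violations

trail = ConsentAuditTrail()
trail.log("alice", "access", "marketing")
trail.log("alice", "revoke", "marketing")
trail.log("alice", "access", "marketing")  # post-revocation: should be flagged
print(len(trail.accesses_after_revocation("alice", "marketing")))  # 1
```

Once such reporting is trusted, the same check can be moved inline to block the access at runtime, which is the final phase in the approach described above.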
It is worth noting that GDPR turned up at IBM InterConnect 2017 too. As I understand it, IBM’s current strategy is to provide access to “Cognitive-as-a-Service” from all of its platforms – and it sees this as facilitating innovative (often natural-language-based) solutions in many areas, certainly not limited to GDPR. However, the use of cognitive “augmented intelligence” could be a major factor in making GDPR solutions practical. I could imagine matching the wording of GDPR against the policy statements inside an organisation; analysing the consent statements provided by PID subjects against proposed new purposes; and so on. What “Augmented Intelligence” can’t do – yet, and for the foreseeable future – is automate the GDPR issue away. It can, however, help reduce the load on the humans who have to manage it.