Content Copyright © 2014 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt
I have been getting some insights into fraud at an ISMG Fraud Summit. With my Governance hat on, I think that we too often neglect the possibilities for fraud when building automated business systems. We make a big fuss over external hacking, which (even in the worst scenario) is a one-off event that can be managed. Yes, reputation risk is a serious problem on top of any losses, but if you think about the risks in advance, they can be managed or mitigated.
Fraud is a different sort of crime, as Steve Strickland (Academy Founder and Senior Police Lead, City of London Police) points out. Modern computer-enabled fraud is, in many ways, more like paedophilia or terrorism than theft, he says. It goes on for a long time; it causes continuing harm while you are investigating it; and it involves perpetrators who set up and manage false identities, who build misleading online personas, and who 'groom' other employees into being, possibly unknowing, collaborators in the fraud.
Worst case, fraud can sap the life out of an organisation, and it is corrosive of people's trust and working relationships. Another speaker at the conference, Jeremy Strozer (Exo-Endoparacologist with the CERT program at Carnegie Mellon SEI), mentioned a 20-year fraud in which someone who started out with drink and gambling problems took her company for millions over the years, all while 'socially engineering' herself into a position of trust and building a reputation as a kind person who would always help you out when you were in trouble. I wonder how her fellow workers felt when her fraud was discovered, even those who hadn't been sucked in as more-or-less willing collaborators. Strozer was talking about the use of advanced analytics to highlight the potential for such situations before they get started; but when I asked, he agreed that promoting good company ethics and responsible people management is as much a part of the solution as prosecuting the criminals (which, by the way, is a really good idea; otherwise they just go and commit fraud somewhere else).
So, combating fraud, or at least not facilitating it, should be designed into automated systems from the start in a well-governed development organisation. This means robust identity verification; transparency into the operation of the systems, using analytics; exception reporting that highlights possibly dysfunctional behaviour patterns; and so on. But don't put this into a 'fraud-control silo' and see it as another cost of doing business. See it as managing the end-user or customer experience, with the potential for improving it and reducing customer churn, and with the discouragement of fraud as a very useful side benefit. Never forget that the vast majority of your visitors are honest (at least, if you don't throw temptation in their way), and it makes no sense to make the lives of your (potentially) paying customers miserable just to cause minor irritation to a small number of fraudsters. Think of those security questions (for example) which so annoy legitimate customers, because they can never remember the answers, and which drive them into the arms of your competition. The fraudster has done his or her research in advance and just has to leaf through a few papers to supply the answers. Perhaps not being able to answer the security questions shows that you aren't a fraudster?
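To make the exception-reporting idea concrete, here is a minimal sketch of the sort of check an analytics layer might run: flag activity that deviates sharply from an individual's historical pattern. The function name, data, and threshold are purely illustrative assumptions, not a description of any particular product.

```python
# Illustrative sketch only: flag values that sit far outside a user's
# historical pattern, using a simple z-score. Real systems use far
# richer behavioural models; the threshold here is an assumption.
from statistics import mean, stdev

def flag_exceptions(history, recent, z_threshold=3.0):
    """Return recent amounts more than z_threshold standard
    deviations above the user's historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return [x for x in recent if x != mu]
    return [x for x in recent if (x - mu) / sigma > z_threshold]

# A clerk who normally handles invoices of ~100 suddenly approves 5,000:
history = [95, 102, 99, 110, 98, 101, 97, 105]
print(flag_exceptions(history, [103, 5000]))  # → [5000]
```

The point of such a report is not to accuse anyone; it simply surfaces patterns worth a human look, which is exactly the 'transparency' role described above.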
Alisdair Faulkner of ThreatMetrix, which uses advanced analytics to deliver context-aware authentication (based on millions of anonymised user and device profiles collected from its networks), highlighted some very important issues in his talk. For a start, he prompted me to think about how well you can claim to know your customer if you will happily trade with someone impersonating them. Knowing your customer (recognising their behaviour profiles and device capabilities and limitations) is good for business, but it should also make fraudulent account registration and account takeover much harder. Faulkner suggests that you never let on when you have detected a fraudster from their behaviour profile; just stop short of sending them the money or the goods. This drives them nuts and makes their activities less efficient; and, if you're lucky, I guess they'll write in and complain that their fraudulently-ordered goods haven't arrived, and you can pass their address on to the police.
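A rough sketch of this approach, under my own illustrative assumptions (the profile fields, weights, and threshold are invented for the example and are not ThreatMetrix's method): score a session against what you already know about the customer, and act on the score without ever telling the visitor why.

```python
# Hedged sketch of context-aware scoring. Every field and weight here
# is an illustrative assumption; real systems use far more signals.

def risk_score(profile, attempt):
    """Crude additive risk score: each mismatch with the known
    customer profile adds weight; higher means more suspicious."""
    score = 0.0
    if attempt["device_id"] not in profile["known_devices"]:
        score += 0.4
    if attempt["country"] != profile["usual_country"]:
        score += 0.3
    if attempt["typing_speed_wpm"] > 2 * profile["typing_speed_wpm"]:
        score += 0.3  # e.g. credentials pasted in by a script
    return score

def handle_order(profile, attempt, threshold=0.6):
    # Never reveal detection: 'accept' the order either way, but
    # quietly hold shipment when the score crosses the threshold.
    if risk_score(profile, attempt) >= threshold:
        return "accepted_hold_shipment"
    return "accepted_ship"

profile = {"known_devices": {"dev-a1"}, "usual_country": "GB",
           "typing_speed_wpm": 40}
print(handle_order(profile, {"device_id": "dev-zz", "country": "RU",
                             "typing_speed_wpm": 250}))
```

Note that the suspicious session is never rejected outright; the fraudster sees a normal 'order accepted' flow, which is the 'drive them nuts' tactic Faulkner describes.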
As an aside, however, if you use any automated fraud-prevention tools (and I'd suggest that claiming any such thing is sometimes a bad sign; I'd prefer 'fraud management' or some such), be very aware of the issue of false positives. Some tools may claim a very high detection rate at the cost of identifying many interactions as fraudulent when they aren't, and this can lose you customers you've spent a lot of money and resources acquiring. You really need to think through fraud control holistically; as I've already suggested, in the context of managing and optimising the end-user experience.
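The arithmetic behind this is worth working through, because fraud is rare and the base rate dominates. The rates below are illustrative assumptions, but the shape of the result holds for any tool with a non-trivial false-positive rate.

```python
# Why false positives matter: when fraud is rare, even an accurate
# tool flags mostly honest customers. All rates are illustrative.

def flagged_breakdown(n, fraud_rate, detection_rate, false_positive_rate):
    """Return (frauds flagged, honest customers flagged, precision)."""
    fraud = n * fraud_rate
    legit = n - fraud
    true_flags = fraud * detection_rate
    false_flags = legit * false_positive_rate
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# 100,000 visitors, 1% fraudsters, 95% detection, 5% false positives:
tp, fp, precision = flagged_breakdown(100_000, 0.01, 0.95, 0.05)
# ~950 frauds caught, but ~4,950 honest customers flagged:
# roughly five of every six flagged 'fraudsters' are paying customers.
print(tp, fp, precision)
```

So a headline '95% detection rate' can still mean that most of the people your tool inconveniences are exactly the customers you spent money acquiring.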
Which brings me back to Steve Strickland and changes in the way the police view fraud. In the past, the police might let a fraud continue, even if this exposed new victims to harm, in the interests of maximising the chance of a conviction. The view now is that this is unacceptable (just as it would be in a paedophile or terrorist investigation) and that the police should act so as to minimise harm and the risk of new victims being drawn into fraud. This is something that commercial organisations could learn from. Don't concentrate only on fraud control and catching fraudsters (although, when you do catch them, please do prosecute as a matter of policy); concentrate on 'fraud-proofing' your organisation: vet employees on entry, run an ethics-based organisation, provide support for disaffected and unhappy employees, and use technology (analytics) to ensure that you know where your money is, can recognise dysfunctional behaviour patterns, and can provide a supportive user experience. All of that will be good for business and (with only a little extra effort) will make things a lot harder for fraudsters.
Finally, however, Strickland did have a harsh warning for his audience. Apart from 'accidental' and opportunistic fraud, usually by employees, there really are organised criminal organisations running fraud as a business. As these become more adept, they are a growing threat; and as controls and technology improve, such criminal organisations increasingly target your disaffected or vulnerable employees (something I've already highlighted here). If the employee is lucky, this will be a 'friendly' social engineering exploit (which might still wreck the employee's life or career); if unlucky, blackmail, kidnapping or physical coercion might be used. Not nice to think about, and not as common a threat in the UK as in some other parts of the world; but criminals operate globally these days (so we are not as insulated from foreign criminal practices as we once were), and it is much better to think about managing employee vulnerability as a fraud vector (there are well-understood ways to do this) now, before you actually experience it.