Most surveys I come across, even professionally produced ones, don't really stand up to close examination—and I have a fear that many people fill them in with the answers they think they should be giving, in order to maintain an (unconscious) self-image of professionalism. Nevertheless, they can be a catalyst for useful insights.
An example is a recent analyst workshop prompted by an Entrust survey (Entrust is now part of the Datacard group), which found that user approaches to mobile security could be divided into three groups: the 'Careful' (who always use a PIN to protect their mobile phone, for example); the 'Cavalier' (who are happy to share their PIN with their friends); and the 'Cantankerous' (who make a point of not using PIN codes and of ignoring security wherever possible). There may well be three such groups, although it is not clear exactly what effect this has on security (perhaps even a cantankerous troublemaker has little impact if his/her firm has installed strong mobile device management systems).
And it would be a pity if people took away the impression that mobile technology is inherently insecure. As Mark Reeves (SVP, International Sales at Entrust) points out, mobile device technology, and the IPv6 infrastructure it uses, is inherently far more secure than most desktop systems using IPv4, because it was designed from the start with security built in. Windows security was compromised from the start in the interest of performance and, although Microsoft now takes security extremely seriously (since Vista and, particularly, post-Windows 7), it is extremely difficult to bolt security onto something insecure (especially if backwards compatibility is a commercial necessity). At which point someone, of course, suggests that desktop security isn't a problem because Windows XP is no longer supported…
So, insight number 1 is prompted by this: security is affected by what is actually going on, not by what you would like to think is going on. According to this May 2014 article, Windows XP was still the number 2 desktop OS behind Windows 7, with 15% or more market share. I'm sure most of the Fortune 500 have upgraded by now (although I wouldn't be surprised if they have pockets of XP too, no matter what their policy), but much of UK plc runs on SME businesses, and how many of these still run XP? Obviously, no business should ignore mobile security; but neither should it assume that Windows XP security is irrelevant, just because Microsoft no longer sends the security fixes that used to stop you working one morning a week. Some organisations probably don't know exactly how many PCs they have, let alone what OS they are running, and a portable running XP might represent a bigger threat than any iPhone.
There are enough Windows XP PCs, and even poorly secured Windows Vista and 7 PCs, still around to power some pretty large botnets. Insight number 2 is that we tend to worry about the most visible threats and ignore more serious threats that are less obvious. It seems to me that the big threats facing eCommerce are the botnet-powered denial-of-service attack and the phishing and spam industries. A lot of this is presumably undirected malicious vandalism, but some presumably isn't (that is, it drives a profitable 'business'). But our browsers and ISPs hide all of this stuff from us, so the consequences of running a poorly secured computer aren't obvious to its owner, which may explain the Cavalier class of mobile user found in Entrust's survey. In a former life, I had the misfortune to work in internal control at a bank which refused to admit it had ever suffered from fraud, data theft or electronic sabotage, and maintained that its expensive security technology made any such exploits impossible. None of this was true, but it made it very difficult to make ordinary workers take security seriously. Contrary to popular belief, secrecy and obfuscation are great enemies of good security practice.
So much for 'Cavalier'. Now, what about 'Cantankerous'? Is it possible that most people want to do the best for their employer but are made cantankerous by poorly implemented, clumsy security technologies, applied without any attempt to think through the human aspects of the security issue? For example, confirming a banking transaction with a mobile phone message gives a huge boost to security and 'everyone' these days has their mobile phone to hand at all times, so we can just roll out this 'improvement' without much notification, can't we?
How annoying is that to people who do their banking from the bedroom but leave their phone in their jacket downstairs? Or people who have poor mobile coverage at home and don't care, because they have a wired phone there? Or older or less-abled people who can't use a mobile phone comfortably? How many of the 'Cantankerous' group got that way because of poorly designed or implemented security measures they've met in the past? I have found that the more hoops I have to jump through to set up a 'highly secure' password (passwords by themselves are never highly secure, by the way), the simpler and less secure my password suggestions get, until they become the least secure possible combination of characters and digits needed to pass the quality tests; and yet I think of myself as reasonably security aware.
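The perverse incentive is easy to demonstrate. Here is a minimal sketch (the function name and the specific rules are my own illustration, not any particular vendor's policy) of the sort of naive complexity check many systems apply, and of how it rejects a long random passphrase while waving through the predictable minimum-effort choice:

```python
import re

def meets_policy(pw: str) -> bool:
    """A typical naive complexity policy: at least 8 characters,
    plus one uppercase letter, one digit and one symbol."""
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

# A long, hard-to-guess passphrase fails (no uppercase, no digit)...
print(meets_policy("correct horse battery staple"))  # False

# ...while the lazy, predictable choice sails straight through.
print(meets_policy("Password1!"))  # True
```

A checker like this measures compliance with rules, not actual guessability, which is exactly why annoyed users converge on 'Password1!'-style answers.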
Entrust has some pretty impressive security technologies that, well deployed, can be set up to have minimal impact on the legitimate users of a system; but insight number 3 is that security is a people issue, not a technology one, and you have to understand the people using a system and their needs, as well as the threat environment, before choosing and installing security technology. Employees should meet (well-designed and supportive) security awareness training before they meet your security technology, not afterwards; otherwise you risk breeding a cantankerous set of users.
Surely, at least, the 'Careful' group of users won't be a problem? Well, do you want to employ a careful, risk-averse sales team that would rather fill in the documentation correctly and spend time on a really secure authentication dialogue than grab a new customer? Or could a low performer hide their inadequacy behind 'working to the [security] rules'? Be careful what you ask for; you might get it. Insight number 4 is that you employ people to do business, not to follow security procedures. Don't get me wrong: good authentication of users and access security is important, but it is not an end in itself. Fortunately, the technology is now available, especially for mobile platforms, to let you design appropriate and usable security into applications from the start (and if security is getting in the way of business, you should find this out in testing, when you can still do something about it); but you have to inculcate a development culture that builds in security and gets the security people involved from the start of analysis and design. As Microsoft found out with Windows Vista, bolting on security is difficult and expensive.
Insight number 5? Well, someone did mention in the workshop that there are directed threats as well as generally malicious threats (vandalism), but the discussion rather mixed up the two. Dealing with untargeted attacks, often by 'script kiddies', requires general baseline security good practice; dealing with someone who is crafting a one-off attack for you specifically, usually with significant commercial, monetary or political objectives, needs a different approach, on top of baseline security. If someone really wants access to your systems, you can't ignore personal attacks, kidnap and 'social engineering' blackmail, even if such approaches aren't common, probably because general security is so lax that there's no need to resort to them. As you tighten up automated systems security, you make it more likely that targeting employees will become cost-effective for professional criminals. You need to address such issues in good time (by advertising dual-key authentication procedures, so that a single employee can't access anything valuable, for example; by providing 'coercion keys' that silently signal coerced access; and by including non-career-limiting responses to blackmail attempts in your security awareness training) before staff start to think that their welfare isn't important to their employer. If employees come to see themselves as possible attack targets, and believe that their employer isn't worried by this, they can get very cantankerous indeed.
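The dual-key idea mentioned above is just the classic two-person rule. A minimal sketch (the class and employee names are hypothetical, purely for illustration) might look like this: a sensitive action is released only once two distinct authorised approvers have both signed off, so no single coerced or corrupted employee can unlock it alone.

```python
class DualKeyVault:
    """Two-person rule: release a sensitive action only when two
    *distinct* authorised approvers have both signed off."""

    def __init__(self, authorised):
        self.authorised = set(authorised)
        self.approvals = set()

    def approve(self, employee: str) -> None:
        if employee not in self.authorised:
            raise PermissionError(f"{employee} is not an authorised approver")
        self.approvals.add(employee)

    def unlocked(self) -> bool:
        # Approvals are kept in a set, so one employee approving twice
        # still counts once: a single insider cannot turn both keys.
        return len(self.approvals) >= 2

vault = DualKeyVault({"alice", "bob", "carol"})
vault.approve("alice")
print(vault.unlocked())  # False: only one key turned
vault.approve("alice")   # repeating yourself doesn't help
print(vault.unlocked())  # False
vault.approve("bob")
print(vault.unlocked())  # True: two distinct approvers
```

The point of advertising such a procedure is as much psychological as technical: an attacker who knows that no single employee can open the vault has much less reason to threaten any one of them.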
All of the above is, of course, part of 'people-centric computing'. At Bloor, we think that people should have the Freedom to work in any way that they want to, for the benefit of their employer, using whatever device that they feel comfortable with. But Freedom is built on Trust (which implies the existence of a light-touch authentication and security framework) and Actionable Insight (from, for example, analysis of 'big data' generated by automated systems). Analysis of user behaviours and access patterns using big data tools may provide your best approach to dealing with directed attacks—but make sure that you get informed buy-in from your staff first, before they see their employer as a 'Big Brother' snoop and start getting cantankerous again.