AI and cybersecurity: puppies grow up to be guard dogs

Content Copyright © 2019 Bloor. All Rights Reserved.
Also posted on: Bloor blogs

I was recently invited to a conference organised by NetEvents, where I was asked to introduce the subject of AI in cybersecurity and to moderate a panel on the subject. The panellists included Saqib Chaudry, CISO of Cleveland Clinic Abu Dhabi, Ali-Reza Moschtagi, group chief enterprise architect for Nando’s UK, Roark Pollock, CMO of Ziften Technologies, and Andrzej Kawalec, European director of strategy and technology for Optiv.

To set the scene, one definition of AI is "a task performed by a machine that would require a great deal of intelligence if performed by a human." The dream of AI has its roots far back in history, first postulated by Ancient Greek philosophers. Fast forward to the mid-20th century, when it became the subject of academic debate, and from the 1960s onwards it was a centrepiece of much science fiction. Then came a 'trough of disillusionment' in the 1980s: funding for AI projects largely dried up, taking many dreams with it.

The rise of machine learning for cybersecurity

But, sometime around 2012, as data volumes spiralled, the need to make sense of extremely large data sets became a pressing concern, leading to a renaissance in AI. Much of the focus is now on machine learning, a subset of AI. Machine learning refers to the use of computers to run algorithms that undertake reasoning previously seen as the preserve of humans. It is about using maths and algorithms to enable computers to sift out the chaff so that we can find what the data and information really represent. The aim is to better detect threats and to respond to them automatically, performing work that would otherwise require human analysts. But machine learning still depends on algorithms, and those algorithms must be written by humans.

In terms of cybersecurity, machine learning aims to provide a better defence against threats, breaches and attacks. Organisations today face too many threats and security events to manage without automation, and it is cost-prohibitive to hire new staff, especially those with the required knowledge and expertise. There is a need to close the human protection gap. The model that had been around for years was to identify problems and then write signatures to defend against them, which must then be distributed and deployed to all. On average, this leaves a protection gap of around two weeks. With machine learning, detection can be almost instantaneous. It extends beyond known threats, making predictions about what is likely to be problematic and coming up with better answers than humans could, in far less time. It can be used to prioritise actions and to learn from events for better predictive capabilities.
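The contrast between signature matching and a more general, feature-based approach can be sketched in a few lines of Python. Everything here is invented for illustration: the signature list, the feature list and the sample payloads are hypothetical, and the feature score is a crude stand-in for a trained model, not any real product's detection logic.

```python
# A signature engine only recognises exact, previously catalogued payloads.
KNOWN_SIGNATURES = {"powershell -enc aGVsbG8="}

def signature_match(payload):
    """Exact lookup: misses any variant not already in the list."""
    return payload in KNOWN_SIGNATURES

# Stand-in for a learned model: score a payload by how many
# suspicious features it exhibits, normalised to 0..1.
SUSPICIOUS_FEATURES = ("-enc", "downloadstring", "invoke-expression")

def feature_score(payload):
    p = payload.lower()
    hits = sum(feature in p for feature in SUSPICIOUS_FEATURES)
    return hits / len(SUSPICIOUS_FEATURES)

# A slightly modified payload evades the signature but still
# scores highly on suspicious features.
variant = "PowerShell -Enc d29ybGQ= ; Invoke-Expression $x"
print(signature_match(variant))       # False: no exact signature hit
print(feature_score(variant) > 0.5)   # True: features still look suspicious
```

The point of the sketch is the generalisation gap: the signature check fails the moment the payload changes by one character, whereas a feature-based score degrades gracefully, which is why it can narrow the two-week protection gap described above.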

In cybersecurity, SIEM systems have come to be seen as a foundational technology for many organisations. But they have proved cumbersome, complex and limited in what can be achieved with the information they provide. It became clear that a higher level of automation was needed in cyber threat response systems: the onus needed to shift from preventing threats to automated detection and response.

As a result, a number of useful technology capabilities have been developed that incorporate machine learning. These include behavioural analytics, which focuses on identifying patterns of behaviour that are anomalous when compared with baselines of expected behaviour, providing detailed contextual information for greater visibility of what is actually happening.
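A minimal sketch of what "comparing behaviour against a baseline" can mean in practice, assuming a single numeric metric (say, logins per hour for one user) and a simple z-score test. The figures are invented, and real behavioural analytics products use far richer models than this:

```python
import statistics

def build_baseline(history):
    """Baseline of expected behaviour: mean and standard deviation
    of an observed metric, e.g. logins per hour for one user."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a new observation whose z-score against the baseline
    exceeds the threshold."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical history: logins per hour over a quiet period.
history = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4]
baseline = build_baseline(history)

print(is_anomalous(5, baseline))    # typical activity, not flagged
print(is_anomalous(60, baseline))   # sudden burst of logins, flagged
```

The contextual information mentioned above would, in a real system, accompany each flagged event: which user, which metric, how far from baseline, so that an analyst can judge whether the anomaly matters.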

Endpoint detection and response technologies provide visibility into activity occurring on the network and endpoints, using machine learning and continuous monitoring to search for behavioural patterns that appear to be suspicious or anomalous. Rich, contextual information enables more efficient, prioritised remediation. Such technologies can also be extended so that security teams can proactively hunt for threats.

One further category of technology that incorporates machine learning for cybersecurity is security orchestration, automation and response, which provides automation for improved incident response by defining, prioritising and standardising incident response functions.

Machine learning is not without limitations

At this point, there are still limitations to the use of machine learning. The quality of data is important. As the adage goes, “garbage in, garbage out”. There is also a need to train systems so that they can not only identify patterns, but also learn from what they have seen over time to predict problems and provide better outcomes. New technologies must also fit into a technology landscape that is complex and fragmented, often comprising multiple point solutions from a myriad of vendors.

Down the line, further complications are likely. Legal safeguards will need to be developed in areas such as the transparency and safety of autonomous decisions. This will be an issue in many industries, such as healthcare, where patient health, and even lives, may be put at risk if there is no element of human control.

What is the status quo?

So where are we now? The answer is, surely, that we are just at the beginning. Some of what is billed as AI is really just old-school analytics and pattern matching. The goal is to move from human-driven analytics to automated predictive analytics. The first step is supervised analytics, where machine analytics is used to see what is happening and to predict what is likely to happen, so that humans can be prompted as to the best course of action to take.

Eventually, we will get to cognitive analytics, where AI operates in a more unsupervised manner, learning not only to detect and predict, but to run orchestration using robotic process automation. The ultimate goal is an autonomous immune system, rather like the human body's. But that is still some way off.

As the panellists discussed, AI is a bit like a young puppy. It is cute and everyone wants to pet it, but they know that if they take it home, it will likely make a mess and jump around, bark all the time and destroy things. It needs to be taught what is right and what is not, and how it is expected to behave. But puppies can grow up to be extremely useful guard dogs, given time and the right training. The best is yet to come.

Is AI still the stuff of science fiction, or is it becoming a reality? Machine learning, a subset of AI, is starting to underpin many technologies in the cybersecurity space, and these are showing much promise. However, there is much more to be done before the dream of AI is realised.

If you would like to know more about this topic, please contact us on +44 (0)1494 291 992, by email at info@bloor.eu or via our contact form.
