
AI and cyber security: Separating hype from reality

Tue 16 Jul 2019 | Pascal Geenens

Is artificial intelligence revolutionising cyber security or is it just another brick in the wall? Pascal Geenens, security evangelist for Radware, thinks there’s cause for caution when it comes to AI

If we’re to believe the most enthusiastic backers of AI-led cyber security, trained security specialists could soon be replaced by sci-fi-like defences that will intelligently zap attacks out of cyberspace before any human could even see them coming.

On the other hand, many will say that we are still some way off being able to completely depend on AI security, and we should instead consider the advancements made elsewhere, like in machine learning, to boost defences in the near future.

One thing is for certain: AI’s role in cyber security divides opinion. And, as ever, the most likely answer lies somewhere in between these views.

Fit for purpose

My own view is that we have yet to see a true AI security application or system that can intelligently adapt and evolve to different situations, rather than just continuously performing a single, repetitive task.

And this is crucial, because with digital transformation, migration to the cloud and serverless architectures becoming the norm, many will look at AI and automated defences as a silver bullet to solving all security worries. To illustrate this point, a recent Radware survey showed that 82 percent of organisations have shifted budget towards automated security over the past two years and, on average, 37 percent of security budget is now dedicated to automated systems.

But confidence in the current state of AI may be somewhat misplaced. In fact, I’d call it a case of blurred lines as many are failing to understand the difference between AI and machine learning.

Machine learning

Machine learning, which technically is a subdomain of AI, is more than just neural networks and deep learning. These terms are all the rage in the industry, but only represent one class of algorithms within a large domain. It’s machine learning, minus the neural networks and deep learning, that has been proven in the field.

But machine learning is generally applied to smaller, less complex classification tasks such as anomaly detection. The emphasis is on modelling expected behaviour at low to medium complexity, then autonomously improving the model in small increments over time as it ‘learns’ the specifics of its environment through data.
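This kind of incremental anomaly detection can be sketched in a few lines. The sketch below is a minimal, hypothetical example (the traffic numbers and thresholds are made up): it models ‘expected behaviour’ as a running mean and variance of a metric such as requests per second, flags large deviations, and keeps nudging the baseline as new data arrives.

```python
# Minimal sketch of low-complexity anomaly detection: learn a baseline
# of expected behaviour incrementally and flag large deviations.
# All numbers here are hypothetical, for illustration only.

class BaselineDetector:
    """Tracks an exponentially weighted mean/variance of a metric and
    flags observations that deviate too far from the learned baseline."""

    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # how quickly the baseline adapts
        self.threshold = threshold  # deviation (in std-devs) to flag
        self.mean = None
        self.var = 1.0

    def observe(self, value):
        if self.mean is None:       # bootstrap from the first sample
            self.mean = value
            return False
        deviation = value - self.mean
        is_anomaly = abs(deviation) > self.threshold * (self.var ** 0.5)
        # Improve the model in small increments over time.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return is_anomaly


detector = BaselineDetector()
traffic = [100, 102, 98, 101, 99, 100, 500]  # sudden spike at the end
flags = [detector.observe(v) for v in traffic]
```

The model never does anything but this one repetitive task, which is exactly the point: it gets better at predicting one outcome, and nothing else.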

“The fact is that attacks themselves are becoming increasingly automated, and the only way to defend adequately will be to fight automation with automation”

While proven and successfully applied, machine learning focuses on a very specific task and performs it over and over. The only improvement over time is getting better at predicting the outcomes.

Neural networks

Meanwhile, neural networks and their deep learning branch are just one class among many machine learning algorithms. While the ‘traditional’ (non-neural network and non-deep learning) machine learning algorithms are modelled and coded by humans, working on low to medium complexity problems, neural networks and deep learning can be applied to highly complex problems.

In general, you can look at deep learning as a way to program using data instead of programming languages or state machines. If the data is good and there is a sufficient amount of it, the resulting model will be able to classify and, in the case of security, distinguish anomalies from legitimate behaviour.
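The ‘programming with data’ idea can be illustrated with a toy example. Rather than a deep network, the sketch below (entirely hypothetical data and learning rates) trains a single logistic neuron on labelled examples of the logical OR function: no rule is ever written down, yet gradient descent derives the decision boundary from the data alone.

```python
# Toy illustration of "programming with data": hand labelled examples
# to a single logistic neuron and let gradient descent find the rule.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Labelled data: inputs with a known, correct outcome (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, learned from data rather than hand-coded
b = 0.0
lr = 0.5

for _ in range(2000):                      # training loop
    for (x1, x2), label in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - label                 # gradient of the log-loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# The "program" now lives in w and b instead of in if-statements.
def predict(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5
```

A deep network is the same idea scaled up by many layers and orders of magnitude more data, which is what lets it tackle problems too complex to code by hand.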

State of play

While it still has its challenges, deep learning is able to find associations in data that we humans would never find, helping us reach levels of detection that were not possible with traditional models and machine learning.

Most applications in use today (successful ones, that is) are based on supervised learning neural nets. The idea behind supervised learning is very simple. A rather generic model is trained using a set of labelled data: data for which the outcome from a given input is known, and known to be correct. Once trained, the model can take any input and predict the output as a probability distribution over the fixed set of labels.

Email spam filtering is a common example. It works because the sheer volume of historically labelled emails provides enough data to ‘learn’ and ‘understand’ which messages are spam and, given enough data, a deep learning neural net will be able to ‘generalise’ its understanding in such a way that it can classify new messages it has never seen before with a fair probability.
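The same learn-from-labelled-examples pattern can be shown with a far simpler supervised learner than a neural net. The sketch below uses a naive Bayes word-count model on a handful of invented messages (a real filter would use vastly more data, and often a neural net): the ‘spam rules’ are never coded, only learned from the labelled training set.

```python
# Hedged sketch of supervised spam classification with naive Bayes.
# Training data is invented; a production filter learns from millions
# of labelled messages, but the principle is the same.
import math
from collections import Counter

train = [
    ("win cash prize now", "spam"),
    ("cheap prize click now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}   # word counts per label
totals = Counter()                               # messages per label
for text, label in train:
    counts[label].update(text.split())
    totals[label] += 1

vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Return the label with the highest log-probability, using
    Laplace smoothing so unseen words don't zero out the score."""
    scores = {}
    for label in counts:
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

New messages the model has never seen are classified by how their words score against the learned counts, which is the ‘generalisation’ the article describes, in miniature.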

Given lots of historical data, the neural net will most of the time make the right ‘decision.’ These sorts of supervised nets can be considered an advanced form of automation. Instead of coding rules into the automation, the automation is coded through data samples and learns by example.

They are highly efficient, and we shouldn’t underestimate them – they provide a solution for many domains where coding rules would be virtually impossible because of the complexity and our limitations to understand and maintain such complex code as humans. As such, supervised learning opened the door for new applications that were deemed too complex for traditional algorithmic coding.

But there are a number of challenges that limit deep learning’s utility:

“A recent Radware survey showed that 82 percent of organisations have shifted budget towards automated security over the past two years”

Deep learning needs a lot of data: practically, this means that deep learning requires a lot of resources for training. Once trained, however, the model can run on limited resources, making predictions on never-before-seen inputs in near real time.

Deep learning needs GOOD data: data must be labelled correctly and be free of any potential bias. Practically, this becomes a problem if you deploy a new model in a real-world scenario and have it learn from its environment. In security, this means a model will have to train in an adversarial environment, one where attacks are a reality. Making deep learning resistant to learning in the presence of adversaries is still ongoing research.

Finally, once a model is successfully trained, it has to perform competently in a real-life environment. At present, deep learning models only perform well in static environments, whereas real networks are continuously changing and evolving. Deep learning cannot work fully autonomously in such environments, at least not without humans continuously improving the training sets, re-training and evaluating the model, resizing and re-architecting the neural networks, and sanitising the outputs.

Separating hype from reality

The search for the universal neural network, one capable of producing low rates of false positives (such as we are accustomed to from more traditional machine learning algorithms), is still something of a holy grail. Until then, deep learning remains a tool that assists security experts in filtering out noise and focussing on the really important events, while feeding back information into the model to improve and adapt it to new situations.

I’m perhaps a little negative about the advancements seen in AI security so far, but I will say this: I am convinced that this technology and the innovations it brings, and the automation of cyber security in a broader sense, will be a requirement for keeping ahead of future attacks, as attackers mature and their attacks grow more complex every day.

The fact is that attacks themselves are becoming increasingly automated, and the only way to defend adequately will be to fight automation with automation.

Whether defensive automation comes through incremental advancements in deep learning or breakthrough innovation in AI, remains to be seen. But we are on the right path, and I’m confident it will lead to a more secure future.

Experts featured:

Pascal Geenens

Security Evangelist
