False positives are only one part of a bigger cybersecurity problem
Tue 18 May 2021 | Mike Campfield
False positives are a symptom of a problem, not the cause
Security ‘alert fatigue’ is a real issue. According to a recent report from the SANS Institute, alert fatigue is one of the biggest barriers to retaining top security talent.
Many of the most common security tools, including security information and event management (SIEM) tools and intrusion detection systems (IDS), are notoriously “noisy,” registering numerous false positives that security analysts must then investigate.
The prevalence of false positives can mean that real detections get overlooked or missed due to lack of time and resources.
To be sure, false positives are a real problem, straining already tight security resources. But false positives are a symptom of a problem, not the cause. So what’s at the root of the alert problem? And what can be done to rectify it?
It’s no secret that IT environments get more complex each day. Multiple cloud and hybrid cloud environments increase agility, but they make it harder to get a unified picture of activity across the environment.
Remote workers introduce new devices that may be poorly managed or not managed at all. The introduction of 5G and advances in smart devices mean that more and more systems rely on and connect to network resources, and these devices may not be manageable through traditional means at all.
All of these interconnected systems themselves contribute to the ‘noise’ – a challenge compounded by the fact that, with many traditional security tools, it’s very difficult to correlate behaviour across these disparate environments and device types, if you can access that data at all.
In security, the speed and fidelity of data are critical. Security analysts need real-time information about what’s happening in their environment, but they often struggle to get a complete picture. That’s because the data on which many tools, including SIEM, have traditionally relied, is incomplete.
SIEM tools are essentially log data aggregators. They collect logs from across the infrastructure, and then fire off alerts, typically without context, for security teams to try to sort and prioritise. Compounding this challenge is the fact that logging itself is almost always incomplete. The majority of organisations don’t have logging enabled for every piece of the infrastructure.
Common attack patterns leverage protocols like DNS, which is nearly impossible to log comprehensively, leaving major gaps in visibility. Adding new data feeds to SIEM products, or tracking existing ones, can also be difficult.
As a result, the alerts issued by most SIEM tools are based on only a snapshot of data, making it difficult to determine which alerts are false, and leaving security teams sifting through (and too often ignoring) thousands of potential security incidents. And it’s not just SIEM. Firewalls and antivirus software often get their information from the SIEM, resulting in similar data quality issues.
But it’s not just the quality of the data that’s at the root of false positives. It’s also the static nature of many common security tools. SIEM products historically require a lot of work to configure and use, and the logs on which they rely are also manually configured and don’t self-adapt.
Like SIEM, IDS and firewalls must be manually configured to detect threat activity based on rules and signatures. While this is important for detecting known malicious behavior, it can also wind up alerting repeatedly and insistently on behavior that’s normal for a particular environment, while missing previously unidentified malicious behavior patterns altogether.
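To make the limitation concrete, a signature-based detector boils down to static pattern matching. The sketch below is a hypothetical toy rule set, not taken from any real IDS product; it illustrates why a fixed rule fires repeatedly on behaviour that may be routine in one environment, while activity matching no signature produces no alert at all.

```python
import re

# Illustrative signature rules -- hypothetical patterns, not a real ruleset.
SIGNATURES = {
    "remote-exec-tool": re.compile(r"psexec", re.IGNORECASE),
    "port-scan": re.compile(r"SYN to \d+ closed ports"),
}

def match_signatures(log_line: str) -> list[str]:
    """Return the name of every static signature the log line matches.

    A rule like "remote-exec-tool" fires on every occurrence, even where
    admins use the tool routinely; a novel attack that matches no
    signature is simply never seen.
    """
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(log_line)]
```

For example, `match_signatures("admin launched PsExec on host-12")` fires even if that is a nightly maintenance job, while a previously unseen technique returns an empty list and goes unnoticed.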
Endpoint protection platforms and antivirus tools only provide visibility for devices that can be instrumented, and while the vast majority of data centre endpoints can be managed this way, the exploding populations of IoT and OT devices cannot. Even for devices that can be managed by these tools, instrumenting them with the correct agent typically requires a manual process.
Respond to alerts that matter
Legacy technologies like IDS have given network security a bad reputation, but the network is actually incredibly valuable when it comes to detection and response.
Advances in machine learning (ML) and behavioural analytics have made it possible to not only analyse network traffic, but to derive high-fidelity detection that keeps teams focused on threats that matter most, rather than chasing false positives.
By analysing network traffic based on how complex the anomalous behaviour is, how likely it is to indicate a genuine issue, and how frequently it occurs, each alert can be assigned a risk score. Scoring alerts as low, medium or high statistically ranks their risk, meaning security teams can reclaim the time they currently spend investigating false positives.
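A scoring scheme along those lines can be sketched as a weighted combination of the three signals. The weights and thresholds below are purely illustrative assumptions, not values from any specific product; real systems derive them statistically from observed traffic.

```python
def score_alert(complexity: float, likelihood: float, frequency: float) -> str:
    """Combine three normalised signals (each 0.0-1.0) into a risk tier.

    Weights and cut-offs are illustrative only: a real detection engine
    would fit these statistically rather than hard-code them.
    """
    risk = 0.4 * complexity + 0.4 * likelihood + 0.2 * frequency
    if risk >= 0.7:
        return "high"
    if risk >= 0.4:
        return "medium"
    return "low"
```

Under this sketch, a complex, likely, frequently repeated behaviour scores "high" and rises to the top of the queue, while a one-off, low-likelihood anomaly scores "low" and no longer demands analyst time.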
ML and behaviour-based detections also have the advantage of being able to detect unknown attack vectors because they do not rely on signatures, as IDS and SIEM tools do. They are able to detect indicators of compromise (IOCs) that haven’t been widely identified by using ML to build predictive behaviour profiles.
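The core idea behind a behaviour profile can be illustrated with a deliberately minimal sketch: build a statistical baseline of a metric (say, bytes a host sends per hour) and flag observations that deviate sharply from it. This is an assumption-laden simplification of what production ML models do, but it shows why no signature is needed.

```python
import statistics

def is_anomalous(baseline: list[float], observation: float,
                 threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the baseline mean.

    A toy stand-in for a learned behaviour profile: the baseline is
    "normal" history for one host, and nothing about the anomaly needs
    to match a known signature.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold
```

For instance, if a host normally sends around 100 MB per hour, a sudden 500 MB transfer is flagged purely because it breaks the host’s own pattern, with no prior knowledge of the attack.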
Behaviour-based detection provides teams with more conclusive insights into security events and gives forensic-level evidence teams can use to understand and report the scope of the incident.
The network also has a major advantage over other data sets. Unlike logs, which can be erased, or agents, which can be detected and disabled by threat actors, the network is passive and out-of-band.
With more advanced threats and complex environments, security teams need greater visibility, more alert context, and the ability to respond to legitimate threats quickly. The network takes away the guesswork and means security teams are able to focus their efforts on threats which need further investigation or intervention, rather than low-level false positives.
Clearing the queue in an efficient way, based on real-time analytics, leads to happier SOC analysts and ultimately, better security for the organisation. Behaviour-based detection is the antidote to security team ‘alert fatigue’.