The Role of AI in Threat Detection & Hunting – Part 1
The Pros and Cons of AI
The threat landscape is ever-evolving, growing more complex and sophisticated and posing a significant challenge to organisations seeking to protect their critical assets. In this landscape, artificial intelligence (AI) has emerged as a transformative tool for threat detection and hunting, providing organisations with a proactive approach to cyber security.
In the context of threat detection and hunting, AI can be leveraged in one of two ways. First, AI is baked into endpoint and network detection tools, where it performs anomaly detection and other pattern-recognition tasks that would be impossible for a human analyst given the huge amounts of data to be analysed. Second, and more recently, AI can be used to search the internet for the latest threats, vulnerabilities, exploits and techniques. It can then take the results of these searches and propose threat detection rules, which it can also translate into various detection languages depending on the toolsets you would like to apply them to. These rules can then be deployed, tested and, ideally, verified by a human analyst before going live.
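To make the second workflow concrete, here is a minimal sketch in Python. The query_llm() helper is hypothetical (a stand-in for whichever LLM API your organisation uses), and the prompt, threat summary, and workflow are illustrative only, not a production design.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around a real LLM provider's API."""
    return "# placeholder: the model's draft rule would appear here"

def propose_detection_rule(threat_summary: str, target_language: str) -> str:
    """Ask the model to draft a detection rule for a given toolset."""
    prompt = (
        f"Based on this threat intelligence:\n{threat_summary}\n"
        f"Propose a detection rule in {target_language}, "
        "explaining each part of the rule in comments."
    )
    return query_llm(prompt)

# The draft is never deployed directly: it is queued for testing and
# human review first, as described above.
draft = propose_detection_rule(
    threat_summary="New phishing campaign abusing OAuth device-code flow",
    target_language="Sigma",
)
print(draft)
```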
The Benefits of AI in Threat Detection and Hunting
AI offers a multitude of benefits that empower organisations to combat cyber threats effectively:
- Scalability: AI algorithms can process and analyse vast amounts of data, far exceeding the capabilities of traditional security tools. This enables organisations to detect and respond to threats with greater efficiency, even as the volume and complexity of data continue to grow unabated. The data in question may be telemetry being fed into a SIEM tool, or threat intelligence available on the open internet.
- Automation: AI automation can streamline the threat detection and hunting process, freeing up security analysts to focus on more complex investigations and strategic initiatives. This improves response times and optimises the overall cost of security operations. Automating the creation of detection rules reduces the risk of new and emerging threats going undetected. This work can be carried out by a threat intelligence analyst and a detection engineer, but the process is time-consuming and never-ending; AI reduces both the cost and the delay of these activities.
Anomaly detection algorithms exemplify the practical application of AI in threat detection. These algorithms analyse network traffic, system logs, and user behaviour, identifying anomalies that could signal malicious activity. For instance, an anomaly detection algorithm might detect a sudden surge in network traffic from a specific source, indicating a potential compromise attempt.
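As an illustration of that traffic-surge scenario, the sketch below uses scikit-learn's IsolationForest, a common anomaly detection algorithm. The features and traffic figures are synthetic stand-ins for real flow records or SIEM telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: per-source traffic features (bytes/min, connections/min)
# drawn from normal operating ranges.
normal_traffic = rng.normal(loc=[500, 20], scale=[50, 5], size=(1000, 2))

# Fit the model on historical "normal" behaviour only.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score a new observation: a sudden surge from one source,
# far outside the learned baseline.
surge = np.array([[5000, 300]])
print(model.predict(surge))  # [-1] means flagged as anomalous
```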
Anomaly detection also leverages AI through machine learning algorithms that analyse threat intelligence data. These algorithms learn from extensive historical data, including known threat signatures, attack patterns and vulnerability information, enabling them to identify emerging threats that may not yet be recognised by security analysts. These AI capabilities are baked into the tools used by SOCs, and as AI continues to improve, we can expect to see fewer false positives from these algorithms, as well as fewer false negatives (malicious activity not detected as malicious).
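A minimal sketch of that learning-from-history idea, using scikit-learn's RandomForestClassifier on synthetic labelled data, might look like the following. The features (payload entropy and connections per minute) and all figures are illustrative assumptions, not real telemetry.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic historical events: [payload entropy, connections per minute].
benign = rng.normal([4.0, 20.0], [0.5, 5.0], size=(500, 2))
malicious = rng.normal([7.5, 200.0], [0.5, 30.0], size=(500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = known-malicious

# Learn the patterns that separate the two classes of historical events.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# A new, previously unseen event with malicious-looking characteristics
# is classified with high confidence despite never appearing in training.
print(clf.predict_proba([[7.2, 180.0]]))
```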
The Limitations of AI in Threat Detection and Hunting
While AI has revolutionised threat detection and hunting, it is not without its challenges and limitations:
- Hallucinations: In its current form, AI is susceptible to hallucinations, particularly in the context of Large Language Models (LLMs). A hallucination is when the LLM presents false information as fact, and unfortunately, it does this confidently; LLMs have not yet learned to say when they do not know the answer. For this reason, while AI can greatly increase the efficiency of a human analyst and carry out much of the legwork, most AI output needs to be verified in some way, usually by a human analyst, or have other safeguards put in place.
- Opacity: The inner workings of AI models are often opaque, making it difficult to understand their decision-making rationale. This opacity hinders troubleshooting and can lead to confusion when interpreting security alerts. It can be mitigated by training the AI not only to provide an answer or output, but also to explain its reasoning. For example, when creating a detection rule, which usually takes the form of code, code comments help the analyst understand how the code achieves its goal, allowing logical mistakes to be picked up more easily (see the sketch after this list). In the context of anomaly detection, a clear explanation of why an anomaly is a risk reduces the time an analyst needs to determine whether it poses a threat.
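As a simple illustration of a rule that carries its own reasoning, the sketch below shows a detection check whose comments record why each threshold was chosen. The rule logic, field names, and thresholds are hypothetical.

```python
from datetime import datetime, timedelta

# why: more than 10 failures inside a 5-minute window is far outside
# normal user behaviour and is a common signature of password spraying.
FAILED_LOGIN_THRESHOLD = 10
WINDOW = timedelta(minutes=5)

def is_password_spray(events: list[dict]) -> bool:
    """Flag a burst of failed logins against many accounts from one source."""
    failures = [e for e in events if e["outcome"] == "failure"]
    if len(failures) <= FAILED_LOGIN_THRESHOLD:
        return False
    # why: spraying targets many accounts, unlike brute force against one.
    distinct_accounts = {e["account"] for e in failures}
    span = max(e["time"] for e in failures) - min(e["time"] for e in failures)
    return len(distinct_accounts) > 5 and span <= WINDOW

# Example: twelve failures across twelve accounts within seconds is flagged.
now = datetime.now()
events = [{"outcome": "failure", "account": f"user{i}", "time": now}
          for i in range(12)]
print(is_password_spray(events))  # True
```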
In part 2, we delve into getting the most out of AI in threat detection and hunting, providing insights and recommendations on how to maximise the benefits of AI while working alongside human analysts.