The Future of Cyber Security: AI-Powered Playbooks
In the fast-paced landscape of cyber security, organisations are constantly looking for new and innovative ways to protect their critical assets from evolving threats. AI is making its presence felt in the realm of playbooks, automating tasks and simplifying the daily lives of security analysts. In this blog, we will delve into the exciting developments in AI-powered playbooks and look at what the future holds.
Streamlining Playbook Creation with AI
Traditionally, playbooks in Security Orchestration, Automation, and Response (SOAR) platforms have been designed by human analysts. In the near future, models will be trained and fine-tuned on playbook JSON paired with a description of what each playbook achieves, and with this training a model will be capable of creating new playbooks dynamically. As the pace at which new threats emerge increases, it will be important that playbook responses can adapt dynamically in real time.
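As a rough illustration of what that training data could look like, the sketch below pairs a playbook's JSON with a plain-language description of what it achieves. The playbook structure, field names, and prompt/completion format are assumptions made for illustration, not taken from any specific SOAR platform or model provider.

```python
import json

# Illustrative only: the playbook structure, field names, and training format
# below are assumptions, not drawn from any specific SOAR platform or model API.
examples = [
    {
        "description": "Isolate the affected host and notify the SOC channel when ransomware is detected.",
        "playbook": {
            "name": "ransomware_containment",
            "tasks": [
                {"id": 1, "action": "isolate_host", "inputs": {"host": "${incident.host}"}},
                {"id": 2, "action": "notify_channel", "inputs": {"channel": "#soc-alerts"}},
            ],
        },
    },
]

# Write prompt/completion pairs: the description becomes the prompt, and the
# playbook JSON becomes the output the model learns to generate.
with open("playbook_finetune.jsonl", "w") as f:
    for example in examples:
        record = {
            "prompt": example["description"],
            "completion": json.dumps(example["playbook"]),
        }
        f.write(json.dumps(record) + "\n")
```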
Automated Suggestions and Recommendations
A step beyond models creating playbooks from a given description is an AI agent observing and learning from repetitive analyst actions. From these observations, it will automatically recommend playbook and automation enhancements, and implement them once approved by the end user.
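A minimal sketch of that observation step is below, assuming a simplified audit trail of manual analyst actions. The action names and the suggestion threshold are invented for illustration; a real agent would work from the platform's own audit data.

```python
from collections import Counter

# Illustrative only: a simplified audit trail of manual analyst actions,
# recorded as one tuple of action names per incident.
observed_sequences = [
    ("enrich_ip", "check_reputation", "close_as_false_positive"),
    ("enrich_ip", "check_reputation", "close_as_false_positive"),
    ("enrich_ip", "check_reputation", "escalate_to_tier_two"),
    ("enrich_ip", "check_reputation", "close_as_false_positive"),
]

# A sequence repeated often enough becomes an automation suggestion that the
# analyst can approve before it is turned into a playbook.
SUGGESTION_THRESHOLD = 3
for sequence, count in Counter(observed_sequences).items():
    if count >= SUGGESTION_THRESHOLD:
        print(f"Suggested automation: {' -> '.join(sequence)} (seen {count} times)")
```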
Recommendations for Improvements to Detection Logic
As SOCs move to a detection-as-code model, AI will be able to have a real-time impact on the detection logic. For example, when an ingested incident is found to be a false positive, the detection rule can be tuned to exclude that detection in the future. With intelligent Large Language Models (LLMs), the model could ingest and review the detection rule and suggest how it could be improved to remove not just the current false positive but potential future false positives. Importantly, it could also make suggestions to ensure the detection logic does not overlook real incidents.
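A hedged sketch of how that review step might be framed is below. The Sigma-style rule and the false-positive note are both invented for illustration; in practice they would come from the detection-as-code repository and the incident record.

```python
# Illustrative only: the rule is a made-up Sigma-style detection and the
# incident note is invented; in practice both would come from the SOAR platform.
detection_rule = """\
title: Suspicious PowerShell Download
detection:
  selection:
    CommandLine|contains: "Invoke-WebRequest"
  condition: selection
"""

false_positive_note = (
    "Incident 4211 matched this rule but was a scheduled patching script "
    "run by the service account svc_patching."
)

# Prompt asking the model to tune the rule without weakening real coverage.
review_prompt = (
    "You are a detection engineer. Review the rule and the false positive it "
    "produced. Suggest a tuned rule that excludes this and similar benign "
    "activity without missing genuinely malicious use of the same technique.\n\n"
    f"Rule:\n{detection_rule}\nFalse positive:\n{false_positive_note}"
)

# The prompt would then be sent to whichever LLM integration the platform
# exposes, and the suggested rule handed to a human analyst for review.
```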
The Rise of Feedback-Driven and Chain of Thought Reasoning
AI systems are beginning to embrace feedback-driven learning, in which the system critiques its own output. This approach involves chaining prompts together and inviting self-evaluation: if an output is deemed incorrect or incomplete, the AI system refines its response, resulting in an iterative improvement process. Notably, this approach integrates well with SOAR platforms, allowing security teams to build their own feedback-driven AI models. The chain-of-thought capability allows models to interrogate their own answers with additional questions, which can have a large impact on accuracy. In the future, it's likely that chain of thought will be built into the LLMs themselves, but for now it can be recreated by chaining tasks together in a playbook.
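A minimal sketch of that chained self-critique pattern is below. Each call stands in for a separate playbook task, and ask_llm() is a placeholder for whatever LLM integration the platform provides, not a real API.

```python
# Illustrative only: ask_llm() is a placeholder for the SOAR platform's LLM
# task; each call below corresponds to a separate playbook task in the chain.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("replace with the platform's LLM integration")

def answer_with_self_critique(question: str, max_rounds: int = 2) -> str:
    answer = ask_llm(question)
    for _ in range(max_rounds):
        # Follow-up task: the model critiques its own previous output.
        critique = ask_llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "Is this answer correct and complete? Reply 'OK' or explain what is wrong."
        )
        if critique.strip().upper().startswith("OK"):
            break
        # Refinement task: the model improves the answer using its own critique.
        answer = ask_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nProvide an improved answer."
        )
    return answer
```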
The future of cyber security lies in embracing the promise of AI-powered playbooks. These playbooks will streamline security operations, learn from repetitive tasks, and offer invaluable recommendations. As the integration of AI continues to grow, so too will the capabilities of SOAR platforms, helping the defenders of the digital world stay one step ahead. With these advancements, analysts' skills will continue to evolve as they master the prompting and investigative techniques needed to drive remediation and upskill first-line support.