"Cybersecurity Telemetry Integration and Its Impact on AI Model Performance" – Brock Bell
“AI’s Impact in Cybersecurity” is a blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42 with roles in AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how PAN protects itself and its customers, as well as implications for the future of cybersecurity.
In these heady days of generative artificial intelligence (GAI) innovation, it seems like everything is possible. Large language models (LLMs) trained on massive data sets will help organizations work smarter and faster with fewer people. Intuitive interfaces and automated actions will lessen our reliance on specialized expertise, and CISOs will finally gain relief from the perennial talent shortage. What could go wrong?
Of course, in the world of security, “What could go wrong?” is never a rhetorical question. As we’ll see in the coming months, the rapid rate of tool development and adoption will (in some organizations) outpace due diligence, exposing them to unanticipated risks and unintended consequences. In 2024, enthusiasm about the transformative potential of AI for SecOps will be tempered by cautionary tales of blind spots, self-inflicted wounds and thinned-out security operations center (SOC) capabilities.
The Justified Excitement about AI for Cybersecurity
First, the good news: AI really can transform cybersecurity. ChatGPT-style interfaces will help SOC analysts answer questions more quickly and accurately about threats, vulnerabilities and anomalies. Silos will be broken down for a more unified view of the attack surface. Greater efficiency will vastly increase productivity, while AI-powered prevention, detection and response will close windows of vulnerability with unprecedented speed.
AI will also play a valuable role in democratizing cybersecurity. Brock Bell, director of global consulting operations for Unit 42, observes:
“As AI becomes normalized within cybersecurity in the next couple of years, it will be a big help for small and midsize businesses that are generally left behind by higher-end technologies. The introduction of AI will help reduce the complexities and overhead that have often existed and have made it more challenging for smaller organizations to reap the benefits of the rapid development and innovation in cybersecurity.”
When Innovation Outpaces Quality Control
While Bell is bullish on the promise of AI for cybersecurity, he also encourages caution for organizations along the way:
“We’re going to have a big ‘uh-oh’ moment because many companies are rushing things to market so quickly that they may be overlooking something in the data or the data models.”
As vendors and organizations pump data into their models, it’s easy to become complacent about the big-picture vision and lose sight of the particulars. Bell says:
“People start to get comfortable and forget to go back and truly quality check the implementations. We already see this kind of thing in investigations where people think they’re looking at all of the details of the traffic flowing in and out of their network, but they’re not paying enough attention to SSL decryption. These models are not going to be any good if you forget to flip that switch, and it can’t actually see inside that traffic.”
The real acceleration of AI comes if your model can see all the data it needs to.
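One practical way to act on this is to monitor whether each expected telemetry feed (including decrypted SSL traffic) is actually delivering events. Below is a minimal, hypothetical sketch of such a health check; the source names and the idea of pulling last-seen timestamps are assumptions for illustration, as in practice this data would come from your SIEM or data-pipeline health APIs:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical snapshot of the last event seen per telemetry source.
# In a real deployment these timestamps would come from the SIEM's
# data-source health interface, not a hardcoded dict.
last_event = {
    "firewall-logs": datetime.now(timezone.utc) - timedelta(minutes=5),
    "endpoint-edr": datetime.now(timezone.utc) - timedelta(minutes=12),
    "ssl-decrypted-traffic": datetime.now(timezone.utc) - timedelta(days=30),
}

def stale_sources(last_event, max_age=timedelta(hours=1)):
    """Return sources that have gone quiet -- likely blind spots."""
    now = datetime.now(timezone.utc)
    return [name for name, ts in last_event.items() if now - ts > max_age]

print(stale_sources(last_event))  # flags only the SSL decryption feed
```

A check like this, run on a schedule, turns "did we forget to flip that switch?" from a post-incident discovery into a routine alert.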
The Self-Inflicted Wounds of Automation
The vision of detection and prevention that many organizations hold remains, at this stage of AI development, a work in progress. Innovation continues apace, but implementing these technologies is not yet as quick as many may think or hope. Each implementation of an AI-enhanced detection and prevention platform still takes hundreds of hours of onboarding, playbook creation and fine-tuning to genuinely reap the advertised benefits. Organizations that overestimate the current state of automation may be doing more harm than they realize.
Bell explains further:
“I’ve seen a lot of situations where things are contained, shut down, reversed or otherwise altered in an automated way because a risk has been perceived, and it creates self-inflicted wounds by damaging production services or keeping people from being able to do their jobs. It’s a net plus in concept, but it doesn’t work very well if we don’t have appropriate ways of testing playbooks and carving out exceptions. In general, the more you try to leverage AI and automation, the fewer exceptions you can have, because the models aren’t quite ready for them. As the pace of engineering presses on, it remains critical to have a human element reviewing actions taken periodically, so that the platforms can be tuned to an appropriate level to create success for an organization.”
As you roll out automation, be sure to have clear lines of communication across your stakeholders. Ensuring that your business units, application owners and end user populations have a clear way to escalate issues based on impact is critical to mitigate undesirable business impact and a negative perception of the security and IT teams involved in the rollout.
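The exception handling Bell describes can be made concrete with a simple guard in the response playbook. The sketch below is hypothetical (the asset names and callback structure are assumptions, not a real SOAR API): assets on an exception list are escalated to a human reviewer instead of being automatically contained.

```python
# Hypothetical guard for an automated containment playbook.
# Business-critical assets are carved out as exceptions: a perceived risk
# on one of them routes to human review rather than automated shutdown,
# avoiding self-inflicted damage to production services.
PRODUCTION_EXCEPTIONS = {"payments-db-01", "erp-app-02"}  # assumed names

def respond(host, alert, contain, escalate):
    if host in PRODUCTION_EXCEPTIONS:
        return escalate(host, alert)   # human-in-the-loop review
    return contain(host, alert)        # safe to automate

actions = []
respond("payments-db-01", "ransomware-ioc",
        contain=lambda h, a: actions.append(("contain", h)),
        escalate=lambda h, a: actions.append(("escalate", h)))
respond("dev-laptop-42", "ransomware-ioc",
        contain=lambda h, a: actions.append(("contain", h)),
        escalate=lambda h, a: actions.append(("escalate", h)))
```

The point of the design is the periodic human review Bell calls for: the exception list shrinks only as the playbooks earn trust through testing.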
The Dangers of SOC Dumbification
In the longer term, AI may not only empower junior cybersecurity personnel or supplement their ranks but take on higher-level roles in its own right. Bell says, “The speed of engineering is incredible. Many years out, we could see AI truly thinking like a SOC manager, asking its own questions, and feeding the answers to the CISO itself.”
Yet the advance of artificial expertise can have a less than salutary impact on the human expertise available in the SOC. “We lean into all these wonderful cybersecurity platforms and processes that take so much data and abstract it to a more simplistic level of detail to make things easier for SOC analysts to act quickly,” notes Bell.
“But then, as soon as you need to break that model, because you’ve got a legacy business segment or an acquisition or a group that wasn’t worth the fancy new technology, your SOC analysts don’t know what to do. They don’t know where to get the data or how to interpret it. I see this all the time in client environments, where their SIEM or other security platform isn’t receiving the telemetry for one specific part of the organization. This creates a blind spot, which often leaves SOC analysts with a gap in their playbooks. Ultimately, this becomes one of the core reasons why incident response retainers are triggered to fill in the gap.”
In that sense, the best way for CISOs to prepare for AI implementation is to never lose sight of due diligence with any new addition to the SOC. Think about the time required to implement the technology for new business segments, such as acquisitions, and augment your playbooks with flows for data that may never make it into the SIEM for analysis because of cost/bandwidth trade-offs. If your organization is leaning into these incredibly powerful platforms, be sure your SOC can still succeed on the occasional issue that falls outside of them.
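The blind spots Bell describes, such as an acquisition whose systems never reach the SIEM, can be surfaced with a basic coverage diff. The sketch below is illustrative only; the asset names are made up, and in practice the two sets would be pulled from your CMDB or asset inventory and from the SIEM's list of reporting hosts:

```python
# Hypothetical coverage check: diff the asset inventory against hosts
# actually seen in the SIEM to surface telemetry blind spots, such as
# a newly acquired business unit that hasn't been onboarded yet.
cmdb_assets = {"hq-web-01", "hq-db-01", "acq-fileserver-01", "acq-mail-01"}
siem_reporting = {"hq-web-01", "hq-db-01"}

blind_spots = sorted(cmdb_assets - siem_reporting)
print(blind_spots)  # ['acq-fileserver-01', 'acq-mail-01']
```

Anything this check flags is exactly where analysts will need the manual playbook flows described above, because the platform cannot help with data it never receives.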
Running tabletop scenarios that break the mold of the playbook, and supporting continuous education around digital forensics and incident response, are cornerstone components of mitigating the risks of SOC dumbification. If those aren’t viable for your organization, at least ensure you have a quality incident response partner on retainer to assist in times of need. From staying vigilant about blind spots and realistic about automation, to preserving vital institutional expertise, the success of AI-powered cybersecurity will depend on the mindfulness and judgment of the personnel flipping the switch.
For more information, visit our interactive page, The Resilient SOC, Essential Reading for CISOs.