Kyle Wilhoit — Co-pilots in AI
“AI’s Impact in Cybersecurity” is a blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42, with roles in AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, and the implications for the future of cybersecurity.
In our continued dialogue on AI’s impact on cybersecurity, we interviewed Kyle Wilhoit, director of threat research at Unit 42 Threat Intelligence. Kyle leads teams that perform threat hunting, research, and intelligence gathering and analysis on targeted attackers, from both a nation-state espionage and a cybercriminal perspective. Kyle shares his thoughts and predictions on the current and future implications of AI for the threat landscape.
Short-Term Impacts of AI in Cybersecurity
When discussing AI's role in cybersecurity, it's essential to consider the short-term impacts, as this is somewhat uncharted territory requiring a keen perspective beyond the hype. Kyle believes that AI will play a crucial role in extensive disinformation campaigns on social media platforms. These campaigns may occur before, during and after major global events, such as geopolitical conflicts and elections, undermining trust in verifiable facts. While concrete evidence may be lacking, open-source intelligence suggests an increasing trend in the use of AI in disinformation campaigns.
Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution, recently sounded a warning on the potential for this use case, stating, “There's going to be a tsunami of disinformation in the upcoming election. Basically, anybody can use AI to create fake videos and audio tapes. And it's going to be almost impossible to distinguish the real from the fake.”
In our conversation, Kyle echoes that sentiment, highlighting the creation of deepfakes. Although not directly related to traditional IT security, deepfakes are increasingly being combined with social engineering attacks. This intersection of technology and manipulation is a concern for security practitioners, especially as algorithms become increasingly honed to bypass traditional defenses that are unable to thwart these more sophisticated attacks.
Mid-Term Impacts — Sharpening the Tools
In the mid-term, Kyle predicts that threat actors will sharpen their tools with AI. For example, attackers could instruct AI models to identify an organization's internet-exposed assets and the services running on them, helping pinpoint vulnerabilities that may be present in those services. Kyle dives in a bit further:
"I anticipate a continued rise in the sophistication of threats, with malicious actors refining their skills with the assistance of generative AI. It’s important to note that while generative AI technologies, such as language models, are currently being used as assistants in existing operations, they haven't yet been fully utilized for creating malware or conducting large-scale malicious campaigns.
However, I do anticipate an increase in the use of these technologies for enhancing attacks in the near term. This might include the improvement and refinement of spear phishing email text, which could become more convincing by using correct geographical references, slang and grammatically accurate text to target native language speakers, as an example.
Additionally, I foresee the short-term potential use of generative AI for reconnaissance purposes, such as gathering victim-specific information pre-compromise, like netblock ranges and open services/ports, and possibly even assisting in vulnerability scanning."
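To make that pattern concrete, here is a minimal sketch of what AI-assisted triage of scan output could look like. Everything in it is an illustrative assumption rather than a documented attacker toolchain: it assumes the OpenAI Python SDK with an API key configured, "gpt-4o-mini" as the model name, and hypothetical scan data. Notably, defenders can apply the same pattern to their own attack surface management.

```python
# Minimal sketch: using a language model to triage port-scan output.
# Assumptions (not from the article): the OpenAI Python SDK is installed,
# OPENAI_API_KEY is set in the environment, and "gpt-4o-mini" is an
# available model name.
from openai import OpenAI

# Hypothetical scan output for a host in a documentation IP range.
scan_results = """
203.0.113.10  22/tcp   open  ssh    OpenSSH 7.4
203.0.113.10  80/tcp   open  http   Apache httpd 2.4.6
203.0.113.10  443/tcp  open  https  nginx 1.14.0
"""

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are an attack surface analyst. Given scan output, "
                "list services whose versions are outdated or commonly "
                "associated with known vulnerabilities."
            ),
        },
        {"role": "user", "content": scan_results},
    ],
)

# Print the model's first-pass assessment of the exposed services.
print(response.choices[0].message.content)
```

The point of the sketch is the workflow, not the specific API: raw reconnaissance data goes in, and a prioritized starting point comes out, which is exactly the kind of effort reduction Kyle describes.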
Long-Term Impacts — AI-Powered Security Co-Pilots
Looking further into the future, Kyle envisions the possibility of threat actors developing their own AI-powered security co-pilots as one example of progression. These co-pilots could assist attackers in several parts of the attack chain, such as lateral movement and privilege escalation within compromised environments. While this concept is not yet fully realized, there is growing attention and discussion in underground forums and on social networks about how attackers can leverage AI technology. Kyle shares his thoughts on co-pilots in security:
"Realistically, over the past six months, throughout the InfoSec community, we've heard a lot about security co-pilots, meaning a co-pilot that can help you with day-to-day operational tasks that some would consider oftentimes mundane. From this perspective, I could foresee in the future, meaning a more long term future, that threat actors might develop their own security co-pilots that might actually assist them in lateral movement or might assist them in privilege escalation, etc. A co-pilot that would be sitting next to you, basically giving you hints and helping you along the way as you continue to compromise a victim environment.
Imagine having a security co-pilot sitting in the SOC with you that does a lot of the rudimentary priority one [P1] alert triage. That not only cuts down the headcount you would need for that role, but it also takes out the manual time it takes to investigate a false positive or false negative. So, I don't know if it would necessarily reshape the future, but I think it would certainly impact budgets.

I also think it could certainly impact workflows and could ultimately speed up the time it takes a SOC to respond to incidents, just because it cuts down some of the manual effort required to do that log and alert analysis, at least for P1-style alerts."
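To illustrate the defensive side of that idea, below is a minimal sketch of a first-pass P1 triage step. It assumes the OpenAI Python SDK, an illustrative model name and a hypothetical alert schema; a real deployment would add guardrails, validate model output and keep a human analyst in the loop before any response action.

```python
# Minimal sketch of the SOC co-pilot idea Kyle describes: first-pass
# triage of a P1 alert by a language model. The alert fields, prompt,
# and model name are illustrative assumptions, not a product's API.
import json

from openai import OpenAI

client = OpenAI()

def triage_alert(alert: dict) -> str:
    """Ask the model for a first-pass verdict on a single alert."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a SOC triage assistant. Classify the alert "
                    "as 'likely true positive' or 'likely false positive' "
                    "and give a one-sentence rationale."
                ),
            },
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    return response.choices[0].message.content

# Hypothetical P1 alert pulled from a SIEM queue.
alert = {
    "severity": "P1",
    "rule": "Multiple failed logins followed by success",
    "user": "svc-backup",
    "source_ip": "198.51.100.23",
    "count": 47,
}
print(triage_alert(alert))
```

Even a rough verdict-plus-rationale like this can shave minutes off each false positive investigation, which is where the budget and workflow impact Kyle mentions would come from.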
AI and the Future of Cybersecurity
In summary, AI is poised to play an increasingly significant role in the field of cybersecurity. As the technology evolves, it will likely shape the way security practitioners and SOC personnel approach their roles. While short-term impacts may include disinformation campaigns and deepfakes, mid-term developments will focus on sharpening tools throughout the attack chain, for tasks such as reconnaissance and spear phishing. In the long term, we may see the emergence of AI-powered security co-pilots used to enhance malicious activities.
Kyle’s insights provide a glimpse into what is possible, emphasizing the need for security professionals to stay vigilant and adapt to the evolving threat landscape.
As the cybersecurity field continues to evolve, it's clear that AI will be a driving force in both defensive and offensive strategies. Organizations and security practitioners must stay informed, adapt their defenses, and be prepared to counter AI-driven threats effectively.
Learn more about how AI is improving SecOps. Take the Cortex XSIAM Product Tour and see a day in the life of a SecOps analyst. Experience firsthand the power of AI to simplify security operations, stop threats at scale and accelerate incident remediation.