
GPT has entered the security threat intelligence chat 


Apr 11, 2023



In enterprise security, speed is everything. The quicker an analyst can pinpoint legitimate threat signals, the faster they can identify whether there’s a breach, and how to respond. As generative AI solutions like GPT develop, human analysts have the potential to supercharge their decision making. 

Today, cyber intelligence provider Recorded Future announced the release of what it claims is the first AI for threat intelligence. The tool uses the OpenAI GPT model to process threat intelligence and generate real-time assessments of the threat landscape. 

Recorded Future trained OpenAI’s model on more than 10 years of insights from its research team (including 40,000 analyst notes), alongside 100 terabytes of text, images and technical data drawn from the open web, the dark web and other technical sources, making it capable of producing written threat reports on demand. 

Above all, this use case highlights that generative AI tools like ChatGPT have a valuable role to play in enriching threat intelligence by providing human users with reports they can use to gain more context around security incidents and how to respond effectively. 



How generative AI and GPT can help give defenders more context 

Breach detection and response remains a significant challenge for enterprises, with the average data breach lifecycle lasting 287 days — that is, 212 days to detect a breach and 75 days to contain it. 

One of the key reasons for this slow time to detect and respond is that human analysts have to sift through a mountain of threat intelligence data across complex cloud environments. They then must interpret isolated signals presented through automated alerts and make a call on whether this incomplete information warrants further investigation. 

Generative AI has the potential to streamline this process by enhancing the context around isolated threat signals so that human analysts can make a more informed decision on how to respond to breaches effectively. 

“GPT is a game-changing advancement for the intelligence industry,” said Recorded Future CEO Christopher Ahlberg. “Analysts today are weighed down by too much data, too few people and motivated threat actors — all prohibiting efficiency and impacting defenses. GPT enables threat intelligence analysts to save time, be more efficient, and be able to spend more time focusing on the things that humans are better at, like doing the actual analysis.”

In this sense, by using GPT, Recorded Future enables organizations to automatically collect and structure data from text, images and other technical sources, applying natural language processing (NLP) and machine learning (ML) to develop real-time insights into active threats. 

“Analysts spend 80% of their time doing things like collection, aggregation, and processing and only 20% doing actual analysis,” said Ahlberg. “Imagine if 80% of their time was freed up to actually spend on analysis, reporting, and taking action to reduce risk and secure the organization?” 

With better context, an analyst can identify threats and vulnerabilities more quickly, without conducting time-consuming manual threat analysis tasks. 

The vendors shaping generative AI’s role in security 

It’s worth noting that Recorded Future isn’t the only technology vendor experimenting with generative AI to help human analysts better navigate the modern threat landscape. 

Last month, Microsoft released Security Copilot, an AI-powered security analysis tool that uses GPT-4 and a mix of proprietary data to process alerts generated by SIEM tools like Microsoft Sentinel. It then creates a written summary of captured threat activity to help analysts conduct faster incident response.  

Likewise, back in January, cloud security vendor Orca Security (currently valued at $1.8 billion) released a GPT-3-based integration for its cloud security platform. The integration forwards security alerts to GPT-3, which generates step-by-step remediation instructions explaining how the user can respond to contain the breach.  
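The pattern described above, forwarding a structured security alert to an LLM and asking for numbered remediation steps, can be sketched roughly as follows. This is a minimal illustration, not Orca Security's actual integration: the alert fields, prompt wording and model choice are all assumptions for the example.

```python
import json

# Hypothetical alert payload -- the field names are illustrative,
# not any vendor's actual alert schema.
alert = {
    "severity": "high",
    "resource": "prod-k8s-node-3",
    "finding": "Container running with the privileged flag and an exposed SSH port",
}

def build_remediation_prompt(alert: dict) -> str:
    """Turn a structured security alert into a prompt asking an LLM
    for step-by-step remediation instructions."""
    return (
        "You are a cloud security assistant. Given the alert below, "
        "list numbered remediation steps an analyst can follow.\n\n"
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )

prompt = build_remediation_prompt(alert)
print(prompt)

# The prompt would then be sent to a model (requires an API key and
# network access), e.g. via OpenAI's chat completions endpoint:
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}],
#   )
#   print(resp.choices[0].message.content)
```

The point of the design is that the model receives the alert with full context in one request and returns prose an analyst can act on immediately, rather than the analyst assembling that context by hand.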

While all of these products aim to shorten the mean time to resolution of security incidents, what sets Recorded Future apart is not just its threat intelligence use case, but its use of the GPT model itself. 

Together, these use cases show that the role of the security analyst is becoming AI-augmented. AI's use in the security operations center is no longer confined to tools that send human analysts AI-driven anomaly detection alerts. New capabilities create a two-way conversation between the AI and the human analyst, letting users request threat insights on demand. 

