Black Hat 2023: “Adolescent” AI is not enough for cyber threat intelligence

esteria.white


Current LLMs are simply not mature enough for high-level tasks


Mention the term “cyber threat intelligence” (CTI) to mid-sized and large enterprise cybersecurity teams and the response is often “we are starting to look into it.” These are frequently the same companies struggling with a shortage of experienced, high-quality cybersecurity professionals.

At Black Hat 2023 this week, two members of the Google Cloud team demonstrated how large language models (LLMs), such as GPT-4 and PaLM, can play a role in cybersecurity, particularly in CTI, potentially easing some resource constraints. For the many cybersecurity teams still in the exploration phase of implementing a threat intelligence program, this may seem like a distant prospect; at the same time, it could solve part of their resource problem.

Related: A first look at threat intelligence and threat hunting tools

The Essentials of Threat Intelligence

A threat intelligence program needs three essential elements to be successful: threat visibility, processing capability, and interpretation capability. The potential impact of using an LLM is that it can significantly assist with processing and interpretation. For example, it might allow additional data, such as log data, to be analyzed that, due to its sheer volume, might otherwise be neglected. The ability to then automate the production of answers to business questions removes a significant task from the cybersecurity team's workload.

The presentation raised the idea that LLM technology may not be suitable for every case, and suggested it be focused on tasks that involve large volumes of data and require less critical thinking, leaving tasks that demand more critical thinking in the hands of human experts. One example given was translating documents for attribution purposes, an important point because an inaccuracy in attribution could cause significant problems for the business.

As with other tasks that cybersecurity teams are responsible for, automation should, at this time, be reserved for the lowest-priority and least critical tasks. This is not a reflection on the underlying technology, but rather an indication of where LLM technology is in its evolution. It was clear from the presentation that the technology has its place in the CTI workflow, but at present it cannot be fully trusted to return correct results, and in more critical circumstances a false or inaccurate response could cause a significant problem. This seems to be the consensus on LLM use in general; there are many examples where the generated output is somewhat questionable. One of the lead presenters at Black Hat put it perfectly, describing AI in its current form as “like a teenager, it makes things up, it lies and makes mistakes.”

Related: Will ChatGPT start writing killer malware?

The future?

I am certain that in just a few years we will entrust AI with tasks that automate some of the decision-making, for example changing firewall rules, prioritizing and remediating vulnerabilities, or automatically disabling systems in response to a threat. For now, though, we must rely on human expertise to make these decisions, and it is imperative that teams do not rush to deploy technology that is in its infancy into roles as critical as cybersecurity decision-making.
