Researchers discover new “conversation spillover” tactics


Threat researchers have revealed a new cyber attack that uses hidden text in emails to fool machine learning (ML) systems, enabling phishing messages to infiltrate corporate networks.

An advisory published today by SlashNext calls this tactic a “Conversation Overflow” attack, a method that bypasses advanced security measures to deliver phishing messages directly to victims’ inboxes.

The malicious emails are made up of two distinct components. The visible part prompts the recipient to take an action, such as entering credentials or clicking a link. Below it, a long run of blank lines separates the hidden section, which contains innocuous text resembling an ordinary email conversation.

This hidden text is designed to trick machine learning algorithms into classifying the email as legitimate, allowing it to bypass security controls and reach the target’s inbox.
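To illustrate the structural pattern described above, here is a minimal Python sketch of a heuristic a defender might apply: flag messages whose visible text is followed by an unusually long run of blank lines and then a second, unrelated block of text. The function name, threshold, and overall approach are illustrative assumptions, not SlashNext's detection logic.

```python
# Hypothetical sketch: flag a "Conversation Overflow"-style email body.
# The threshold and logic are illustrative assumptions only.

def looks_like_conversation_overflow(body: str,
                                     min_blank_run: int = 30) -> bool:
    """Flag a message whose visible text is followed by an unusually long
    run of blank lines and then a second block of text."""
    lines = body.splitlines()

    # Find the longest run of consecutive blank lines and where it ends.
    longest_run, current_run, run_end = 0, 0, -1
    for i, line in enumerate(lines):
        if line.strip() == "":
            current_run += 1
            if current_run > longest_run:
                longest_run, run_end = current_run, i
        else:
            current_run = 0

    if longest_run < min_blank_run:
        return False

    # Require substantive text both before and after the blank run.
    visible = "\n".join(lines[: run_end - longest_run + 1]).strip()
    hidden = "\n".join(lines[run_end + 1:]).strip()
    return bool(visible) and bool(hidden)


if __name__ == "__main__":
    sample = ("Please re-authenticate here: https://example.test\n"
              + "\n" * 40
              + "Thanks for the meeting notes, see you Thursday.")
    print(looks_like_conversation_overflow(sample))  # True
```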

This technique has been observed repeatedly by SlashNext researchers, indicating potential beta testing by malicious actors to evade artificial intelligence (AI) and ML security platforms.


Unlike traditional security measures that rely on detecting “known bad” signatures, machine learning systems identify anomalies based on “known good” communication patterns. By imitating benign communication, malicious actors exploit this aspect of ML to hide their malicious intentions.
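As a toy illustration of why padding a message with benign-looking text can shift such a classifier, consider a deliberately naive filter that scores an email by the share of tokens drawn from "known good" conversational vocabulary. Real ML engines are far more sophisticated; the vocabulary, scoring, and sample strings below are assumptions made purely for demonstration.

```python
# Toy illustration (assumptions throughout): padding a phishing lure with
# benign conversational text inflates a naive "known good" similarity score.

KNOWN_GOOD_VOCAB = {"meeting", "thanks", "agenda", "notes", "thursday",
                    "project", "regards", "team", "update", "schedule"}

def benign_score(text: str) -> float:
    """Fraction of tokens that match the 'known good' vocabulary."""
    tokens = [t.strip(".,:!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in KNOWN_GOOD_VOCAB)
    return hits / len(tokens)

phish = "Your session expired. Re-enter your password at http://example.test"
padding = ("Thanks for the meeting notes. The project agenda for Thursday "
           "looks good. Regards, the team. ") * 5

print(f"{benign_score(phish):.2f}")                   # low score
print(f"{benign_score(phish + ' ' + padding):.2f}")   # padded copy scores higher
```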

Once the message reaches the inbox, attackers deploy credential-stealing lures disguised as legitimate re-authentication requests, primarily targeting senior executives. Stolen credentials fetch high prices on dark web forums.

According to SlashNext, this sophisticated form of credential harvesting poses a significant challenge to advanced AI and ML engines, signaling a shift in cybercriminal tactics in the evolving AI-driven security landscape.

“From these results, we should conclude that cybercriminals are transforming their attack techniques in the emerging era of AI security,” the advisory reads. “As a result, we are concerned that this development reveals a whole new toolkit currently being refined by criminal hacking groups in real time.”

To defend against such threats, SlashNext recommends that security teams strengthen their AI and ML algorithms, conduct regular security awareness training, and implement multi-layer authentication protocols.
