The Stack Archive News Article

Researchers identify new AI-powered malware threat

Wed 8 Aug 2018


Researchers at IBM have identified a new class of cyberthreat that uses artificial intelligence to power malware that is both highly targeted and evasive.

While DeepLocker has yet to be seen outside of the research lab, all of the tools used to create it are readily available: existing malware, and AI tools that can be trained to recognize a target.

DeepLocker malware can remain undetected for lengthy periods, lying dormant until presented with an AI trigger – facial recognition, voice recognition, or geolocation – that identifies a specifically targeted individual. When the trigger is recognized, it acts as a key, activating the dormant malware on the system.

In a blog post describing the DeepLocker threat, researchers noted that it is particularly dangerous for two reasons: the fact that it avoids detection until activated, and also that “like nation-state malware it could infect millions of systems without being detected. But, unlike nation-state malware, it is feasible in the civilian and commercial realms.”

A further layer of concealment makes DeepLocker difficult to analyze: the trigger condition and attack payload are buried within the neural network model itself, frustrating attempts to reverse-engineer them either before or after the malicious code is executed.
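The idea of hiding a payload behind a model's output can be sketched in a few lines. The snippet below is a hypothetical, benign illustration (not IBM's implementation): a key is derived by hashing the model's output for the target, so an analyst who never supplies the triggering input cannot recover the key or the encrypted blob's contents. The embedding values and the XOR "cipher" are placeholders for illustration only.

```python
import hashlib

def derive_key(model_output: bytes) -> bytes:
    # Hash the (quantized) model output for the target to get a
    # stable 32-byte key; the key never appears in the binary.
    return hashlib.sha256(model_output).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR stream, standing in for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build time: encrypt the payload under the key for the target's
# face embedding (placeholder bytes here, not a real embedding).
target_embedding = b"\x01\x02\x03\x04"
payload = b"benign placeholder payload"
blob = xor_crypt(payload, derive_key(target_embedding))

# Static analysis sees only `blob`; only an input that reproduces
# the target embedding yields the correct key.
assert xor_crypt(blob, derive_key(target_embedding)) == payload
assert xor_crypt(blob, derive_key(b"\x05\x06\x07\x08")) != payload
```

Because the key exists only as a function of the triggering input, inspecting the code or the model weights alone reveals neither the trigger condition nor the payload, which matches the researchers' description of why DeepLocker resists analysis.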

As a demonstration of DeepLocker, IBM researchers inserted WannaCry ransomware into a video conferencing application, ensuring that it could bypass standard malware detection. Then, an AI model was trained to identify a specific person through facial recognition; when that person was detected on a video call, the ransomware was unlocked and executed on the system.

The implication is that, in similar circumstances, malicious code could be injected into an innocuous application and distributed on a large, international scale. The application would behave normally for all users until the AI model recognizes the target, with the individual's face acting as the key that unlocks the malware. Such malware could sit undetected on thousands of devices and execute only when activated by the presence of the targeted individual.

IBM will present its research on DeepLocker at the Black Hat USA 2018 conference, held this week in Las Vegas. The presentation will involve a demonstration of the potential threat, as well as a discussion on how security professionals can mitigate the risk of next-level cyberthreats involving AI.

