DeepLocker: new breed of malware that uses AI to fly under the radar

IBM researchers are seeking to raise awareness that AI-powered threats are coming our way soon. To that end, they’ve created an all-new breed of malware to provide insights into how to reduce risks and deploy adequate countermeasures.

DeepLocker was showcased at Black Hat USA 2018, the hacker conference that provides security consulting, training, and briefings to hackers, corporations, and government agencies globally.

Researchers Marc Ph. Stoecklin, Jiyong Jang, and Dhilung Kirat demonstrated how a piece of malware can be targeted at one specific person – and no one else – by training a neural network to recognize the victim’s face. The malware is obfuscated and hidden inside a legitimate program, in this case a video-conferencing app.

When the AI model recognizes its target, it produces the unlock key that de-obfuscates the hidden malware so it can execute. For this proof of concept, the researchers used WannaCry – the infamous ransomware that made headlines last year.

“What is unique about DeepLocker is that the use of AI makes the ‘trigger conditions’ to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model,” Stoecklin writes.

“The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying specific victims. The neural network produces the ‘key’ needed to unlock the attack. DeepLocker can leverage several attributes to identify its target, including visual, audio, geolocation and system-level features. As it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, this method would make it extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target,” Stoecklin explains.
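To make the mechanism concrete, the trigger can be thought of as key derivation from a model’s output: a candidate input (here, a face image) is embedded, the embedding is hashed into a symmetric key, and the payload only decrypts if that key matches. The sketch below is a minimal, benign illustration of that idea – embed_face, derive_key, and try_unlock are hypothetical names, the face-recognition DNN is stubbed out with a hash, and the “payload” is just a harmless string; none of this is the researchers’ actual code.

```python
# Conceptual sketch of DNN-keyed payload unlocking (benign demo).
# Assumes a hypothetical embed_face() mapping a face image to a stable
# embedding; a real system would use a face-recognition DNN here.
import hashlib
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-recognition DNN: returns an embedding vector.
    A real model (e.g., a FaceNet-style network) would go here."""
    digest = hashlib.sha256(image.tobytes()).digest()
    return np.frombuffer(digest, dtype=np.uint8).astype(np.float32) / 255.0

def derive_key(embedding: np.ndarray) -> bytes:
    """Quantize the embedding so small input noise maps to the same bytes,
    then hash it into a 256-bit AES key. The key is never stored anywhere;
    it only exists when the right face appears."""
    quantized = np.round(embedding, decimals=1).tobytes()
    return hashlib.sha256(quantized).digest()

# --- "Attacker" side: lock a harmless payload to the target's face ---
target_face = np.full((64, 64), 7, dtype=np.uint8)   # stand-in target image
nonce = b"\x00" * 12                                 # fixed nonce, demo only
locked = AESGCM(derive_key(embed_face(target_face))).encrypt(
    nonce, b"payload unlocked", None)

# --- Runtime side: try to unlock with whatever face the camera sees ---
def try_unlock(camera_frame: np.ndarray) -> bytes | None:
    key = derive_key(embed_face(camera_frame))
    try:
        return AESGCM(key).decrypt(nonce, locked, None)  # fails unless keys match
    except InvalidTag:
        return None                                      # wrong face: stays locked

print(try_unlock(np.zeros((64, 64), dtype=np.uint8)))  # None (non-target)
print(try_unlock(target_face))                         # b'payload unlocked'
```

Because the key exists only transiently, when the right input is seen, an analyst inspecting the binary finds an encrypted blob and a neural network – but neither the target’s identity nor the payload, which is exactly the property Stoecklin describes.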

The novel method provides three layers of concealment: target class concealment, target instance concealment, and malicious intent concealment.

When launched, the video-conferencing app feeds images of the subject into the embedded AI model, while behaving normally for everyone else using the app on their own machines.

“When the victim sits in front of the computer and uses the application, the camera would feed their face to the app, and the malicious payload will be secretly executed, thanks to the victim’s face, which was the preprogrammed key to unlock it.”
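Continuing the sketch above, the runtime side reduces to a loop over camera frames: the app keeps working normally, and unlocking is just a side check per frame. The OpenCV capture below is an assumed setup for illustration, reusing the hypothetical try_unlock from the earlier sketch.

```python
# Hypothetical runtime loop, reusing try_unlock() from the sketch above.
import cv2  # assumed OpenCV dependency for webcam capture

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # single-channel face image
    payload = try_unlock(gray)
    if payload is not None:
        print(payload)  # in DeepLocker, the decrypted payload would run here
        break
cap.release()
```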

The aim of the team’s briefing is not to give bad actors ideas, but to raise awareness of rising AI-powered threats. Defenders, too, will need to leverage AI to build defenses against these new types of attack, the team said.

“Current defenses will become obsolete and new defenses are needed,” the trio conclude in their presentation.

