Remember that old saying: “if you can’t beat them, join them.” Well, as far as cyber warfare is concerned, the hackers have turned this rusty saying into a credo.
In an article published on The Next Web, Isaac Ben-Israel, director of the ICRC and chairman of Cyber Week, noted that AI is both our salvation and our downfall: it matters little how advanced or hardened our cyber defense techniques are, because hackers are always one step ahead.
So, what happens when the AI that's supposed to protect our networks and computers goes rogue?
Fighting fire with fire?
It's not in the least surprising to find out that hackers have begun using our own weapons against us. What did you think? That they would stick to older methods like DoS, malware, and man-in-the-middle attacks?
Of course not; times are changing, and only a fool would stand still while everything around him changes and evolves. To say that the very fine and very illegal art of hacking has evolved would be a major understatement.
According to Ben-Israel, hackers are now using machine learning techniques and deploying rogue AIs in a bid to counter any and every security measure we can come up with. And yes, they can also make AI turncoats.
As far as these state-of-the-art methods of committing cybercrime go, by far the most 'popular' appears to be AI co-option. Basically, hackers coax the AI into turning against you by befriending it.
To do that, hackers need to disrupt the machine learning patterns. Let's backtrack a bit: at the very core of AI is machine learning, a technique an AI uses to make assumptions based on patterns observed in data streams.
By analyzing these patterns, the AI learns, grows, and gets better. Now, hackers can theoretically disrupt these patterns by injecting fake data.
The AI will simply label these infiltrations as safe activities and ignore them. Obviously, once the big daddy turns his back, the hackers will be able to steal data unhampered.
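To make the idea concrete, here is a minimal sketch of this kind of data poisoning. The detector, thresholds, and traffic features below are entirely hypothetical: a toy anomaly detector learns what "normal" traffic looks like, and an attacker who can slip fake "normal" samples into its training stream shifts that baseline until the actual attack no longer stands out.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_detector(normal_traffic, k=3.0):
    """Toy anomaly detector: flag anything far from the mean of 'normal' data."""
    mean = normal_traffic.mean(axis=0)
    dists = np.linalg.norm(normal_traffic - mean, axis=1)
    threshold = dists.mean() + k * dists.std()
    def is_malicious(x):
        return bool(np.linalg.norm(x - mean) > threshold)
    return is_malicious

# Legitimate traffic clusters around one region of feature space.
normal = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(200, 2))
attack = np.array([6.0, 6.0])  # clearly anomalous traffic

detector = train_detector(normal)
print(detector(attack))  # True -> the attack is flagged

# Poisoning: attacker injects fake "normal" samples near the attack region,
# dragging the learned baseline toward it.
poison = rng.normal(loc=[6.0, 6.0], scale=0.2, size=(200, 2))
poisoned_detector = train_detector(np.vstack([normal, poison]))
print(poisoned_detector(attack))  # False -> the attack now looks normal
```

Real intrusion-detection models are far more sophisticated, but the principle scales: if an attacker can influence what the model treats as normal, the model's blind spots become the attacker's front door.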
Sounds like something out of a science-fiction movie or novel, but it’s not. Unfortunately, we are at war; not the kind you would fight with guns and tanks and grenades, but one fought from behind the screen.
And this is hardly the only AI-based hacking method. As Ben-Israel stated, another way of "outfoxing" AI security systems is to make very small changes to various virtual objects.
For instance, a team of cybersecurity researchers from Japan’s Kyushu University managed to trick an AI into believing that the image of a kitten was actually depicting a dog or a stealth fighter.
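This is the realm of adversarial examples. The sketch below is not the Kyushu team's actual method; it is a toy illustration of the same idea using a made-up linear "image" classifier, where a tiny per-pixel nudge in the right direction flips the model's answer while the image barely changes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear classifier over a flattened 8x8 "image":
# positive score -> "kitten", negative score -> "dog".
w = rng.normal(size=64)  # pretend these are learned weights

def classify(pixels):
    return "kitten" if pixels @ w > 0 else "dog"

image = 0.05 * np.sign(w)  # a toy image the model confidently calls "kitten"
print(classify(image))     # kitten

# Adversarial perturbation: nudge every pixel a tiny step against the weights.
eps = 0.1
adversarial = image - eps * np.sign(w)
print(np.abs(adversarial - image).max())  # 0.1 -- a barely visible change
print(classify(adversarial))              # dog
```

The unsettling part is that, to a human, the perturbed image looks essentially identical to the original; only the model is fooled.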
Last, but not least, is the so-called bobbing-and-weaving method, which involves feeding processes, signals, or a combination of both to the AI.
Since these signals do not impact security, the AI will simply ignore them. Of course, hackers will use this blind spot to launch full-scale cyber attacks without the AI lifting a finger.
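The article gives few details on how this works in practice, but one plausible reading is classic "low and slow" evasion: keeping each individual signal below whatever threshold the monitoring AI treats as significant. The rule, threshold, and numbers below are all hypothetical, purely to illustrate the idea.

```python
# Hypothetical IDS rule: flag any source sending more than 100 requests/minute.
THRESHOLD = 100

def flagged(requests_per_minute):
    """Toy detector that only reacts to bursts above the threshold."""
    return requests_per_minute > THRESHOLD

# Blunt approach: exfiltrate 6000 records in a one-minute burst -> caught.
print(flagged(6000))  # True

# Bobbing and weaving: spread the same 6000 requests over a full hour,
# so each minute's traffic sits at or below the threshold the AI watches.
per_minute = 6000 / 60
print(flagged(per_minute))  # False -- the detector never reacts
```

Each individual signal looks harmless, so the system stays quiet while the full attack plays out underneath it.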
Is this a reason for concern? It most certainly is. And there's no point in burying our heads in the sand, not this time, because it's only going to get worse.
Let this be a wake-up call for all those who think they're out of harm's way just because they've spent a lot of cash on cybersecurity. So, what's your take on Ben-Israel's story? Head to the comments section and let me know.