Robert Naples is a nationally recognized business strategist writing on cybersecurity and digital transformation.
When it comes to AI, we live in the calm before the storm, a tipping point before everything changes.
Consider the last two months. ChatGPT by OpenAI – the newest and most advanced chatbot readily available to the public – has threatened to revolutionize plagiarism with its ability to generate content on almost any topic from simple prompts. Just over a month after its release, a survey of over 1,000 students found that 89% were already using it to help with their homework. Although these reported uses were relatively harmless, the same software can help students write entire assignments in minutes or even pass rigorous written exams in a range of subjects.
AI can benefit society in incredible ways. At the same time, it can enable misdeeds, malicious actors, and outright crime. Just as students will use it to cheat in class, politicians could use it to spread propaganda, and corporations could use it to mine our data.
And hackers, perhaps more than any other group, will use AI to make cyberattacks more dangerous and to deploy them on a scale never before imagined.
How AI will change hacking
In a way, future AI cyberattacks will confirm our deepest concerns.
For example, look at what happened on a Friday afternoon in March 2019. The managing director of a UK energy company received a call from his boss, ordering him to urgently wire over $240,000 to a supplier in Hungary to avoid late fees. It was an odd request, admittedly, but the voice on the phone sounded perfectly normal, apart from being unusually insistent. And so the director did as he was told.
The money went straight to hackers, who had cloned the boss's voice using readily available software.
The story is ominous, but what we really have to worry about isn't the stuff of Hollywood. The greatest threat to our modern internet – and perhaps to society at large – may come from the ordinary, almost mundane ways that criminals can leverage AI to scale their attacks.
In December, researchers at Check Point Software watched everyone using ChatGPT to write poetry and schoolwork, and they wondered: Could it write a phishing email? What about an entire attack chain?
They experimented by first asking ChatGPT to write a decoy email encouraging victims to download an Excel attachment. After a few iterations, the email hit all the notes hackers love: false legitimacy, a sense of urgency, and an effective call to action.
Next, the researchers had ChatGPT write a macro that ran custom code whenever a victim opened the Excel file. Then they used another OpenAI program – Codex – to automatically generate a working reverse shell through which they could run commands, collect data, and send more malware to the targeted computer. They also had Codex develop a port scanner and virtual machine identification tool.
The result was a fairly paint-by-numbers attack, but not bad for an afternoon's work. And in theory, any number of non-OpenAI programs could have helped improve it.
For example, there is PassGAN, a neural network-based program that trains on passwords from past data leaks to crack new ones far more efficiently than brute-force guessing. In a 2019 study, it outperformed even cutting-edge, machine learning-based password-guessing tools.
Even more insidious than PassGAN is DeepLocker, a deep learning-powered malware proof of concept that, according to its creators at IBM, “can infect millions of systems without being detected.” Only after it identifies its intended target – through factors such as geolocation, voice and facial recognition – does it reveal itself and strike with laser precision.
What the tech industry can do
We won’t be able to prevent AI-driven cybercrime, but we can work to prevent the worst and make our internet even safer than it is today.
Barely a month after ChatGPT was made public, as teachers were complaining about its potential for enabling cheating, a student took it upon himself to fix the problem. Edward Tian, a 22-year-old Princeton senior, developed an app called GPTZero that can quickly and reliably detect whether an essay was written by ChatGPT or a human.
Put simply: AI can be weaponized by the good guys just as easily as by the bad guys.
To round out their experiment, Check Point's researchers used Codex to mine key data from large security resources like YARA and VirusTotal. That was just one limited use case. Across our inboxes and IT networks, AI tools have long played a pivotal role in detecting malicious emails and identifying anomalous behavior far faster than humans ever could. Not to mention the role of AI in collecting and managing data, automating tasks, and informing risk assessments.
Like Godzilla versus King Kong, AI-powered cybercrime can only be fought with AI-enhanced cybersecurity. This is the next frontier, and it is already unfolding before our eyes.