Generative AI like ChatGPT will be weaponised by scammers in cybersecurity arms race, experts warn
- Generative artificial intelligence is allowing scammers to mimic voices, write more sophisticated phishing emails, and create malware
- Cybersecurity firms are deploying AI themselves to keep up with rapidly evolving threats
Generative artificial intelligence (AI) poses a slew of cybersecurity risks, particularly in social engineering scams that are prevalent in Hong Kong, according to a cybersecurity firm that is also deploying AI to counter the threat.
The emergence of advanced generative AI tools such as ChatGPT will enable certain types of scams to become more common and effective, said Kim-Hock Leow, Asia CEO of Wizlynx Group, a Switzerland-based cybersecurity services company.
“We can see that AI voice and video mimicking continues to seem more genuine, and we know that it can be used by actors looking to gain footholds in a company’s information and cybersecurity [systems],” he said.
Social engineering scams, such as those conducted over the phone or through phishing emails, are designed to fool victims into believing they are conversing with an authentic person on the other end.
In Hong Kong, scams conducted through online chats, phone calls and text messages have swindled victims out of HK$4.8 billion (US$611.5 million). AI-generated audio, video and text are making these types of scams even harder to detect.
In one example from 2020, a Hong Kong-based manager at a Japanese bank was fooled by deepfake audio mimicking his director’s voice into authorising a transfer request for US$35 million, according to a court document first reported by Forbes.