Published: 09:51, March 7, 2024 | Updated: 10:18, March 7, 2024
Deepfake video scams prompt police warning
By Yang Zekun

Fraud makes use of AI technology to steal $25.5 million from HK company

This file photo dated Aug 21, 2019 shows Hong Kong Police Headquarters in Hong Kong. (PHOTO / XINHUA)

Police are warning businesses and individuals about the growing threat of "AI deepfake" scams after a case in Hong Kong in which a company was deceived out of HK$200 million ($25.5 million) through the use of video conferencing technology.

In the elaborate scheme, a financial employee at the Hong Kong branch of a multinational company received a message purporting to be from the company's chief financial officer in the United Kingdom. The message invited him to participate in a confidential video conference to discuss a transaction.

During the multi-person video call, he interacted with individuals who appeared to be the company's senior executives. However, the participants were actually deepfakes, created using artificial intelligence to superimpose the faces of real people onto fraudulent video footage.


Believing the video call to be legitimate, the victim transferred HK$200 million to designated bank accounts in 15 installments, as instructed by the deepfakes. It was only days later, upon contacting the company's headquarters, that he realized the company had been targeted by a sophisticated scam.

Police investigators determined that the deepfakes used readily available online videos of the executives. The scammers then employed AI to synchronize the video footage with pre-recorded voices and potentially pre-scripted dialogue. To avoid suspicion, the deepfakes reportedly refrained from engaging in extended conversations during the video call, focusing solely on issuing directives to the victim.

In another case, a female financial employee surnamed Zhang in Xi'an, Shaanxi province, was deceived into transferring 1.86 million yuan ($258,000) to a designated account after having a video call with someone she believed to be her boss, who was actually being impersonated by fraudsters.

"The other party asked me to transfer the money quickly, saying it was for urgent use," Zhang said. "His voice and video image were identical to my boss's, so I trusted what he said."

After the transaction was completed, Zhang called her boss for verification, and he told her he had not requested the money transfer.

Zhang immediately called the police for help. The police in Xi'an coordinated with the provincial anti-fraud center and contacted the banks involved to arrange the emergency freezing of the transfer, ultimately saving 1.56 million yuan.

Xi'an police said AI-powered deepfakes are highly deceptive and advised members of the public to enhance their awareness of fraud prevention. In typical scenarios such as fund transfers and transactions, it is essential to verify and confirm repeatedly through additional communication channels, they said. If fraud is encountered, it is crucial to promptly report the case to the police to minimize losses.

Experts have urged the public to exercise caution during video calls and to implement safeguards against deepfake scams.


Fang Yu, a researcher at the China Computer Federation, suggested asking the other party to wave a hand in front of their face during a video call as an identity check. Because real-time fake videos must be generated, processed and face-swapped by AI on the fly, the waving motion interferes with the facial data and distorts the forged image.

He also suggested verifying the other party's identity in one-on-one communication by asking questions that only that person could answer.

Telecommunication experts also suggested strengthening the protection of personal information and biometric data such as facial, voiceprint and fingerprint data. They advised the public to avoid logging into websites of unknown origin, which can introduce viruses, and to manage authorization carefully for applications that may collect voice, image, video and location information.