Although the growth of artificial intelligence (AI) since 2000 has been extraordinary, the technological game changer has been generative models. Whereas basic AI processes data and makes predictions, generative AI (GAI) creates new content. Built on algorithms and deep learning models, it embraces text, images, videos, and other forms of data.
Not surprisingly, GAI is a godsend for the business world, particularly for anybody wanting to automate their processes. It is invaluable, for example, for content creation, product design, and website development.
According to the China Internet Network Information Center (CNNIC) report for 2024, the user base of GAI products in China totaled 230 million by last June. The CNNIC's director, Liu Yulin, said GAI's increased traction among internet users was significantly impacting their daily lives. Moreover, the country's AI industry ecosystem was relatively comprehensive, with over 4,500 related companies.
A recent survey by Finastra revealed that 38 percent of Hong Kong institutions have started rolling out GAI. This was the highest rate among all the markets surveyed, and well above the global average of 26 percent.
Last July, in its banking report, KPMG reviewed technological progress generally, emphasizing the emergence of GAI. It was increasingly being used for tasks like data analysis and employee service through the use of sophisticated chatbots.
Although GAI boosts productivity and widens horizons, it must operate within secure parameters. For the business world, this involves adhering to best practices, with safeguards being precisely formulated. A framework is necessary which is effective but does not stifle innovation, with self-regulation by the industry playing its part.
In 2022, the Hong Kong SAR Government published its Innovation and Technology Blueprint, which identified eight strategies for technological development over the next 5 to 10 years. It envisaged moving “full steam towards the vision of an international I&T center,” which included building a “secure cyber environment.” Such an environment necessarily includes properly managed AI, and this is now becoming a reality.
On April 15, at the World Internet Conference Asia-Pacific Summit (in Hong Kong), Commissioner for Digital Policy Tony Wong Chi-kwong took things to the next level by announcing the publication of new guidelines for GAI tools. This was after the Hong Kong Generative AI Research and Development Center had studied models elsewhere and consulted the innovation and technology industry. The guidelines recognized that AI tools that could be exploited for illegal activities had to be prohibited. In worst-case scenarios, these might include, for example, producing explosive devices, generating obscene material, and creating manipulated or false content to defraud or disrupt public order.
Wong hoped the new guidelines would, through a four-tier classification system, "facilitate the industry and the public in developing and applying generative AI in a safe and responsible manner, while encouraging innovative application of AI, mitigating risk and fostering the widespread adoption of generative AI in Hong Kong."
Separately, the Hong Kong Monetary Authority (HKMA) is planning a regulatory sandbox for testing new software. Once in place, it will allow financial institutions to safely trial their GAI technologies. It will also enable the HKMA to monitor GAI development, ensuring that robust risk management practices are in place before full market deployment.
In the wrong hands, GAI poses huge threats, including to national security, and its possible abuse should alarm everybody.
Although challenging, effective ground rules are essential to minimize (if not eliminate) the risks. If there is no proper oversight, GAI tools could produce a Wild West environment in which “anything goes.”
At China’s annual two sessions in Beijing, which concluded on March 11, there was a heavy focus on AI, partly triggered by the rise of DeepSeek. This year’s government work report indicated an enhanced role for AI in industrial development, including the use of "embodied" AI, such as robots. The discussions also turned to AI misuse, and there were calls for improved regulations.
Those calls resonated in Hong Kong SAR, not least with the Privacy Commissioner for Personal Data, Ada Chung Lai-ling. Since being appointed in 2020, she has pioneered AI safeguards within her portfolio, recognizing the threat AI poses to privacy and other rights. In the absence of regulation, a coach and horses could be driven through the Personal Data (Privacy) Ordinance (Cap 486), a concern that has fueled successive initiatives.
In August 2021, Chung issued the “Guidance on the ethical development and use of AI.” It recommended that organizations adopt the data stewardship principles of being respectful, beneficial, and fair. It also proposed adherence to internationally recognized ethical AI principles, including human oversight, data privacy, transparency, robustness, and security.
Thereafter, in June 2024, Chung upped the ante with the publication of the landmark “Artificial Intelligence: Model Personal Data Protection Framework.” It provided organizations with advice on how to achieve best practice in the procurement and implementation of AI, and in a manner that promoted compliance with the requirements of the personal data law.
Finally, on March 31, 2025, Chung issued the “Guidelines on the Use of Generative AI by Employees” (the Guidelines). Echoing the “two sessions,” she said it was necessary to “continuously advance the ‘AI Plus’ Initiative to unleash the creativity of the digital economy.”
The Guidelines focus on risk management, and provide a detailed checklist of “dos” and “don’ts” that employers should enforce in their workplaces. They are designed to facilitate the safe and healthy development of AI, and, apart from considering personal data protections, violations and remedies, they also review the practical application of AI tools. They indicate that before employees input personal data into GAI systems, they must first anonymize it. Employers are urged to ensure there are appropriate consequences if employees violate the Guidelines.
The Guidelines will assist organizations in developing GAI internal policies that comply with the personal data law. Although one size does not fit all, they provide the framework within which employers can ensure their employees apply the latest technology responsibly.
When considering the bigger picture, Chung remarked that China placed equal emphasis on development and security. In other words, AI must be viewed holistically, not in a vacuum. She explained that “AI security is one of the most important aspects of national security,” which was incontrovertible. Although GAI can be a force for good, it can also pose unique dangers in various scenarios. Malign actors, for example, could use it for devising and then furthering anti-state activities.
As GAI proliferates, the various agencies will be scrutinizing its progress and will undoubtedly step in if threats emerge. Although everybody recognizes the benefits GAI can bring, its development cannot come, for example, at the expense of public safety, personal rights, or market security. At the same time, Hong Kong should pursue what the Digital Policy Office calls a "pragmatic balance strategy" when formulating regulatory measures. Concerns, therefore, must be addressed in a way that does not frustrate the city's legitimate technological progress. If that is achieved, Hong Kong will undoubtedly become a regional trailblazer.
The author is a senior counsel and law professor, and was previously the director of public prosecutions of the Hong Kong SAR. The views do not necessarily reflect those of China Daily.