Published: 00:05, March 24, 2026 | Updated: 00:14, March 24, 2026
Strong privacy safeguards are vital for safe AI use
By Ada Chung

Governments and businesses worldwide are seeking to harness artificial intelligence for innovation and economic growth. Yet as AI technologies become more accessible and sophisticated, a parallel and troubling trend is emerging: The misuse of AI-driven “deepfakes”. AI can generate seemingly realistic but falsified images, audio and video, which can inflict profound and lasting harm on individuals, especially children and young people, when exploited for malicious purposes.


A recent global incident brought these issues to the forefront: An AI chatbot allowed users to generate non-consensual sexual images of real people, including women and children. Within 11 days, an estimated 3 million sexualized images had reportedly been generated. This illustrates how easily personal data can be misused and how quickly the resulting harm can spread, especially to minors, who are least equipped to protect themselves.

The incident triggered swift regulatory actions by privacy or data protection authorities worldwide, and temporary bans in some jurisdictions.

Given the borderless nature of AI-related privacy risks, data protection authorities have stepped up coordinated efforts to advocate privacy-protective AI. In a landmark move, during the 47th Global Privacy Assembly (GPA) Conference in September 2025, 20 authorities from different jurisdictions, including the Office of the Privacy Commissioner for Personal Data (PCPD) of the Hong Kong Special Administrative Region, signed the Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-protecting AI, advocating, among other things, the incorporation of data protection principles into AI system development and the establishment of robust data governance.

In February, the PCPD, together with 60 privacy/data protection authorities from around the world (including Canada, France, Germany, Italy, Korea, New Zealand, Singapore and the United Kingdom), issued the Joint Statement on AI-Generated Imagery and the Protection of Privacy. Initiated and coordinated through the GPA’s International Enforcement Cooperation Working Group, which the PCPD co-chairs, the statement sets out fundamental international principles to guide organizations in developing and using AI content generation systems lawfully and safely. It reminds all organizations that develop and use AI content generation systems to comply with applicable data protection and privacy laws. The joint statement also recommends a series of measures to safeguard the fundamental rights of individuals, especially children and vulnerable groups.

Authorities in both the Chinese mainland and the HKSAR recognize that the development and use of AI must be accompanied by appropriate guardrails. Since the promulgation of the 2023 Global AI Governance Initiative, the equal importance of the development and safety of AI has been repeatedly stressed, a principle also reaffirmed in the chief executive’s 2025 Policy Address. This balanced vision is further reinforced in the 15th Five-Year Plan (2026-30), which targets advancing the “AI+” initiative across the board while strengthening AI governance. As the plan specifies, it is essential to consolidate security during development and pursue development in a secure environment, including strengthening data governance frameworks and rules, enhancing AI governance, and fostering an environment that is beneficial, secure and fair for development.


It is against this backdrop that the recent emergence of agentic AI — autonomous systems that use large language models to act without continuous human oversight — warrants close attention, as it has already intensified concerns over data breaches, privacy and cybersecurity risks. Unlike conventional AI chatbots that primarily generate content in response to prompts, these agentic systems can connect with external tools and services, enabling them to take multistep actions on behalf of users. The privacy risks posed by agentic AI thus extend far beyond the outputs of conventional AI chatbots. These systems can access, manipulate and expose personal data with unprecedented speed and reach. If such capabilities are misused to create and distribute abusive deepfakes with minimal human involvement, the resulting harm could spread more quickly and at greater scale.

Encouragingly, the 2025 Policy Address tasked the Department of Justice with coordinating different bureaus to review the relevant laws needed to complement the development and wider application of AI in Hong Kong.

Pending this review, the development and use of AI are not unregulated. Hong Kong retains a flexible regulatory approach whereby existing laws remain applicable, supplemented by relevant guidelines. Any collection and use of personal data to create deepfakes is subject to the requirements of the Personal Data (Privacy) Ordinance. Specifically, the use of personal data to create and/or disclose deepfake materials may contravene the use limitation principle of the privacy law if it goes beyond the original purpose of data collection. The data protection principle governing the collection of personal data may also be contravened if personal data is collected unlawfully or unfairly. More seriously, the creation and/or disclosure of malicious deepfake materials may constitute doxxing.

Any data breach caused by unauthorized or accidental access to or processing, erasure, loss or use of data by an agentic AI may also contravene the data protection principle regarding data security, thereby breaching the privacy law, not to mention any unwarranted collection or use of the personal data of third parties without their consent.

It is crucial, therefore, for all stakeholders, including AI developers, service providers and users, to be aware of the threats these new technologies pose to individuals’ fundamental rights. When using AI content-generation systems, for instance, the PCPD recommends that users label or watermark the output as AI-generated to avoid confusion or misunderstanding.

In particular, to avoid data leakage or cyberattacks, users should download only the latest official version of any agentic AI, grant minimum access rights to the tool, adopt adequate measures to ensure system security and data security, and continuously assess the risks involved. Users should be alert, for example, to any high-risk prompts or automatic processing that might wipe out all user data (including emails).

In the race to tap into AI’s huge potential, we should remember that the development and deployment of AI systems should be guided, from the outset, by the principles of personal data protection, privacy by design and privacy by default, among others, to prevent infringements of people’s data privacy and minimize the risks involved.

Recent events have demonstrated the vulnerability of users, especially minors, in the rapidly evolving age of AI, as well as the tangible and far-reaching harms of AI’s abusive or malicious use. Organizations developing and deploying AI must therefore not sacrifice privacy and security for speed-to-market or novel functionalities. All stakeholders in the ecosystem, including AI developers, service providers and users, have unavoidable responsibilities to co-create a safe and trustworthy digital environment for our future generations.


The author is privacy commissioner for personal data, the Hong Kong Special Administrative Region.

The views do not necessarily reflect those of China Daily.