For the past three years, scientific progress and public debate around artificial intelligence have been dominated by a single question of technological capability: What can artificial intelligence do? Industry, governments and research institutes have poured billions into optimizing models, racing to translate them into applications that can generate the revenue urgently needed to justify such vast investment.

Technological progress has been remarkable. However, what has been neglected is the more fundamental question of the human interface: How and why do people use AI, and how is it beginning to change how we think, feel and relate to one another? This question is neither merely philosophical nor purely theoretical; it is relevant to all of us. AI has been embedded in nearly every aspect of human life, and the question is now empirical, urgent and almost entirely understudied.
Few technologies in modern history have moved from laboratory to daily life at this scale and speed. ChatGPT alone surpassed 900 million weekly active users in February, and it is only one of many generative systems. Although AI research stretches back nearly 70 years, the technology has now left the laboratory, adopted faster than the personal computer, the internet, or any consumer technology before it, and woven into the daily lives of a vast proportion of people around the globe. The situation resembles early industrial development, when growth came at the cost of polluted land and water, the cleanup that followed demanded far greater effort, and some damage proved irreversible. The impact of AI on the mental well-being of the population has been similarly overlooked. We must spare no effort to ensure AI is steered in a direction that produces desirable outcomes.
The pressing question is no longer technical. It concerns the human interface. People no longer use these systems only as functional tools. Increasingly, they turn to them to meet social needs, from advice on health and personal finance to working through relationships and processing negative emotions and trauma.
In a recent study by the China Youth and Children Research Center, 60 percent of young people had used AI, and about 50 percent of them now turn to chatbots to manage emotional distress. A growing number of cases, some hopeful, some tragic, has shown both the promise and the risk of AI as a source of socioemotional support. Together, these developments underscore a single fact: AI has become a new social infrastructure of modern life. The transformation is therefore not only technological. It is reshaping how we live, feel, think, work and relate to one another. And in every one of those dimensions, technological progress and deployment have overtaken scientific, human and societal readiness. Only 5 percent of workers feel well prepared, a striking gap given that AI is already in nearly every workplace. The same gap runs through our education systems, our clinics and our family life.
Nowhere is the gap more urgent than for the younger generation. University students are early and heavy adopters who turn to AI during high-stress periods, in a developmental window that is both formative and fragile: the years in which mindset, identity, values and social cognition are consolidated, and in which the brain completes its maturation into the mid-twenties. Frequent interaction with AI will inevitably shape these developmental processes. Whether for better or worse will not be decided by the model alone. Preliminary findings from our team at the University of Hong Kong, with collaborators at the University of Electronic Science and Technology of China, make this concrete: it is imperative to account for who is using AI, why, and in what state of mind.
In a brain-imaging study of 230 Chinese undergraduates, use of the same AI chatbot tools was associated with markedly different outcomes depending on how it was used. Students who used AI for functional support — clarifying ideas, structuring arguments, academic writing support — showed greater volume in prefrontal brain regions linked to cognitive functions. Students who relied on AI for emotional support showed poorer mental health and reduced amygdala volume, a brain region central to social-emotional processing.
Set against earlier evidence that digital technologies can have divergent, sometimes opposite, effects on mental health depending on the person using them, the implication is clear. AI’s impact is not purely a property of the model, but a property of the human interface — the motivation, context and developmental stage of the person using it. This will ultimately define where the opportunities and the risks of AI lie. And it is precisely this variable that we are currently failing to measure, regulate or design for.
Even small effects at the individual level matter given the tremendous scale of AI deployment, with 900 million weekly active users of ChatGPT alone. With adoption rates this high, and time spent interacting with AI growing month on month, small effects compound and ripple outward: from individual minds into social interactions, from social interactions into institutions, and from institutions into the architecture of society. This is not a theoretical concern; the transformation has already begun. The time to turn the debate around, from the model to the human and from capability to social infrastructure, is now.
Benjamin Becker is a professor and lead of the strategic research theme ‘AI, Society and Social Dynamics’ in the Faculty of Social Sciences at the University of Hong Kong. Paul Yip is a chair professor of population health at the University of Hong Kong.
The views do not necessarily reflect those of China Daily.
