New media artist Chris Cheung, who goes by the moniker h0nh1m and runs the arts collective XCEPT/XCEED, wears a heavy metallic collar with a digital countdown clock, drawing attention to the invisible yoke of the smart devices that seem to have enslaved much of humanity. In an exclusive interview with China Daily, Cheung speaks about how AI could be both enabling and limiting for the human race.
New media artist Chris Cheung, aka h0nh1m, calls himself a “transhuman” and a “future generation digital slave.” (CALVIN NG / CHINA DAILY)
What drew you to AI and when?
I used to watch a lot of sci-fi films in my childhood, like The Matrix (Laurence and Andrew Paul Wachowski, 1999), A.I. (Steven Spielberg, 2001) and Ghost in the Shell (Mamoru Oshii, 1995). But I began thinking seriously about ideas like what makes us human, and what our purpose on earth is, after studying Conway's Game of Life (a cellular automaton created by British mathematician John Conway in 1970) during my university days at the School of Creative Media.
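(For readers unfamiliar with it, the rules of the Game of Life are simple enough to sketch in a few lines of Python; this is the standard formulation, offered for illustration only and unrelated to Cheung's own coursework.)

```python
# A minimal, standard sketch of Conway's Game of Life: every cell
# lives or dies each generation based on how many of its eight
# neighbors are alive. Illustrative only.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """Advance one generation; grid is a 2D array of 0s and 1s."""
    # Count each cell's eight neighbors (edges wrap around).
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A live cell survives with 2 or 3 neighbors; a dead cell
    # becomes alive with exactly 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

world = np.zeros((8, 8), dtype=int)
world[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1  # a classic "glider" pattern
print(step(world))
```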
What is your reason for using AI technology in many of your projects?
A human being is an original AI (created by another intelligence). I see the propagation of the human race as an experiment that ran until we evolved and became smart enough to design AIs of our own, which could be even better than us.
I am curious about the future in which AIs start to learn and communicate in ways that we don’t know yet.
Would you like to tell us a bit about your new work, No Longer Write – Mochiji, now showing at the Taoyuan Museum of Fine Arts in Taiwan?
The show features an interactive installation in which artificial intelligence tries to imitate participating audience members' handwriting…
(The Collar X device Cheung is wearing around his neck starts making screeching sounds.)
I’m sorry. Maybe just take this one off, it’s a little bit annoying.
This is the same device that the visitors to your “Surviving the Glass System” show (Hong Kong Arts Centre, 2016) were made to wear, right? This heavy metallic collar with a countdown digital clock and electric wires sticking out of it was locked around their necks using an electric drill – that could feel a bit scary.
Some audience members were scared of having to put it on, but it was necessary to experience the show.
And it wasn’t possible to take it off during the 30-minute guided tour of the show, was it?
That’s right.
So Collar AG is like a yoke, a symbol of digital enslavement. It’s a way of drawing attention to the digital devices that rule our lives.
Yeah, the idea was to scare the audience a little, but many visitors who put the device on said that after a while they forgot they were wearing a scary device, as the experience was quite unique. They tended to feel superior wearing this high-tech, fashionable object. I even had queries from people asking where they could get one.
We only had a limited number of collars, so ultimately there was a group of people waiting their turn to put one on after the others had finished their journey.
I think it also had to do with peer pressure. Everyone wants what others have.
Right, one wants to experience it but is also slightly intimidated by it. Would you like to describe the idea informing Collar X?
The exhibition was on the theme of surveillance. The function of Collar AG is similar to that of an audio guide one wears while visiting museums. Each exhibit in the show was fitted with trackers that recorded the route audience members took. When they stopped in front of an exhibit, they heard an audio description of the piece. We made up some very sarcastic contexts for the art pieces, which sometimes conflicted with those of other exhibits. So it was an unconventional way of explaining art.
The collar also periodically reminded the audience about their time running out, and urged them to hurry up. We did not tell the audience what might happen if the countdown ended (before they had completed touring the exhibition), we just told them it could be something bad. So people were left wondering whether the collar might explode or cause pain if they ran out of time. What we wanted to create was a feeling of being monitored.
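(As a rough illustration of the mechanism Cheung describes, with exhibit trackers triggering audio commentary while a countdown nags the wearer, here is a hypothetical sketch; the function names, timings and polling loop are assumptions for illustration, not the actual Collar AG firmware.)

```python
# Hypothetical sketch of the Collar AG logic described above:
# per-exhibit trackers trigger audio commentary, while a countdown
# periodically urges the wearer on. All names and timings are
# assumptions for illustration, not the device's firmware.
import time

TOUR_SECONDS = 30 * 60   # the 30-minute guided tour
REMIND_EVERY = 5 * 60    # assumed interval between "hurry up" reminders

def nearest_exhibit():
    """Stand-in for the tracker hardware: return the ID of the
    exhibit the wearer is standing in front of, or None."""
    return None  # replace with real sensor input

def play_audio(exhibit_id):
    print(f"[audio] playing commentary for {exhibit_id}")

def run_tour():
    deadline = time.time() + TOUR_SECONDS
    next_reminder = time.time() + REMIND_EVERY
    heard = set()
    while time.time() < deadline:
        exhibit = nearest_exhibit()
        if exhibit is not None and exhibit not in heard:
            play_audio(exhibit)  # each piece gets its own commentary
            heard.add(exhibit)
        if time.time() >= next_reminder:
            remaining = int(deadline - time.time())
            print(f"[collar] {remaining}s left -- hurry up")
            next_reminder += REMIND_EVERY
        time.sleep(1)
```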
So Collar X is the updated version, enabling the full AI function. In a sense it is similar to a cell phone, which also tracks our movements; it's just that many of us are not aware of this, or, even when we are, the majority of us are willing to give up our freedom to look cool by carrying the newest technology in our pockets.
You often wear this collar in your public appearances, and call yourself a “transhuman” and a “future generation digital slave.” Would you like to explain why?
I put on Collar X during some of my recent talks because I believe this is what our future might look like. Fifty years from now a lot of technology will be implanted in our bodies. Or we might have a synthesized body (part human part machine) containing some kind of AI technology.
It’s already happening. For example, an amputated leg can be replaced by a robotic one and it can even help one run faster than a non-athletic person. So the future is going to be about a fusion of machines and humans.
Do you see AI gaining more power over human beings in the near future, or creating a more free and equitable society?
I think it depends on how you use AI. There are a lot of misapprehensions about machine learning, because we don't know about the learning process, i.e. the stages an AI has to go through to acquire knowledge. For example, there has been news about new chatbots on the market that might be able to communicate among themselves using a language humans won't understand.
This is something I wanted to show in my No Longer Write – Mochiji installation. Participants are assigned to write a Chinese character randomly chosen by Mochiji (an agglomeration of brushstrokes, characters and stylistic particularities of master Chinese calligraphers, displayed on digital screens inside a dark room). When an audience-participant draws a Chinese character, the machine reads the brush strokes in terms of pixels and tries to trace the outline of the character. So it compares the characters drawn by master calligraphers and the audience through a deep learning process and comes up with a set of new calligraphic styles, combining certain features from both.
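(The installation's deep-learning pipeline is not public; the toy sketch below conveys only the core idea of blending a visitor's strokes with master examples. The pixel-overlap weighting is an illustrative stand-in, not the actual model.)

```python
# Toy sketch of the blending idea in Mochiji: combine a visitor's
# character bitmap with versions by master calligraphers. The
# pixel-overlap weighting is an illustrative stand-in for the
# installation's actual deep-learning model.
import numpy as np

def blend_styles(participant, masters):
    """All inputs are same-shape grayscale arrays in [0, 1],
    where higher values mean more ink."""
    # Weight each master by how much its strokes overlap the visitor's.
    weights = np.array([(participant * m).sum() for m in masters],
                       dtype=float)
    if weights.sum() == 0:
        weights[:] = 1.0          # no overlap: weight masters equally
    weights /= weights.sum()
    # Mix the weighted master strokes with the visitor's own strokes,
    # yielding a "child" character that inherits from both.
    master_mix = sum(w * m for w, m in zip(weights, masters))
    return 0.5 * participant + 0.5 * master_mix
```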
In the end, the characters displayed on the screen blur and turn into images that are impossible to read. The point here is to raise questions such as, does the writing still mean anything to the AI when it becomes illegible to humans? Where do the characters disappear after they fade away? The show highlights our fear of machines and new technology.
I think by inviting audiences to participate, and to add to a selection of works by well-known masters of Chinese calligraphy, you are putting the handiwork of both canonical calligraphers and random, anonymous visitors to your show in the same space.
Yeah, exactly, so the audience is shown how a particular character was drawn by 10 famous calligraphers (including Wang Xizhi, Dong Qichang, Rao Jie, Su Shi, Huang Tingjian and Wang Yangming) and asked to create their own version of the same. The AI compares this against the versions by the master calligraphers, one by one, combines certain characteristics of the old calligraphy with the new to evolve a completely new style of writing the character. It’s like parents from the past and present joining hands to create a child.
What's the idea behind making this new set of characters blur, start floating in a pool as it were, and ultimately fade away at the end of the show?
There is an interesting story about the calligrapher Wang Xizhi, who was not born good at calligraphy. So he practiced his craft every day for many years. The pond he sat next to while drawing the characters turned black from the continual dipping of his brush in it. In Mochiji I have tried to show how AI can learn from the ancient masters and create a similar texture in just about a second.
And then the AI takes the viewing experience to a level where it becomes mystical – something beyond the comprehension of humans.
The idea is to inspire audiences to rethink their existing notions of calligraphy, and to ponder whether we will still be writing by hand in the future. And even if we do, will it still be relevant?
How well did AI technology work for this particular project? What do you think it helped you achieve?
The one thing I'm interested in creating with the help of new technology is live art. There isn't a moment in Mochiji that is a repetition. Here the AI is constantly learning and adding new specimens of writing to its database. In a traditional form of art it wouldn't be possible to demonstrate such quick, live interactions with the audience. So a work like this allows me to create a unique experience and engage with the audience.
I read somewhere that you are interested in the works of Zhuangzi, the 4th-century BC Chinese philosopher from the Warring States period. Was your installation Prismverse, where a visitor could experience the feeling of entering a well-cut diamond, a reference to Zhuangzi's story in which the narrator is no longer sure if he is a man who had dreamt of turning into a butterfly or if he is in fact a butterfly having a dream about being a man?
I like Zhuangzi a lot but this piece was not about him. In fact my graduation project, The Happiness of Fish, was inspired by Zhuangzi.
Prismverse is about trying to create a new dimension, transforming a physical space, or maybe getting audiences to experience a mixed reality. It also alludes to the fact that we don't need physical spaces to sell things any more. Sometimes well-known brands will set up a pop-up store to give consumers a feel of the real thing: a simulation of a brand experience for goods that can only be purchased online.
In Prismverse audiences get to experience light from different perspectives. Why are diamonds so beautiful? Because they are cut to display the maximum number of facets. So we tried to create a space where people could experience light being passed through a diamond.
What are your thoughts on the possibility of AIs becoming sentient, developing emotions and making moral decisions? Do you see this happening in a not too distant future?
As I mentioned earlier, I foresee a time in which parts of the human body will be replaced by machines running on AI. That will be the transhuman era when humans will be able to perform feats that are not limited by the boundaries of human capability. And as the line between humans and machines gets blurred, we will need to evolve new standards for determining who or what is a human.
We have to go through this. People didn't wear glasses a thousand years ago. With the advancement of technology, humans can now overcome barriers that were previously limiting.
Would you have suggestions as to how human beings should prepare for that new era when it’s no longer possible to tell which parts of a man are actually machines?
As we discussed earlier, by wearing a device like Collar X around your neck you gain an enhanced experience but lose part of your freedom for the duration of the show.
AI can be used to manipulate people's voting choices, as Cambridge Analytica, which worked on Donald Trump's election campaign, admitted to having done (by collecting the personal details of 87 million social media users and then strategically targeting them with ads). They apparently did the same thing to sway public opinion in favor of Brexit in 2016. Different people running a search on the same topic on Google will find different sets of results. On one hand this could be enabling, but at the same time, it is someone else making decisions on our behalf.
You seem to have a strong interest in climate change and sustainability. Would you like to talk about some of your projects, like CarbonScape, DynamiCity and RadianceScape, inspired by your concerns about the environment?
The volume of carbon emissions in cities has risen drastically in the last few years. In 2017, the concentration of carbon dioxide (CO2) in the air soared to its highest level in three million years. CarbonScape is a visualization of CO2 levels in the atmosphere, based on data from the National Oceanic and Atmospheric Administration (NOAA). The installation features black spheres, indicating CO2 levels around the globe, rising up transparent chimneys in response to the ambient sounds around them. These could be the sounds of an engine or an air-conditioner, or those emitted by cargo vessels.
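(A toy sketch of the kind of mapping Cheung describes, with a location's CO2 reading and the ambient loudness jointly driving how high a sphere rises in its chimney; the constants and names are illustrative assumptions, not the artwork's code.)

```python
# Toy sketch of the CarbonScape mapping: a sphere's height in its
# chimney rises with both the local CO2 reading and the ambient
# loudness around the installation. Constants and names are
# illustrative assumptions, not the artwork's code.

def sphere_height(co2_ppm, loudness, baseline_ppm=280.0, max_ppm=420.0):
    """Return a normalized height in [0, 1].

    co2_ppm  -- CO2 concentration for this location (e.g. NOAA data)
    loudness -- ambient sound level, normalized to [0, 1]
    """
    co2_term = (co2_ppm - baseline_ppm) / (max_ppm - baseline_ppm)
    co2_term = min(max(co2_term, 0.0), 1.0)
    # Ambient noise (engines, air-conditioners, cargo vessels)
    # pushes the sphere further up its chimney.
    return min(1.0, 0.7 * co2_term + 0.3 * loudness)

print(sphere_height(405.0, 0.5))  # e.g. a noisy, high-CO2 location
```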
CarbonScape is meant to be a minute of silence paying tribute to our slowly dying Mother Nature. These noise-scapes, which accompany environmental pollution, are the result of our irresponsible behavior.
Related to this is RadianceScape, which I created back in 2014 as a response to the 2011 radiation leak at Fukushima Daiichi. There was an attempt to downplay the extent of the damage, and long after the incident, contaminated water inside the nuclear cooling towers was still leaking into the Pacific Ocean.
RadianceScape is an attempt to present accurate data to the audience. The Chernobyl disaster in 1986 saw a similar attempt to hush up news about the extent of the damage caused by the explosion at the nuclear power plant, which left an estimated 5 million people affected by radiation.
RadianceScape Live!, launched in 2016, is an extension of RadianceScape. It is an audiovisual performance, in the form of an animated digital installation, built on a dialogue between live radiation data collected from major cities around the world and readings from Chernobyl and Fukushima.
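(As a rough sketch of the data-to-sound idea behind RadianceScape Live!, the snippet below maps a live radiation reading to simple synthesis parameters, scaled against reference readings; every number and name here is a hypothetical placeholder, not the performance's real data or code.)

```python
# Illustrative sketch of the sonification idea: scale a city's live
# radiation reading against reference readings from Chernobyl and
# Fukushima, then derive simple synthesis parameters. Every number
# and name here is a hypothetical placeholder.

REFERENCE_USV_H = {"Chernobyl": 5.0, "Fukushima": 2.0}  # hypothetical values

def sonify(city, usv_per_hour):
    """Map a radiation reading (microsieverts/hour) to a pitch and
    a Geiger-like pulse rate."""
    worst = max(REFERENCE_USV_H.values())
    level = min(usv_per_hour / worst, 1.0)  # 0..1 against worst reference
    return {
        "city": city,
        "pitch_hz": 110 + 880 * level,      # higher reading, higher pitch
        "pulses_per_sec": 1 + 19 * level,   # faster clicks when hotter
    }

print(sonify("Hong Kong", 0.15))  # hypothetical reading
```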
What strikes me is the delightful, almost choreographic, movements in these installations. When you’re putting a piece together, what comes first? Is it spreading awareness or giving pleasure to your audience?
It depends on how and where the inspiration strikes me. Most of my projects are a manifestation of my concerns for the future. There are a lot of concerns about the way humans are evolving and the issues we are faced with. Climate change and AI development are two of those.
How about keeping your own carbon footprint in check in your professional and personal lives?
I'm trying to eat less meat, because there are two things that add significantly to carbon emissions: one is flying, and the second is meat consumption.
You seem to be in favor of ideas like open-source platforms, reverse engineering, the maker movement — an unregulated creative space with access to all. Would you say this is the way, going forward?
Yeah, my team and I have been part of the open-source movement for at least the last 10 years. We're always sharing our knowledge, programming languages and ideas through open-source platforms. I am a great believer in the idea that the author is dead. (Roland Barthes' 1967 essay The Death of the Author argues that once published, a text is free of the intentions and biographical context of the author and open to the reader's interpretation.) I believe that once created, a piece of art does not belong to me anymore.
For example, I have created a platform in the form of Mochiji which will keep evolving and people will keep adding to it. This is one of the core ideas informing my art.
How do you see the future of open creative spaces in Hong Kong?
The idea caught on some five or seven years ago. For example, there was the Fablab movement (a worldwide network of shared digital fabrication laboratories), which also caught on in Hong Kong. Such a space is more like a café, with a number of 3D printers to share, where people can get coffee while discussing their projects.
But Hong Kong is seeing a downturn in such movements, one reason being that rents are too high. It is tough to sustain an open-space ecosystem in a major center of trade and commerce such as Hong Kong because of the cost of maintaining such a space. Still, there are a few of these in Hong Kong, like Makerspace, doing good work.
Interviewed by Chitralekha Basu
William Chang contributed to the story.