
China Daily

Monday, December 30, 2019, 15:25
Chronicler of the AI age
By Chitralekha Basu

Dung Kai-cheung’s writing has been compared to that of stalwarts of experimental fiction such as Franz Kafka, Italo Calvino and Jorge Luis Borges. In his books Dung often reworks local history, classic Chinese texts, antique maps and material culled from his extensive reading of European philosophy to come up with fantastic people and situations that are strangely familiar. In an exclusive interview with China Daily, Dung talks about his new AI-themed novels and the way he imagines the future of mankind in an AI-populated society.

Novelist Dung Kai-cheung’s current preoccupation with the AI theme is an extension of his interest in the relationship between man and machines. (EDMOND TANG / CHINA DAILY)

Artificial Intelligence figures prominently in your recent works, including Beloved Wife and a new novel in progress. Would you like to outline the plots briefly for our readers?

In Beloved Wife, the male protagonist is a professor of Chinese literature, and he meets a strange person who claims to be a scientist working on AI. Together they plan to recreate a dead author: a revered writer of Hong Kong literature who came to Hong Kong from Shanghai in the Thirties and lived here until his death in the Seventies. The plan is to input everything he wrote, all the books he had read and all available information about him to create an AI that will be able to write new works by this author, who by then has been dead for 50 years. However, this plan is not realized in the book.

There is a second plan to upload the consciousness of the professor’s wife who is also a writer. The idea is that human consciousness can be uploaded and replicated and then downloaded to another person or some sort of device. Towards the end of the book it is revealed that the wife of the professor has in fact been dead for a year, but he imagines that she is still alive, doing a short residency in Cambridge.

In part two of Beloved Wife two voices are heard talking to each other. It’s only voices, no description of the environment. Soon it is revealed that they are the husband and wife of the first part. And then suddenly the man wakes up and doesn’t seem to know where he is, and it is gradually revealed that it is the man who had died and that his consciousness was downloaded into his wife’s brain. When his consciousness is awakened, the husband discovers that he is inside his wife’s body. So the dialogue is between two conscious minds sharing one body. An English translation of this section of Beloved Wife will appear in the Spring issue of Chinese Literature Today.

Your story resonates with so many classic tales, from Zhuangzi’s dreaming butterfly to Mary Shelley’s Frankenstein and its very recent reworking in Jeanette Winterson’s novel Frankissstein. So it seems a number of authors around the world are interested in the AI theme at this moment.

I think it’s inevitable that we need to deal with this, because we are actually living in an age in which AI and all sorts of new technologies are changing not only our ways of life, but also the way we define life.

I think this could be a moment in history when we renegotiate our relationship with the machines in our lives and take stock of the humanness that distinguishes us from them. Anyway, let’s hear about the AI-themed novel you’re writing now.

The novel I’m writing now is called Post-human Comedy. The protagonist is a cybernetics expert who gets invited to a university in Singapore to participate in a project to develop software for the systems of some cyborgs: not exactly robots but replicants, androids in human bodies.

He needs to stabilize their systems in order to make these machines more flexible and able to deal with probabilities in real-life situations. I have also tried fusing the ideas of the German philosopher Immanuel Kant with the design of the cyborgs. So the machines are called Kant Machines, and their original settings are based on Kant’s 12 categories of cognition: unity, plurality and totality for quantity; reality, negation and limitation for quality; inherence and subsistence, cause and effect, and community for relation; and possibility-impossibility, existence-nonexistence, and necessity-contingency for modality. I think Kant’s categories quite comprehensively cover how we get to know the environment and live, work and interact with other things.

There is also a politician in the story who is trying to invent a kind of ideal citizen. It’s a reference to governments afraid of giving people freedom. Hence they try to invent political systems they think will help maintain stability.

So this politician wants to transform citizens into cyborgs along the lines of the Kantian model. In the Critique of Practical Reason, Kant ponders the question of how one could be free and conform to laws at the same time. The Kantian notion of freedom is sort of paradoxical. Freedom is the most important element in the lives of human beings. At the same time, because you are free, you are fully able to be a legislator of yourself: to make laws that stop you from engaging in untoward actions. The politician thinks if we can create citizens having such a frame of mind, we do not need to be afraid of a completely open political system, as citizens will not deviate (from the rules they set themselves). They will not do anything immoral or politically dangerous.

And do they? Stick to the straight and narrow?

I don’t know. I haven’t written that far… I think they probably won’t.

I’m told AIs do not always choose judiciously from the data they are fed. AIs have supposedly shown a propensity to pick up potentially disruptive material and integrate it into their systems unless taught not to do so…

I think AI is in a very crude stage now. It has a tremendous calculating power, but is far from being able to choose well independently, to say nothing of behaving like a model citizen. But in my book they have been given a Kantian frame of mind, as a result of which they are capable of being well-behaved.

Why do you think AI has captured your imagination to such an extent?

I have always been fascinated by the theme of relationships between humans and objects, including machines. So although The History of the Adventures of Vivi and Vera was not exactly a sci-fi book, in the realistic track of the novel I investigated how a person might define himself, or be looked upon by others, in terms of the tools and machines he uses. So I think for me it’s just a natural progression from (my preoccupation with) machines in Vivi and Vera to AI.

How intense is your own personal relationship with new technologies on a daily basis?

Not particularly intense, it’s just like that of a regular person. I use a smartphone and a computer to write.

How much time do you spend looking at your phone every day?

Well, I would not say I am addicted to my phone, because I think it’s perfectly okay if I do not check it for a day or two. However, I am automatically drawn to my phone in my idle moments. But I’m not the kind who always needs to post something and react to other people’s posts.

Do you read physical books or do you read them on a smart device?

With books I am conventional, I read printed books.

Would you like to share your thoughts on how using machines to write impacts a writer’s language?

Well, the Italian writer Italo Calvino wrote a very interesting essay called “Cybernetics and Ghosts.” In it he argued that what writers essentially do is to arrange words in different combinations until one of these clicks and becomes something interesting or creative. And he relates the idea to cybernetics. In the end, he rather strikingly concludes that writers are already writing machines…

Arranging words in a sequence that works…

Yes, words in a sequence. He said what a writer essentially does is to simply put one word after another in order to create an interesting or meaningful sequence.

About half a year ago, a company called OpenAI released a new program called GPT-2. It is a large-scale language model, and they released samples of how this text generator can imitate and create different forms of writing: news reports, magazine features and so on. It works like this: staff write a short opening paragraph and the machine takes over from there. Going by the samples they released, the writing is quite convincing and very organized. And the whole piece makes sense, which is rare for AI-generated writing.

It is being said that this is the most advanced writing program we have now. The writing proceeds by following the rule that every new word added to the piece should follow the logic and theme of what precedes it. I find this idea very strikingly similar to Calvino’s thesis.
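The word-by-word rule Dung describes, and Calvino’s image of the writer as a combinatorial machine, can be illustrated in miniature with a toy next-word predictor. This is only an illustrative sketch, not OpenAI’s actual model: it counts which word tends to follow which in a tiny sample text, then extends a prompt one word at a time, always choosing the most frequent continuation it has seen.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def continue_text(prompt, counts, max_words=5):
    """Extend the prompt one word at a time, always picking the
    most frequent follower of the last word seen in training."""
    words = prompt.split()
    for _ in range(max_words):
        followers = counts.get(words[-1])
        if not followers:
            break  # no known continuation for this word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# A hypothetical training corpus, chosen only for illustration.
corpus = "the writer puts one word after another and the writer reads the word"
model = train_bigrams(corpus)
print(continue_text("the writer", model, max_words=3))
```

Real systems like GPT-2 work over far richer statistical patterns than word-pair counts, but the principle is the same one Calvino anticipated: each new word is chosen to fit what precedes it.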

I think the quest of a writer, primarily, is to find the right word.

That’s it, the right word.

There’s a French expression to describe it: le mot juste. So that’s what machines are doing. And yet AI, while having achieved a certain felicity of form akin to stream of consciousness novels, is probably still quite far away from writing novels with well-defined structures that hold together and parts that add up to make a meaningful reading experience.

AIs writing meaningful novels might be a possibility, theoretically speaking.

Calvino reversed the way we look at the relationship between man and machines, saying it’s we human beings who have always been machines, very complicated ones. But can we create another machine that is nearly as complicated as we are? And is it necessary to do this?

You just mentioned a scenario where humans and Artificial General Intelligence work together to write a piece by dipping into an open corpus of previous writing. Do you think such a trend could catch on in the future?

I think the thing we need to do now is to push AI development in the direction of opening up (access to information) rather than imposing control.

The scientific community has this moral question to contend with: whether governments or some powerful organization will make use of technology to control people. I tend to believe, rather optimistically, that there is something in this new technology that is intrinsically against control or totalitarian aims.

That’s exactly how Ian McEwan’s new novel Machines Like Me sees it. In it robots self-destruct en masse to resist control. The book explores the idea of whether machines can have feelings. What’s your take on the subject?

I believe that if one day machines gain consciousness, they will have feelings too.

Can they be made to experience emotions as well?

Not sure, it depends on how we design the machine. If it’s only a basic design for computation I’m not sure if it can experience emotions. But it might be possible if the aim is to achieve something more than that. However, I do believe that in order to develop consciousness, machines must have a body, or some sort of hardware.

It could be of human shape or other shapes. I don’t believe what exists digitally, without a body, can become conscious of its own existence one day. Without a body it’s just computation and data and things like that. I think for consciousness you need to…

… have an element of physicality rather than existing in the virtual realm.

Yes, physicality — a physical or material existence, that’s basic to experience. Data is not experience, so I think consciousness is closely related with experience. If you do not have experience, there is no consciousness.

I believe if we can one day invent a machine that can feel and think, it needs to have a body. Maybe they can get out of the body into the web or somewhere, but at least part of the time they need experience as a physical entity in a particular space and time.

I have been thinking of this problem: how do we know that something is a table? How do machines identify tables: is it by analyzing millions of pictures and videos of tables? But if a machine had a body, like a child, for instance, then you would not need to feed it so many examples or definitions of what a table is. It could experience that a table is an object of a certain height with a relatively flat surface, no matter what shape it is, with four legs or one or none; it could be a block. And if this thing appears in a certain context, like in a room, a café or outdoors, and usually with something like chairs, then through experience, very soon, an AI will know what a table is and what it is not. So I think having a physical dimension can help a machine learn more efficiently.

What you just said also ties in with Plato’s notion of the ideal, celestial chair and the inadequacy of human language to describe it in words. Where do you think the introduction of AIs into this scenario, in which man is forever trying, and failing, to capture the true essence of things in words, can lead?

I think AI will teach us that there is no such truth -- that truth is a construction. Even a word is a construction. My view is that we do not need an ideal table or an idea of a table, we just need to exist and experience. If we have enough experience of a table, we know what a table is. We are not born with the idea of a table, it’s something we build up through experience.

I think if AIs are designed in a way that makes them learn through experience, they will have a completely different world view. This resonates with Martin Heidegger’s idea of “worlding” (Being and Time, 1927). The world is not something out there, objectively there, or somewhere ideally there, that we set off to find. The world is the thing that we live and create by our experience. And since we, human beings, get to learn things like this, I think AI could also be made to learn this way, and not by being fed information on some ideal things.

What you’re recommending will work only if AIs are sentient entities. I don’t think we have gotten there yet. AI can subjectively interpret things, make selections, but it’s still not able to feel…

In my novels I don’t imagine AIs as disembodied entities with thought and feelings. I always imagine them as having a human body in which the consciousness is connected to or replaced by an AI-like technology.

What are your thoughts on AI becoming a Frankenstein, turning against and destroying the human society that created it?

I don’t think such fears arise because of the presence of AIs, but rather because of the way they are being used. For example, take the replacement of human labor by AIs. Philosophers say that in the future many people will not just be jobless, but needless. They may not have values anymore because many of the things in our lives are being replaced by AIs and machines.

Also the polarization in society that we are seeing, much of it is triggered by manipulating people’s emotions and sense of judgment through the use of AI.

On the other hand, AI can help empower people, ordinary people — someone with a smartphone is connected to a wealth of information on the web. The polarization in society you were talking of might be difficult to eliminate but some day an empowered community of people might be able to put up a fight against this.

I don’t believe one day an AI with extraordinary powers will start controlling human society. At least for now AIs might look very powerful but they actually are not that powerful. They’re powerful in very narrow and limited ways. They’re good at playing chess or games, or doing certain kind of computations.

The number of AIs in the world far exceeds the human population. And that’s a number growing exponentially…

I think the critical point is: will there be a day when AI has self-consciousness and autonomy? Going back to Kant’s philosophy, he defines humanity as being free and having autonomy. So for AIs, as long as they don’t have consciousness and autonomy, no matter how powerful they are, they are going to be working for others. But if they gain consciousness, and autonomy, they will be able to think and decide for themselves and that will change things drastically.  Then it may not be such a bad thing, it could be a good thing.

Maybe AIs will turn out more perfect than human beings and will do a better job of ruling the world than we do. Who knows?

I’ve been thinking about whether it’s possible for science to give AIs a form and frame of mind that enables them to tell the difference between good and bad from experience. Also, how do you make an AI develop moral values?

De Kai, an AI developer and academic I interviewed recently, says AIs need to be exposed to good values.

Yeah, teach them something like Isaac Asimov’s Three Laws of Robotics: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A number of novels and films have been based on the idea of robots rebelling, refusing to stick to those rules.

So that’s why the idea of the Kantian ideal citizen is a good option -- you do not tell the AI what to do, but put good values in its frame of mind so that it will, through experience, learn how to formulate rules that are consistent with a moral standpoint. Of course that’s just a fantasy, but…

It might happen one day. I believe scientists work with a sense of moral duty to the future of humankind.

Yeah, I think that’s a very important part of AI development.

Do you think writers and artists like you have a moral duty to be responsible for the social impact of what you create?

Yes, in a broad sense. Of course not in the narrow sense of trying to promote something as right or good.

There is always a conflict between what I allow myself to write and what is allowed by society. As writers, we want to be free in our imagination, and not have to practice self-censorship. So the basic thing about literature is freedom of imagination, but at the same time we need to be aware that maybe there are some limits to this freedom.

I think writing, or any work of art, becomes what it is because of the limits. As we discussed earlier, words need to be arranged in a sequence that has some logic to it to qualify as literature, rather than being a random selection of experiences.

So it’s the same with AI, I think. What we need to work out probably is: how to allow AIs freedom, creativity, spontaneity, and at the same time make sure it knows how to build rules of conduct and learn to tell right from wrong. I don’t know if one day we will be able to invent an AI that is able to do that.

You have been active as a writer for maybe 25 years or so. As the years pass, do you find this whole business of finding the right words and arranging them in a sequence that has meaning and beauty and truth gets harder to do or easier?

Technically it is easier. I have been practicing my craft for so many years, it’s not difficult for me to formulate different ways of saying things. I usually do not have much difficulty in putting an idea or description in words.

But the more challenging aspect is in the way I imagine, create and represent something. I am getting more aware of the limits you mentioned just now. So that would be something I need to have second thoughts about quite often. There is a limit to how a writer might represent things, and there is also the power and right of a writer to do certain things. I think poetic license is not absolute.

Artists and designers are known to rework older, traditional forms of art through the application of AI. Your books, The History of the Adventures of Vivi and Vera, for example, contain many resonances of myths, history and classic texts as well. Is it the same in your new AI-themed books?

There is a legend about the French philosopher René Descartes carrying with him an automaton that looked exactly like his daughter, who had died when she was five years old. Of course the story is not true, but it is important because it illustrates Descartes’ view of the human body as a machine in which a detachable soul is attached to the pineal gland in the brain. I find this story very fascinating.

I think from mechanical bodies to AI is a continuous development.

I find Kant’s philosophy interesting because it’s all about the formal aspects of the human mind, and I think this helps us to think about what consciousness is. Why do we have this ability to know things, be conscious of ourselves, and interact with the outside world and shape it? I find these ideas very relatable to the development of new technologies.

Interviewed by Chitralekha Basu

William Chang contributed to the story
