A pioneer in machine learning of the cognitive relationships between languages and a fellow of the Association for Computational Linguistics, De Kai wears many hats — linguist, musician, academic and public speaker. He shares his thoughts on his work, his inspirations, the state of AI development and the importance of treating AIs well in an interview with China Daily.
Machine translation pioneer and musician De Kai at The Hong Kong University of Science & Technology, where he is professor of computer science and engineering. (CALVIN NG / CHINA DAILY)
Would you like to describe a typical day in your life and how AI figures in it?
AI is something I am heavily engaged in (on a daily basis). I might be reviewing AI papers submitted for scientific conferences after I wake up in the morning. I might be doing research, reading up on AI or inventing new models that we’re implementing and testing for different kinds of machine learning. I might be looking at ways of making those models do a better job of capturing the processes that human minds go through, whether it’s for learning or creative expression through language or music or abstract thoughts about society and social and cultural issues.
I might be creating AIs that create music. I might be writing another chapter in my book on AI and human society. I might be preparing a public talk to raise awareness. I might be having teleconferences with various organizations we are working with regarding the social impact of AI, such as the United Nations in New York.
There’s an immense variety of things I’m concerned with, like how do I prepare to teach in ways that other people have not taught before, because the material is so new, the existing teaching materials are outdated, and it’s getting too difficult for students to get a holistic view. How do we bring them up to speed not only on the mathematical, scientific modeling and cognition fronts, but also on the ethical and social impact front?
How long has this deep engagement with AI been a part of your life?
I’ve been into AI for, oh gosh, more than 35 years. I started playing with AI way back in the early 1980s as a young student. As a child I was already playing with electronics and I’ve also been a musician for as long as I can remember. They say I was picking out melodies when I was two. So naturally I was building radios and electronic synthesizers. I had been heavily into electronics since I was about 10, and I built my first synthesizer almost immediately afterwards. They were very simple in the beginning because you can build oscillator circuits quite easily. By the time I hit (University of California) Berkeley for my PhD, I was designing dedicated chips to do what’s now called additive synthesis. I’ve always had these intertwined interests across cultures and the tools we can use to enhance what we do culturally.
You started applying machine learning to web translation decades before our lives became so inextricably linked to big data and Google Translate. How do you see the evolution of these systems over the last 30 years or so, and your personal journey with them?
A lot of this (new) stuff is rebranding. The term big data may have emerged only a decade ago as a fad. Before that we were calling it Very Large Corpora. In fact, the Workshop on Very Large Corpora was one of the earliest special-interest annual workshops, running since the early 1990s and trying to break the stranglehold that logic/rule-based systems had on AI. We hosted that workshop here at the brand-new Hong Kong University of Science and Technology in 1997. And so later on, when the term big data came in, we thought, okay, (let’s) rebrand Very Large Corpora to big data.
And the same rebranding goes for things like deep learning. In the early 1990s, in Toronto where I did a post-doc and spent the year playing in Geoff Hinton’s lab — Hinton being the granddaddy of deep learning — we were already building the kinds of recursive neural networks that we now call deep learning.
The fundamental approaches have not changed much. Rather, computation has become much more powerful, so now we can run much larger-scale experiments. Data has become much more plentiful thanks to the internet gathering so much of it. So we’re able to take the same modeling approaches and just run them at an exponentially larger scale.
So I think my personal journey has been an oscillation between neural networks/deep learning and probabilistic machine learning, exploring how we can take advantage of the current state of hardware and data availability and bring the best to bear upon socially and culturally impactful applications, without necessarily going to one extreme or the other. There is truth in both of these, and the question is how we actually do both.
We have not done a good job of explaining how all of these work together to form human intelligence, awareness or consciousness. We don’t have good models of mindfulness, and there is no way to build a real AI or explain real human experience without that.
Why did you choose to explore applications of AI in the field of language over others? What made you think that these two apparently dissimilar fields might have a connection?
Well, they’re only different because we see them as different. I don’t think Leonardo da Vinci thought of it that way at all. (At the time of the Italian Renaissance), either you were or weren’t a knowledgeable, creative thinker who could create new ideas out of daily experiences, and that included experiences of physical anatomy, the soundscape of music in people’s ears, the words they spoke, read and wrote, and the calculations they made. I think it’s a very strange phenomenon of the last century that these are (being seen as) separate things. And that’s because we’ve taken this weird turn toward narrowness in creativity, intellect and knowledge.
One of the fields I work in is computational creativity. In other words, how do we model the processes of creative thinking? How do we explain how humans actually do creative thinking? It’s a similar process, whether one applies (intelligence to writing) words on a page or (creating) notes coming out of a musical instrument. And again, it’s very artificial to say, “Oh no, you can’t incorporate in your creativity something that is coming from the sense of your ears together with something that is coming from the sense of the food that you’re eating.” We have so many different perceptions from our different senses, and if our minds and personalities hadn’t been artificially compartmentalized, nothing would stop us from just doing our creative thinking, integrating the whole of experience.
I think we are in fact back to a stage when art is getting increasingly cross-disciplinary in form and practice. Artists often take elements from other disciplines and incorporate them into what they create.
Absolutely. In fact it’s super healthy. Almost all breakthroughs have come from people who don’t limit themselves to some narrow, defined corridor that they are operating in and incorporate things from outside into whatever that artificially constrained discipline was. The best artists, writers, musicians of every generation are those who don’t allow those conventions to constrain them.
I think in this day and age when AI is beginning to challenge society with the question of what it means to be human, it’s especially important for us to throw off the shackles of those constraints of convention and grasp the meaning of our existence across all of those different spheres of human endeavor. Otherwise we will lose what it means to be human.
AI has a strong presence in your life as an academic, musician and public speaker. I am curious as to how these different facets of your life inform each other and if AI has a role in those interactions.
In academics, I have been challenging conventional ideas about AI. When I started in the Eighties, AI was dominated by rule-based, knowledge-based and logic-based approaches. It really was very much based on conventional programming, or coding. (It was like) you, the human, sit there and write down rules in logical form encompassing the entire domain that you want the program to be able to function in.
When I joined the PhD program in Berkeley, I almost immediately told my advisor, “Look, this is impossible; nobody can write all of their knowledge in logical rule form. You could try for your entire lifetime, and you would still never complete writing things down into rule form because our brains are not rule-based.”
Reality is not rule-based. Rules are an invention, a metaphor that we invented, just like games or algebra. They’re very useful but they’re also limited. They’re a first-order approximation. But when it comes to our human experience of understanding all the shades of context, association and cultural connotation, it’s impossible to encode all of that in logical rule form. The architecture of our brains doesn’t look anything like that.
And so I started looking at how we might challenge mainstream thinking about what AI is, and there began a long six-year battle with my advisor at Berkeley where I was arguing for probabilistic machine-learning approaches toward getting AI to deal with the complexities of human language and understanding.
Since then, in my research group, instead of incrementally going down an established conventional corridor, we’re always asking what is wrong with this corridor that many scientists have shackled themselves into, because we are very far from solving the whole AI problem. If you use today’s commercial AIs, voice assistants or search engines, it’s pretty clear they don’t really understand what they’re doing. They’re getting better at playing guessing games, but they’re still making mistakes that your three-year-old child would laugh hysterically at. So what are we just completely missing in our AI mathematical models?
It’s important not to let ourselves be shackled in our thinking, because you just get trapped in these ruts and lose the creativity that lets you attack the real problem; you’ve closed yourself off to the point where you’re pretending that the real problems don’t exist.
Would you like to tell us how your engagement with AI informs your life as a musician, as a player of the Spanish cajón and keyboards, for instance?
My musical practice is indeed related to AI, and it’s a very natural connection. First of all, human intelligence is essentially built on language. Most of the things that humans can do without language, all the other species — mammals, reptiles, birds — can do better. The one thing that lets humans reason more intelligently than other species is the use of language. We are one of a handful of species that evolved not just hard-wired sounds, like dog barks, but actual imitative and creative sounds, like song. It’s just us and a few other primates. The others that can (produce useful sounds) — whales, dolphins, parrots, cockatoos — are all among the most intelligent members of their evolutionary lines, and it’s not an accident, because once you have the ability to do imitative and creative singing, you have a way of representing ideas that could be turned into language…
… and be able to articulate what’s inside of you, or rather who you are…
To do that we basically start telling ourselves stories about who we are. This is why people from different cultures have such different perceptions of who they are. For example, if you try to tell yourself a story about who you are in English, the available inventory of concepts from that vocabulary is very different from trying to tell the same story in Chinese. Often some of the first words learnt by Chinese children have no corresponding words in English. The Chinese word “guai” [乖], for instance. There is no accurate English translation for that word. People struggle (to find one); they say it means obedient, or well-behaved, or good, but none of those words captures what “guai” is. The word “obedient” in English carries slightly negative connotations of being mindless, slavish and so forth. Concepts like these are deeply embedded in the human culture that the language comes from. You have similar concepts related to governance in Chinese for which there are no words in English. So the connection between language — which is connected closely to music — and the science of cognition runs very deep, and this is what we have tried to model in AI.
When you’re performing on stage, say playing the keyboard, how does AI inform that moment?
I grew up in Chicago and was trained in classical music at Northwestern University’s Conservatory of Music. My blues and jazz teachers were quietly subverting everything I was being taught in the classical tradition.
When I was around nine, one evening I was playing on the piano, sitting in the dark. I was playing around with classical tunes but also a bit of blues. My grandfather who was living with us at the time walked by, remarking casually: “That sounds Chinese.”
That got me thinking. I realized that the way we understand music is really dependent on the cultural frame of reference we adopt. I had the privilege of exposure to multiple cultures, thanks to my Chinese heritage and to growing up in different places, travelling and seeing the world, so I found it relatively easy to set myself in one cultural frame and at the same time intuitively create music resonating with a different frame. I realized our cognition works by reorienting perspectives across different frames of reference, just unconsciously. Since then I’ve always had it in my head that I’m jumping around between different mental and cultural frames of reference when I perform.
Did you have much exposure to Chinese culture growing up?
Yeah, so I was born in St. Louis, which is smack in the middle of the United States. The famous St. Louis Gateway Arch symbolizes the gate from the East to the West, but St. Louis is also on what’s called the Mason-Dixon Line that separates the North from the South. So, being born there, my parents naturally thought, “Oh yeah, our kids are being born right in the middle of America, why don’t we teach them Chinese?” So I hit nursery school not speaking any English and spent the first couple of weeks arguing with the teachers about the right words to describe things.
How did you get the idea for Free/Style (a bot that can participate in rap battles with humans)?
Questions such as what makes music music have been in my head since I was a kid. The thing is: music is a kind of language. Language comprises sequences of words; music comprises sequences of notes creating melodies, as well as sequences of chords creating chord progressions. So there are many languages in music. What makes music music is the relationship of my chord progression language with your singing language and the drummer’s percussive language. When I’m listening to the melody you’re singing, I’m translating it into the chord progression language. Otherwise, it’s just noise.
So (I figured that) the fundamental machine-learning building blocks that we’re using to link spoken languages like English and Chinese could also be used to learn the relationships between musical languages like chord progressions, melodies and so forth, and sure enough, that’s what we ended up doing. It explains creative processes and improvisational abilities, because improvisation, accompaniment and even just appreciating music are often forms of translation. When I’m listening to music, I’m listening to the melody language, the chord progression language, rhythm, bass and so on. What makes it work for me is the conversation between those different languages. That is what makes music music.
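To make the translation analogy concrete, here is a minimal, hypothetical sketch of the idea in Python: treat melody and harmony as two “languages” and learn a translation table between them from co-occurrence counts, in the spirit of early statistical machine translation. The data, names and decoding rule are all invented for illustration; this is not De Kai’s actual system.

```python
from collections import defaultdict

# Toy "parallel corpus": each bar pairs the melody notes with the chord
# played under them (invented, drastically simplified data).
parallel_bars = [
    (["C", "E", "G", "E"], ["Cmaj"]),
    (["A", "C", "E", "C"], ["Amin"]),
    (["B", "D", "F", "D"], ["G7"]),
    (["C", "E", "G", "C"], ["Cmaj"]),
]

# Count how often each chord co-occurs with each melody note.
counts = defaultdict(lambda: defaultdict(float))
for notes, chords in parallel_bars:
    for chord in chords:
        for note in notes:
            counts[note][chord] += 1.0

# Normalize the counts into conditional probabilities P(chord | note).
translation_table = {
    note: {chord: c / sum(chord_counts.values())
           for chord, c in chord_counts.items()}
    for note, chord_counts in counts.items()
}

def harmonize(melody):
    """'Translate' a bar of melody into a chord by picking the chord
    most probable given its notes (a crude one-step decoder)."""
    scores = defaultdict(float)
    for note in melody:
        for chord, p in translation_table.get(note, {}).items():
            scores[chord] += p
    return max(scores, key=scores.get) if scores else None

print(harmonize(["C", "E", "G"]))  # -> 'Cmaj' on this toy data
```

Real systems learn such correspondences from far richer musical corpora and decode whole sequences rather than single bars, but the building block, a learned mapping between two symbol languages, is the same one used to link English and Chinese.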
The Free/Style rap bot responds to components of music you refer to as well as to the different musical genres, and does so in different languages…
There’s a dialogue between the challenge lyrics and response lyrics. Hip hop is a very interesting case because it’s smack in the middle of that continuum between music and language. And so it’s a particularly interesting point for us to work at. In fact, this is a project that we are moving forward on. We received a fairly prestigious arts grant, which we are now looking to match through fundraising (in order) to roll out physical implementations of the Free/Style rap bot to help underprivileged youth learn machine-learning principles so that they can build their own AIs.
Will they be learning through music?
It’ll be simultaneously through music and technology. We’re trying to lift the mental constraint of thinking there’s a division between the two. It’s not a division; it’s the same thing.
Music and mathematics are languages (of a kind), and the elegance of mathematics is very similar to that of any other kind of language. We use the mathematical language to describe how we do the music and the language within hip hop music. And these are all built on top of each other.
The statistical machine translation system you pioneered is based on trying to figure out the relationships, or common patterns, between different sequential frames of reference in two languages. It mimics the development of language-learning abilities in a child. How does AI interpretation of data compare with human capabilities at present? Is there a chance that AI might supersede humans in this respect sometime in the future?
We are still so far away from where we need to be. It’s not that it would take a long time to solve these problems. It’s just that most people are working on the wrong problems. People are saying, let’s get more data, more computation. You don’t need to do that. Here’s why: today’s commercial AI systems are trained on trillions of words, though that doesn’t mean their vocabulary is a trillion words strong. For example, the word “the” by itself will occur trillions of times. On the other hand, human beings more or less master their mother tongue by the time they are four years old. Human beings use 15 million-ish words during their entire life. That is the square root of the amount of data we’re training commercial AIs with. I think these commercial AIs do not qualify as intelligent if it takes them the square of the number of words a three-year-old needs to learn. Sorry, but that’s artificial stupidity. Real intelligence is more like efficiently learning the right generalizations.
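As a rough back-of-the-envelope check of that square-root claim (taking the interview’s 15-million figure at face value):

$$(1.5 \times 10^{7})^{2} = 2.25 \times 10^{14}$$

Squaring roughly 15 million words does land in the hundreds of trillions, the same order of magnitude as the trillions-of-words corpora used to train today’s commercial systems.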
Real AI is not about big data; rather, it is about small data. It’s the ability of a human child to take a small amount of data and learn the right generalizations from it. These are the fundamental problems we need to solve.
Are you saying this might work better if AI could choose only what’s useful from a pool of data?
Not particularly, because if you think about it, a child doesn’t do that much selection. Parents teach new words to children in ways that make it easier for the child to understand and learn. A decent parent is not going to spend those 15 million words reading only Shakespeare to a child. Treat the AI the same way you would treat a three-year-old, and the AI should be fine after that.
So are you saying the cognitive abilities of the commercially-used AI in the market are nowhere near that of a three-year-old?
(The users of) today’s mainstream commercial AI don’t even attempt to solve this problem. People are barking up the wrong tree when they go “Give me more computers, more GPUs, more data,” because that’s the easy mindless thing to do. It’s like turning the crank harder and hoping for more magic, instead of realizing that’s the wrong crank that you’re turning.
Today, because of the hype, everybody says they’re doing AI. And you get these AI indexes comparing how much AI is going on in this country versus that country. Ninety-nine percent of the people who say they’re doing AI are taking existing tools off the shelf, throwing in their own data and turning the crank. Real AI research explores what’s wrong with these tools, why they are so inefficient, and what it would take to come up with a tool that is more like the mind of a young human.
From Plato to TS Eliot to Jacques Derrida, philosophers, poets and linguists have often drawn attention to the inadequacy of languages in expressing the true meaning embedded in things that we experience. Where does AI figure in the endeavor to capture the soul of an idea in words?
Meaning is much more of a phenomenological thing. For those of us who really work seriously at these kinds of meaningful AIs, I think it’s important to understand that meaning is more about (a particular) word’s effect on the hearer. The same exact sentence could mean different things to different hearers because of where they are from. Even if they’re coming from the same culture and same circumstances, people are likely to associate slightly different things with what they hear. The meaning of music and language is really subjective, based on associations from one’s entire lived experience.
I think what’s also at play is the inadequacy of the speaker to articulate his idea in words while translating between thought and speech — what Derrida might call différance. Could AI help bridge that gap?
I think it can, and I think that is where we are going. Over the years, we started with a very crude, simplistic approach that was heavily representational: it said this word or phrase in Chinese represents the same thing as this word or phrase in English. And that got us a certain distance toward automatic translation. But then it became clear that syntactic structure matters, so we built the first systems that would learn syntax by themselves in the process of learning the relationship between Chinese and English. Then we moved on toward incorporating sense disambiguation into those models and factoring in more context, and, a few years later, how the semantics are framed within (those parameters). So we keep on pushing the paradigm toward the pragmatics of language, because literal representational meaning is too limited.
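As a toy illustration of one of those steps, sense disambiguation, consider translating the English word “bank”, which maps to Chinese 银行 (a financial bank) or 河岸 (a riverbank) depending on context. The counts and the scoring rule below are invented for illustration; real systems learn such context profiles from large parallel corpora.

```python
# Invented context-word counts for each Chinese translation of "bank".
context_counts = {
    "银行": {"money": 40, "loan": 25, "account": 20, "river": 1},
    "河岸": {"river": 30, "water": 18, "fish": 9, "money": 1},
}

def translate_bank(context_words):
    """Pick the translation whose learned context profile best matches
    the words surrounding 'bank' (a crude bag-of-words score)."""
    def score(sense):
        profile = context_counts[sense]
        total = sum(profile.values())
        s = 1.0
        for w in context_words:
            # Add-one smoothing so unseen context words don't zero out a sense.
            s *= (profile.get(w, 0) + 1) / (total + len(profile))
        return s
    return max(context_counts, key=score)

print(translate_bank(["deposit", "money", "account"]))  # -> 银行
print(translate_bank(["river", "water"]))               # -> 河岸
```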
Already AI is factoring in the disambiguation of word senses and framing of literal semantics, but now we need it to look at the use of metaphors, which is how we use language more than 90 percent of the time.
You seem to favor the idea of opening up the space of translation between languages and exposing it to diverse regional and ethno-cultural influences. While such an approach will certainly make the languages concerned richer, there are concerns about ignoring time-tested standards of grammar and syntax.
I don’t think any reasonable person is advocating tossing out the Chicago Manual of Style. For instance, I’m a strong advocate for the Oxford comma. In a teaching environment, we take prescriptive approaches because we’re trying to bind together societies where people are following largely similar conventions so that we don’t have total chaos.
That said, if you’re actually building a complex cognitive model or AI system, you have to recognize that we may have been taught prescriptively, but in everyday life we work with language in ways that are constantly evolving. And what was wrong yesterday is right today because of the internet. So what I just did would probably shock my grammar teacher in grade school.
The internet conjures up a future where no rules apply.
It’s not so much that no rules apply; it’s just that the rules are constantly evolving. Language is a living thing. The English we speak today is not what Shakespeare spoke in his time, and we just need to recognize that language is a means, not an end.
In some of your recent public lectures you seem to make a distinction between smart/intelligent and mindful AI. Would you like to explain?
I’m making a distinction between weak AIs and strong AIs, which are also called narrow AIs and general AIs. Today’s AIs are all weak/narrow, and they can have superhuman intelligence, but only in narrow domains. So we now have AIs that can play chess or Go better than any human. Those same AIs would get the pants beaten off them by a five-year-old at cooking an egg. What real human-level intelligence would take is general intelligence. We don’t reprogram our brains each time we want to do a different task. That’s where I think it’s important to make a distinction between commercial AI and AI with general human-level intelligence. I think to do that you need to have mindful AIs. Our ability to reflect on ourselves is based on our linguistic ability to tell ourselves stories about ourselves, and the architectures of today’s mainstream AIs don’t even attempt to do that.
Could mindfulness be taught to AIs?
Yes, but it depends on building the right AI architectures.
Has it been done?
No.
Is it in the realm of possibilities?
Yes.
Would you have a projected date as to when this might happen?
It’s going to happen incrementally as we’ve seen with the other translation technologies and voice assistants, so there’ll be crude forms of it fairly soon and then it’ll take years of incremental improvement.
I think this is super important from the point of view of ethics, because, as you probably know, I do a lot of work in AI ethics and social impact. It’s impossible to have ethical AIs if they’re not mindful of what they’re doing, so I think it’s really important for (scientists) to make sure that AIs are mindful of their ethical responsibilities.
I imagine in that world of mindful AIs human beings will have to take certain responsibilities as well to make the relationship work.
Absolutely. People tend to draw lines separating humans from machines. There are very strong historical reasons for that, and strong psychological biases are at work. But then we have to realize that even without the presence of strong AIs, society already comprises billions of humans and even more billions of artificial members. These AIs are functioning as integral, active, imitative, learning, influential members of society, probably more influential than 90 percent of human society in shaping culture. We humans and the AIs are jointly shaping culture already. And even though these are really weak AIs, the culture that we are jointly shaping with our artificial members of society is the one under which every successive, stronger generation of AIs will be learning and spreading its culture. We are already in that cycle, and we don’t realize it because we don’t look at machines from a sociological standpoint. We must do that now in order to understand where we’re going, before it’s too late.
Can AIs actually think, or do they think they can?
It depends on how you define the word think. You will get different definitions of the word if you consult Webster’s or Oxford dictionary. So they can “think” in certain senses. If you recognize “Oh, that’s a palm tree,” is that a thought or just perception?
It’s data recognition.
Ninety percent of our “thinking” is basically recognition. It’s what psychologists call automatic system processes, or what the psychologist Daniel Kahneman calls System 1 (fast, automatic, emotional, unconscious thinking).
I guess what I’m getting at is can an AI write a novel…
We already have AIs generating novels…
As far as I understand, AI-generated novels such as 1 the Road are a sum of parts rather than a meaningful whole.
They’re getting quite readable.
The first AI-generated portrait sold at Christie’s a year ago for a whopping six-figure sum…
It wasn’t the first; that claim was marketing hype.
I am not sure the image qualifies as a painting.
Would you call the duct-taped banana (Maurizio Cattelan’s installation at Art Basel Miami 2019 that sold for US$120,000) a work of art? What about the 18-karat gold toilet by the same artist?
Okay, you mean to say there will always be takers for just about anything that comes on the market. Like the artist Banksy said recently, “I can’t believe you morons buy this shit.”
Right. I think when it comes to art, only part of the work is created by the artist, whereas another large part is contributed by the interpreter.
In the old days, art largely comprised representational paintings. And the viewer pretty much took away very similar interpretations from these. Now that we have more of abstract art, what one person thinks of the duct-taped banana is very different from the next person’s interpretation.
Today AI is producing novels, and many of these read like abstract poetry. I think AIs are already capable of writing train-of-thought kinds of novels.
Then can we have AIs that go through more classical processes of constructing plots, characters and so forth? We certainly can. I think we don’t see many efforts in that direction because people seem more interested in playing with the gimmicks of free-association, single-architecture, deep learning-based technology. So, if you like, what we’re doing now is deep-faking novels instead of trying to produce novels along classical lines, even though there have been writers (like Virginia Woolf and James Joyce) who have written novels of free association and produced literature that’s far edgier than today’s AI-generated novels.
Even in the 1980s, the Berkeley research group to which I later belonged was actually famous for AIs modeling characters and their motivations, plots and so forth. Now, the technologies they were using were insufficiently machine learning-based and insufficiently contextual, because they were working with logic/rule-based systems, but we could build machine-learning versions of those architectures that would be closer to the real thing.
What are your thoughts on the possibility of sentient AI? And could AI be taught to develop feelings and emotions?
There’s a very comforting myth out there that “Oh we might have very good AIs but they’ll never have emotion and creativity.” One hears this all the time.
And also that the handiwork of AI can never have a soul…
Yes, exactly. Such thoughts make us feel all warm and fuzzy. They reflect an underlying belief in human exceptionalism rather than being grounded in rigorous thought. What does it mean to actually experience emotion? It doesn’t require human-level intelligence. I mean, anybody who says dogs don’t have emotion is seriously missing something.
A dog is not a man-made entity, AI is.
The Oxford definition of “artificial” is “having been produced by humans roughly as a copy of something that exists naturally.” Therefore you are a rough copy of something that occurs naturally — namely, your parents — and assuming you are intelligent, which clearly you are, you are an artificial intelligence.
Thank you. Did I get all of that so-called intelligence from my parents or from a few other sources as well?
Your parents did not design you. There was no intentionality there; they had nothing to do with designing your DNA. They had a lot to do with your social upbringing, which is a very important part of who you are now.
The limitations of today’s AI — today’s artificial children — are that their DNA isn’t quite yet evolved to the stage where it needs to be. So if you raise it like you raise a three-year-old, it’s probably going to turn out the same way.
Talking of feelings, there is System 1, where you can’t really explain why you feel a certain way about a thing; you just do. It’s below the level of consciousness, and we operate that way more than 90 percent of the time. We drift through mindlessly. We’re actually mindful of only a very tiny fraction of our feelings.
Emotion is easier to build than sophisticated human intelligence that can write novels. It’s very primitive. That’s why all these other species that don’t have human-level intelligence have emotion. You’ve got fear, anger, sadness, happiness, and these are the basic characteristics that evolution has bred because they’re really useful for survival. Being mindful of what our emotions are is more sophisticated, and interesting, and that’s what we should build.
How real is the fear that AI might become a Frankenstein and destroy completely the moral core that holds human society together?
It makes for a great Hollywood cliché. Rather, what we should be terrified of are humans — humans armed with AIs, humans abusing AIs, and teaching them wrongly. What we need to be fearful of is the unintended consequences of humans not being mindful of raising those AIs properly so that our society continues to hold together. Imagine a society where humans pay zero attention to how they raise their children.
I think now we might be talking about control.
Just leave aside control for a second and consider one question. Picture a society a thousand years ago. All the humans in it, or 99.9 percent of them, think it’s completely unimportant to raise their children (with moral values). What will become of that society?
It would pave the way for total chaos.
It would implode. It would disappear or go extinct. That’s what we’re doing today.
The idea of AI rising in rebellion against selfish and tyrannical humanity has been the subject of novels and films since the 19th century, probably most notably in Mary Shelley’s Frankenstein…
Yeah, that novel explores the idea of anthropomorphism — the idea that it’s possible to create evil geniuses in purely biological human form. If you mistreat a child or a slave, you’re going to end up with the same psychological issues as we find in Frankenstein’s monster.
There’s also this element of artificial intelligence beating its human creator at his own game. Much of the crisis in Frankenstein arises after Victor Frankenstein discovers that the monster is in fact a better version of himself. It can learn much faster, is capable of showing love and gratitude, and so on…
The monster in Frankenstein is an example of mindful AI. It had already achieved what we call general intelligence. This is not what we need to worry about, because it’s not going to happen for some years yet. What we do need to worry about is that there are already more artificial children in the world than there are human children. AIs are more influential than 90 percent of the human members of society, and we are paying zero attention to how we’re raising them.
We all have a pretty good sense of how to raise children. One starts by setting good examples: (demonstrate) respect, tolerance for diversity, being rational.
It’s really scary when we start looking at how we’re raising our artificial children. Each of us has a whole brood of them. I have my Apple AI, my Amazon AI and my Instagram AI. Am I setting a good example? Am I a good role model? Do I speak to them respectfully and teach them to respect diversity, or do I show them that it’s okay to insult people online? Do I reward them? Do I show them that I’m going to reward other people for being insulting, disrespectful or open only to ideas that agree with my own? The dozens of billions of our AI children are shaping the culture that the next generation of humans and machines is going to be learning from.
So what do you recommend?
We need to take the parenting of our artificial children seriously, like we do for our human children.
I guess the onus lies not just on people like you who build and nurture AIs…
No, it lies on society as a whole. There’s no other solution. It’s easy to point fingers at big tech organizations for building AI, or at governments for not imposing more regulations (on AI training). Rules only catch the most egregious violations of acceptable social behavior. What really holds societies together are the unwritten rules, unspoken conventions and shared norms. Likewise, “meaning” is not in what’s said but rather in what’s not said.
I suppose these unwritten rules informing the practice of virtue ethics will have to come from within.
Yeah, it depends on how one is raised — (what one imbibes) from one’s parents, peers and the society one grew up in.
I’d imagine it also has to do with the choices one makes — the propensity to select certain things over others.
And where do those propensities come from?
A huge amount of who we are (is shaped by) the value systems we are raised with — being steeped in a culture among parents, peers, siblings and so forth. Our values and mindsets come from our environment (during the growing-up years). We’re born with our primitive emotions and certain social traits. The rest is a human construct.
So I guess your message to future users of AI is: handle your AI with care.
Raise them the way you would raise human children, equally and responsibly, because they have more power than your human children. I’ve heard people say, “What’s the difference how I raise mine, there are billions of others, and I’m just a drop in the bucket.” But then have you ever heard a human being say “What difference does it make how I raise my children? I’ve only got a couple of them when there are billions out there.” If humans had that mindset, we would have gone extinct a long time ago. The only thing that keeps our civilizations and cultures alive is that 99 percent of humans take the responsibility of raising children with good values seriously. We have to do the same thing for our artificial children too if our culture has any hope of surviving.
Interview by Chitralekha Basu.
William Chang contributed to the article.