Former Privacy Commissioner Stephen Wong (right) attends the Straight Talk show on TVB, April 18, 2023. (PROVIDED TO CHINA DAILY)
Former Privacy Commissioner Stephen Wong is on the show this week.
He says the development of artificial intelligence and ChatGPT is closely related to personal data, because "within the set of data on which AI works and ChatGPT works, there must be human experience, human data; human data must include personal data. So, the whole collection, the whole bunch of personal data protection principles will come in".
Check out the full transcript of TVB’s Straight Talk host Dr Eugene Chan’s interview with Stephen Wong:
Chan: Good evening! Welcome to Straight Talk. Our guest this evening is Stephen Wong Kai-yi. Wong is a barrister by profession and has extensive experience in the field of data protection and privacy. He is the former co-chair of the Ethics and Data Protection in AI Working Group of the Global Privacy Assembly, where he played a pivotal role in shaping the international discourse on data ethics and governance. He was our Privacy Commissioner for Personal Data, and held various posts in the Department of Justice, including Senior Crown Counsel and Deputy Solicitor General. This evening, we will be discussing the very hot topic of AI and ChatGPT. Welcome, Stephen!
Wong: Good evening, Eugene!
Chan: Stephen, thank you for coming. And before we dive into this very hot topic that I've just mentioned, maybe you can share with the viewers your background in AI, and all the experience you have had that makes you a specialist in this area.
Wong: So, when I was the Privacy Commissioner in Hong Kong, I had the privilege of working with a collection of data privacy commissioners around the world. In those days, that was about six or seven years ago, we put our heads together and tried to work out, you know, certain ethical standards for artificial intelligence and the use of AI, especially in data protection. And at that time, we didn't have much law about data protection in the field of AI.
Chan: Stephen, you were a co-chair at the Global Privacy Assembly. I mean, we are not very familiar with this organization, so maybe you can share with the viewers what significance this assembly actually has.
Wong: The Global Privacy Assembly, otherwise known as the GPA, is a collection of international privacy commissioners, formerly known as the International Conference of Data Protection and Privacy Commissioners. About 120 members, in other words regions and jurisdictions, have joined this assembly. The purpose of the assembly is, amongst other things, to discuss various issues around the world and also to work on certain topics. Ethics in AI was one of the topics that we were working on in those days.
Chan: I see. We often hear the words AI, artificial intelligence. Maybe you can tell us briefly what exactly artificial intelligence is? And I'm sure many people have been reading in newspapers about this ChatGPT as well. I mean, what is the relationship between them?
Wong: Yeah, I think we must all have heard of AI, artificial intelligence. You may think that it is a brain, but not a real one. It's an artificial one, a good one and a powerful one. In technical terms, it is a computerized ability. You may call it a digital mind.
Chan: Right.
Wong: And this mind, this digital mind, can perform tasks associated with human intelligence. That's why it is known as artificial intelligence. It depends on a data set, including human data, including the personal data of human beings. And what is most important of all is that it is associated with learning from experience. It is a matter of machine learning, and algorithms.
Chan: So, how long have we had this technology?
Wong: I would say in the region of 40 years.
Chan: Wow.
Wong: We have had this technology and its applications for that long. But of course, recently people have been talking about nothing other than ChatGPT. And that makes, you know, the entire idea of AI and ChatGPT a topical issue.
Chan: So, what is ChatGPT?
Wong: ChatGPT is an application driven by AI, artificial intelligence. Of course, you know, when you have the technology of AI, you have uses of it. And this is a new use, powered by AI. And the purpose is to allow human-like conversations with a chatbot. So, another form of robot, allowing conversations and dialogue between you and the chatbot. And this ChatGPT is known as a Generative Pre-trained Transformer. That's why it is called GPT.
Chan: Right. So, Generative Pre-trained Transformer.
Wong: …allowing human-like conversations, as I said, but it runs on a large language model, which is an architecture that could help you answer questions, assist in writing essays, compose songs, create websites. It is amazing. People will simply say that it is immeasurably good.
Chan: Right. Someone said it could potentially revolutionize the way we live and work. So, what are some of the benefits, from your angle? If we use ChatGPT or these types of specialized AI systems, what can we use them for to make our life and work better?
Wong: Yes. Generally, it helps with automation, enhances productivity, increases efficiency, makes decision-making easier, and solves complex problems; it helps economic development. Of course, you know, at the end of the day, you feel that you have a smarter, happier, healthier, trendier, or, you know, more dynamic lifestyle.
Chan: Right. I've also been reading the news; they said this ChatGPT can actually pass medical exams…
Wong: Yes.
Chan: … a licensing exam in the US. And you can actually talk to this doctor ChatGPT as your doctor. How accurate would that service be, from your point of view?
Wong: One thing I am sure of is that AI is increasingly employed in the field of medicine. But another example in the United States shows that if you ask ChatGPT to answer a model examination, it scores only 42 percent. It fails the exam. Another example shows that it provides plausible-sounding but incorrect, or sometimes nonsensical, answers, but with an authoritative tone. That is what is attractive about ChatGPT, because it writes beautifully.
Chan: So, it may sound beautiful…
Wong: But not correct.
Chan: …but it may be false, or misleading.
Wong: Yeah, we call it hallucination.
Chan: I see.
Wong: Sometimes it is known as approximation in grammatical text.
Chan: Right. So, we just talked about medicine; how about in the field of law, where you are the specialist? I mean, you have a lot of cases to refer to, a lot of references to look at, and your paralegals do your background work. Can you rely on ChatGPT to do the search for you?
Wong: I tried it. But I must say that I cannot rely on it. One, because it only relies on the data set that it has. Two, some of the information might have been misfed, some of the information might be incorrect, or might be imagined. So, I don't find it very helpful.
Chan: Right. So, in what areas do you think it is going to be most effective? I mean, for medicine, just now you said it only scored 40-odd percent, and for law, it may not be 100 percent, depending on the data that was input before. So, in what areas do you think we can use this with more confidence?
Wong: Well, perhaps, you know, with scientific developments, you don't need to go through certain research yourself. But one observation was recently made by Elon Musk, you know, a co-founder of OpenAI, the company behind ChatGPT. One of his concluding observations was that there were profound risks of all kinds to society and humanity. This powerful digital mind may be getting out of control. Even dangerous.
Chan: Right.
Wong: So, even to the extent that not even its creators can understand or control it.
Chan: Right. Yes, I was also looking at a survey by Stanford University: more than one-third of the researchers believe AI could lead to what we call a nuclear-level catastrophe. Do you agree?
Wong: Yes. This is one of the observations made by many.
Chan: So, you agree?
Wong: I agree. I agree. This is something that we should be very careful about. We should be fully alert to its dangers.
Chan: The title of the show tonight is: "Is relying on artificial intelligence or ChatGPT a double-edged sword?" Would you say that it is double-edged, or that we shouldn't use it at all, since you are worried?
Wong: Double-edged? Yes, probably because of the special characteristics or features of this technology. For example, you have to identify certain information first and then apply this information to the new data which is required for decision-making. So, incorrect, biased, or discriminatory information may, let's say, perpetuate on an extensive scale, permeating the data set and, in turn, the artificial intelligence systems. That is very risky, because we don't know much about this operation, the working of the algorithm; and it is also about probabilities. So, the probabilistic nature of AI may be very risky indeed.
Chan: Right. Stephen let's take a break now, but do stay with us, we will be right back.
Straight Talk's Eugene Chan (left) interviews Former Privacy Commissioner Stephen Wong on the Straight Talk show on TVB, April 18, 2023. (PROVIDED TO CHINA DAILY)
Chan: Welcome back. We have been talking with Mr. Stephen Wong about ChatGPT and AI, and their wide range of applications. So, Stephen, I think the viewers, after the first part of the show, would start to realize that although ChatGPT sounds, as you said, amazing in terms of applications that can help many people in different areas, it actually raises a lot of the concerns you have brought up. So, maybe we can reiterate or re-emphasize the areas we have to be careful about. For example, from your legal work, you said it is not very accurate. So, maybe you can share more with the viewers.
Wong: Yes, apart from being inaccurate, it may be discriminatory, there may be bias, and there are so many uncertainties about the operation of the algorithms involved. There might be misfed information, and the information might also decay over a period of time. For example, information from before COVID-19 is still there, so you might get a wrong or incorrect result if you conduct certain research on related topics. And there are other risks, like security risks, data security risks, and risks to personal data protection, because after collection, how is the data going to be used? Even by the machine, it is not clear; it is not transparent enough to consumers, to the personal data owners. And also mankind, civilization, being human, okay? These sorts of considerations may not exist in the algorithms working in the AI system. So, these are the fears, the risks, that we should be concerned about.
Chan: Right. Stephen, if you look at the past in the world, we have the invention of machines and computers that have changed our life dramatically. And now with AI, whereby, of course, we have to be very careful, a lot of people are using it. And some people may say “all right, since this ChatGPT is so powerful, maybe we can start employing fewer staff because you can just ask this question and they give you a very good answer within seconds”. So, what are your thoughts on that? I mean this is going to, as I said earlier, revolutionize the whole world altogether.
Wong: Yes.
Chan: So, is it… are we heading the right direction?
Wong: Well, that is one of the major concerns expressed by dignitaries, including Elon Musk: "should we automate away all the jobs?" That is what he meant. So, should we lose all the jobs, human jobs, jobs done by human beings, because we have automation now, because we have the machine helping us? This is one issue that he talks about very much. Another issue is, I think, shall we let the machine take over our minds, our intellectual work? For example, composing a song, writing an essay. And should we let the machine flood our information channels, which might be full of fake news, misinformation, or biased news? And then the other thing is, should we eventually allow a non-human brain or mind to take over, controlling our work and our lifestyle? Eventually it may outnumber, outsmart, or replace human efforts.
Chan: Right. Stephen, so, you have brought up many areas that one has to be concerned about. But I am sure, like all things, people will still use ChatGPT for their work.
Wong: Indeed.
Chan: And… yes, I think you have made it very clear that we have to be very careful.
Wong: We’ve got to live with it.
Chan: Yes, so, is there anything we can do to mitigate the risk?
Wong: Indeed. You know, in Europe, several countries, starting with Italy, have banned the application of ChatGPT until the data used to train the machine complies with the regulations, notably personal data protection regulations such as the GDPR.
Chan: Why is this about personal data?
Wong: Because within the set of data on which AI works and ChatGPT works, there must be human experience, human data; human data must include personal data. So, the whole collection, the whole bunch of personal data protection principles will come in.
Chan: So, Stephen, I mean, you have been the Privacy Commissioner. In Hong Kong we have a lot of regulations. So, should this new area of artificial intelligence, or specifically ChatGPT, be regulated, since, as you mentioned, there are a lot of ethical aspects to this AI and ChatGPT?
Wong: Certainly the authorities are working very hard to try to contain the problem by introducing new legislation or regulations. But as you would agree, the law will always lag behind innovation and technological development. So, at the same time, the data protection authorities are working, as we did a few years ago, on standards other than legal standards, known as ethical standards. The core principles of ethical standards include being fair, being transparent, and being accountable to all the stakeholders. And it is often said that data protection must be integrated into, and obeyed within, the legal and ethical framework.
Chan: Right. Stephen, I also read that Italy has already temporarily banned ChatGPT, and the UK has published some regulations or recommendations a few weeks ago.
Wong: Indeed.
Chan: So, how can we borrow from them, or should we set up our own system in Hong Kong in that regard?
Wong: We have already made recommendations on the use of AI; you know, the Privacy Commissioner has already published some guidelines. These are in line with guidelines around the world, including those in the Chinese mainland. So, we are working on ethical standards, promoting compliance with ethical standards. Ethical standards simply mean doing the things that we think are right, that we ought to do.
Chan: Right. Stephen, would you also be concerned that there are people with malicious intent who deliberately feed wrong information into the system and really create a lot of chaos in our systems?
Wong: There are examples like this. For example, writing malware, phishing emails, fake information…
Chan: So, it is already happening?
Wong: Yes. For example, political manipulation. If you ask ChatGPT to say something about a president of a certain country, you may come up with a story that says something nice about one president, but something bad about another. So, this is what has happened in the world.
Chan: Another area that one must not overlook is unequal access to technology.
Wong: Yes.
Chan: Because there are obviously lower-income groups, or even those in rural areas, who may not have access to this sort of advanced technology, and it will put them in a very disadvantaged position compared with those who have it. So, are you also concerned in terms of this equality?
Wong: Yes, this is one of the core principles of the ethical standards: equality, being transparent, being easily understood by the users and the creators of, for example, the software, the program, the application. There should be no bias, no discrimination, on the basis of economic conditions, wealth status, and so on. So, this is the basic principle, as data protection is one of the fundamental human rights.
Chan: Right. Stephen, last month Elon Musk, whom you just mentioned, Apple co-founder Steve Wozniak, and other AI experts were amongst 1,300 people who signed an open letter calling for a 6-month pause on developing AI systems beyond the level of GPT-4, which is where we are right now. It said development should pause until we are confident that the effects will be positive and the risks will be manageable. The letter now has more than 13,000 signatures. Would you sign it?
Wong: Yes, I will certainly sign it, because the risks are real. Real risks, and we have to address them, we have to mitigate those risks, before we can really enjoy a smarter, happier lifestyle.
Chan: Right. Do you think people in the world, or maybe in Hong Kong, are unaware of this altogether?
Wong: Yes, I think… well, I mean, privacy protection in Hong Kong is a slow starter, I would say. So, people will be interested to read news about it, and they will be interested in thinking more about it. But as for taking certain actions, including a campaign, I have my doubts. But you know, this is something that the regulator and also business enterprises should work on.
Chan: Right. Stephen, before meeting you today, I did try to use ChatGPT myself, and tried to get a script of our conversation out of it. And it is immensely convenient, it is very fast. But after listening to you, one has to be very careful, because if you base a decision on some biased information that you are not aware of, the consequences could be disastrous. So, maybe you can tell the viewers in what areas one must not use ChatGPT, and in what areas we can use it?
Wong: I think for expressions of your own human feelings, about your personal experience, you should avoid using ChatGPT. But for general information collection, for research purposes, perhaps it could be a starter, a starting point for you to get some pointers. But you know, you can't rely on this application to pass exams or to win a trial.
Chan: But do you think, in time, when more regulations are laid down and information is more transparent, all the risks will be mitigated and we can really use it? Do you see that day coming?
Wong: It depends, it depends, because we don't really understand, and scientists agree and accept that there might be no way to understand, how the machine works, how the algorithm works, how the machine learns from the experience of human beings.
Chan: So, you will say at this stage, you will be very cautious?
Wong: Not until we have convinced the majority of stakeholders about the risks, and about the mitigating work that we should put our shoulders into. So, I think before that, yes, caution, as is the intention of the 6-month pause.
Chan: Right, okay. Thank you, Stephen, for opening up this new world of AI and ChatGPT to us. They are clearly powerful tools with immense potential, but need to be approached carefully and responsibly to avoid unintended consequences. Thank you for watching and see you next time.