China Daily

Artificial intelligence is transforming people's lives but is not a panacea
By Quentin Parker
Tuesday, February 08, 2022, 00:47

We hear a lot about artificial intelligence these days. Indeed, the received wisdom among those who know best about such matters is that it will influence almost every aspect of human activity in some form and affect us all in untold ways. It is driving emerging technologies, with impacts on big data and the internet of things, and is often mentioned in the same breath as robotics and automation. We are told it will completely transform our lives and society. It is coming, and it will increasingly determine how we interact with our fast-changing world.

Interest in AI is of course already high in the Hong Kong Special Administrative Region, with all our top universities engaged in AI research. There have been strong moves at my own institution, the University of Hong Kong, with the recent establishment of the Musketeers Foundation Institute of Data Science. It has three key research pillars: Explainable AI and Human-Machine Interplays; Fundamental Data Science; and Smart Society. These will shape the research directions of the new institute. Of course, with Hong Kong also being a global financial hub, AI applications in fintech are highly significant.

However, are we really prepared? Are we aware of the threats and dangers that AI may pose, as well as the benefits it may bring? Are we right to fear it or to favor it? These are serious questions deserving serious thought. Indeed, such issues have for years been exercising the minds of philosophers, ethicists, politicians, scientists, technologists and the business community, even conspiracy theorists and of course the general public, at varying levels of interest and engagement.

The path forward is not clear, and each nation-state must take into consideration its own policies, foci, capacity and interests in this domain, making control and regulation difficult. The big players will prevail, as they nearly always do.

In November 2021, the World Economic Forum set out six positive AI visions, based on conversations with over 150 AI experts, to explore the challenges that AI may pose for humanity. There was no consensus on what the future may bring, or even on what timescale AI might outperform humans in every activity in which we might compete. However, some positive outcomes were explored in terms of relieving mankind of mundane drudgery and dangerous activities, allowing the human spirit to soar and flourish. One interesting suggestion was that AI should not be allowed to drive excessive automation, where nearly all things are done by AI robots and machines, but should instead be developed along more human-centric lines. This would go in tandem with more flexible labor markets and the rising, more broadly shared economic prosperity that AI might enable.

The risks posed by AI have also been intensely debated. They include the creation of deepfakes (already happening), job losses (we get replaced), volatile markets (entire industries made obsolete overnight), automated weapons (terrifying, indiscriminate, lethal drones), loss of privacy (omnipresent facial-recognition CCTV) and the devaluing of human creativity.

Whatever the level of current debate for and against AI, its unfettered progress continues. China is seen by many as being at the forefront of this coming AI revolution. Indeed, Nikkei Asia reported in August 2021 that China had exceeded the US for the first time in the number of academic articles on AI cited by other researchers in 2020. Clearly, great strides are being made, but to what end?

In 2017, I attended a TEDx conference in Hong Kong that had a big focus on AI, with talks such as “Can AI be trusted?”; later TEDx talks have included “The paradox of AI Ethics”. I was motivated to pose a question to the speaker: “Will AI be slaves to humans, or will humans become slaves to AI?”

In popular culture, AI is sometimes portrayed negatively, as in The Terminator (think actor Arnold Schwarzenegger) and The Matrix movie franchises, where humans become slaves or are hunted to extinction by all-powerful AI machines. The counterpoint was Steven Spielberg’s 2001 movie A.I. (I was moved to tears by its humanity), which provided a much more benign portrayal of AI but still left it wanting.

I have personal concerns too, relating to the TEDx question I posed back in 2017. I am worried about the impact AI personal assistants may have on our lives and decision-making. I can foresee a potential dumbing down and ossifying of human thought and natural development through excessive use of AI, where it starts giving answers before you can even ask the questions: “Time to get up, time to brush your teeth, this way not that way, turn left, turn right, go straight,” and so on, while we follow blindly. In this scenario, the more impressionable among us would be on the way to becoming unthinking slaves. Many of us are already in the magnetic pull of social media’s instant gratification, where attention spans seem to be getting ever shorter and critical thinking is rarely exercised. Some have effectively thrown intelligent decision-making to the wind, replacing it with Google, Baidu or social media platforms that have been spouting QAnon, stolen US presidential election and anti-vaxxer idiocy.

AI can be manipulated to generate malign conspiracy-theory nonsense; it can also provide complete direction for our lives if we let it. Human societies do not run on clear-cut standards of true or false, black or white alone, but also on shades of gray and nuance, flexibility and adaptability: the oil in the machine that keeps things ticking over; the fuzziness in life that helps us avoid conflict; the application of common sense; and the irreplaceable human wisdom to navigate between individual eccentricities and bring about harmony. So, despite its labor-saving possibilities, leaving everything to AI is not for me. Hence, I believe societies and governments should steer clear of a blanket adoption of AI and remain ever vigilant when it comes to AI’s inherent risks.

There is a clear need for regulation and for global, enforceable standards that provide us with the power to intervene where necessary and protect societies everywhere. These need to come sooner rather than later. Here, I believe China’s leadership in AI gives it both the opportunity and the responsibility to take a global leadership role in this domain, along with Europe, Japan and the US, for the benefit and safety of all mankind.

What if it all goes wrong?

If true AI does eventually emerge that can access the IoT, what might that even mean? What makes us human is our ability to empathize, to exercise compassion and mercy, to tolerate faults, to love, to create, to imagine, but also to fight and to be cruel and uncaring. None of this may apply to AI in its pure form unless we try to build it in. But even if we do, a superior machine intelligence may still dispense with such human “qualities”.

If a full AI were to evaluate the state of our planet today, what would it think? What extreme “solutions” might it recommend in the face of our population explosion, poverty, inequality, war, famine, pandemics and the massive climate change threatening the future of us all? It is entirely possible that the AI “solution” might seem worse to us than the problems it attempts to resolve, given that AI may lack the imperfect human qualities that are difficult to quantify.

I would urge restraint when it comes to our adoption of AI, and we should be careful what we wish for. We might not like what AI might do if it had the power (that we gave it) to intervene in human affairs.

The author is a professor in the Faculty of Science at the University of Hong Kong and the director of its Laboratory for Space Research.

The views do not necessarily reflect those of China Daily.
