Published: 12:48, July 4, 2025
Will artificial intelligence outsmart humankind?
By Toby Walsh
The OpenAI logo is seen near a computer motherboard in this illustration taken on Jan 8, 2024. (PHOTO / REUTERS)

For a decade now, Sam Altman, the CEO and co-founder of OpenAI, one of the leading companies developing artificial intelligence, has been warning of the risks of AI.

There is, of course, a lot of hype around AI. And Altman has a lot to gain by talking up the capabilities of his company's technology. But you don't hear such existential warnings from leaders in other scientific fields that are changing our world.

Such worries go back a long time, to before the race to build AI even began. Today, some dismiss such existential worries as being too far off, and not worthy of much concern. After all, where are the self-driving cars that we've been long promised?


Andrew Ng, one of the most influential popularizers of AI, said at NVIDIA's GPU Technology Conference in 2015: "I don't work on preventing AI from turning evil for the same reason that I don't work on the problem of overpopulation on the planet Mars." Back in 2015, Ng had a point. AI was stuck in the laboratory, a long way from affecting most people's lives. It was still science fiction, not science fact. But that is no longer the case. AI is entering our lives. Even the long-promised self-driving cars are arriving: Tesla and SpaceX founder Elon Musk's robotaxis recently turned up in Austin, Texas, and began offering rides to the public.

Indeed, what surprises me most about the development of AI today is the speed and scale of change. Enormous resources are being poured into AI every day. We have never before made such massive bets on a single technology.

This unprecedented level of investment in AI is paying off. As a consequence, many people's timelines for artificial general intelligence (AGI), the point at which machines will match human intelligence, are rapidly shrinking. Musk has predicted AGI by 2026. Dario Amodei, CEO of OpenAI competitor Anthropic, has said that "we'll get there in 2026 or 2027", while NVIDIA CEO Jensen Huang put the date at 2029. These predictions are all very near for such a portentous event.

Of course, there are also some dissenting scientific voices. Yann LeCun, Meta's chief scientist, has argued that it will take several more decades for machines to exceed human intelligence. Another of my AI colleagues, Gary Marcus, said in 2024 that it will be "maybe 10 or 100 years from now".

But whether you're an optimist or a pessimist — indeed, I'm not even sure if it is optimistic or pessimistic to have a short AGI timeline — it seems scientifically plausible and therefore entirely reasonable to suppose AI will match and likely exceed human intelligence in the lifetime of our children. And we should probably entertain the idea that AGI might even happen in our own lifetime. That's an exciting but scary thought.

We have, of course, had hundreds of years to get used to the idea that machines could be better than us. In the past, it was only our brawn that was bettered. Machines could do more work than any person. But soon it will be our brains that are overtaken.

It may come as a surprise to hear that the first time a computer beat a reigning world champion was over four decades ago. In 1979, the world backgammon champion Luigi Villa was convincingly beaten 7-1 by Hans Berliner's BKG 9.8 program. In a cruel twist, Villa had been world champion for just one day before this defeat.

More recently, in 1997, the reigning world chess champion Garry Kasparov was narrowly beaten by IBM's Deep Blue computer. Discussing his loss, Kasparov described a glimpse of the future that awaits humankind: "I had played a lot of computers but had never experienced anything like this. I could feel — I could smell — a new kind of intelligence across the table. While I played through the rest of the game as best I could, I was lost; it played beautiful, flawless chess the rest of the way and won easily."

But human chess hasn't suffered from this machine dominance. Indeed, computer chess has improved the human game in several ways. Chess computers now provide professional coaching advice to human amateurs. And chess computers have opened up new avenues of play that we humans might never have considered. Our machine overlords have actually improved our game.


But there's a more subtle reason why, as an AI researcher for 40 years, I don't worry too much about AGI. It's simply that intelligent people overestimate the importance of intelligence. It's not intelligence that's the risk. It's power.

Power already rests with existing institutions, and AGI isn't going to overthrow those power structures. It will have to exist within them. Nor will AGI have a free shot at goal: my AGI is going to be competing against your AGI. On the other hand, AGI can supercharge our economy, dramatically improve healthcare, and transform education.

The future is therefore bright. The future is AGI.

The author is a professor of AI and chief scientist of the AI Institute at UNSW Sydney, and the author of five books on AI, which have been published in a dozen countries, including China. His most recent books are Faking It: Artificial Intelligence in a Human World and The Shortest History of AI: The Six Essential Ideas That Animate It.

The views don't necessarily reflect those of China Daily.