Fu Xiaolan. (PHOTO PROVIDED TO CHINA DAILY)
Society can benefit from the new wave of technical revolution being unleashed by artificial intelligence, but must make sure that it remains under the control of humans, an expert said on Thursday.
Fu Xiaolan, a fellow at the National Academy of Social Sciences in the United Kingdom, was speaking at Vision China, which was organized by China Daily and the Tianjin Municipal Party Committee's publicity department.
The convergence of rapid technological breakthroughs has created new models of value production and a new industrial revolution, said Fu, who is also the founding director of Oxford University's Technology and Management Center for Development.
"Artificial intelligence is one of the core technologies at the heart of this revolution," she said. "The impact of artificial intelligence on society includes opportunities and challenges."
Fu highlighted the "wonderful opportunities for development" — greater efficiency, improvement of work conditions and welfare.
She cited the use of AI doctors during the COVID-19 pandemic as an example, as well as the opportunity the technology offers developing countries to skip some stages of development.
She also warned of the challenges created by AI.
"AI can replace workers, including skilled workers, whose jobs are repetitive and routine," she said.
"It can also significantly increase income inequality between rich and poor — those who own the technology and those who don't — and between countries."
AI may also deepen the existing digital divide and favor countries that have the needed skills, she said, adding that there are also privacy and security concerns and that the algorithms may contain embedded bias, which could influence outcomes.
"While globally, AI opens digital windows of opportunity for developing countries, there is also the risk that the opportunity for developing countries to catch up might narrow," Fu said, referring to the rapid rise of AI, as well as the infrastructure needed to develop it.
These challenges have also been highlighted by governments, tech giants, institutes and experts, especially since the launch of ChatGPT.
In a white paper on regulating AI released in late March, the British government proposed adopting a comprehensive regulatory approach. Italy and Canada have also underscored the data security risks posed by ChatGPT.
Stressing the importance of ensuring AI is responsible and ethical if it is to be useful, Fu emphasized the need to build privacy, robustness, transparency and fairness into the tech, noting that those who develop and employ AI for business and other services should be held accountable.
Trust must be created around the technology so that people have the confidence to use AI for economic and welfare improvement, Fu said.
jiangchenglong@chinadaily.com.cn
Yan Hua. (PHOTO PROVIDED TO CHINA DAILY)
AI's potential use in medicine vast
By Wang Xiaoyu
Artificial intelligence can be harnessed to boost the efficiency of healthcare services and hospital management, devise tailored treatment plans and help address the shortage of doctors, a senior hospital manager said on Thursday.
Yan Hua, an ophthalmologist who is also Party chief of Tianjin Medical University, said that a wide range of AI tools such as speech recognition, computer vision, data collection and analysis can be deployed to empower the medical sector.
"AI can improve the efficiency of diagnosis and treatment and increase safety and precision in patient care," he said during a Vision China event organized by China Daily in Tianjin. "It can also prescribe personalized medicine based on the particular conditions and genomic information of patients to improve treatment outcomes."
Yan said that AI can also play a positive role in enhancing hospital management and alleviating the shortage of doctors. It can also help close gaps in healthcare coverage in less-developed regions.
"For instance, grassroots eye doctors might not be capable of detecting diabetic retinopathy accurately and might have no idea how to deliver treatment.
"But if communities are equipped with an AI device to help them screen for the condition, those showing signs of the disease can be diagnosed and transferred to better hospitals where they can be treated," he said.
Yan said that the accuracy rate of AI-powered diagnosis is nearly on par with that of experienced doctors, but AI greatly shortens the time it takes to generate a correct diagnosis.
"Also, for patients with very complicated eye diseases, AI can be used to help design and execute a surgery," he added.
Yan said that for patients with severe eye trauma who have lost the perception of light, it is difficult even for top doctors to determine whether a vitrectomy — surgery to remove the eye's vitreous humor, the gel-like substance behind the lens, and replace it with a substitute solution — would yield the desired results.
"We have therefore developed and upgraded an AI model that is fed the condition of patients before surgery, such as whether the retina is detached or the vision is impaired," he said.
"As a result of continuous training and testing, the system can generate an accurate prediction of the post-surgery level of vision."
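Yan did not give technical details, but the kind of model he describes can be pictured roughly as a classifier that maps pre-surgery findings to a predicted outcome. The sketch below is only an illustration of that idea; the feature names, training rows and the use of scikit-learn are assumptions for the example, not a description of the Tianjin team's actual system.

# Minimal sketch of a pre-surgery-to-outcome classifier (illustrative only).
# All feature names and data below are invented placeholders.
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-surgery features per patient:
# [retina_detached, light_perception_present, pressure_abnormal]
X_train = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
    [1, 0, 0],
    [0, 0, 1],
]
# Hypothetical label: 1 if useful vision was regained after vitrectomy, 0 if not
y_train = [0, 1, 1, 1, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)

# Estimate the likely outcome for a new patient with a detached retina
# who still has light perception
new_patient = [[1, 1, 0]]
print(model.predict(new_patient))        # predicted class (0 or 1)
print(model.predict_proba(new_patient))  # predicted probabilities

In practice such a system would be trained continuously on far richer clinical data, as Yan notes, but the basic shape — pre-surgery conditions in, predicted post-surgery vision out — is the same.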
Yan said that AI's spread into the healthcare sector has begun but remains at an early stage.
"Although AI has many advantages, doctors cannot be replaced entirely because AI needs to learn and train," he said.
"As the medical field advances, doctors are still needed to continue feeding fresh data to the AI for it to process."
A potential caveat, he said, is how to determine legal responsibility when an incorrect diagnosis is given or the technology is misused.
"The protection of patient-doctor privacy and data security are bound to be essential tasks," he said.
Yan added that there is no standard pricing mechanism for medical services involving AI so far, which could prevent hospitals from purchasing and deploying AI equipment.
Pierre Pakey. (PHOTO PROVIDED TO CHINA DAILY)
Laiye discusses language models
By Zhu Wenqian
At the latest Vision China event, Pierre Pakey, head of product innovation at Laiye Technology (Beijing) Co Ltd, shared his thoughts on how large language models, a new path in artificial intelligence, can mimic the human mind in some ways.
Previously, the most common way to train AI was to give it plenty of examples, a process called supervised training.
With the new approach of descriptive training, AIs are trained in the same way humans would be: by describing the task to be completed in natural language, Pakey said.
Taking the example of intelligent document processing, where the goal is to extract key information such as issue dates, supplier addresses and vendor names from an invoice, he said that the common way of training involves feeding the AI thousands of invoices.
But as invoices vary, one of the issues is physically pinpointing the position of each piece of information on the documents in order to train the model. This process is both slow and prone to errors, Pakey said.
"With descriptive training, people just describe what they want in plain language. So it's extremely simple and it completely changes the time necessary to actually launch a new AI and train on a new task," he said.
With descriptive training, users must ask themselves what the best question is and how best to ask the model to perform the desired task. This process is called prompt engineering.
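As a rough illustration of the contrast Pakey draws, the sketch below shows what descriptive training might look like for the invoice example: the task is stated in plain language rather than learned from thousands of position-labelled documents. The call_llm helper and its canned reply are hypothetical placeholders standing in for whichever model is actually used; none of this is Laiye's own code.

import json

# Hypothetical stand-in for a call to a large language model; in practice this
# would send the prompt to whatever model has been deployed.
def call_llm(prompt: str) -> str:
    # Canned response used here only so the example runs end to end.
    return json.dumps({
        "issue_date": "2023-04-12",
        "supplier_address": "12 Example Road, Tianjin",
        "vendor_name": "Example Trading Co Ltd",
    })

def extract_invoice_fields(invoice_text: str) -> dict:
    # Descriptive training in miniature: the task is described in plain language
    # instead of being learned from position-labelled training invoices.
    prompt = (
        "From the invoice below, extract the issue date, supplier address "
        "and vendor name. Reply with a JSON object using the keys "
        "issue_date, supplier_address and vendor_name.\n\n"
        f"Invoice:\n{invoice_text}"
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    sample = "Invoice No. 001, issued 2023-04-12 by Example Trading Co Ltd, 12 Example Road, Tianjin"
    print(extract_invoice_fields(sample))

Writing and refining the wording of that instruction is, in essence, the prompt engineering Pakey refers to.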
"The first time we launched a large language model in production, we had very disappointing accuracy, meaning that our metrics were telling us this was a bad model. When we dove deeply to understand why our accuracy was poor, we noticed that the model was still doing better than the human laborers it was ranked against," he said.
The large language model needed to handle hundreds and even thousands of different examples, and in this situation, it did better than human workers at labeling.
As with any AI model, its accuracy did not exceed 95 percent, and Pakey said that if users needed further improvements, it was only a question of aligning expectations and of being very clear about what information they wanted to extract.
Large language models still have limitations. They need to be given time to produce a good answer, and when prompting different models for an answer, it's important to bear in mind the trade-off between complexity and accuracy, Pakey said.
"But most importantly, they are able to learn new tasks almost instantly, and that makes this one of the most exciting things that we have seen in AI in a long time."
Wei Qing. (PHOTO PROVIDED TO CHINA DAILY)
Microsoft executive says positioning of human-machine relationship is critical
By Ma Si
A responsible approach is needed to maximize the benefits of artificial intelligence, which as a tool could be a powerful assistant to humans, according to a senior executive of United States tech giant Microsoft.
At Vision China on Thursday, Wei Qing, chief technology officer of Microsoft (China) Ltd, said that in the era of AI, positioning the relationship between humans and machines is of crucial importance.
"AI should be a copilot rather than an autopilot. In other words, the machine is an assistant to help people, and no matter how powerful, it is always in the side seat, not in the driver's seat," Wei said.
He referred to US engineer Vannevar Bush, who in 1945 published an essay titled "As We May Think", in which he envisioned the information era and the need to design a machine to help humans manage information overload.
More importantly, Bush envisioned that the machine would be able to process information through selection by association, which Wei said is similar to the way generative AI works now.
Generative AI is the latest tech frontier and has taken the world by storm. It refers to computer algorithms trained to produce new text, images, code, video or audio, a key example of which is ChatGPT, an AI chatbot developed by US-based AI research company OpenAI.
Wei quoted a line from the 1927 movie Metropolis describing the difference between humans and machines: "The mediator between head and hands must be the heart."
"It is all about how to position humans and how to position machines. If we position this incorrectly, this might have a negative impact on society, but if this is positioned in the right way, as we mentioned just now, in a responsible way, we might see the coming of a new era of AI that will really help humans in a purposeful manner," Wei added.
His comments come in the wake of global discussion over the use of ChatGPT-style products and related AI technology, which have raised concerns about ethics, data security and the infringement of personal privacy.
Meanwhile, the number of incidents involving the misuse of AI continues to rise. According to the AIAAIC database, an independent organization that tracks incidents related to the ethical misuse of AI, the number of incidents and controversies has increased 26-fold since 2012.
In an open letter in March, Tesla CEO Elon Musk and a group of AI experts and industry executives called for a six-month pause in developing language models more powerful than OpenAI's newly launched GPT-4, citing potential risks to society, with experts saying the development of responsible AI is vital to long-term, sustainable growth.
"At Microsoft, we have principles laid out as our North Star in doing all things related to AI, which starts from a foundation of accountability and covers all stages," Wei said.
But principles don't execute themselves, and they need to be interpreted. To solve this problem, Microsoft also provides tools and frameworks such as a measurement standard and tests for promoting responsible AI development, Wei added.
He also highlighted the fact that even though AI is now the buzzword, digital transformation remains the foundation for the progress of machine capabilities.
"No matter how powerful a machine is, it needs human data, either human-generated data or data generated by human-developed machines," Wei said.
"Then, through computer algorithms, AI can turn data and point-in-time data into information and knowledge.
"Its like the well-known information pyramid, DIKW — data, information, knowledge and wisdom. Humans should stay at the top, at the wisdom level, and machines should handle the rest of the work."
Minh Thao Chan. (PHOTO PROVIDED TO CHINA DAILY)
PhD student extols rise of self-driving cars
By Zou Shuo
The future of AI and autonomous driving is exciting, and people should harness their combined power to create an inclusive, sustainable and prosperous future for all, according to Minh Thao Chan, an international student at Tsinghua University.
As a doctoral student in electronic engineering specializing in autonomous driving, he has witnessed the transformation of the AI landscape in China and has been inspired by the passion, dedication and collaboration behind it.
"Imagine a street in Beijing teeming with cars, pedestrians and cyclists, along which an autonomous vehicle moves gracefully, navigating the labyrinth of traffic, making split-second decisions with precision and accuracy," he said at Vision China on Thursday night.
This would be a sight to behold and symbolizes AI's potential to create a future where traffic jams are a memory and streets are safer, more efficient and environmentally friendly, he said, adding that it's a future that China is on the cusp of realizing.
When Thao Chan arrived in China from France in 2016, electric vehicles were just being introduced. By 2018, most taxis were electric, and in the last three years, autonomous vehicles have gone through trials at test centers and, more recently, on roads in Shanghai and Shenzhen, Guangdong province.
As more people embrace the potential of autonomous driving, these vehicles are becoming integral parts of the urban fabric, transforming how we live, work and play.
Thao Chan used the example of a visually impaired woman in Beijing to explain the power of AI and autonomous driving to change lives.
For most of her life, she relied on others for mobility, but the arrival of autonomous vehicles granted her newfound independence. She is now able to order a self-driving car and go wherever she desires.
However, he said that as people explore the potential of AI and autonomous driving, they must remain mindful of the challenges these technologies present.
Data privacy, cybersecurity and ethical considerations must be addressed, and it is the collective responsibility of researchers, policymakers and citizens to ensure that AI technology remains safe.
"We have an unparalleled opportunity to shape a world that is more intelligent, more connected and more compassionate," he said.
"Together, we can forge a new era of transportation and mobility. Together, we can build a future where technology serves not just as a tool, but as a force for good, improving lives and fostering greater understanding, connection and collaboration across the globe."