WHO: AI-driven health revolution must leave no-one behind

GS News
Jun 30, 2021

Artificial intelligence holds “enormous potential” for improving health, but algorithmic bias, the unethical use of data by both companies and governments, and cybersecurity breaches are all risks that still need to be overcome, according to a new report by the World Health Organization (WHO).

The Ethics and governance of artificial intelligence for health report, the WHO’s first global report on the use of AI in health, comes after nearly two years of consultations by a panel of 20 international experts, and sets out guidelines for ensuring advances in AI do not result in a new digital divide in healthcare.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm,” Dr Tedros Adhanom Ghebreyesus, WHO director-general, said at the launch on Monday.

“This important new report provides a valuable guide for countries on how to maximise the benefits of AI, while minimising its risks and avoiding its pitfalls.”

Health and human rights

AI is already being used more widely in healthcare in many countries to enhance diagnostics, such as reading X-rays and other scans. Algorithms can also help healthcare workers identify treatment options for a particular patient, and even assist with drug discovery.

It is also proving useful in parts of the world that lack specialists such as radiologists and pathologists, noted Dr Soumya Swaminathan, the WHO’s chief scientist, who pointed out that AI can speed up the reading of images and the return of results to patients and treating physicians in a completely different part of the world.

But the long-term impact and implications of AI in healthcare have also divided opinion, with some worried that it could leave healthcare systems vulnerable to misuse and raise human rights concerns. Agnès Callamard, secretary general of Amnesty International and a panellist at the launch, said: “It is an understatement to say that AI is transforming the way we live, work and think. But without safeguards, digital health technologies can threaten a range of human rights.”

In preparing the report, the interdisciplinary group — which included experts in ethics, digital technologies, law and human rights, as well as physicians and representatives from ministries of health of the WHO member states — did not start from a clean slate.

Professor Effy Vayena from ETH Zurich in Switzerland, one of the co-chairs of the expert group that prepared the report, said the work was built on top of the extensive literature that exists for the ethics and governance of artificial intelligence.

Vayena believes, though, that the guidance document the WHO produced is unique: “Although we’ve had a number of governance and recommendation documents about the ethics of AI in general, this is the first of its kind to specifically address the domain of health.”

Guiding principles

The group drew up a list of six ethical principles to help guide both the development and the use of AI:

  • Protecting human autonomy: Humans should remain in control of healthcare systems and medical decisions, privacy and confidentiality should be protected, and informed consent should be obtained from patients through appropriate legal frameworks.
  • Promoting human well-being and safety and the public interest: AI technologies should not harm people, mentally or physically, and there should be measures of both quality control and quality improvement over time.
  • Ensuring transparency, explainability and intelligibility: The technologies should be understandable to everyone involved, from the developers to the patients, and should be transparently documented before deployment in order to facilitate public consultation and debate.
  • Fostering responsibility and accountability: The technologies should be evaluated by patients and clinicians while being developed, and there should be appropriate mechanisms for redress for those adversely affected by AI systems.
  • Ensuring inclusiveness and equity: AI technologies should not be available exclusively in high-income settings and should not encode biases that adversely affect groups that are already marginalised.
  • Promoting AI that is responsive and sustainable: Designers, developers and users should continuously assess AI applications during actual use, and AI systems should minimise their environmental impact and be energy efficient.

Based on these principles, the report lists 47 recommendations aimed not just at developers and designers of AI but also at government ministries and healthcare providers. It raises several ethical challenges facing the widespread deployment of AI in healthcare. For one, the report states, we should ask ourselves whether it is appropriate to use AI in the first place. And in a world fraught with cybersecurity threats, how do we safeguard AI technologies? The 165-page report also expresses a mixture of optimism and pessimism about the impact that AI could have on labour and employment in health.

“We need to think about how health data is organised in a country and what are the laws governing health data,” Swaminathan added. “Has there been a discussion with civil society, with the public about the use of their data? Some countries have taken a liberal view, where citizens have a lot of trust in government and they permit the government to use routinely collected health data to inform policies. Other countries have strict rules while some have no regulations.”

Swaminathan sees AI as an opportunity for countries that do not have legacy systems that make interoperability difficult. “But there are a couple of caveats,” she added. “The challenge can be overcome if there is investment made in the cost of internet services by countries and international donors. What is most important is to have a national mechanism, with a set of policies and regulations for the deployment of AI.”

Eyes on the future

For the WHO, the task of ensuring ethical AI for health is just beginning. “Next week we have a mission briefing for all WHO member states,” said Andreas Reis, co-lead of the WHO’s Health Ethics and Governance unit. Member states will be presented with what the report means for them and what kind of support they will need from the WHO to implement it. “It is important for countries to keep abreast of this quickly evolving technology.” The WHO will also support capacity building for technology experts in ministries of health and provide training for healthcare workers to use the new digital tools.

The guidance report is viewed as a living document. “The technology is moving very fast and there are unknown unknowns along with the known risks we’re taking,” notes Vayena. “We want the opportunity to update the document as we go along and make it more useful.”

Her views were echoed by Professor Partha Majumdar from the National Institute of Biomedical Genomics in India, and the other co-chair of the expert group, who said, “We all should take this document forward and improve it. Nothing is static and these ethical issues will become more and more nuanced as AI systems are deployed in areas of health.”

Dr John Reeder, the director for Research for Health at the WHO, closed the launch event with a word of warning: “Covid-19 has accelerated our willingness to use AI, yet we’re also realising that they should only be used if they actually help healthcare workers overcome the daunting challenges they’re facing and don’t divert resources or distract from the proven interventions. There needs to be balance.”

Originally published at https://genevasolutions.news.
