HAP's Latest News

Ethical Guidelines for AI in Health Care

January 22, 2024

ChatGPT, Bard, and other large multi-modal models (LMMs) may have potential across health care, but developers need proper guardrails to protect patients and communities.

This month, the World Health Organization (WHO) published new guidance for governments, technology companies, and providers about this emerging technology and the ethical boundaries needed to safeguard its use.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” Dr. Jeremy Farrar, WHO chief scientist, said in a statement.

Here's what you need to know:

  • Background:  LMMs have several potential uses in health care, including supporting patient diagnosis, handling clerical and administrative tasks, aiding clinical education, and advancing research and drug development.
  • The issue:  In health care, these technologies carry risks related to “false, inaccurate, biased, or incomplete statements, which could harm people using such information in making health decisions.” There are other worries related to cybersecurity, privacy, and the need to properly regulate untested technology.
  • Key points:  Automation bias, in which errors or difficult choices are improperly delegated to technology, is another area of concern.
    • The WHO says these new tools should be able to perform tasks “with the necessary accuracy and reliability to improve the capacity of health systems and advance patient interests.”
  • Recommendations:  The report includes recommendations to governments to properly regulate this technology, especially related to public data transparency, ethical and human rights standards, and patient privacy.
  • Quotable:  “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities,” Farrar said.

“Governments should enact laws and policies that require providers and developers to conduct impact assessments of LMMs and applications, which should address ethics, human rights, safety and data protection, throughout the life cycle of an AI system,” the report noted.

The WHO’s report is available online.