Preventing Bias in Health Care’s New Algorithms

December 19, 2023

Health care needs guiding principles to prevent new algorithms from reinforcing harmful biases that can affect patient care, federal health leaders said.

This month, a paper published in JAMA Network Open offers five guiding principles “to avoid repeating errors that have tainted the use of algorithms in other sectors.” As the technology advances, these principles are meant to help developers make decisions that mitigate and prevent racial and ethnic bias.

The recommendations come from an expert panel convened by the Agency for Healthcare Research and Quality (AHRQ) and other federal agencies focused on health disparities. Algorithms that support treatment, diagnostic testing, scheduling, and administrative functions could become a routine part of health care, but the panel emphasizes the need for a careful approach.

“Promise aside, algorithmic bias has harmed minoritized communities in housing, banking, and education, and health care is no different, so AHRQ’s guiding principles are an important start in addressing potential bias,” said Dr. Robert Valdez, AHRQ director. “Algorithm developers, algorithm users, health care executives, and regulators must make conscious decisions to mitigate and prevent racial and ethnic bias in tools that may perpetuate health care inequities and reduce care quality.”

The panel developed the following framework to guide decisions across an algorithm’s life cycle:

  1. Promote health and health care equity during all health care algorithm life cycle phases.
  2. Ensure health care algorithms and their use are transparent and explainable.
  3. Authentically engage patients and communities during all health care algorithm life cycle phases and earn trustworthiness.
  4. Explicitly identify health care algorithmic fairness issues and tradeoffs.
  5. Establish accountability for equity and fairness in outcomes from health care algorithms.

“ChatGPT and other artificial intelligence language models have spurred widespread public interest in the potential value and dangers of algorithms,” the paper concludes. “Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithm bias in health care.”

The policy paper is available online.


