HAP Blog

5 Ways AI Could Challenge Emergency Preparedness

June 14, 2023

If you haven’t been paying attention to artificial intelligence (AI), it’s time to up your situational awareness.

Surveying the health care landscape, it’s easy to understand the ways AI should trigger your preparedness instincts. Images generated by AI can pose significant dangers to public safety when exploited by malicious actors to deceive others. These dangers are particularly relevant and concerning for the health care community, so we need to start thinking about the implications for our hospitals and our communities.

To address these dangers, it is crucial to be vigilant, stay informed about AI technologies, and implement robust verification systems to detect and prevent the misuse of AI-generated content. If you’re interested in a few of the potential ways AI could affect your preparedness, here are five key areas to watch:

  • Misinformation and deception:  AI can be used to create realistic, convincing fake medical reports, diagnostic images, or treatment results. Bad actors can manipulate AI models to generate false evidence, such as fabricated scans or misleading visual representations of medical conditions.
  • Impersonation and identity theft:  By using AI-generated images, individuals could impersonate health care professionals or gain unauthorized access to facilities and potentially even sensitive medical information.
  • Synthetic patient records:  Fabricated patient profiles, for example, could be used to deceive health care providers, leading to inaccurate diagnoses and improper treatments. Such deception could harm patients by delaying or preventing them from receiving appropriate care, potentially worsening their health conditions.
  • Erosion of trust in medical imaging:  As the public becomes aware that AI can create convincing fake images, there may be increased skepticism regarding the validity of medical scans and diagnostic reports. This could undermine the credibility of health care professionals and the reliability of medical imaging technologies, potentially resulting in reluctance to seek necessary medical attention.
  • Spread of disinformation campaigns:  AI-generated images can be weaponized as part of disinformation campaigns targeting the health care community. Bad actors can create deceptive visual content to spread false information about medical breakthroughs, treatments, public health issues, or emergencies impacting your facilities.

All of the above could lead to confusion, panic, and uncertainty among health care professionals, the public, and policymakers, hindering effective decision-making and public health efforts.

As we consider our next steps, collaboration among AI experts, health care professionals, policymakers, and regulatory bodies can help develop guidelines, standards, and safeguards to mitigate the risks associated with AI-generated images and preserve public safety. We should not fear the changes ahead; our preparedness improves when we know the potential challenges in front of us.

For more information about the potential implications of AI in health care, contact me or HAP’s Emergency Management team.
