HAP Blog

The Big Questions for Emergency Managers and AI

April 30, 2024

The modern era has pushed us all to adopt new technologies before they are truly vetted.

From smartphones to tablets to wearable technology, the world—for better or worse—is increasingly interconnected and technological. We aren’t able to truly consider the big-picture questions until after we have transitioned into a new era of tech or a new device has hit the market.

We’ve seen this throughout health care, with artificial intelligence (AI) becoming part of patient care and facility operations. On the surface, we know AI can help us harness efficiencies and provide answers to help us get through the day.

But the tech’s potential comes with a few key questions that we need to answer. Here are a few from an emergency manager’s perspective.

Who’s Watching Over AI?

One reason we are seeing bad actors in this space is that there is little, if any, regulation around AI.

AI is governed by a mix of the federal government, state governments, the industry itself, and the courts. An executive order issued late last year required every U.S. government agency, plus offices related to the President, to evaluate the development and use of AI; develop regulations for each agency; and identify and establish the use of AI for public-private engagement.

We have seen some safety suggestions released by government and private agencies, but these are not regulations; they are nonbinding guidance. In school terms, it's the difference between your professor's suggested reading and the required texts on the syllabus.

Late last year, one of the industry leaders, OpenAI, published what has become the most-referenced comprehensive preparedness framework for AI development. The framework identifies what is allowed to be released, and what can be developed further, based on tracked risk categories. It classifies risk at low, medium, high, and critical levels. Currently, the tracked risk categories are cybersecurity; Chemical, Biological, Nuclear, and Radiological (CBRN) threats; persuasion; and model autonomy.

Essentially, those applications determined to be low or medium risk can be released publicly, and those deemed high risk can be developed further but not released. Reading through the capabilities described at each risk level, there are some rather eye-opening possibilities in front of us.
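The gating rule described above can be sketched in a few lines of code. This is a hypothetical illustration only, not OpenAI's implementation; the function names are invented, and only the risk levels and tracked categories come from the framework as summarized here.

```python
# Hypothetical sketch of the deploy/develop gating rule described above.
# Risk levels and tracked categories follow the framework as summarized
# in this post; the function names are illustrative only.

RISK_LEVELS = ["low", "medium", "high", "critical"]

def can_deploy(category_scores):
    """A model may be released publicly only if every tracked
    category is at low or medium risk."""
    return all(RISK_LEVELS.index(s) <= RISK_LEVELS.index("medium")
               for s in category_scores.values())

def can_develop(category_scores):
    """A model may be developed further as long as no tracked
    category reaches critical risk."""
    return all(RISK_LEVELS.index(s) < RISK_LEVELS.index("critical")
               for s in category_scores.values())

scores = {"cybersecurity": "medium", "cbrn": "low",
          "persuasion": "high", "model_autonomy": "low"}
print(can_deploy(scores))   # high risk in one category blocks public release
print(can_develop(scores))  # but development may continue below critical
```

The key design point is that a single high-risk category is enough to block release, even if every other category is low.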

What’s real and what’s deepfake?

AI is being used in concerning ways. The technology has been deployed for cyberattacks and to disrupt day-to-day operations. But AI also can be used to mislead.

A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name). The technology trains an algorithm on examples and then creates something resembling what it has learned, much the same way we humans learn as babies what to do and not to do.

Deepfakes have become increasingly prevalent in image and video. If I feed an AI model constant images of myself or someone else, it can essentially recreate that person. If I were a bad actor, I could take your company leader’s face, which is almost certainly publicly available, and make that person appear to do and say things they never did. These tools can be used in cyberattacks, in disinformation campaigns designed to elicit fear, or in blackmail campaigns built around the threat of negative publicity.

AI can be used to disarm our sensibilities and put us at risk for scams and other digital attacks. The technology makes us question what we are seeing with our own eyes.

How is this playing out in real life?

This exact scenario played out about two months ago in Hong Kong, where an employee was led into a video conference with what appeared to be the company’s CFO and other senior managers. The fake CFO asked for $25 million to be wired. The cybercriminals were able to recreate the CFO and other senior officials, likely relying on publicly available company videos and audio to digitally redevelop their likenesses and voices.

This is being described as the first known case of a customized deepfake attack of this kind. As the technology becomes more publicly available, the chances of similar attacks will only increase.

What should you do?

The OpenAI framework discusses so-called “unknown unknowns”: even OpenAI’s own developers don’t know all the power, pitfalls, and possibilities that could come from AI. It is important to stay up to date with artificial intelligence trends and incidents so we know what we are up against. Every day, AI is being used in new ways as we learn more about its true potential.

This is in no way meant to scare you, but we are entering a new age of technology with endless possibilities, and that also raises the potential for attacks we have never seen before.

Being open to this reality helps us prepare for the best—and worst—that’s yet to come.
