5 Fast Facts: The Joint Commission’s AI in Health Care Guidance
September 30, 2025
Hospitals, patients, and regulators are all wrestling with the same big question: what is the role of artificial intelligence (AI) in health care?
This month, the Joint Commission partnered with the Coalition for Health AI to issue a first-of-its-kind set of comprehensive guidelines on the responsible use of AI in health care.
Here's a quick rundown of what hospitals need to know about integrating AI into their organizations safely, ethically, and effectively under these new standards:
1. Hospitals are asked to develop robust AI governance structures
- What is recommended? Health care organizations should create formal governance teams to manage AI systems. This structure should include expertise in AI technology, ethics, data security, and clinical operations.
- The team should be responsible for overseeing AI deployment, ensuring compliance with internal policies and safeguards, conducting testing on model accuracy, and reporting to the hospital's fiduciary board on outcomes, risks, and any adverse events.
- Why is it important? The rise of AI in health care, which can inform patient care and operational decisions, brings potential risks. A structured governance model may help mitigate these risks by providing accountability and oversight to ensure AI tools align with safety standards and ethical practices.
2. Administrative processes should center patient privacy, consent, and disclosure
- What is recommended? Each facility should have strong policies in place to safeguard patient data when using AI tools. This includes clearly articulated data protection protocols, patient consent when AI affects care decisions, and disclosure about how AI tools are being used.
- Why is it important? Transparency is essential to building patient trust: patients need to know how their data is used in AI systems and how these tools directly affect their care. Because AI systems rely on vast amounts of health data to function, patients may be more likely to trust AI in their care when they understand that their data is being handled securely and ethically.
3. Enhanced data security measures are of particular importance in the guidelines
- What is recommended? AI tools should adhere to rigorous data security standards, particularly when dealing with protected health information. This includes the use of encryption (both in transit and at rest), enforcing access controls, and conducting regular security assessments. Health care organizations must ensure that any third-party AI vendors comply with HIPAA and relevant data privacy regulations.
- Why is it important? The integration of AI systems into health care increases the risk of data breaches or misuse. Ensuring that data is properly protected, even when it's used for AI training or processing, helps hospitals avoid liabilities and ensures patient trust is maintained.
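The access-control element of these recommendations can be illustrated in code. Below is a minimal, hypothetical sketch of a role-based access check with audit logging for protected health information (PHI); the role names, `can_access` function, and `AccessAudit` class are illustrative assumptions, not part of the Joint Commission guidance:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative assumption: only these roles may let an AI tool read PHI.
PHI_READ_ROLES = {"clinician", "privacy_officer"}

@dataclass
class AccessAudit:
    """Append-only log of PHI access attempts, kept for security assessments."""
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, record_id: str, granted: bool) -> None:
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "record_id": record_id,
            "granted": granted,
        })

def can_access(role: str) -> bool:
    """Least-privilege check applied before any PHI read."""
    return role in PHI_READ_ROLES

audit = AccessAudit()
for user, role in [("dr_smith", "clinician"), ("vendor_bot", "analytics")]:
    granted = can_access(role)
    audit.record(user, role, "record-123", granted)

print([e["granted"] for e in audit.entries])  # clinician allowed, vendor denied
```

In a real deployment the audit log would feed the regular security assessments the guidelines call for, and encryption of the underlying records would be handled by the storage layer.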
4. Quality monitoring and risk management will need continuous evaluation
- What is recommended? Hospitals should establish continuous monitoring processes to evaluate the performance and safety of deployed AI tools. This includes regular testing, validating the quality of AI-generated data, and assessing algorithm performance over time. AI tools should be constantly reviewed for any emerging biases, and hospitals may need a system to report errors or incidents related to AI tools.
- Why is it important? AI is dynamic—its algorithms evolve as they learn from new data. This makes continuous performance monitoring essential to prevent the tool from producing unsafe, unreliable, or biased outputs. Given AI’s potential to affect patient outcomes, hospitals must ensure that AI tools are performing as expected, particularly as they adapt to new data and contexts.
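The continuous-monitoring idea above can be sketched as a rolling-window performance check. This is an illustrative assumption of how such a monitor might work, not a prescribed implementation; the window size and accuracy threshold are hypothetical:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy monitor for a deployed AI tool (sketch only).

    Flags the model for review when accuracy over the most recent `window`
    predictions falls below `threshold` -- the kind of ongoing check the
    guidelines ask hospitals to maintain as models adapt to new data.
    """

    def __init__(self, window: int = 100, threshold: float = 0.90):
        self.window = window
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_review(self) -> bool:
        # Only alert once the window is full, to avoid noise at startup.
        return len(self.outcomes) == self.window and self.accuracy < self.threshold

monitor = PerformanceMonitor(window=10, threshold=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% accuracy over 10 cases
    monitor.record(pred, actual)
print(monitor.accuracy, monitor.needs_review())
```

A hospital system would pair a check like this with the incident-reporting channel the guidelines describe, so that a flagged model triggers review rather than silent continued use.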
5. New education provisions focus on staff training and mitigating bias
- What is recommended? Hospitals should invest in AI education for staff to ensure they understand how AI works, its benefits, and its limitations. Training may focus on AI literacy and how to detect and mitigate biases that could affect care. Bias assessments should be performed before a model’s deployment and throughout the lifecycle of the AI tool to ensure equitable outcomes for diverse patient populations.
- Why is it important? AI systems can perpetuate or exacerbate biases if not carefully managed, especially if the data used to train the models is not representative. Hospitals are asked to be proactive in addressing biases that could negatively impact care delivery, such as algorithms that perform poorly with certain demographic groups. Regular auditing and staff education are critical to maintaining fairness and quality in AI-driven decisions.
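The subgroup bias assessment described above can be sketched numerically. In this hypothetical example (the metric, group labels, and disparity threshold are all assumptions for illustration), model accuracy is computed per demographic group and a review is flagged when the gap between groups is too large:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Model accuracy broken out by a demographic attribute.

    `records` is a list of (group, prediction, actual) tuples.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, actual in records:
        total[group] += 1
        correct[group] += int(pred == actual)
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(accuracies, max_gap: float = 0.10) -> bool:
    """Flag for bias review if the best-to-worst subgroup gap exceeds max_gap."""
    return max(accuracies.values()) - min(accuracies.values()) > max_gap

records = (
    [("group_a", 1, 1)] * 9 + [("group_a", 1, 0)] * 1 +  # 90% accurate
    [("group_b", 1, 1)] * 7 + [("group_b", 1, 0)] * 3    # 70% accurate
)
acc = subgroup_accuracy(records)
print(acc, flag_disparity(acc))  # a 0.20 gap triggers a bias review
```

Running a check like this before deployment and at regular intervals afterward is one concrete way to operationalize the lifecycle bias assessments the guidelines recommend.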
The bottom line: As AI technologies continue to advance, hospitals and health systems face the challenge of balancing innovation with responsibility to patients. These new guidelines focus on deploying AI in a way that prioritizes patient safety, ensures privacy, and maintains equitable care. The Joint Commission states that, by adhering to these principles, hospitals can mitigate risks while benefiting from AI's potential to improve patient outcomes and streamline operations.
Read the guidelines online.
Tags: Regulatory Advocacy | Health IT