Baby Steps Toward Implementing a Regulatory Framework for AI in Healthcare

McDermottPlus is pleased to bring you Regs & Eggs, a weekly Regulatory Affairs blog by Jeffrey Davis. Click here to subscribe to future blog posts.

February 1, 2024 – A few weeks ago, Regs & Eggs highlighted the major issues that McDermott+Consulting is monitoring this year – and the implementation of the president’s executive order (EO) on artificial intelligence (AI) was definitely part of that list. My colleagues Rachel Stauffer, Kristen O’Brien and Deborah Godes have been closely following all AI-related developments.

The first of the EO’s deadlines came this week when we hit 90 days from the EO’s issuance: the establishment of a US Department of Health and Human Services (HHS) AI Task Force. While the White House released a fact sheet this week marking progress on the EO, including the creation of the task force, it omitted key details, such as a formal announcement of the task force’s co-chairs and other federal government members, or next steps. It has been reported that the task force is co-chaired by Micky Tripathi, the current national coordinator for health IT, and Syed Mohiuddin, a counselor to the HHS deputy secretary. Dr. Tripathi has led the Office of the National Coordinator for Health IT (ONC) for three years, and Dr. Mohiuddin has been with HHS for a little over two years. According to the fact sheet (and the EO itself), the task force will focus on developing methods for evaluating AI-enabled tools and frameworks for using AI to advance drug development, bolster public health and improve healthcare delivery.

Within one year of establishment, the task force is required to develop a strategic plan that includes policies and frameworks – possibly including regulatory action, as appropriate – on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector. The specific areas the task force must look into include the following:

  • Developing, maintaining and using predictive and generative AI-enabled technologies in healthcare delivery and financing (including quality measurement, performance improvement, program integrity, benefits administration and patient experience), taking into account considerations such as appropriate human oversight of the application of AI-generated output.
  • Long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector, including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers and users.
  • Incorporating equity principles into AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems.
  • Incorporating safety, privacy and security standards into the software-development lifecycle for protection of personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector.
  • Determining appropriate and safe uses of AI by developing, maintaining and making available documentation to help users in local settings in the health and human services sector.
  • Advancing positive use cases and best practices for use of AI in local settings by working with state, local, tribal and territorial health and human services agencies.
  • Identifying uses of AI to promote workplace efficiency and satisfaction in the health and human services sector, including reducing administrative burdens.

We are not sure what, if any, actual regulations will come out of all this. At the very least, the efforts stemming from the EO will help inform policymakers and stakeholders about potential areas that need regulation, further guidance or other federal action. As stipulated in the EO, by April HHS must develop a quality strategy for AI and “act to advance the understanding” of how AI impacts non-discrimination laws. Then, in concert with the US Departments of Defense and Veterans Affairs, HHS must establish an AI safety program by October.

Not much concrete action is occurring on the legislative side, either. Congress continues to navigate this space with hearings, requests for information and other formal and informal stakeholder gatherings. There has been an uptick in AI-related legislation recently, but it remains unclear whether or how Congress will legislate on AI broadly, or on AI in healthcare specifically, this year. For example, US Senate Commerce Committee members John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV) and Ben Ray Luján (D-NM) introduced the AI Research, Innovation, and Accountability Act of 2023 (S. 3312). The US House of Representatives Energy and Commerce Committee held a hearing in November 2023, and one of the co-chairs of the GOP Doctors Caucus, Rep. Greg Murphy (R-NC), sent a letter to the US Food and Drug Administration on AI regulation earlier this month.

Needless to say, there’s still a lot more to come! We will continue to follow these developments closely. In the meantime, please visit our AI in Healthcare Law Center and let us know if you have any questions.

Until next week, this is Jeffrey saying, enjoy reading regs with your eggs.


For more information, please contact Jeffrey Davis. To access the full archive of Regs & Eggs, visit the American College of Emergency Physicians.

To subscribe to Regs & Eggs, please CLICK HERE.