Responding to the threat that biased health care artificial intelligence (AI) tools pose to health equity,1,2 the US Department of Health and Human Services Office for Civil Rights (OCR) published a final rule in May 2024 holding AI users legally responsible for managing the risk of discrimination.3 This move raises questions about the rule’s fairness and potential effects on AI-enabled health care.
The New Regulatory Requirements
Section 1557 of the Affordable Care Act prohibits recipients of federal funding from discriminating in health programs and activities based on race, color, national origin, sex, age, or disability. Regulated entities include health care organizations, health insurers, and clinicians that participate in Medicare, Medicaid, or other programs. The OCR’s rule sets forth the obligations of these entities relating to the use of decision support tools in patient care, including AI-driven tools and simpler, analog aids like flowcharts and guidelines.
The rule clarifies that Section 1557 applies to discrimination arising from use of AI tools and establishes 2 new legal requirements. First, regulated entities must make “reasonable efforts” to determine whether their decision support tools use protected traits as input variables or factors. Second, for tools that do so, organizations “must make reasonable efforts to mitigate the risk of discrimination.”
Starting in May 2025, the OCR will address potential violations of the rule through complaint-driven investigations and compliance reviews. Individuals can also seek to enforce Section 1557 through private lawsuits. However, courts disagree about whether private actors can sue for disparate impact (practices that are neutral on their face but have discriminatory effects).4
The anxieties that the OCR’s rulemaking has provoked5-7 may reflect some misapprehensions. The rule does not prohibit use of protected characteristics as inputs to algorithms. Rather, citing age as an example, the OCR explains that such use is acceptable if considering the characteristic is clinically indicated or otherwise conforms to best practices. The rule also does not single out health care professionals as the locus of responsibility. Health care organizations and insurers are the primary targets because decisions about adopting AI tools are typically made at the organizational level. In addition, the likely consequences of violating Section 1557 are less catastrophic than commonly portrayed. Although penalties can include termination from Medicare or fines, the OCR typically resolves Section 1557 violations through corrective action plans.
Fairness to Health Care Organizations
The OCR’s rule applies only to the organizations and individuals who use AI tools, not to the entities that develop them. This may seem unfair because developers appear better positioned to avoid producing biased tools,6 but the OCR’s purview under the Affordable Care Act extends only to health program participants, necessitating this approach.
Developers do not, however, go unregulated. In addition to oversight by the US Food and Drug Administration of AI tools that qualify as medical devices, other federal rules set forth requirements for health information technology developers who design “predictive decision support interventions,” a subset of the tools covered by the OCR’s new rule. The US Department of Health and Human Services Office of the National Coordinator for Health Information Technology (ONC) issued a final rule in January 2024 updating requirements in the ONC Health IT Certification Program, a voluntary initiative for health information technology developers. The ONC rule revises the certification criteria to increase transparency. Certified developers must disclose certain information, including source attributes.
Taken together, the rules from the US Department of Health and Human Services OCR and ONC create incentives for health care organizations and insurers to choose AI tools that have undergone certification. Organizations subject to the Section 1557 rule can more readily meet their obligations when using certified tools because developers must disclose information about source data and risk management practices. Thus, the OCR’s rule will create market pressure for developers to generate and provide information about the extent of bias in their products, shifting some of the compliance burden from adopters to developers. It should fortify the resolve of hospitals and insurers to ask hard questions about the tools that are being marketed to them.
The Dark Side of Flexible Enforcement
The new rule telegraphs the OCR’s intention to approach enforcement on a case-by-case basis, using “reasonable efforts” as the lodestar. The agency’s flexibility as it ventures into a novel area is sensible. Yet, the dark side of regulatory flexibility is that it can make it hard for regulated entities to discern what is required.
The final rule attempts to clarify how the OCR will apply the “reasonable efforts” standard. The OCR will consider an organization’s size and resources, whether it used the tool as the developer intended, whether it received information from the developer about the risk of discrimination or use of protected characteristics as inputs, and whether the organization had a process in place for evaluating the tools it adopts.5 The OCR indicated that this process could include seeking information from the developer as well as from scientific articles, professional organizations, government agencies, and nonprofit organizations.
Although helpful, the OCR’s clarification falls short of the level of specificity that many organizational leaders desire. If they respond with conservative behavior, the new rule could chill AI adoption,5,6 notwithstanding the OCR’s stated intent to encourage continued use of helpful tools. It is therefore important that the OCR continue to clarify what constitutes compliance and noncompliance by publishing case decisions and further guidance. Health care organizations, for their part, should resist the impulse to overcomply with the OCR’s rule by needlessly rejecting opportunities to try promising AI tools. Some AI tools may entail bias, but others could counteract existing biases in human decision-making8 and in older clinical algorithms.2
Facilitating Meaningful Compliance
The “reasonable efforts” standard arguably could be met through fairly minimal investigation. The OCR estimated the required review time at 1 hour the first year and 30 minutes annually thereafter (excluding any time needed for interventions to mitigate the identified risks).5 Experts in conducting AI model bias assessments will find these estimates optimistic, to say the least. The estimates wrongly assume that reliable information about bias is accessible for all health care AI tools and that no site-specific assessment is needed.
The OCR’s rule requires organizations to take steps that they should be taking anyway; no organization should use an AI tool without evaluating the risk of harm to patients. Nevertheless, small physician practices, rural and safety-net hospitals, and other low-resourced organizations will struggle to conduct bias assessments. The OCR’s willingness to calibrate its expectations to organizational resources makes the risk of a noncompliance finding low. However, it is dispiriting to contemplate that the OCR’s effort to protect vulnerable patients may have the weakest effect in the settings that have the biggest role in serving them.
To ensure a meaningful effect, additional resources must become available to health care organizations. Many tools for bias assessment are emerging,9 but affordable technical assistance is also essential, whether through “assurance labs”10 or another mechanism. The question of who will pay for AI assessments looms large,9 especially in light of pressures to ensure that no taxpayer dollars are used. Many AI tools offer little or no financial savings to health care organizations, and the business case for adopting them will evaporate if assessment and monitoring costs are high and not reimbursed.
The OCR rule is an important step toward a future in which AI tools help to reduce rather than exacerbate discrimination in health care. Realizing this vision, however, requires resources to make meaningful compliance possible for all health care organizations.
Published: August 29, 2024. doi:10.1001/jamahealthforum.2024.3397
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Mello MM et al. JAMA Health Forum.
Corresponding Author: Michelle M. Mello, JD, PhD, MPhil, Stanford Law School, 559 Nathan Abbott Way, Stanford, CA 94305 (mmello@law.stanford.edu).
Conflict of Interest Disclosures: Dr Mello reported serving as an advisor to Verily Life Sciences, LLC, and AVIA’s Generative AI Collaborative and that her spouse is an executive at Cisco Systems, which develops artificial intelligence tools and products that power such tools. No other disclosures were reported.
1. Tipton K, Leas BF, Flores E, et al. Impact of healthcare algorithms on racial and ethnic disparities in health and healthcare. Published December 8, 2023. Accessed July 19, 2024.
2. National Academies of Sciences, Engineering, and Medicine. Ending unequal treatment: strategies to achieve equitable health care and optimal health for all. Accessed July 19, 2024.
3. Nondiscrimination in the use of patient care decision support tools, 45 CFR 92.210 (2024). Accessed August 3, 2024.
4. Wójcik-Suffia MA. Algorithmic discrimination in m-health: rethinking the US nondiscrimination legal framework through the lens of intersectionality. BioLaw J. 2024;1:367-388.
5. Nondiscrimination in health programs and activities, 89 Fed Reg 37522 (2024). Accessed August 3, 2024.
6. Shachar C, Gerke S. Prevention of bias and discrimination in clinical practice algorithms. JAMA. 2023;329(4):283-284.
7. Goodman KE, Morgan DJ, Hoffmann DE. Clinical algorithms, antidiscrimination laws, and medical device regulation. JAMA. 2023;329(4):285-286.
8. Obermeyer Z, Topol EJ. Artificial intelligence, bias, and patients’ perspectives. Lancet. 2021;397(10289):2038.
9. Cary MP Jr, Zink A, Wei S, et al. Mitigating racial and ethnic bias and advancing health equity in clinical algorithms: a scoping review. Health Aff (Millwood). 2023;42(10):1359-1368.
10. Shah NH, Halamka JD, Saria S, et al. A nationwide network of health AI assurance laboratories. JAMA. 2024;331(3):245-249.