Artificial Intelligence Could Worsen Systemic Racism in Health Care

Caution is strongly advised in embracing Artificial Intelligence (AI) in health care. Notwithstanding the excitement AI continues to generate, the technology remains unproven, and scientists warn that incautious AI use could worsen health care disparities and inequities.

According to NPR:

“Doctors, data scientists and hospital executives believe artificial intelligence may help solve what until now have been intractable problems. AI is already showing promise to help clinicians diagnose breast cancer, read X-rays and predict which patients need more care.

But as excitement grows, there’s also a risk: These powerful new tools can perpetuate long-standing racial inequities in how care is delivered.

‘If you mess this up, you can really, really harm people by entrenching systemic racism further into the health system,’ said Dr. Mark Sendak, a lead data scientist at the Duke Institute for Health Innovation.”

In 2022, the U.S. Department of Health and Human Services (HHS), in its Notice of Proposed Rulemaking for the new Affordable Care Act civil rights regulations, recognized the health equity pitfalls inherent in AI.

As HHS noted:

“Research suggests that overly relying upon any clinical algorithm, particularly without understanding the effects of its uses, may amplify and perpetuate racial and other biases.

Accordingly, the Department strongly cautions covered entities against overly relying upon a clinical algorithm, for example, by replacing or substituting the individual clinical judgment of providers with clinical algorithms.”

Citing similar cautionary notes about AI, HHS quotes the American Medical Association:

“[H]ealth care AI should be a ‘tool to augment professional clinical judgment, not a technology to replace or override it,’ and that organizations that implement AI systems ‘should vigilantly monitor [the systems] to identify and address adverse consequences.’”

And the American Academy of Family Physicians (AAFP):

“AI-based technology is meant to augment decisions made by the user, not replace their clinical judgement or shared decision making… we recognize the limitations and pitfalls of this technology. It is critically important that AI-based solutions do not exacerbate racial and other inequities pervasive in our health care system. We strongly believe systematic approaches must be implemented to evaluate the development and implementation of AI/ML [machine learning] solutions into health care.”

Although hospitals and health care systems are increasingly turning to the promise held by AI, they are not closely examining the technology’s inherent racial, ethnic, and gender bias. As the Biden Administration continues to examine AI’s contribution to medical disparities, health care is on notice that AI’s increasing use has its downsides and risks, especially as HHS continues to develop its final health care civil rights regulations.

According to NPR:

Over the last several years, hospitals and researchers have formed national coalitions to share best practices and develop “playbooks” to combat bias. But signs suggest few hospitals are reckoning with the equity threat this new technology poses.

Researcher Paige Nong interviewed officials at 13 academic medical centers last year, and only four said they considered racial bias when developing or vetting machine learning algorithms.

‘If a particular leader at a hospital or a health system happened to be personally concerned about racial inequity, then that would inform how they thought about AI,’ Nong said. ‘But there was nothing structural, there was nothing at the regulatory or policy level that was requiring them to think or act that way.’

Several experts say the lack of regulation leaves this corner of AI feeling a bit like the ‘wild west.’

The Biden administration over the last 10 months has released a flurry of proposals to design guardrails for this emerging technology. The FDA says it now asks developers to outline any steps taken to mitigate bias and the source of data underpinning new algorithms.

The Office of the National Coordinator for Health Information Technology proposed new regulations in April that would require developers to share with clinicians a fuller picture of what data were used to build algorithms. Kathryn Marchesini, the agency’s chief privacy officer, described the new regulations as a ‘nutrition label’ that helps doctors know ‘the ingredients used to make the algorithm.’ The hope is more transparency will help providers determine if an algorithm is unbiased enough to safely use on patients.

The Office for Civil Rights at the U.S. Department of Health and Human Services last summer proposed updated regulations that explicitly forbid clinicians, hospitals and insurers from discriminating ‘through the use of clinical algorithms in [their] decision-making.’ The agency’s director, Melanie Fontes Rainer, said while federal anti-discrimination laws already prohibit this activity, her office wanted “to make sure that [providers and insurers are] aware that this isn’t just ‘Buy a product off the shelf, close your eyes and use it.'”

As AI use in health care continues to grow, cautionary tales and warning signs abound, underscoring the need to ensure that bias and discrimination do not inform health care decisions and planning.

Dr. Sendak’s caution as reported by NPR is a prescription health care should regularly take when it comes to AI use:

“You have to look in the mirror,” he said. “It requires you to ask hard questions of yourself, of the people you work with, the organizations you’re a part of. Because if you’re actually looking for bias in algorithms, the root cause of a lot of the bias is inequities in care.”

© Bruce L. Adelson 2023. All Rights Reserved. The material herein is educational and informational only. No legal advice is intended or conveyed.

Bruce L. Adelson, Esq., is nationally recognized for his compliance expertise. Mr. Adelson is a former U.S. Department of Justice Civil Rights Division Senior Trial Attorney. He is a faculty member at the Georgetown University School of Medicine and the University of Pittsburgh School of Law, where he teaches organizational culture, implicit bias, and cultural and civil rights awareness.

Mr. Adelson’s blogs are a Bromberg exclusive.