Will AI maintain or eliminate health inequalities?

May 15, 2023 — Wherever you look, applications of artificial intelligence and machine learning are being deployed to change the status quo. This is especially true in healthcare, where technological advancements are accelerating drug discovery and identifying potential new cures.

But this progress does not come without red flags. It has also put a magnifying glass on avoidable health disparities: differences in disease burden, injury, violence and opportunities to achieve optimal health that disproportionately affect people of color and other disadvantaged communities.

The question is whether AI applications will further increase or decrease health disparities, especially when it comes to developing clinical algorithms that doctors use to detect and diagnose diseases, predict outcomes and guide treatment strategies.

“One of the problems that has been shown in AI in general and medicine in particular is that these algorithms can be biased, meaning they perform differently on different groups of people,” said Paul Yi, MD, assistant professor of diagnostic radiology and nuclear medicine at the University of Maryland School of Medicine, and director of the University of Maryland Medical Intelligent Imaging (UM2ii) Center.

“For medicine, getting the wrong diagnosis is literally life or death, depending on the situation,” Yi said.

Yi co-authored a study, published last month in the journal Nature Medicine, in which he and his colleagues sought to discover whether medical imaging datasets used in data science competitions help or hinder the ability to spot biases in AI models. These competitions involve computer scientists and physicians crowdsourcing data from around the world, with teams competing to create the best clinical algorithms, many of which are put into practice.

The researchers searched Kaggle, a popular data science competition site, for medical imaging competitions held between 2010 and 2022. They then evaluated the datasets to see if any demographic variables were reported. Finally, they looked at whether the competition included demographic performance as part of the algorithms’ evaluation criteria.

Yi said that of the 23 datasets included in the study, “the majority — 61% — reported no demographics at all.” Nine competitions reported demographics (mostly age and gender), and one reported race and ethnicity.

“None of these data science competitions, whether or not they reported demographics, evaluated these biases, that is, the accuracy of answers in males versus females, or in white versus Black versus Asian patients,” Yi said. The implication? “If we don’t have the demographics, we can’t measure bias,” he explained.
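Measuring that kind of bias is straightforward once demographics are actually recorded: it amounts to stratifying a model’s accuracy by demographic group. A minimal sketch in Python (the data and column names here are hypothetical):

```python
import pandas as pd

# Hypothetical evaluation results: one row per patient, with the model's
# prediction, the ground-truth label, and a self-reported demographic field.
results = pd.DataFrame({
    "prediction":   [1, 0, 1, 1, 0, 0, 1, 0],
    "ground_truth": [1, 0, 0, 1, 0, 1, 1, 0],
    "sex":          ["F", "F", "F", "F", "M", "M", "M", "M"],
})

# Stratify accuracy by group: the comparison Yi describes
# (e.g., males versus females).
results["correct"] = results["prediction"] == results["ground_truth"]
per_group_accuracy = results.groupby("sex")["correct"].mean()
print(per_group_accuracy)

# A large gap between groups is a red flag for bias.
print("accuracy gap:", per_group_accuracy.max() - per_group_accuracy.min())
```

Without that demographic column, none of this can be computed, which is exactly Yi’s point: no demographics, no bias measurement.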

Algorithmic hygiene, checks and balances

“To reduce bias in AI, developers, inventors and researchers of AI-based medical technologies should consciously prepare to avoid it by proactively improving the representation of certain populations in their dataset,” said Bertalan Meskó, MD, PhD, director of The Medical Futurist Institute in Budapest, Hungary.

One approach, which Meskó called “algorithmic hygiene,” is similar to the one a group of researchers at Emory University in Atlanta took when they created a racially diverse, detailed dataset: the EMory BrEast Imaging Dataset (EMBED), which consists of 3.4 million breast cancer screening and diagnostic mammography images. Forty-two percent of the 11,910 unique patients represented were self-reported African American women.

“The fact that our database is diverse is kind of a direct by-product of our patient population,” said Hari Trivedi, MD, assistant professor in the Departments of Radiology and Imaging Sciences and Biomedical Informatics at Emory University School of Medicine and co-director of the Health Innovation and Translational Informatics (HITI) lab.

“Even now, the vast majority of datasets used in deep learning model development do not include that demographic information,” said Trivedi. “But it was really important in EMBED and any future datasets we develop to make that information available, because without it, it’s impossible to know how and when your model might be biased, or whether a model you’re testing may be biased.”

“You can’t just close your eyes to it,” he said.

Importantly, bias can be introduced at any point in the AI development cycle, not just at the beginning.

“Developers could use statistical tests that allow them to detect whether the data used to train the algorithm differs significantly from the actual data they encounter in practice,” Meskó said. “This may indicate biases due to the training data.”
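One common form of such a test, offered here as an illustration rather than Meskó’s specific method, is a two-sample Kolmogorov-Smirnov test comparing a feature’s distribution in the training data with its distribution in the data seen after deployment (the feature and numbers below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical feature: patient ages in the training data versus ages
# actually encountered in practice (a younger population here).
train_ages = rng.normal(loc=55, scale=10, size=5000)
deployed_ages = rng.normal(loc=48, scale=12, size=1200)

# Two-sample Kolmogorov-Smirnov test: do the two samples plausibly
# come from the same distribution?
statistic, p_value = stats.ks_2samp(train_ages, deployed_ages)

if p_value < 0.01:
    print(f"Significant shift detected (KS={statistic:.3f}, p={p_value:.2g})")
    # A shift like this suggests the training data differ from the data
    # the algorithm encounters in practice, one of the bias warning
    # signs Meskó describes.
```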

Another approach is de-biasing, which helps eliminate differences between groups or individuals based on personal traits. Meskó pointed to IBM’s open source AI Fairness 360 Toolkit, a comprehensive set of metrics and algorithms that researchers and developers can use to reduce bias in their own datasets and AI models.
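As a concrete example, AI Fairness 360’s pre-processing Reweighing algorithm adjusts instance weights so that favorable outcomes are balanced across groups before a model is ever trained. A minimal sketch, assuming a simple tabular dataset with a binary label and a binary protected attribute (all column names and values here are hypothetical):

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical tabular data: a binary outcome plus a protected attribute.
df = pd.DataFrame({
    "diagnosis": [1, 0, 1, 0, 1, 0, 1, 0],
    "age":       [64, 51, 70, 45, 58, 62, 49, 67],
    "race":      [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = privileged group here
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["diagnosis"],
    protected_attribute_names=["race"],
)

privileged = [{"race": 1}]
unprivileged = [{"race": 0}]

# Measure bias before de-biasing.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("disparate impact before:", metric.disparate_impact())

# Reweighing rebalances instance weights across groups so that a model
# trained on the data does not inherit the original imbalance.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("disparate impact after:", metric_transf.disparate_impact())
```

A disparate impact of 1.0 indicates parity between groups; the toolkit also includes in-processing and post-processing algorithms for models that are already trained.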

Checks and balances are also important. This could be, for example, “cross-checking the decisions of the algorithms by humans and vice versa. This way they can hold each other accountable and reduce bias,” said Meskó.

Keeping humans in the loop

Speaking of checks and balances, should patients worry that a machine is replacing a doctor’s judgment or driving potentially dangerous decisions because a critical piece of data is missing?

Trivedi said AI research guidelines are under development that specifically focus on rules to consider when testing and evaluating models, especially models that are open source. In addition, the FDA and the Department of Health and Human Services are trying to regulate the development and validation of algorithms, with the aim of improving accuracy, transparency and fairness.

Like medicine itself, AI is not a one-size-fits-all solution, and perhaps checks and balances, consistent evaluation, and concerted efforts to build diverse, inclusive datasets can address and ultimately help overcome pervasive health inequalities.

At the same time, “I think we’re a long way from completely removing the human element and not involving clinicians in the process,” said Kelly Michelson, MD, MPH, director of the Center for Bioethics and Medical Humanities at Northwestern University Feinberg School of Medicine and attending physician at Ann & Robert H. Lurie Children’s Hospital in Chicago.

“There are actually some great opportunities for AI to reduce inequalities,” she said, also noting that AI isn’t simply “this one big thing.”

“AI means a lot of different things in a lot of different places,” said Michelson. “And the way it is used is different. It’s important to recognize that issues around bias and the impact on health inequalities will be different depending on what kind of AI you’re talking about.”
