Incorporating Health Equity and Racial Justice into Medical Artificial Intelligence Systems
Media contact: Chloe Meck, cmeck@bilh.org
Written by Jacqueline Mitchell
MAY 27, 2021
Thanks to advances in artificial intelligence (AI) and machine learning, computer programs can distinguish between images of benign growths and more ominous ones, identify patients at risk for specific conditions such as cardiovascular disease or HIV, and help administrators forecast resource needs. In the context of the COVID-19 pandemic, scientists used rapidly evolving AI technologies to predict local viral outbreaks and hotspots, project virus transmission, and forecast mortality rates under various lockdown or distancing scenarios. However, longstanding concerns that unconscious or unintentional biases can be embedded in the development and application of AI technologies only intensified during the pandemic.
"While there have been great advances in personalized medicine and AI-based biomedical discovery, there is also a lack of diverse clinical research data used to generate those treatment strategies, which can result in worse outcomes for underrepresented members of the community," said Yuri Quintana, PhD, Chief of the Division of Clinical Informatics at Beth Israel Deaconess Medical Center (BIDMC). "Without consciously and appropriately addressing potential health equity concerns in the development of AI-based solutions in healthcare, these tools may exacerbate structural inequities that can lead to disparate health outcomes."
A systems design engineer, Quintana, who is also Assistant Professor of Medicine at Harvard Medical School, is working to ensure that the next generation of AI systems is developed in ways that intentionally promote racial health equity and social justice. After starting his career developing early AI systems at IBM, Quintana has worked extensively in global health initiatives to reduce health inequities. He is an advisor to the Pan American Health Organization (PAHO) on e-health for Latin America and participated in a PAHO forum on AI for public health organized by the International Telecommunication Union (ITU). He is currently working with colleagues on projects to expand access to AI-based remote patient monitoring for underserved communities, including older adults.
If social inequities and health disparities are not overtly addressed, bias can be introduced into AI applications early in the design phase and have unintended consequences for their outcomes. AI-based analytics performed without diverse data that include underserved communities may yield a biased, inaccurate view of disease and mortality as they affect racial and ethnic groups. "It's important to make sure we have equity in how we collect data because those data can influence public policy on a national level," said Quintana at a recent American Medical Informatics Association national informatics conference.
When disadvantaged populations are not well represented in clinical trial data, algorithms developed and deployed on these incomplete data sets may produce detrimental outcomes. Existing clinical datasets have been criticized for not fully accounting for genetic diversity across all human populations, which is needed to tailor precision medical treatments. Additionally, algorithms trained on readily measurable data may not capture relevant environmental information, social data, or patients' cultural beliefs, preferences, and values. Quintana believes that a strategic approach to AI design and development, one that considers health equity principles in data management, model building, training, and deployment from conception to implementation, can avoid these pitfalls.
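Representation gaps of this kind can be checked mechanically before a model is ever trained. The short Python sketch below is an illustrative assumption, not part of Quintana's framework: it compares the demographic makeup of a hypothetical clinical training cohort against reference population proportions and flags groups that fall below an arbitrary representation threshold. All group labels, proportions, and the 0.8 cutoff are invented for demonstration.

```python
import pandas as pd

# Hypothetical demographic composition of a clinical training cohort.
cohort = pd.Series({"White": 0.78, "Black": 0.08, "Hispanic": 0.09, "Asian": 0.05})

# Hypothetical reference proportions for the population the model will serve.
reference = pd.Series({"White": 0.60, "Black": 0.13, "Hispanic": 0.19, "Asian": 0.06})

# Representation ratio: 1.0 means a group appears at its population rate.
ratio = (cohort / reference).round(2)

# Flag groups whose share of the training data falls well below their
# share of the target population (cutoff chosen purely for illustration).
underrepresented = ratio[ratio < 0.8]
print("Representation ratios:\n", ratio, sep="")
print("\nUnderrepresented groups:\n", underrepresented, sep="")
```

An audit like this is only a starting point: it surfaces who is missing from the data but says nothing about why, which is where the stakeholder engagement described in the framework below comes in.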
To that end, Quintana recently collaborated with colleagues to develop a seven-step framework for integrating health equity and racial justice into AI development. The steps include: defining objectives for an AI system that align with promoting equity, including identifying inclusive data sets and developing policies for data stewardship; establishing equity-sensitive metrics and key performance indicators tied to the target outcomes, to promote awareness of systemic racism, discrimination, and exclusion and their effects on adverse health outcomes in socially disadvantaged populations; engaging stakeholders, patients, and end users in implementing AI systems in a way that fosters accountability, trust, transparency, fairness, and privacy; and maintaining and updating AI systems so that performance reflects clinical care environments, changing patient demographics, and new evidence.
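One common way to make "equity-sensitive metrics" concrete (an assumption here, not the framework's prescribed method) is to disaggregate a model's performance by demographic subgroup and report the gap between the best- and worst-served groups. The sketch below computes per-group true positive rates for a hypothetical risk model; the group labels and predictions are fabricated for illustration.

```python
import numpy as np

# Hypothetical outcomes and predictions from a risk model,
# with a demographic group label recorded per patient.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "A",
                   "B", "B", "B", "B", "B", "B"])

def tpr_by_group(y_true, y_pred, group):
    """True positive rate (sensitivity) computed separately per subgroup."""
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)   # actual positives in this group
        rates[g] = y_pred[mask].mean() if mask.any() else float("nan")
    return rates

rates = tpr_by_group(y_true, y_pred, group)
gap = max(rates.values()) - min(rates.values())
print("TPR by group:", rates)
print(f"Equity gap (max - min TPR): {gap:.2f}")
```

Reporting this gap alongside aggregate accuracy makes disparate performance visible over time, which supports the framework's maintenance step as patient demographics and clinical environments change.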
"We will achieve racial equity when a person's racial or ethnic identity no longer predicts their social or economic opportunities and health outcomes," Quintana said. "Simply denouncing or illuminating discrimination or stereotyping and bias is not sufficient. Instead, organizations and systems need to reimagine and co-create a different culture and society by implementing interventions that impact multiple sectors or processes or practice."
About Beth Israel Deaconess Medical Center
Beth Israel Deaconess Medical Center is a leading academic medical center, where extraordinary care is supported by high-quality education and research. BIDMC is a teaching affiliate of Harvard Medical School, and consistently ranks as a national leader among independent hospitals in National Institutes of Health funding. BIDMC is the official hospital of the Boston Red Sox.
Beth Israel Deaconess Medical Center is a part of Beth Israel Lahey Health, a health care system that brings together academic medical centers and teaching hospitals, community and specialty hospitals, more than 4,700 physicians and 39,000 employees in a shared mission to expand access to great care and advance the science and practice of medicine through groundbreaking research and education.