Study: Cross-disciplinary collaboration needed to ensure fairness of AI in healthcare

According to a study by a team of scientists led by Duke-NUS Medical School, pursuing fair AI for healthcare requires cross-disciplinary collaboration to translate methods into real-world benefits.

Overall, the authors argue that pursuing fair AI for healthcare requires collaboration between experts in AI, medicine, ethics and beyond. (Image courtesy of Adobe Stock)

Seeking fair artificial intelligence (AI) for healthcare requires collaboration between experts across disciplines, says a global team of scientists led by Duke-NUS Medical School.

The research was published in npj Digital Medicine.1

Liu Mingxuan, a PhD candidate in the Quantitative Biology and Medicine Program and Center for Quantitative Medicine (CQM) at Duke-NUS, noted in a Duke-NUS news release that while AI has demonstrated potential for healthcare insights, concerns around bias remain.2

“A fair model is expected to perform equally well across subgroups like age, gender and race. However, differences in performance may have underlying clinical reasons and may not necessarily indicate unfairness,” Liu Mingxuan said in the news release.

Ning Yilin, PhD, a research fellow with CQM and co-first author of the paper, highlighted AI’s potential as a useful tool.

“Focusing on equity—that is, recognizing factors like race, gender, etc., and adjusting the AI algorithm or its application to make sure more vulnerable groups get the care they need—rather than complete equality, is likely a more reasonable approach for clinical AI,” Ning said in the news release. “Patient preferences and prognosis are also crucial considerations, as equal treatment does not always mean fair treatment. An example of this is age, which frequently factors into treatment decisions and outcomes.”

Moreover, the research highlights key gaps between AI fairness research and clinical needs.

“Various metrics exist to measure model fairness, but choosing suitable ones for healthcare is difficult as they can conflict. Trade-offs are often inevitable,” said Liu Nan, PhD, associate professor at Duke-NUS’ CQM and senior and corresponding author of the paper.
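The tension Liu describes can be seen with a toy example. The sketch below is illustrative only (not from the study; the data, function names and group labels are made up): it computes two common group-fairness metrics on the same toy predictions and shows that one can be perfectly satisfied while the other reveals a gap.

```python
# Illustrative sketch (not from the study): two common group-fairness metrics
# computed on toy predictions, showing they need not agree.

def demographic_parity_diff(y_pred, groups):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(y_pred, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_diff(y_true, y_pred, groups):
    """Absolute difference in true-positive rates (recall) between groups A and B."""
    def tpr(g):
        preds_on_positives = [p for t, p, grp in zip(y_true, y_pred, groups)
                              if grp == g and t == 1]
        return sum(preds_on_positives) / len(preds_on_positives)
    return abs(tpr("A") - tpr("B"))

# Toy cohort: group B has a higher disease prevalence, so equal selection
# rates (demographic parity satisfied) can coexist with unequal recall.
y_true = [1, 0, 0, 0,  1, 1, 1, 0]
y_pred = [1, 1, 0, 0,  1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(y_pred, groups))         # 0.0 — equal selection rates
print(equal_opportunity_diff(y_true, y_pred, groups))  # nonzero recall gap
```

Here both groups receive positive predictions at the same rate, yet sicker group B has a lower chance of a true case being caught, so a model can look “fair” by one metric and unfair by another, which is exactly why trade-offs arise.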

Liu Nan also noted that differences detected between groups are frequently treated as biases to be mitigated in AI research.1

“However, in the medical context, we must discern between meaningful differences and true biases requiring correction,” he said in the news release.2

The researchers also outline the need to evaluate which attributes are considered “sensitive” for each application. They say that actively engaging clinicians is key for developing useful and fair AI models.

“Variables like race and ethnicity need careful handling as they may represent systemic biases or biological differences,” said Assoc Prof Liu. “Clinicians can provide context, determine if differences are justified, and guide models towards equitable decisions.”

Overall, the authors argue that pursuing fair AI for healthcare requires collaboration between experts in AI, medicine, ethics and beyond.1

Daniel Ting Shu Wei, PhD, an associate professor and director of SingHealth’s AI office, is a co-author of the study. He highlighted the importance of AI in healthcare.2

"Achieving fairness in the use of AI in healthcare is an important but highly complex issue. Despite extensive developments in fair AI methodologies, it remains challenging to translate them into actual clinical practice due to the nature of healthcare – which involves biological, ethical and social considerations,” he said in the news release.

Moreover, Ting pointed out that in an effort to push AI practices to benefit patient care, clinicians, AI and industry experts need to work together and take active steps towards addressing fairness in AI. He is also Senior Consultant at the Singapore National Eye Centre and Head of AI & Digital Innovation at the Singapore Eye Research Institute (SERI).

Marcus Ong, PhD, a senior co-author and director of the Health Services and Systems Research Program at Duke-NUS, who is also Senior Consultant at SGH’s Department of Emergency Medicine, highlighted the need for diverse opinions in AI.2

“Good intentions alone cannot guarantee fair AI unless we have collective oversight from diverse experts, considering all social and ethical nuances,” Ong said in the news release. “Pursuing equitable and unbiased AI to improve healthcare will require open, cross-disciplinary dialogues.”

Authors from across the SingHealth Duke-NUS Academic Medical Centre (including Duke-NUS, SingHealth, SGH, Singapore Eye Research Institute and Singapore National Eye Centre) worked together with experts from the University of Antwerp in Belgium as well as Weill Cornell Medicine, Massachusetts Institute of Technology, Beth Israel Deaconess Medical Center and Harvard T.H. Chan School of Public Health in the United States.1

Patrick Tan, PhD, senior vice-dean for Research at Duke-NUS, noted in the news release that the range of researchers involved in the effort demonstrates the importance of dialogue.2

“This global cooperation exemplifies the cross-disciplinary dialogues required to advance fair AI techniques for enhancing healthcare,” he concluded in the news release. “We hope this collaborative effort spanning Singapore, Europe, and the US provides valuable perspectives to inspire further multinational partnerships towards equitable and unbiased AI.”

References

1 Liu, M., Ning, Y., Teixayavong, S. et al. A translational perspective towards clinical AI fairness. npj Digit. Med. 6, 172 (2023). https://doi.org/10.1038/s41746-023-00918-4

2 Ensuring fairness of AI in healthcare requires cross-disciplinary collaboration. EurekAlert! Accessed October 23, 2023. https://eurekalert.org/news-releases/1005371.

© 2025 MJH Life Sciences

All rights reserved.