New technologies built on aggregated real-world data aim to shape clinical practice patterns and algorithms in the near future.
This article was reviewed by Robert T. Chang, MD
Artificial intelligence (AI) is the subject of numerous articles touting machines that function better than human ophthalmologists, but why is this happening?
In April 2018, the first FDA approval of autonomous AI for detecting referable diabetic retinopathy (DR) from fundus photos, the IDx-DR camera system (IDx Technologies, Inc.), legitimized and essentially jumpstarted AI in ophthalmology, according to Robert T. Chang, MD, associate professor at the Byers Eye Institute of Stanford University, Palo Alto, CA.
“The most important things in determining whether the technology will become widespread are how quickly doctors and patients will trust an AI system, which means understanding its strengths and limitations, and how easily the technology will be integrated into current eyecare workflows, especially in terms of liability and business models,” he said.
The FDA was very careful in approving this first autonomous, “doctorless” AI screening method for detecting DR in fundus images, with a heavy emphasis on safety (i.e., what could be missed).
The prospective multicenter trial for the IDx-DR breakthrough device included exacting requirements: a specific camera type, DR screening as the single primary indication, a narrow asymptomatic population not previously evaluated for DR, and specific minimum cutoffs for sensitivity and specificity in detecting more-than-mild disease, a threshold at which a false negative would be unlikely to result in a bad outcome.
While the narrow confines of the 2017 trial may limit generalizability or slow the adoption of telemedicine screening, an AI-driven screening approach may be ideal for ruling out disease in negative cases, which frees up physician time for positive ones, Dr. Chang explained.
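To make the trial's "minimum cutoffs" concrete, the sketch below computes sensitivity and specificity from a confusion matrix. The counts and endpoint values here are illustrative assumptions, not results from the IDx-DR trial.

```python
# Sensitivity: fraction of diseased patients the screen correctly flags.
# Specificity: fraction of healthy patients the screen correctly clears.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening results (NOT the trial's actual numbers):
sens, spec = sensitivity_specificity(tp=170, fn=20, tn=600, fp=80)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")

# A prespecified endpoint might require, for example, sensitivity
# above 85% and specificity above 82.5% before the device is
# considered safe for autonomous use.
assert sens > 0.85 and spec > 0.825
```

Note the asymmetry: for a screening tool whose main risk is a missed case, regulators weigh sensitivity (avoiding false negatives) especially heavily, which is why the trial fixed these cutoffs in advance.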
“AI-based screening algorithms can achieve economies of scale and increase access to care at lower cost, but high-quality, inexpensive image capture remains a barrier,” he said.
Currently, deep learning has been deployed for DR using supervised learning techniques, which require more than 100,000 labeled images (or subimages) to train the algorithm.
With such a large number of examples, modern computational power can fine-tune a “neural network” mathematical model to detect the most important features within an image and classify it with a certain degree of statistical certainty.
“With constant refinement, the model can achieve a performance that is equal or even superior to human pattern recognition, depending on the consensus ground truth (the predetermined right answer),” Dr. Chang said. This contrasts with older AI algorithms, in which human experts hand-programmed the features of DR; those algorithms were never able to achieve superhuman performance.
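The supervised-learning workflow described above can be sketched in miniature: start from labeled examples, then iteratively adjust a neural network's weights so its predicted probabilities match the ground-truth labels. Everything below (the synthetic "images", the planted disease feature, the tiny one-hidden-layer architecture) is a toy assumption for illustration; a production DR classifier is a deep convolutional network trained on 100,000+ expert-labeled fundus photographs.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n=1000, pixels=64):
    """Synthetic 'fundus patches': label-1 samples get a lesion-like bump."""
    X = rng.normal(0.0, 1.0, (n, pixels))
    y = rng.integers(0, 2, n).astype(float)
    X[y == 1, :8] += 2.0  # crude, hand-planted stand-in for a disease feature
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X, y = make_dataset()
hidden = 16
W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, hidden);               b2 = 0.0

# Plain full-batch gradient descent on the cross-entropy loss.
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)           # hidden-layer activations
    p = sigmoid(h @ W2 + b2)           # predicted P(referable DR) per image
    g = (p - y) / len(y)               # cross-entropy gradient at the output
    gh = np.outer(g, W2) * (1 - h**2)  # backpropagate through tanh
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
train_acc = float(((p > 0.5) == (y == 1)).mean())
print(f"training accuracy: {train_acc:.2f}")
```

The output probability `p` is the "degree of statistical certainty" in the article's phrasing: a screening threshold (here 0.5) converts it into a refer/no-refer decision, and the labels the network learns from play the role of the consensus ground truth.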