As technology advances, innovations increase physician capabilities.
Reviewed by Nitish Mehta, MD
Cutting-edge advances like artificial intelligence (AI) and machine learning (ML) are becoming part of the retinal imaging process, according to Nitish Mehta, MD, a retina specialist at New York University (NYU) Langone Eye Center in New York City who is excited both about the technology itself and about describing its capabilities.
Originally defined as “the notion…that any aspect of learning or a feature of intelligence can be so precisely described that a machine can be made to simulate it,”1 an AI system in medicine can be designed to learn from clinical and imaging data in annotated datasets, Mehta said, and thus assist with treatment.
But AI comes in a variety of “flavors,” he noted. Machine learning, which he defines as a “family of statistical methods that can do more than what we normally do with our simple statistical models,” has most relevance in the study of the retina. And deep learning, a subset of ML particularly adept at learning from images (computer vision), has demonstrated utility in the analysis of retinal images. “These tools,” he said, “may allow us to gain more insight and predictive abilities.”
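As a rough illustration of what deep learning on retinal images involves (not a description of any specific product or of Mehta’s work), the following Python sketch fine-tunes a pretrained convolutional network to sort fundus photographs into two hypothetical categories. The folder layout, labels, and training settings are placeholder assumptions.

```python
# Illustrative sketch only: a minimal convolutional classifier for fundus
# photographs, assuming a labeled dataset of images graded as "referable"
# vs. "non_referable" diabetic retinopathy. The dataset path and folder
# names below are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Standard preprocessing for a pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: fundus_images/referable, fundus_images/non_referable
dataset = ImageFolder("fundus_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Transfer learning: reuse a pretrained ResNet, retrain only the final layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 output classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # a single pass over the data, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```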
It is well known that, for a variety of reasons, too few US patients with diabetes are seen by an ophthalmologist in a timely manner. To streamline screening, programs have been developed for remote analysis of retinal imaging by trained human readers, who identify, for instance, individuals who should come to the clinic for in-person evaluation and management of diabetic retinopathy.
Many institutions and health care systems, including NYU Langone Health, have such programs in place. But an AI reader may be able to supplant a human reader and potentially save time and money. AI models have demonstrated the capability to categorize images as well as, and perhaps more consistently than, human readers. Investing in AI readers may lower the burden of retinal screening programs and allow for their expansion.
The logical next step is to expand beyond screening to diagnostic grading. There is reason to hope that the algorithm may be able to determine disease stage (mild, moderate, and severe diabetic retinopathy) because AI has already been used to segment fluid volumes in diabetic macular edema (DME) and neovascular age-related macular degeneration (AMD), potentially grading disease severity.
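To make the fluid-volume idea concrete, here is a minimal, hypothetical Python sketch of the post-processing step that might follow a segmentation model: it converts a per-voxel fluid probability map for an OCT volume into an estimated fluid volume and a coarse severity label. The voxel dimensions, threshold, and severity cut-offs are invented for illustration and do not come from any validated grading scheme.

```python
# Hypothetical sketch: turning a model's per-voxel fluid segmentation of an
# OCT volume into a fluid volume (µL) and an illustrative severity tier.
import numpy as np

def fluid_volume_ul(prob_map: np.ndarray,
                    voxel_dims_um=(12.0, 12.0, 4.0),
                    threshold: float = 0.5) -> float:
    """Count voxels classified as fluid and convert to microliters."""
    voxel_volume_um3 = np.prod(voxel_dims_um)            # µm³ per voxel
    n_fluid_voxels = np.count_nonzero(prob_map >= threshold)
    return n_fluid_voxels * voxel_volume_um3 * 1e-9      # 1 µL = 1e9 µm³

def severity_tier(volume_ul: float) -> str:
    """Map fluid volume to an illustrative (not validated) severity label."""
    if volume_ul < 0.05:
        return "minimal"
    if volume_ul < 0.3:
        return "moderate"
    return "severe"

# Random probability map standing in for a segmentation model's output
prob_map = np.random.rand(49, 512, 496)   # B-scans x A-scans x depth
vol = fluid_volume_ul(prob_map)
print(f"Estimated fluid volume: {vol:.2f} µL -> {severity_tier(vol)}")
```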
Taking this further, can AI itself teach the observer a new way to categorize patients based on retinal imaging and/or clinical outcomes? This is an active area of research, but the Holy Grail of AI in this space is perhaps guidance on therapy and prognosis.
For example, clinical trials offer physicians a plethora of recommendations and predictions, Mehta noted. Could an AI system synthesize such data to provide patient prognosis? Are there features we don’t yet use, like a small variation in the pixelation of a retinal image, that may have clinical value? Can AI be used to predict the direction in which geographic atrophy will spread in a patient with advanced AMD?
Finally, Mehta foresees a synergy between AI screening algorithms and future adoption of home optical coherence tomography (OCT), imagining an AI-assisted platform that could detect disease activity on a home OCT device without manual oversight and provide seamless, timely identification of a patient who has converted from dry to wet AMD. Although these are only theories at the moment, ophthalmologists can look forward to seeing them put into practice in the future.
Screening via fundus photography is the application that is furthest along in terms of development and implementation. The FDA has approved 2 devices, IDx-DR and EyeArt, for use in primary care to identify referral-warranted diabetic retinopathy. Automated screening could flag these patients earlier or more often than standard referral pathways and improve outcomes for the population as a whole. However, the ability of AI to grade disease and recommend treatment is still at the research stage.
In 2021, the American Medical Association released a new Current Procedural Terminology (CPT) code that allows clinicians to bill government and private insurers for the use of AI services. CPT code 92229 refers to retinal imaging performed to detect disease with automated analysis and report at the point of care. As of late 2022, the Centers for Medicare & Medicaid Services had established a national price for this code.2 “The hope is,” Mehta said, “that payers will continue to incorporate this code, and you could see this…AI tool [being used]…in retinal imaging practices.”
Mehta also offered an example of how AI can benefit patients. If someone with retinal vein occlusion has been treated with monthly anti-VEGF injections for about 6 months, a physician may consider switching to as-needed (PRN) treatment. Clinical trial data suggest that, on average, these patients receive fewer injections later in the course of treatment. However, there may be nuances in the data that AI/ML models could reveal.
Mehta described an NYU study in which clinical data and imaging from the COPERNICUS and GALILEO trials were fed into an ML algorithm. The investigators wanted to know whether there were clinical or imaging biomarkers present during the first 6 months that could predict the outcome during the as-needed treatment period.
“The outcome of the ML model was fascinating in that patients with higher central subfield thickness (thicker retinas) at baseline or at week 4 were more likely to receive more than 2 injection treatments during the PRN treatment phase, which was late in the treatment (weeks 24 to 52),”3 he said.
The ML model provided actionable clinical information, allowing physicians to tell patients with thick retinas at baseline that they were likely to need more injections during the as-needed phase of the first year of treatment. “As this research continues to progress,” Mehta added, “you may see these lessons that we derive from AI being put…into clinical practice [in terms of treatment guidance].”
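As a schematic of the kind of analysis described above (not the NYU group’s actual pipeline), the following Python sketch trains a simple classifier to predict whether an eye will need more than 2 injections during the PRN phase from central subfield thickness (CST) at baseline and at week 4. All data below are synthetic placeholders.

```python
# Illustrative sketch with synthetic data: predict ">2 PRN injections
# (weeks 24-52)" from CST at baseline and week 4.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic features: CST (µm) at baseline and week 4 for 200 hypothetical eyes
cst_baseline = rng.normal(600, 120, size=200)
cst_week4 = cst_baseline - rng.normal(200, 60, size=200)
X = np.column_stack([cst_baseline, cst_week4])

# Synthetic label: thicker baseline retinas are more likely to need >2 injections
p = 1 / (1 + np.exp(-(cst_baseline - 600) / 80))
y = rng.binomial(1, p)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC on synthetic data: {auc:.2f}")
```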
However, as Mehta made clear, these models are only as good as the data they are given and the questions they are asked. For example, if a model is trained on one patient subset and then applied to a different subset, its performance may degrade. This was most famously demonstrated by a Google group that recently developed a diabetic retinopathy AI screening program that performed far worse when deployed abroad than it had during development.4
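The pitfall Mehta describes, a model learning cohort-specific shortcuts that do not transfer, can be illustrated with a toy Python example: a classifier trained where a spurious feature tracks the label looks excellent on an internal test split but degrades on an external cohort in which that shortcut disappears. All data and numbers here are synthetic.

```python
# Synthetic demonstration of dataset shift / shortcut learning.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_cohort(n, shortcut_strength):
    """Synthetic cohort: a weak true feature plus an optional 'shortcut' feature."""
    signal = rng.normal(0, 1, n)
    y = rng.binomial(1, 1 / (1 + np.exp(-signal)))            # labels driven by true signal
    x_true = signal + rng.normal(0, 2.0, n)                   # noisy view of the signal
    x_shortcut = shortcut_strength * (2 * y - 1) + rng.normal(0, 1, n)
    return np.column_stack([x_true, x_shortcut]), y

# Development population: the shortcut feature strongly tracks the label
X_train, y_train = make_cohort(2000, shortcut_strength=1.5)
X_internal, y_internal = make_cohort(1000, shortcut_strength=1.5)
# External population: the shortcut is absent (e.g., different camera or clinic)
X_external, y_external = make_cohort(1000, shortcut_strength=0.0)

clf = LogisticRegression().fit(X_train, y_train)
for name, X, y in [("internal test", X_internal, y_internal),
                   ("external test", X_external, y_external)]:
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
    print(f"{name} AUC: {auc:.2f}")
```

On the internal split the classifier appears highly accurate; on the external cohort its performance drops sharply, mirroring the real-world failure mode referenced above.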
Another concern surrounds the explainability of the models, ie, how results are derived, and Mehta urged transparency with respect to model development. There are also regulatory issues that may impact patient outcomes. Cost effectiveness and how the system would be incorporated into the clinical workflow are additional considerations. Finally, ethical questions are very important in AI, as is making sure that biases are not introduced into data sets that could impact real-world patients, Mehta observed.
Mehta believes that AI imaging will reveal features not previously foreseen and answer questions that physicians could not previously address. “We have a large set of biomarkers in our retinal imaging. Perhaps there are things that we haven’t looked at yet,” he said, adding that AI imaging may also allow for effective collaboration among specialists: “We may be able to reveal neurodegenerative disorders or cardiovascular risks that can help the patient get the care that they need.”
Challenging clinical questions about the choice between medication and surgery, for example, may be guided by AI-based results. “And most importantly,” Mehta concluded, “the route for patients to present to our retinal clinics will be guided by AI-based screening platforms.”