
AI model shows accuracy in distinguishing mild/severe TED from normal tissue


Researchers are currently looking for improved detection of thyroid eye disease in patients.

Reviewed by Paul S. Zhou, MD

A deep learning model for thyroid eye disease (TED) may improve disease detection and outline the need for referrals to oculoplastic surgeons and endocrinologists, ensuring that patients diagnosed with the disease receive earlier treatment to prevent permanent disability and vision loss, according to Paul Zhou, MD.

Zhou is a research fellow at the Ophthalmic Plastic Surgery Service in the Department of Ophthalmology at Massachusetts Eye and Ear in Boston.

TED is characterized by a wide range of symptoms, including dry eye, photophobia, diplopia, and decreased visual acuity and visual fields. If left unaddressed, the disease can be not only cosmetically devastating for patients but can also increase the risk of compressive optic neuropathy.

Zhou and colleagues are seeking to facilitate easier detection of TED with an artificial intelligence (AI) model that uses radiographic imaging to screen patients and determine disease severity. Such a model would streamline diagnosis and severity grading for physicians such as general practitioners, endocrinologists, and thyroid surgeons, who may be less familiar with TED than general ophthalmologists and oculoplastic surgeons, he explained.

“Having such a tool that is readily available and aids in the diagnosis of TED can be helpful,” Zhou said.

The AI model

In the study, which may be the first to apply AI to TED in the United States, the investigators set out to train an AI algorithm to accurately detect TED and identify compressive optic neuropathy.

“An AI model is an approximation of a true function that relates inputs and outputs,” he explained.

During the training process, the model can learn to pick out subtle traits, allowing the model to learn and relearn from the images to which it is exposed.

In this study that spanned 10 years, the investigators retrospectively reviewed patients with orbital CT scans who had undergone an examination by an oculoplastic surgeon. The data set included patients with and without TED.

A region of interest on the CT scans was selected, and the left and right eyes were distinguished to allow independent AI training, he explained. The region of interest in the images was then categorized as normal or as showing mild or severe TED based on the clinical examination by an oculoplastic surgeon.

The data sets, comprising normal orbits and mild and severe TED, were transformed into color images; in cases of severe TED, this translated into more vibrant coloring surrounding the extraocular muscles and connective tissues.
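The article does not specify how the grayscale CT regions of interest were converted to color. A common approach, sketched below under that assumption, is to normalize the pixel intensities and apply a colormap; the array names and the choice of the "jet" colormap are illustrative, not taken from the study.

```python
import numpy as np
from matplotlib import cm

# Placeholder grayscale ROI: in practice this would be a cropped CT slice,
# with raw Hounsfield units normalized to the [0, 1] range first.
roi = np.random.rand(128, 128)

# Apply a colormap (choice of "jet" is an assumption, not from the study).
# The colormap call returns RGBA floats in [0, 1].
rgba = cm.jet(roi)

# Drop the alpha channel and convert to 8-bit RGB for the network input.
rgb = (rgba[..., :3] * 255).astype(np.uint8)
```

Under such a mapping, higher-intensity regions (e.g., enlarged extraocular muscles) land in the "hotter" end of the colormap, which is consistent with the more vibrant appearance the article describes in severe TED.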

The investigators used VGG16 (also called OxfordNet), a convolutional neural network named after the Visual Geometry Group at the University of Oxford. VGG16 is 16 layers deep and was previously trained on ImageNet, which contains more than 1 million images. When the regions of interest from the CT images were fed into VGG16, the model was able to differentiate among normal tissue, mild TED, and severe TED.

In this study, 885 images from 131 patients were used, of which 279 were normal, 251 showed mild TED, and 355 showed severe TED; 100 images of the total were held for later evaluation.

Zhou noted that the overall prediction accuracy across the 3 groups was 94.27%. The normal and mild TED cases were never misclassified as severe TED. Of the 355 cases with severe TED, 1 was misclassified as mild disease.

This model may help general practitioners distinguish between normal tissue and tissue with mild TED. “The AI model can make this distinction based on one snapshot, with an accuracy of 92.16%,” Zhou said.

When the AI model was tested using the 100 images kept for later evaluation, the accuracy was 98%. A further test pitted the AI model against a physician: When 114 unlabeled and randomly selected images were graded by an oculoplastic surgeon, the surgeon’s accuracy was 43.83%.

Future efforts will incorporate the model into radiology protocols, compare the model’s accuracy with other human experts, and apply similar machine learning to other ocular conditions such as orbital tumors and inflammation.

Paul Zhou, MD

E: paulzhou27@gmail.com

Zhou is a research fellow at Massachusetts Eye and Ear in Boston and has no financial interest in this subject matter.

© 2024 MJH Life Sciences

All rights reserved.