ARVO 2024: How a deep learning model can benefit femtosecond laser-assisted cataract surgery


Dustin Morley, PhD, principal research scientist at LENSAR, discusses research on applying deep learning to benefit FLACS procedures.

At the 2024 ARVO meeting in Seattle, Washington, the Eye Care Network took time to speak with Dustin Morley, PhD. Morley, a principal research scientist at LENSAR, spoke about his recent work investigating the benefit of deep learning models in femtosecond laser-assisted cataract surgery (FLACS) procedures.

Video transcript

Please note: The transcript below has been lightly edited for clarity.

Dustin Morley, PhD:

Hello, I'm Dustin Morley, principal research scientist at LENSAR, and I'm here at ARVO 2024 to present our research on applying modern artificial intelligence, in the form of deep learning, to the benefit of FLACS procedures. As we know, correctly identifying the anterior and posterior surfaces of the cataractous lens, as well as the cornea, is critical for a safe and effective FLACS procedure. So our study goal was to determine whether deep learning is a suitable method to fully solve this problem for the benefit of FLACS procedures.

To study this, we obtained de-identified Scheimpflug scans for a total of 973 eyes, the vast majority of which contained cataract, and the dataset covered a wide variety of cataract morphologies. On that dataset we also performed aggressive data augmentation to simulate things like illumination changes and geometric differences, such as warping or rotating the images.

On that full composite dataset, we designed and trained a deep convolutional neural network based on the U-Net architecture to identify and classify all pixels belonging to the anterior and posterior surfaces of the lens and cornea. From those pixels, we then applied a RANSAC algorithm to obtain best-fitting geometric curves, which we could project into 3D space for a composite 3D reconstruction, which was ultimately needed to correctly position all of the laser patterns for the treatment of the eye.

To assess how well the model performed, we did twofold cross-validation, specifically on the 692 images that were both of cataractous eyes and acquired by our newer ALLY system, and we used the ability to obtain that final 3D surface reconstruction as our endpoint. What we found was that there were zero failures to reconstruct the anterior and posterior cornea surfaces, and zero failures to reconstruct the anterior lens surface. There were five failures to reconstruct the posterior lens surface, but for three of those the task was legitimately impossible, because the cataracts were so advanced that the surface itself was completely invisible. That leaves only two failures among the cases that were actually doable, for a success rate of 99.7%.
So from that, we conclude that deep learning is in fact very well suited to fully solve the problem of locating the anterior and posterior surfaces of the cataractous lens and cornea, even in the presence of very advanced and challenging cataract artifacts.
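The "aggressive data augmentation" Morley describes (illumination changes plus geometric perturbations) can be sketched in a few lines. This is a hypothetical illustration, not LENSAR's actual pipeline: the function name `augment`, the parameter ranges, and the use of a simple integer translation in place of full image warping and rotation are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Apply one random photometric + geometric perturbation to a grayscale scan.

    Hypothetical stand-in for the augmentation described in the talk;
    parameter ranges are illustrative, not from the study.
    """
    out = image.astype(np.float32)
    # Illumination change: random gain and bias.
    gain = rng.uniform(0.7, 1.3)
    bias = rng.uniform(-10.0, 10.0)
    out = out * gain + bias
    # Sensor-like additive noise.
    out += rng.normal(0.0, 2.0, size=out.shape)
    # Simple geometric difference: small integer translation
    # (a full pipeline would also warp and rotate).
    dy, dx = rng.integers(-5, 6, size=2)
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)

scan = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # fake Scheimpflug slice
augmented = augment(scan)
print(augmented.shape, augmented.dtype)
```

Each training image can be passed through such a function many times with fresh random draws, multiplying the effective size of the 973-eye dataset.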
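The pixels-to-curves step, in which RANSAC takes the pixels the network classified as a surface and returns a best-fitting geometric curve, can likewise be sketched. This is a minimal sketch, assuming a parabola as the curve model in a 2D cross-section; the real system fits each lens and cornea surface and projects the curves into 3D, and none of the names or parameters below come from LENSAR.

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_parabola(x, y, n_iters=200, tol=1.0):
    """RANSAC fit of y = a*x**2 + b*x + c to candidate surface pixels.

    Repeatedly fits a parabola to a minimal random sample, counts inliers
    within `tol`, keeps the sample with the most inliers, then refits on
    all of that sample's inliers.
    """
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(x), size=3, replace=False)  # minimal sample for a parabola
        coeffs = np.polyfit(x[idx], y[idx], deg=2)
        resid = np.abs(np.polyval(coeffs, x) - y)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the inliers of the best model.
    return np.polyfit(x[best_inliers], y[best_inliers], deg=2), best_inliers

# Synthetic "surface pixels": a noisy parabola with ~30% gross outliers,
# mimicking stray pixels from a cataract-degraded segmentation.
x = np.linspace(-3, 3, 200)
y = 0.5 * x**2 + 1.0 + rng.normal(0, 0.1, x.size)
outliers = rng.random(x.size) < 0.3
y[outliers] += rng.uniform(-10, 10, outliers.sum())

coeffs, inliers = ransac_parabola(x, y)
print(np.round(coeffs, 2))  # close to the true [0.5, 0.0, 1.0]
```

The value of RANSAC here is exactly the property the talk relies on: a plain least-squares fit would be dragged off the surface by misclassified pixels, while the consensus step ignores them.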

Based on that, LENSAR has incorporated this deep learning model into our latest next-generation femtosecond laser, the ALLY system, thereby eliminating the need for manual surface placement as part of the FLACS procedure workflow. As the developer, I found the process wonderfully streamlined: whenever you want to make the model better, you do the same thing every time. You get more images, you label them, you train on the expanded dataset, and the model gets better. That is much better than the old days of trying to cram a set of handcrafted rules together to make it all work, then finding one case that suddenly doesn't fit the rules and having to redesign the whole thing. This is just nice and perfectly streamlined: you get more data, label it, and train on the extra data.

As for the next step in the research, we are always continuing to collect images, watching for types of cataracts that are so rare we haven't seen them before and that the model might still struggle on. We haven't seen many of those yet, but there are a lot of people in the world and many different ways a cataract can look. You never know when you're finally going to hit the 1-in-100,000 or 1-in-a-million case that's completely different from everything else. So we'll always be on the lookout for those and will incorporate them, as well as seeing what other avenues we could explore with the same deep learning technology on similar imaging modalities.

© 2024 MJH Life Sciences

All rights reserved.