Q&A: Joaquin De Rojas, MD, on ASCRS 2025 Best Paper of Session for advancing AK with machine learning

Key Takeaways

  • Machine learning improves AK nomograms by identifying key variables like preoperative astigmatism and age, enhancing predictive accuracy.
  • The model allows personalized preoperative planning, suggesting alternatives when necessary, and can be customized with different data sources.

A machine learning model incorporating treated astigmatism and nuanced inputs is advancing the precision and personalization of arcuate keratotomy planning.

Joaquin De Rojas, MD, presents his research on using machine learning to enhance arcuate keratotomy nomograms, earning the ASCRS Best Paper of Session honor. (Photo courtesy of Joaquin De Rojas, MD)

At the 2025 American Society of Cataract and Refractive Surgery annual meeting, held April 25–28 in Los Angeles, California, Joaquin De Rojas, MD, highlighted key insights from his work on applying machine learning to enhance arcuate keratotomy (AK) nomograms. His presentation1 earned him the Best Paper of Session award. De Rojas, a cataract, refractive, and corneal surgeon, as well as the Director of Refractive Surgery at the Center for Sight in Sarasota, Florida, provided further insights in this interview with Ophthalmology Times.

Transcript edited for clarity.

What were the most influential variables identified in your retrospective analysis that helped drive the machine learning model’s predictive accuracy?

Joaquin De Rojas, MD: In an effort to improve AK predictions, we leveraged machine learning to help us understand the complex interplay of what we call 'features' in machine learning, which you can think of as the input variables you plug in to try to figure out what's going to lead to the best outcome. When you're training the model to figure this out, you're able to incorporate many more input factors than other models have in the past.

This is the power of AI and the power of machine learning. After we did the analysis and trained the model, you can use factor analysis to figure out which features were most important for determining how much arcuate to make: what's the sweep and what's the axis. The one that stood out as most important is the amount of astigmatism that you input. The preoperative corneal astigmatism is going to make the biggest difference. That's no surprise; that's what most current nomograms are based on. It's just astigmatism leading to an arcuate, maybe with an age modifier.

Watch the video: ASCRS 2025: Joaquin De Rojas, MD, leverages machine learning model to predict arcuate outcomes

There's no doubt that astigmatism was the strongest predictor, but what we did a little differently is that it's not just the corneal astigmatism that you input. You take cases that have already been done, look retrospectively at the residual refractive error after the surgery was completed, and subtract that residual refractive error from the preoperative astigmatism, making sure they're on the same axis. You can do that with a basic trigonometric function so everything is referenced to the same axis.

What you get is called treated astigmatism: essentially, the amount of astigmatism that was actually treated by the arcuate. That's what you want to train the model on. When we did the retrospective analysis, this factor, treated astigmatism, was the most predictive of arcuate incision length.
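
For readers who want to see the arithmetic, here is a minimal Python sketch of the double-angle vector subtraction De Rojas describes, using the standard convention of doubling the axis before converting to Cartesian components. The function names and example values are ours for illustration, not the study's.

```python
import math

def to_components(magnitude_d, axis_deg):
    """Convert astigmatism (magnitude in diopters, axis in degrees)
    to double-angle vector components, the standard trick for putting
    astigmatism values 'on the same axis' before adding/subtracting."""
    theta = math.radians(2 * axis_deg)
    return magnitude_d * math.cos(theta), magnitude_d * math.sin(theta)

def from_components(x, y):
    """Convert double-angle components back to magnitude and axis."""
    magnitude = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2 % 180
    return magnitude, axis

def treated_astigmatism(preop_mag, preop_axis, resid_mag, resid_axis):
    """Treated astigmatism = preoperative corneal astigmatism minus
    residual refractive astigmatism, subtracted as vectors so both
    values are referenced to the same axis."""
    px, py = to_components(preop_mag, preop_axis)
    rx, ry = to_components(resid_mag, resid_axis)
    return from_components(px - rx, py - ry)

# Example: 1.50 D preop at 90 degrees, 0.50 D residual at 85 degrees
mag, axis = treated_astigmatism(1.50, 90, 0.50, 85)
print(f"treated: {mag:.2f} D @ {axis:.0f} degrees")
```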

Age was another important feature. We were surprised to see a few other features that mattered, for example, white-to-white, something we don't typically think about. But if you think about it, the same arcuate incision at a 9-mm diameter is going to have a different effect on a wider cornea versus a smaller cornea, right? We also looked at other factors, like sex, which had a small effect, and right eyes versus left eyes. You're making your SIA [surgically induced astigmatism] at a specific axis, and there's an interplay there between right or left eye, and also whether you're right- or left-handed. We looked at all of these, and they all had small effects. There were others, too, like axial length: do longer eyes perform differently, or do they perform better? And we even looked at data from different biometers as well.

In short, the most important features were the astigmatism- and age-related factors, things we already know have a strong effect, but we also had all these other features with smaller effects that helped push the percentage of improvement beyond what we had. And we leveraged the power of AI not just to understand how these factors play in individually, but to capture the complex interplay among them, relationships we couldn't really figure out ourselves, which the machine learning can determine by figuring out the weights of the model.
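
As a rough illustration of this step, the sketch below trains a gradient-boosted model on synthetic tabular data and ranks the learned feature importances. The feature names are hypothetical stand-ins for the predictors discussed above, and the data are fabricated so the example runs as-is; it mirrors the general workflow, not the study's actual dataset or code.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 300
features = ["treated_astig_d", "age", "white_to_white_mm",
            "sex_female", "right_eye", "axial_length_mm"]
X = np.column_stack([
    rng.uniform(0.3, 1.5, n),     # treated astigmatism (D)
    rng.uniform(50, 85, n),       # age (years)
    rng.uniform(11, 13, n),       # white-to-white (mm)
    rng.integers(0, 2, n),        # sex, encoded 0/1
    rng.integers(0, 2, n),        # OD vs OS, encoded 0/1
    rng.uniform(22, 26, n),       # axial length (mm)
])
# Synthetic target: arc sweep in degrees, driven mostly by astigmatism
y = 30 * X[:, 0] - 0.2 * (X[:, 1] - 65) + rng.normal(0, 2, n)

model = xgb.XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X, y)

# Rank inputs by learned importance, analogous to the factor analysis
# the interview describes.
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name:>18s}: {score:.3f}")
```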

How might this model enhance preoperative planning or personalization of arcuate keratotomy procedures in clinical practice?

De Rojas: We actually have this model available right now at DeRojas.info. If you go on there, you'll be able to plug in what we think are the most prominent parameters, or the input variables, or features, a lot of words for the same thing. You can actually play with it, and you'll get a printout of where it thinks the arcuate should be. You can also play with two different versions of the model. But how is it going to impact planning?

First off, if you go on this website, you'll notice a few things it does better. The first is that it's more realistic about what arcuate incisions can do. For example, if you put in that you want to correct 1.2 diopters (D) of against-the-rule astigmatism, it's going to tell you to consider using a toric lens instead, because the data show that outcomes are just not as robust or as consistent at these high levels of against-the-rule astigmatism.

Now, if you put in 1 D of with-the-rule astigmatism, it may tell you it's pretty confident in the outcome, and you won't get that pop-up. I think it's more realistic. The current iteration online has certain input parameters, but this can be changed very easily; you can retrain the model. It was trained specifically on IOLMaster 700 K-readings, but you could easily incorporate the Pentacam, the Cassini, whatever you want. You just have to get some good data with postoperative astigmatism, feed it into the model, and within seconds you have a customized, tailored approach for that surgeon.
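
A minimal sketch of this kind of plausibility check appears below. The 1.0 D cutoff and the steep-meridian convention used to label against-the-rule astigmatism are illustrative assumptions on our part, not the model's actual rule.

```python
# Hypothetical check mirroring the website's behavior: flag high
# against-the-rule (ATR) corrections, where AK outcomes are less
# consistent and a toric IOL may be preferable.
def check_ak_candidacy(astig_d: float, steep_axis_deg: float) -> str:
    # ATR assumed here to mean the steep corneal meridian lies near
    # 180 degrees; with-the-rule means it lies near 90 degrees.
    is_atr = steep_axis_deg <= 30 or steep_axis_deg >= 150
    if is_atr and astig_d > 1.0:  # illustrative threshold only
        return "Consider a toric IOL: AK is less consistent for high ATR."
    return "AK prediction within the model's confident range."

print(check_ak_candidacy(1.2, 175))  # high ATR -> toric suggestion
print(check_ak_candidacy(1.0, 90))   # with-the-rule -> proceed
```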

The way I see this going is that we're going to have a really strong base model built on a few assumptions: you're making incisions temporally, and you're using the Lensar, although it works with other devices as well; we've seen that. There are certain givens, and if you follow them, you're going to have more confidence in the model. But likewise, we're going to have a version where you can input your own data, maybe 50 or 100 cases; 50 cases would be enough. That's going to factor in your SIA. It's going to factor in which biometer or topographer you use to measure the astigmatism you plug in. And it's going to factor in any intangibles we don't know about, like the parameters on your laser. Maybe you use a different laser. Maybe instead of 80% depth, you use 85% depth. Maybe instead of making incisions at a 9-mm diameter, like we do, you make them at 9.2 mm. You don't have to know exactly how these things interact; the beauty of our process is that the machine learning is going to figure out how to optimize for that.
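
One plausible way to implement that customization, sketched below under our own assumptions, is to continue boosting from a pooled base model using a surgeon's small personal dataset. All data, names, and parameters here are synthetic; the authors' actual pipeline is not public.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(2)

def make_cases(n, bias=0.0):
    """Generate synthetic (astigmatism, age) cases and arc-sweep targets."""
    astig = rng.uniform(0.3, 1.5, n)
    age = rng.uniform(50, 85, n)
    X = np.column_stack([astig, age])
    y = 30 * astig - 0.2 * (age - 65) + bias + rng.normal(0, 2, n)
    return X, y

X_pool, y_pool = make_cases(1000)          # pooled multi-surgeon data
X_mine, y_mine = make_cases(60, bias=3.0)  # one surgeon's ~60 cases,
                                           # with a systematic offset

base = xgb.XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
base.fit(X_pool, y_pool)

# Continue training from the base booster on the personal cases, so a
# small dataset refines, rather than replaces, the base behavior.
custom = xgb.XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.03)
custom.fit(X_mine, y_mine, xgb_model=base.get_booster())
```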

Were there any surprising findings in the data that shaped how you trained or validated the machine learning algorithm?

De Rojas: One of the things I learned through this process is that there's a fine balance between making the model sophisticated and making it too sophisticated, to the point where it actually doesn't work as well. You really have to find a balance. That's why part of the process was figuring out not just which features are important, which became pretty evident, but which machine learning algorithm would work best. We did try neural networks and deep learning, but we found that raw neural networks, the kind of architecture something like ChatGPT runs on, require a lot of data to be stable, and they can go off the rails, which is not good for medical applications. You have to find something with a nice balance between sophistication and not being so sophisticated that it starts hallucinating or doing weird things.

What we settled on was XGBoost. It's a gradient boosting algorithm, very popular in machine learning for tabular data, which is exactly what we had. It's good because it really prevents overfitting: you can have a small number of patients, and it's still able to figure out the interactions. You can think of it as working like a decision tree; you can do more research on how it works, but it's very interesting because it performs really well for this type of data. It's not the most complicated model in the world, but it's still a lot better than straightforward regression analysis, and it worked really well for us. We did have another version with neural networks, but for that we had to create guardrails.
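
To make the overfitting point concrete, here is an illustrative XGBoost configuration for a small tabular dataset (assuming a recent xgboost release, where early stopping is a constructor argument). The hyperparameters are our choices for demonstration; the study's actual settings are not public.

```python
# Shallow trees, row subsampling, an L2 penalty, and early stopping are
# the standard levers for keeping gradient boosting from overfitting a
# few hundred cases.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))                     # ~300 cases, 6 features
y = 5 * X[:, 0] + X[:, 1] + rng.normal(size=300)  # synthetic target

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            random_state=0)
model = xgb.XGBRegressor(
    n_estimators=500,
    max_depth=3,               # shallow trees resist overfitting
    learning_rate=0.03,
    subsample=0.8,             # row subsampling adds regularization
    reg_lambda=2.0,            # L2 penalty on leaf weights
    early_stopping_rounds=30,  # stop when validation error plateaus
)
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
print("best iteration:", model.best_iteration)
```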

So essentially, the neural network can help, but it ended up feeding into something simple like regression, because you really want to ensure that as the astigmatism goes up, the arcuate incisions get longer. You'd be surprised: you take your data, throw it into the model, and all of a sudden you get some wonky result. You have to be careful with really sophisticated neural networks, and if you're going to employ them, you need to make sure you have enough data and certain guardrails in place. I think that was the most elucidating finding: more sophistication isn't always better. You have to find the best balance.
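
One concrete way to encode such a guardrail, shown below, is XGBoost's monotone_constraints option, which forces the predicted arc length to be non-decreasing in astigmatism. This is our illustration of the idea; De Rojas describes his team's guardrail as feeding into a simple regression.

```python
# Monotone constraint: more preoperative cylinder can never yield a
# shorter predicted incision. Data here are synthetic.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(1)
astig = rng.uniform(0.3, 1.5, 400)   # diopters
age = rng.uniform(50, 85, 400)       # years
X = np.column_stack([astig, age])
y = 30 * astig - 0.2 * (age - 65) + rng.normal(0, 2, 400)  # arc sweep (deg)

model = xgb.XGBRegressor(
    n_estimators=200,
    max_depth=3,
    monotone_constraints="(1,0)",  # +1: increasing in astigmatism;
)                                  #  0: age left unconstrained
model.fit(X, y)

# Predictions rise (or stay flat) as astigmatism rises at a fixed age.
grid = np.column_stack([np.linspace(0.3, 1.5, 5), np.full(5, 70.0)])
print(model.predict(grid).round(1))
```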

Reference
  1. De Rojas J. Retrospective analysis of femtosecond arcuate keratotomy (AK) data to develop a predictive machine learning model. Presented at: ASCRS; April 25-28, 2025; Los Angeles, CA.
