Machine Learning-Based Tool Aims to Enhance Stereotactic Radiosurgery Outcomes

Fact checked by Sabrina Serani

In an interview with Targeted Oncology, Rupesh Kotecha, MD, discussed a machine learning tool designed to predict local failure after stereotactic radiosurgery for brain metastases.

At the 2024 American Society for Radiation Oncology (ASTRO) Annual Meeting, Rupesh Kotecha, MD, presented an innovative machine learning tool aimed at predicting local failure after stereotactic radiosurgery for brain metastases. The model, developed using a dataset of 1503 brain metastases treated in 235 patients, evaluates 3 radiation dose levels: 20, 22, and 24 Gy.

The 1-year model achieved 88% accuracy and 91% specificity, and the 2-year model reached 89% sensitivity. Key findings from the study suggest that the tool not only assists in determining the optimal radiation dose but also helps tailor the frequency of MRI surveillance to individual patient risk.

Future research will focus on integrating systemic therapy effects and validating the model across diverse patient populations, according to Kotecha, a radiation oncologist at Miami Cancer Institute.

In an interview with Targeted Oncology, Kotecha discussed this tool and its implications for the field.

Targeted Oncology: Can you provide an overview of the abstract regarding the machine learning tool and its intended purpose for predicting local failure after stereotactic radiosurgery?

Kotecha: Essentially, when patients are treated with stereotactic radiosurgery for small, intact brain metastases—those that we typically consider to be less than 2 cm in size—we are always wondering about what factors can influence the individual patient's risk of developing local failure after treatment. There are things that are modifiable, [like] the dose of radiation that we use, but there are [several] nonmodifiable conditions, such as patient characteristics, that we cannot change.

Our goal for this project was to evaluate 3 different dose levels that we have used in patients treated at the Miami Cancer Institute previously: 20 Gy, 22 Gy, and 24 Gy. All of these doses are within the threshold of what is considered standard radiosurgery treatment practice.

What we wanted to do was evaluate all of the patient characteristics [and] other treatment-related factors and identify the risk of local failure at each dose level at 6 months, 1 year, and 2 years after treatment. This requires the evaluation of a significant amount of data, as you can imagine, considering all these factors and conditions, in addition to our usual methods from a statistical perspective, to evaluate the risk of local failure. We used machine learning algorithms to help us determine which factors are associated with local failure and how we could potentially predict a patient's risk of local failure after their treatment with radiosurgery.
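In broad strokes, and as an editorial illustration only, a pipeline like the one Kotecha describes could be set up as follows. This is a minimal sketch assuming a scikit-learn classifier; the features, model choice, and synthetic data are placeholders, not the study's actual methods.

# Minimal sketch (not the study's pipeline): train a classifier to predict
# local failure from lesion/patient features, including the prescribed dose.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1500  # on the order of the lesion count reported in the study

# Illustrative features: dose (Gy), tumor volume (cc), age (yr), performance status.
X = np.column_stack([
    rng.choice([20.0, 22.0, 24.0], size=n),
    rng.lognormal(mean=-1.0, sigma=1.0, size=n),
    rng.normal(62, 11, size=n),
    rng.choice([70, 80, 90, 100], size=n),
])
y = rng.binomial(1, 0.12, size=n)  # synthetic local-failure labels at one timepoint

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # per-lesion predicted risk of local failure

In practice, one such model could be fit per timepoint (6 months, 1 year, and 2 years), with dose entering as a feature so that risk can be queried at each dose level.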

What types of data were used to train the machine learning model? How was this data collected?

The data were collected from patients who were treated with stereotactic radiosurgery at our institution between January 2017, when the institution went live, and July 2022, the data cutoff for this particular study. We used propensity score matching analysis, so you need to have a large number of patients. Our database consisted of 1503 brain metastases treated in 235 patients over 358 courses of stereotactic radiosurgery. We had a sizable dataset, which allowed us to assess the data at a more granular level.
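As a brief editorial sketch of the propensity score matching step Kotecha mentions, one common approach is to estimate propensity scores with logistic regression and then match nearest neighbors on those scores; the covariates and data below are synthetic stand-ins, not the study's.

# Hedged sketch of 1:1 propensity score matching between two dose groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 600
covariates = rng.normal(size=(n, 3))    # e.g., age, tumor volume, performance status
treated = rng.binomial(1, 0.5, size=n)  # e.g., 24 Gy vs 20 Gy

# Step 1: estimate propensity scores P(treated | covariates).
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Step 2: match each treated lesion to the control with the closest score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[match.ravel()]  # indices of matched control lesions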

How well do you think the machine learning tool performs in terms of accuracy, sensitivity, and specificity compared with traditional methods?

We do not have a direct comparison with traditional methods because we were evaluating a wealth of data that we really did not have access to before. But essentially, in this study, we used a 1-year model because it had the highest levels of diagnostic performance. To get to the values you asked about: looking at the area under the curve, the accuracy was 88% and the specificity was 91%. Interestingly, when we look even beyond that, at the 2-year model, we also had very high sensitivity, at about 89%. As we have further follow-up in our patients, we can fine-tune the model. But we were impressed, and I would say surprised, at how well this model was able to perform.
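For readers who want to reproduce these kinds of figures on their own data, the reported metrics are standard and straightforward to compute; the arrays below are toy values, not the study's results.

# Computing accuracy, sensitivity, specificity, and AUC from predictions.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # toy local-failure labels
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.6, 0.1, 0.6, 0.3])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn))  # true-positive rate
print("specificity:", tn / (tn + fp))  # true-negative rate
print("AUC:", roc_auc_score(y_true, y_score))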

What were the key findings from the study for a community oncologist?

In this study, we were able to develop an initial machine learning model that can predict local failure as a function of dose. I think this is useful in 2 ways for direct clinical implementation.

Number 1 is that you could develop a clinical decision-making tool to determine what dose of radiation should be optimally used for a patient. Is 20, 22, or 24 Gy the best? We should be able to predict from the data the risk of local failure with each of those doses and therefore use that information to select the highest dose that provides a clinically meaningful result. If 24 Gy does not give you any better control than 22 Gy, then we know we can stop at that dose; we do not have to administer a higher dose to that particular patient.
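As an editorial sketch of that selection logic, under the assumption that the model outputs a predicted failure risk per dose, the rule might look like the following; the risks and the 3-percentage-point threshold are invented for illustration.

# Escalate dose only while the next level reduces predicted local failure
# by a clinically meaningful margin (threshold here is hypothetical).
def select_dose(risk_by_dose, min_gain=0.03):
    """Return the lowest dose beyond which escalation gains < min_gain."""
    doses = sorted(risk_by_dose)
    chosen = doses[0]
    for lower, higher in zip(doses, doses[1:]):
        if risk_by_dose[lower] - risk_by_dose[higher] >= min_gain:
            chosen = higher  # escalation is still worthwhile
        else:
            break            # no meaningful benefit; stop escalating
    return chosen

# Example: 24 Gy adds little over 22 Gy, so the rule stops at 22 Gy.
print(select_dose({20.0: 0.15, 22.0: 0.10, 24.0: 0.09}))  # -> 22.0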

The second part, which I think is intriguing for our patient population in clinical practice, is that if you know the risk of local failure at 6 months, at 1 year, and at 2 years out, [this tool] can help determine how frequently to schedule MRI scans. We can space them out or make them more frequent based on whether that patient is potentially at a lower or higher risk of failure. This allows you to fine-tune how often we are following that patient based on their individual risk.
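A risk-adapted surveillance schedule like the one described could be expressed as a simple mapping; the thresholds and intervals below are hypothetical, not from the study.

# Hypothetical mapping from predicted 1-year failure risk to MRI interval.
def mri_interval_weeks(risk_1yr):
    if risk_1yr >= 0.20:
        return 8    # higher risk: image more frequently
    if risk_1yr >= 0.10:
        return 12
    return 16       # lower risk: scans can be spaced out

print(mri_interval_weeks(0.25))  # -> 8
print(mri_interval_weeks(0.05))  # -> 16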

What kind of challenges were encountered during the development of this model? How were they addressed?

I think the challenge is, just like with the development of any machine learning algorithm, that it is based on the data that you feed it, so we had to have a very well-curated dataset. We needed to know exactly what the parameters were that we were going to assess in patients. For this study, we specifically used local failure, but one could use risk of radiation necrosis or survival. We can put a bunch of parameters in, [but] we had to have very specific coding for the patients, because the model is only going to be as good as the data that you feed it, so it required careful curation of the data. Fortunately, we treat patients with a standardized treatment approach at our facility, and we have since we opened, so we had good data to be able to do these types of projects.

What future research do you foresee in enhancing this tool?

We have a very diverse patient population at Miami Cancer Institute, so that helps with generation of models with regard to internal validity, and I think external validity as well. But as we add additional patient populations or datasets from other institutions, it will help us to identify [whether there are] limitations to our particular model when it is applied at different institutions. For example, if you tried to apply our model to a European population of patients, it may or may not require additional tweaking. I think this is a good initial model and dataset, but obviously as you get to larger patient populations, that data will be useful for fine-tuning future assessments of this model.

I would like to add that we are finalizing the paper for this. One thing we did not put into the abstract or presentation, but have been able to create for the paper, is almost an app-based system where one could type in all of the key parameters that were identified in the model as being significantly associated with risk of local failure: for example, the age of the patient, their gender, their race, performance status, primary tumor type, location, [number of] lesions, [tumor volume], and the status and burden of the external disease. Then, are they receiving systemic therapies? What are those concurrent systemic therapies?

If you put all of that into the model, the computer will then spit out, essentially, what that risk of local failure would be at 6 months, 1 year, and 2 years with each of the dose levels, and that will ultimately help you pick the best dose for that particular patient you are seeing in the clinic. We have an app-based system in development that will be able to give you that information in real time. That will be included as part of the paper, which has now been submitted for publication.
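As a final editorial sketch, an app-style lookup of the sort Kotecha describes could wrap a trained model behind a simple query function; the stub model, feature encoding, and inputs below are placeholders, not the published tool.

# Query predicted local-failure risk per dose and timepoint for one patient.
import numpy as np

DOSES = (20.0, 22.0, 24.0)
TIMEPOINTS = ("6 mo", "1 yr", "2 yr")

def risk_table(model, patient_features):
    """Return {dose: {timepoint: predicted risk}} for one patient/lesion."""
    table = {}
    for dose in DOSES:
        row = {}
        for i, tp in enumerate(TIMEPOINTS):
            # Assumes the model takes [dose, timepoint_index, *features];
            # a real deployment would encode inputs as the model was trained.
            row[tp] = float(model.predict_proba([[dose, i, *patient_features]])[0, 1])
        table[dose] = row
    return table

class StubModel:
    """Stand-in for a trained classifier exposing predict_proba."""
    def predict_proba(self, X):
        X = np.asarray(X, dtype=float)
        # Toy risk: rises with follow-up time, falls with dose (illustrative only).
        p = np.clip(0.05 + 0.04 * X[:, 1] - 0.01 * (X[:, 0] - 20.0), 0.01, 0.99)
        return np.column_stack([1 - p, p])

print(risk_table(StubModel(), [62.0, 90.0]))  # hypothetical age and KPS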

REFERENCE:
Yarlagadda S, Zhang Y, Saxena A, et al. Development of a machine learning-based tool to predict local failure after stereotactic radiosurgery for small brain metastases. Abstract presented at: 2024 American Society for Radiation Oncology Annual Meeting; September 29-October 2, 2024; Washington, DC. Abstract 2645.