GigaPath AI Model Shows Promise in Predicting Cancer Mutations and Tumor Burden

The GigaPath AI model was highlighted for its advanced capabilities in predicting cancer mutations and tumor mutation burden, including those in lung cancer and other malignancies.

Tumor cell being attacked - Generated with Google Gemini AI

At the 2024 ESMO Congress, the AI foundation model GigaPath showed its potential in predicting cancer mutations, including those in lung and other cancers.1 GigaPath achieved an average macro area under the receiver operating characteristic curve (macro-AUROC) of 0.626 for lung adenocarcinoma, a performance that the researchers reported exceeded all competing methods with statistical significance (P < .01).

“I'm going to show you an EGFR mutation [example]; one of the strengths of this model is [that it is] very good at actually predicting this particular class of mutations,” Carlo B. Bifulco, MD, medical director of oncological molecular pathology and pathology informatics at the Providence Oregon Regional Laboratory and chief medical officer of Providence Genomics, said during the presentation. “…You can see how we compared our model with other existing models. …And you can see that the receiver operator curve numbers on those are higher than their internal comparisons, hence the superiority, at least in this particular benchmark of what we actually did.”

The model also outperformed other methods on pan-cancer 5-gene mutation prediction, with improvements of 6.5% in macro-AUROC and 18.7% in macro-area under the precision-recall curve (macro-AUPRC; P < .001).

For tumor mutation burden prediction, the best performance was achieved by GigaPath, with an average AUPRC of 0.35, a significant improvement over the second-best method (P < .001), the researchers noted in the abstract.
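
For readers less familiar with the reported metrics, the following is a minimal sketch of how macro-averaged AUROC and AUPRC are typically computed for a multi-gene mutation prediction task using scikit-learn. The gene panel mirrors the study's lung adenocarcinoma benchmark, but the labels and scores below are randomly generated placeholders, not study data.

```python
# Minimal sketch: macro-averaged AUROC and AUPRC for a multi-gene mutation
# prediction task. The gene panel mirrors the study's lung adenocarcinoma
# benchmark, but the labels and scores are random placeholders, not study data.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

genes = ["EGFR", "FAT1", "KRAS", "TP53", "LRP1B"]

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, len(genes)))   # 1 = slide carries the mutation
y_score = rng.random(size=(200, len(genes)))          # model-predicted probabilities

# "Macro" averaging: compute the metric per gene, then take the unweighted mean.
macro_auroc = np.mean([roc_auc_score(y_true[:, i], y_score[:, i]) for i in range(len(genes))])
macro_auprc = np.mean([average_precision_score(y_true[:, i], y_score[:, i]) for i in range(len(genes))])

print(f"macro-AUROC: {macro_auroc:.3f}")   # ~0.5 for random scores; the abstract reports 0.626
print(f"macro-AUPRC: {macro_auprc:.3f}")
```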

In the abstract, GigaPath is explained as, “an open-weight, billion-parameter AI foundation model pretrained on a large digital pathology dataset from 28 cancer centers containing 1,384,860,229 image tiles from 171,189 [Hematoxylin and Eosin] slides of biopsies and resections in more than 30,000 patients, covering 31 major tissue types.”

In the study, researchers aimed to compare GigaPath with competing AI models, including CTransPath, HIPT, and REMEDIS, across pan-cancer 5-gene, lung adenocarcinoma 5-gene (EGFR, FAT1, KRAS, TP53, LRP1B), and tumor mutation burden prediction tasks.

Digital pathology has been used for nearly 20 years, Bifulco said, and AI has already played a role in the field. Despite its use and some FDA-approved algorithms in this setting, the impact has been relatively limited.

“There’s a watershed event in AI, … and unless you’re living in a cave, I think we’ve all been affected by this,” Bifulco said.

He added that there are ways to integrate AI into the pathology and biology space, and some of that work has already been published.

“Everything [that] has been published so far has been based either on classical image analysis or what we call convolutional neural networks, CNNs,” Bifulco said. “The way these networks work, they have [a] representation of features of the image, like angles that, again, get abstract[ed] at higher levels until they enable you to actually reach a conclusion about the image. And those methods are very powerful, and they’ve been used for many applications, but they’re very brittle.”

He added that the reason these networks have not been applied more broadly is that they depend on the characteristics of the dataset, which can lead to poor generalization. Bifulco said that the field is currently learning lessons from the AI text language space.

“The underlying concepts are really about the prediction of the next word in the sentence,” he explained. “These models are trained by trying to make a prediction of something. They don’t require any kind of labels. You don’t need to instruct them; they learn from the text itself, which makes them incredibly powerful, given the huge, huge amount of texts that are available.”
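
As a concrete illustration of the objective Bifulco describes, here is a minimal, hypothetical sketch of next-token prediction in PyTorch: the training target is simply the following token, so no human-provided labels are needed. The tiny vocabulary and the stand-in model are illustrative only.

```python
# Minimal, hypothetical sketch of next-token prediction in PyTorch: the target
# is simply the following token, so no human-provided labels are required.
# The tiny vocabulary and the stand-in model are illustrative only.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32
tokens = torch.randint(0, vocab_size, (1, 16))            # a pretend 16-token sentence

model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))   # stand-in for a transformer

logits = model(tokens[:, :-1])                            # predict from all but the last token
targets = tokens[:, 1:]                                   # the "labels" are the text itself, shifted by one
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```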

He added that AI text language models are able to predict a word based on the context of a given sentence, and that this context drives the interpretation of what the model is deciphering. This type of learning can also be applied to images.

“Fundamentally, we are trying to predict a patch of the slide with cells based on the context of the surrounding slides,” Bifulco said. “You don't need to tell anything to the machine learning algorithm about the slides they're looking at. They're learning those features, those fundamental features (foundation models, that's where the name comes from) from the images themselves, and that enables them to scale and to be very robust and [reliable] across different applications.”
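
The quote above describes the image-side analogue of self-supervision: hide a patch and let the model infer it from its surroundings. The sketch below is a toy masked-patch reconstruction in PyTorch intended only to convey the idea; it is not GigaPath's actual pretraining recipe, and the tile size, grid, and transformer layer are arbitrary choices.

```python
# Toy masked-patch reconstruction in PyTorch: hide one tile of an image and let
# a transformer layer infer it from the surrounding tiles via self-attention.
# This conveys the idea only; it is not GigaPath's actual pretraining recipe,
# and the tile size, grid, and layer choice are arbitrary.
import torch
import torch.nn as nn

tile, grid = 16, 8                                     # 8x8 grid of 16x16-pixel tiles
image = torch.rand(1, 3, grid * tile, grid * tile)     # stand-in for a region of a slide

# Cut the image into tile vectors: (1, 64 tiles, 768 values per tile).
tiles = image.unfold(2, tile, tile).unfold(3, tile, tile)
tiles = tiles.permute(0, 2, 3, 1, 4, 5).reshape(1, grid * grid, -1)

masked = tiles.clone()
target = tiles[:, 27].clone()                          # the tile to hide (index is arbitrary)
masked[:, 27] = 0.0                                    # zero it out; no label is needed

encoder = nn.TransformerEncoderLayer(d_model=tiles.shape[-1], nhead=8, batch_first=True)
pred = encoder(masked)[:, 27]                          # reconstruction from surrounding context
loss = nn.functional.mse_loss(pred, target)            # the image itself supervises the model
print(loss.item())
```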

He noted that the images the model works with are very large. “The size of the model is going to be a key factor for our ability to actually perform as well as we would like to,” Bifulco said.

In addition, he explained that training the model involves the whole pathology slide, and because these are gigapixel images, each slide is very large.

“There’s additional training necessary to embed all the features of the whole slide that you’re looking at under the microscope,” Bifulco said. “And as you can see, we reach the billion-parameters level.”
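
To make that two-stage idea concrete, here is a minimal, hypothetical sketch: a tile encoder turns each patch of the gigapixel slide into an embedding, and a second stage aggregates those embeddings into a single whole-slide vector that downstream heads, such as mutation prediction, can use. The dimensions and the mean-pooling aggregator are simplifications and do not reflect GigaPath's actual slide encoder.

```python
# Minimal, hypothetical sketch of the two-stage idea: a tile encoder embeds each
# patch of the gigapixel slide, and a second stage aggregates those embeddings
# into one whole-slide vector for downstream tasks. The dimensions and the
# mean-pooling aggregator are simplifications, not GigaPath's slide encoder.
import torch
import torch.nn as nn

n_tiles, tile_dim, slide_dim = 10_000, 1536, 768       # a slide can yield tens of thousands of tiles

tile_embeddings = torch.randn(n_tiles, tile_dim)       # stand-in output of the tile-level encoder

slide_encoder = nn.Sequential(nn.Linear(tile_dim, slide_dim), nn.ReLU())
slide_embedding = slide_encoder(tile_embeddings).mean(dim=0)   # aggregate tiles into one slide vector
print(slide_embedding.shape)                           # torch.Size([768])

mutation_head = nn.Linear(slide_dim, 5)                # eg, a 5-gene mutation prediction head
print(torch.sigmoid(mutation_head(slide_embedding)))   # per-gene mutation probabilities
```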

Training the GigaPath model also involved embedding multiple sources of information, including genomic data and language in the form of text from pathology reports and clinical reports.

Bifulco mentioned that the model has been released as open source, so that anyone can download it and access the datasets.

“A cool thing when you do actually make available these datasets and these models to the public is that actually people can test independently,” he added. “So there's already a large number of benchmarks available.”
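
As an illustration of what that open release enables, the sketch below loads the publicly released tile encoder and embeds a single image tile. The Hugging Face repository id, the timm loading path, and the preprocessing values are assumptions based on the public release (access may require accepting the model's terms); consult the project's own documentation for authoritative instructions.

```python
# Sketch of loading the released tile encoder and embedding a single image tile.
# The Hugging Face repository id, the timm loading path, and the preprocessing
# values are assumptions based on the public release; consult the project's
# documentation for the authoritative instructions.
import timm
import torch
from PIL import Image
from torchvision import transforms

tile_encoder = timm.create_model("hf_hub:prov-gigapath/prov-gigapath", pretrained=True)
tile_encoder.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

tile = Image.open("example_tile.png").convert("RGB")   # placeholder path to an H&E tile
with torch.no_grad():
    embedding = tile_encoder(preprocess(tile).unsqueeze(0))
print(embedding.shape)                                 # one feature vector per tile
```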

Even though GigaPath is still being tested in this area, Bifulco said that many other models are coming that integrate several technologies. For example, these models may be able to train on different sources of images (CT scans, MRIs, ultrasound, pathology) and may let users interact with them via text, similar to an AI chat prompt.

“Currently, those are potentially deployed on phones, on little devices, but you can see that in the future, very likely, you will have a multimodal kind of integration, where you interact by voice, very likely, with the whole comprehensive data set of the patient,” Bifulco said.

REFERENCE:
Bifulco C, Poon H, Usuyama N, et al. Application of GigaPath: An open-weight billion-parameter AI foundation model based on a novel vision transformer architecture for cancer mutation prediction and TME analysis. Presented at: 2024 ESMO Congress; September 13-17, 2024; Barcelona, Spain. Abstract 1942O.