In an interview, Anil Parwani, MD, PhD, discussed how artificial intelligence can be used in medical education and training, as well as its role in personalized medicine.
The Ohio State University Wexner Medical Center has moved away from the traditional microscope-based examination of glass slides to a fully digital workflow, enabling the integration of artificial intelligence (AI) in pathology.
With this shift, the center has been able to digitize millions of slides, particularly for cancer cases, and develop AI algorithms that enhance the detection and grading of cancers, with a focus on prostate cancer. These AI tools have already undergone extensive clinical trials, showing promising results in accurately diagnosing and predicting patient outcomes.
While challenges such as system compatibility, integration, and costs remain, AI in pathology promises more objective, reproducible, and faster diagnoses.
In an interview with Targeted Oncology™, Anil Parwani, MD, PhD, vice chair of anatomic pathology at The Ohio State University and director of digital pathology shared service at The Ohio State University Comprehensive Cancer Center – James Cancer Hospital, discussed how AI can be used in medical education and training, as well as its role in personalized medicine.
Targeted Oncology: Can you discuss the AI technology being used at your institution?
Parwani: Currently, we are one of the few hospitals in the country that have converted from a glass-slide workflow to a digital workflow. For us to use AI, we need to have images in a digital format. Traditionally, when you think of pathologists, they look at a glass slide and make a diagnosis under a microscope. At Ohio State, we were able to take millions of these glass slides and convert them into digital images. These are images that you can interact with. Each image is made up of millions of pixels. If I am looking at a patient's prostate biopsy, the biopsy was done in a clinic, the tissue was sent to the lab, and the glass slides are still made, but our process really starts after the glass slides. They become digitized, [and] I review them on the monitor, and that was something we started in 2018.
We have been able to digitize millions of slides, [and we] mostly started with oncology. About 2 years ago, we started to design and think about ways we could apply artificial intelligence on top of these images. Basically, for a patient who has prostate cancer, are we able to detect that cancer on a monitor? Just like on your iPhone, you have apps; there could be an app for prostate cancer detection, there could be an app for grading prostate cancer, or there could be an app for just finding similar cancers. There could be an app to predict which patients will have a better outcome without actually doing a lot of extensive and aggressive assays.
We have worked in this area for about 6 years now, and we have been able to test algorithms from different companies. We have also had researchers at The Ohio State University who have built algorithms here in the computer science department or the bioinformatics department. We have now completed a few clinical trials on prostate cancer where we have demonstrated that prostate cancer can be detected by the AI tool; it can be diagnosed, [it] can be graded, [and it] can be quantified, and we are now in the process of integrating it into the electronic medical record [EMR] and the lab information system. We can do it standalone. I can go into a website, pull up a patient, run the algorithm, get diagnostic criteria, and the pathologist will review the heatmap of where the cancer is and then decide, is this diagnostic? Then they will sign off on the case and send it to the patient's chart.
We have done a lot of careful validation of these algorithms, and we feel they are safe and ready for clinical use. In 2024, our goal is to launch several of these algorithms for clinical use. Right now, they are mainly for quality assurance and research, but we have tested them on actual clinical specimens, and the results are very promising. The next step is to bring them into integration with the EMR.
How can AI be integrated into the current workflow of pathologists?
AI can assist the pathologist in finding things, counting things, and segmenting things. It could augment my diagnosis; in other words, it could make my diagnosis even better and maybe more comprehensive. Or it could be autonomous, just like a self-driving car. There are obviously degrees of safety and degrees of ethics that you have to be careful about as you launch AI algorithms, but it can assist the pathologist at the very beginning, when they have not seen the case.
Let's say I come to work, I have 15 patients' biopsies, each biopsy case is 10 slides, and it takes me an hour to read each case. Now I have 15 cases; that is 15 hours. A lot of the time that I spend is [on] quantitative things, measuring things, counting cells, and putting them into buckets. That means that if I can save 15% to 20% of time per biopsy, I can put that time back into patient care, I can read the reports more carefully, I can spend more time reviewing the patient's chart. I am not overwhelmed. I am not burnt out. I enjoy what I do. It could assist me.
In the second scenario, I am looking at a rare cancer of the prostate, which is not something I usually see, [so] I am not alert and paying attention to it. Especially if the pathologist is a general pathologist, and [if they are] not a specialist in prostate cancer like I am, they might miss things. The augmenting part is [that] the computer can circle the areas [and say], hey, look at this, just like Google Maps, guiding me to make sure [I am] seeing everything.
The third area is checking my work. Let's say I have looked at all those 15 cases, and I am now ready to release them into the patient's chart. AI can be a gatekeeper. It can screen those cases one more time and make sure everything I wrote in my report is accurate. One error means an error for a patient, which means a wrong diagnosis, right? There are different ways AI can help us. We are a teaching institute, so we teach trainees, medical students, residents, [and] fellows, and AI can actually help them too. It can guide them.
We are doing a study now where we are watching the eye movements of pathologists as they look at the monitor and interact with [the images]. We are building algorithms like a Google Maps for pathologists. These types of technologies will lead to better cancer diagnostics; they will lead to designing assays and diagnostic modalities that can help pathologists in all aspects of their workflow.
Finally, the last piece is, how does it help oncologists guide their treatment modalities? Take those same 15 patients: they have differences in their genetics or different mutations. Today, we know that AI can help differentiate between those. We are testing a new algorithm with a company that has built a prognostic assay based on images from pathology. Basically, the algorithm runs through those biopsies and predicts the risk score for each biopsy. We are testing that algorithm. Now, it is not ready for prime time yet. But there have been 2 or 3 publications already outlining the benefits of this approach, because it is not as expensive. A traditional molecular assay might take 2 weeks to get the results back. This assay could be done that same day. It could be done on the same material, [and] the patient does not need to come back for a second invasive procedure. There are a lot of things that AI will start to help us with. We are fortunate because we were one of the first hospitals in the country to go live with digital pathology.
How do you address potential biases or limitations in the AI algorithm, particularly concerning factors like patient demographics or socioeconomic status?
We absolutely want to be very careful with this. In our hospital, we know what type of population of patients we serve: the greater Columbus area or central Ohio region. But what if the AI algorithm was designed for patients in Sweden, or patients in China, or patients in Japan? So, is it directly applicable? Can we find the same answers that those algorithms are designed for? Generally speaking, these algorithms have extensive validation done before they are ready to be used. But we at Ohio State are even more cautious. We do our own internal validation. We take a known patient with a known cancer, a known grade, and we use that to train the algorithm and test the algorithm over and over again.
We have published these data recently, where we have demonstrated that the pathologist's diagnosis and the AI diagnosis are not different from each other; they are not inferior to each other. We do this very carefully because, once we lock down the algorithm, it becomes a routine algorithm for cancer care. That is why, in the next few months, our goal is to take the commercially available algorithms, which were probably designed on patients from different parts of the world, bring them to Ohio, test them here on our patients, and then release them for effective use in prostate cancer detection.
We are also doing the same thing for breast cancer and gastric cancer. We are also doing it for noncancers, such as detecting microorganisms like H. pylori in a gastric biopsy. It is really an exciting time. When I went to medical school, I didn't even know this would be a possibility in the future. But today, I am very excited and very pleased that we have these tools now for patient care and for oncology care.
What are some of the biggest challenges and opportunities that you have encountered in either implementing or utilizing this technology in the clinical setting?
Some of the challenges have been the [compatibility] component of it. Many of these are apps that are designed by different companies. [If you] think about apps: you have an iPhone or a smartphone, and [you can] generally go to the app store, buy any app, put it on your phone, and start using it. That is not the case with AI. They are all working on different image formats and different proprietary technology.
One of the challenges has been to harmonize those and bring them into one place. Let's say I have an AI app for prostate cancer, and there are 3 companies selling it. All 3 of them will not work with my system; they are not compatible. One challenge is compatibility. The second piece is integration. These are standalone apps, so you have to launch them on a separate website, review the cases, [and] log into the other system. It is not very well integrated in 2024. But I expect that as more pathologists [and] institutes launch these apps, they will become more universal [and] easier to use. I think the third thing is the cost. Currently, some [AI algorithms] are not reimbursable. Some of them are starting to get reimbursed, but the cost part is prohibitive, because when you tell the executives, we want to launch a prostate cancer algorithm, they ask, how much does it cost? Why are you not doing what you currently do? How will this help you? We have to go through this justification. On the flip side, because we do not get revenue, it becomes a challenge.
The hospital has to focus on launching these AI algorithms based on the quality and the efficiency gains. But January [of 2024] was the first time that they introduced a [Current Procedural Terminology (CPT)] code for a prostate AI algorithm. If this continues, then we will get reimbursed for it. It will be easier to buy this and make it available for patient care. Right now, only a handful of hospitals are doing it, and they are doing it in the context of patient safety, quality gains, [and] efficiency gains. But soon, there will be revenue coming from it. Those are the biggest challenges.
The advantages are more collaboration and more ability to share cases with oncologists, making it more objective [and] reproducible. My results running that algorithm in my hospital will be the same as at Cleveland Clinic or Case Western. [We are] taking away the subjectivity that sometimes comes with pathology diagnosis and making it more objective, accurate, [and] maybe faster. In the end, [we are] freeing up some of the time the pathologist might have spent quantitating things.
What are your visions or plans for further development and integration of AI in cancer diagnosis and prognosis?
Ultimately, what would be great is to have a pan-AI tool, not just one focused on prostate cancer or bladder cancer or gastric cancer. This becomes an app on your monitor, just like Microsoft Word or email, where, when it is launched, [we] can see a heat map of where the cancer is and how much of the cancer there is, and start to create very powerful signatures of the patient's outcomes, looking at the whole patient, not just their prostate, and get a more integrated view of the patient with their genomics and blood tests. Ultimately, many of the data points that we collect in medicine are in silos. They are in different systems, and the goal will be [to see] if AI can help us bring this information together to know exactly what is going on with these patients. Even though 2 patients have the same disease, their cancer is very different, even though it looks the same. So, can we go beyond just a very superficial diagnosis and grading by AI and create a more comprehensive, predictive profile of their disease or their cancer, making it precision pathology? My vision is to create precision pathology for every patient using these tools.
How do you see this technology contributing to personalized medicine and tailoring treatment plans for individual patients?
Just like I mentioned, there are many patients whose prostate cancer looks the same, but they are very different in their outcomes [and in] how the cancer progresses. There are other factors in the cancer, the tumor microenvironment, the patient's demographics, or their family history. By building AI tools that are predictive and comprehensive, we can pinpoint which patients will benefit from a given therapy and which will not. We think about personalized medicine, but I think about precision. I think that precision diagnosis leads to precision medicine, and AI will help us get to the precise diagnosis and help us create a bridge from just a diagnosis to a more powerful profile of the patient.
That is where I think this is headed, and I can start to see it. When I went to medical school and when I was a resident, I did not even think that we would come to a point where I would be looking at a monitor, at signatures of cancer and how they will respond to treatments. My role as a pathologist has evolved completely now, because I am not somebody who is just making a diagnosis. I am part of the whole patient care team now, which will help oncologists get a better and more precise diagnosis for their patients, which will lead to a more precise treatment.
Do you foresee a future where AI could replace some of the roles the pathologist has in routine cancer diagnosis?
I think there are different types of algorithms. There are algorithms that are very quantitative, and some of those roles will be taken over by AI. I see a distinction between artificial intelligence and real intelligence. Real intelligence comes with context-based learning, experience, and lots of review and integration of data from multiple sources. AI algorithms are very limited: you design an algorithm to do a task. If I can build small algorithms and start taking away a lot of the manual tasks from the humans, they will have time to do other things, and that will help us do better things. In summary, I think that is the journey that we need to take.