
This conversation is part of a series of interviews in which JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, and expert guests explore issues surrounding the rapidly evolving intersection of artificial intelligence (AI) and medicine.

Can AI improve the speed and efficiency of ultrasound and echocardiogram interpretation and minimize diagnostic errors?

Dr. Rima Arnaout

For cardiologist Rima Arnaout, MD, associate professor of medicine, radiology, and pediatrics at the University of California, San Francisco (UCSF) and Chan Zuckerberg Biohub investigator, imaging is the “last frontier” in phenotyping. AI’s ability to analyze images could help echocardiographers like her rule out diseases and abnormalities.

Arnaout is a faculty member at UCSF’s Bakar Computational Health Sciences Institute and Center for Intelligent Imaging, as well as the UCSF-UC Berkeley Joint Program in Computational Precision Health, where she researches whether machine learning can help detect both standard and novel patterns in cardiac ultrasound in a scalable manner. Her goals are twofold: to reduce diagnostic errors in medical images and to uncover new phenotypes.

In this interview with JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, she discusses the transformative potential of AI in cardiac imaging.

The following has been edited for clarity and length.

Dr. Bibbins-Domingo: As it relates to your clinical practice, I’d love to hear how AI has come to be incorporated into the research you’re doing.

Dr. Arnaout: It started when I was a trainee studying basic science. It turns out that in both the laboratory and the clinical world, the genotype is only as good as the phenotype to which it can be mapped.

Imaging is an extremely data-rich and valuable source of phenotyping, but it is extremely difficult to do correctly. So I became very interested in what we could do to make phenotyping, especially image-based phenotyping, more accurate, reliable, and scalable, consistent with what we’re doing on the genomic side.

People conduct phenotypic analysis mainly using structured data, laboratory values, and other numeric data. They do a lot with text. And imaging felt like the final frontier to me. In the basic science laboratory, imaging was the basis for phenotyping, and during my clinical training I also became fascinated with clinical images.

Dr. Bibbins-Domingo: In the case of echocardiograms, with advances in computing power and artificial intelligence, we’re really at a point where we can explain all the patterns we’re seeing in a way that we couldn’t before.

Dr. Arnaout: We have that ability now, but we need to work to make it a reality.

Dr. Bibbins-Domingo: Please tell us a little about the field you are working on. What types of phenotypes are you interested in?

Dr. Arnaout: I’m an echocardiographer, so I’m concerned with ruling out structural disease, valvular disease, and wall motion abnormalities. When a patient undergoes an echocardiogram, the physician looks at the overall picture; we are not focused on one phenotype or another. We need to look at and figure out everything that could be wrong with the patient, whether it’s the indication for the test or something incidental or surprising. So we want to train computers to help with that comprehensive task.

Dr. Bibbins-Domingo: Right. And I’ve heard that it’s just as difficult to understand what’s normal as it is to recognize patterns in outliers. What do you think about using all the tools currently available with AI in echocardiography? Where do you think it holds the most promise in improving clinicians’ work?

Dr. Arnaout: It really depends on the individual use case, and there are countless use cases for echocardiography alone. Some are meant to assist in screening. Physicians who go out into the community to screen for a disease, especially a rare one, may not recognize the disease when they encounter it. We have a screening use case for detecting congenital heart disease with fetal screening ultrasound. Congenital heart disease is the most common birth defect, but it is still rare in the overall population, so there is a huge gap and discrepancy in diagnosis. That makes it difficult for physicians to build and maintain skill in detecting congenital heart defects, especially when using a noisy and difficult imaging modality like ultrasound.

Dr. Bibbins-Domingo: What I’m hearing is that this could be scaled not only to make individual clinicians’ detection more accurate, but also to reach patients who may never come to UCSF or to your clinic for an appointment.

Dr. Arnaout: Yes, exactly. And another challenge with this issue is that there are few specialized centers, yet pregnancy occurs everywhere on the planet. So we need to create solutions that scale to everywhere pregnancy occurs, whether that is a tertiary care center or a small rural clinic.

Dr. Bibbins-Domingo: I really like the details in your example, because it’s clear that you need precision and tools that help with precision, but scalability is also very important to what you’re talking about. We recently interviewed Atul Butte, who talks about scalable privilege when discussing the potential of AI. That’s what I think of as I listen to you talk about why you’re excited about detecting congenital heart disease.

Dr. Arnaout: That’s a wonderful phrase, isn’t it? Scalable privilege. And we’re thinking about that a lot in terms of how we design solutions for the patients and providers who need help the most. These equity issues are not limited to patient populations; they also play out at a technical level. One of the unglamorous parts of data science is understanding what types of machines, clinics, and sonographers are out there so that the solution can be properly tailored.

It’s not just about fancy neural networks; where the rubber meets the road is the clinic. In one of our recent papers, we found great diversity around the world in the types of imaging data, the amount of imaging data, and even the protocols in place for fetal screening ultrasound, and this is an area with clear clinical guidelines. Solutions must therefore be designed to overcome that variability.

We also know that many parts of our community do not have modern, high-quality ultrasound equipment. In fact, the World Health Organization says that about one-third of the world receives no prenatal care at all. We thought seriously about issues of access and bias as we designed a solution that we hope can be implemented around the world.

Dr. Bibbins-Domingo: Does this mean that even low-tech machines in other parts of the world could run these kinds of high-tech solutions to improve diagnostic accuracy?

Dr. Arnaout: I think that’s a really important goal. We always keep the use case and the patient in mind. It’s one thing to be on the cutting edge of AI development, and those advances absolutely need to happen for these solutions to work on difficult tasks like fetal diagnostic ultrasound. But we must also remember the challenges of implementation and our goal of ensuring adoption across a wide variety of clinical settings.

Dr. Bibbins-Domingo: There have been advances in foundation models and generative AI. What are your thoughts on these efforts and their application to cardiovascular disease and image processing?

Dr. Arnaout: There is still some work to be done to adapt generative foundation models to medical imaging. There are several things to consider with foundation models going forward. One is size: if you want to deploy a foundation model quickly or in a low-resource environment, do you essentially need to spin up an entire supercomputer to manage and operate it?

There are also questions of access and equity in using these models for research and medical use cases, including whether these models will increase or decrease the diversity of research. On the positive side, we’ll have pretrained foundation models that people can take off the shelf and fine-tune for their own tasks. This is a tide likely to lift all boats and increase the amount of research under way. At the same time, if there are relatively few foundation models, is everyone building research solutions from a handful of models that have their own biases and constraints? Will the solutions built on those models inherit those constraints? These are all things that need to be studied.

Additionally, foundation models have been shown to fabricate information that sounds plausible but is not actually true. That’s a problem no matter what you use these models for, and it is an area of active research across multiple industries.

Dr. Bibbins-Domingo: Let’s talk a little about day-to-day practice. What should the average cardiologist expect from these tools that improve our ability to interpret images and detect signal from noise? How can we integrate the additional knowledge from different tools?

Dr. Arnaout: This is an entire area of research on implementation and human-algorithm interaction. I do not believe these issues have yet been addressed holistically enough to apply AI to daily clinical workflows. But I am confident we will find use cases where AI can be useful, such as automating some of the tedious measurements made in cardiac imaging tests.

And research shows that when a computer says something, humans are more likely to believe it simply because it appears on the screen. That may or may not be warranted, and I think it has more to do with how humans interact with computers than with the algorithms themselves. But when applying any kind of algorithm, artificial intelligence or otherwise, in a high-stakes situation, these interactions need to be evaluated.

Dr. Bibbins-Domingo: This is a theme that has come up in several of these conversations. We need these models and machines to work better, but we also need to understand how humans interact with the information to make decisions that ultimately benefit patients. I also really like that your focus on clinical applications extends to discovery, to finding things the human mind has never thought of before. That will help us not only better diagnose congenital heart disease as we know it, but also detect patterns that may have previously gone unexplained.

Dr. Arnaout: That’s right, and that’s very exciting to me. How do we take standard image-based phenotyping, the information we already know exists within imaging, such as measurements and wall motion, and make it more accurate, reliable, and scalable? But then there’s a whole other layer: do images contain more information than is visible to the human eye, and is there a way to automate the discovery and validation of potential image-based phenotypes? One of the first things we need is a model that learns clinically relevant features within the imaging. If it’s classifying heart disease, you should look inside that model to make sure it’s looking at the heart and not some stray text in the top left corner of the image. Otherwise, you may think you’ve found a new phenotype when in reality it’s just an artifact.

Published online: March 6, 2024. doi:10.1001/jama.2023.23070

Conflict of Interest Disclosures: None reported.
