
AI more effective than humans at analyzing heart scans
The researchers used more than 180,000 real-world echocardiogram (echo) images to train a computer to assess the most common echocardiogram views, then tested both the computer and skilled human readers on new samples. They found that the computer accurately assessed echo videos up to 97.8% of the time, versus 83.5% for its human counterparts.
“These results suggest our approach may be useful in helping echocardiographers improve their accuracy, efficiency, and workflow, and also may provide a foundation for better analysis of echocardiographic data,” says senior author of the study Rima Arnaout, MD, UCSF Health cardiologist and assistant professor in the UCSF Division of Cardiology.
Interpreting medical images such as echocardiograms – which consist of numerous video clips, still images, and ultrasound recordings measured from more than a dozen different angles – is a complex, time-intensive process that typically requires extensive training for human practitioners. While machine learning has thus far proven useful for image-based diagnosis in radiology, pathology, dermatology, and other fields, it has not yet been widely applied to echocardiograms, partly due to their complex multi-view, multi-modality format.
In their study, the researchers used 223,787 images from 267 UCSF Medical Center patients aged 20 to 96. The randomly selected, real-world echo images came from multiple device manufacturers and covered various echo indications, technical qualities, and patient variables.
The researchers built a multilayer neural network and used supervised learning to simultaneously classify 15 standard views. They randomly chose 80% of the images (180,294) for training, and reserved 20% (43,493) for validation and testing. For performance comparison, each board-certified echocardiographer participating in the study was given 1,500 randomly selected images, 100 from each view, drawn from the same test set given to the model.
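The paper itself details the network architecture; as a rough illustration of the data-handling step described above, here is a minimal sketch (in Python with NumPy) of a random 80/20 partition over image indices. The function name, seed, and exact rounding are illustrative assumptions, not taken from the study:

```python
import numpy as np

def split_indices(n_images: int, train_frac: float = 0.8, seed: int = 0):
    """Randomly partition image indices into training and held-out sets.

    Returns (train_idx, heldout_idx) as disjoint NumPy index arrays.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)          # shuffle all image indices
    n_train = int(round(train_frac * n_images))
    return idx[:n_train], idx[n_train:]

# Example with the study's dataset size (223,787 images):
train_idx, test_idx = split_indices(223_787)
```

In the study itself the reported split was 180,294 training images and 43,493 held out for validation and testing; a random per-image split like the one sketched here yields approximately, not exactly, an 80/20 proportion.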
Overall, the computer classified images from 12 video views with 97.8% accuracy, says Arnaout. Even on single, low-resolution images, accuracy across the 15 views was 91.7%, compared with 70.2% to 83.5% for the echocardiographers. Additional analysis showed the model found recognizable similarities among related views and made classifications using clinically relevant image features.
“Our model can be expanded to classify additional sub-categories of echocardiographic view, as well as diseases,” says Arnaout, “work that has foundational utility for research, for clinical practice, and for training the next generation of echocardiographers.”
For more, see “Fast and accurate view classification of echocardiograms using deep learning” (PDF).
Related articles:
Algorithm beats radiologists in diagnosing x-rays
Deep learning improves medical imaging
Google algorithm predicts cardiovascular risk from eye images
AI speeds up precision medicine, says IBM Watson study
