Author information
- Published online November 5, 2018.
- Nikant K. Sabharwal, BSc, BM BCh, DM∗
- ∗Address for correspondence: Dr. Nikant K. Sabharwal, Oxford Heart Centre, John Radcliffe Hospital, Headley Way, Oxford OX3 9DU, United Kingdom.
Key words
- convolutional neural network
- deep learning
- myocardial perfusion imaging
- obstructive coronary artery disease
- single-photon emission computed tomography
“Medicine is a science of uncertainty and an art of probability” (1). William Osler, who died nearly a century ago, is credited with this quote. Will rapid advances in modern medicine render this still highly relevant quote redundant in the next 100 years?
Technology has revolutionized medical diagnostics and care. The ultimate challenge is the replacement of human interpretation by a fully automated, reliable, and reproducible system. Setting aside the ethics of artificial intelligence, such a system would have to be superior to human experts in every respect. Understanding the technique is not easy or straightforward without getting stuck in computer code or complex mathematics. Although deep learning has been available for years, it is only the advent of significant computing power and recent research breakthroughs that have pushed the technology to the fore (2). Deep learning uses neural networks (3), which mimic the pathways of the human brain, to identify patterns and to learn from the process. The convolutional neural network, a more complex form of deep learning, is particularly well suited to this task and underpins, for example, the photograph-recognition algorithms employed by social media companies. Medical applications of deep learning with high sensitivity and specificity have begun to emerge (4). Medical imaging technologies are particularly suited to this form of deep learning, and we should expect the next decade to crystallize its overall utility.
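The filtering step at the heart of a convolutional network can be made concrete with a short sketch (illustrative only; it is not drawn from the study under discussion or from any clinical software): a small kernel slides across an image, and the resulting feature map lights up wherever the pattern the kernel encodes — here a simple vertical edge — is present.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding): slide `kernel` over `image`
    and record the weighted sum at each position."""
    kh, kw = len(kernel), len(kernel[0])
    rows, cols = len(image), len(image[0])
    feature_map = []
    for i in range(rows - kh + 1):
        row = []
        for j in range(cols - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        feature_map.append(row)
    return feature_map

# A toy 4x4 "image" whose left half is bright, and a 1x2 edge-detecting kernel.
image = [[1, 1, 0, 0]] * 4
kernel = [[1, -1]]
print(conv2d(image, kernel))  # the edge column lights up: [[0, 1, 0], [0, 1, 0], ...]
```

In a real convolutional neural network the kernel values are learned from training data, many kernels are stacked in successive layers, and nonlinearities and pooling steps are interleaved; the sliding-window sum above is the "convolution" the name refers to.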
Myocardial perfusion imaging (MPI) with single-photon emission computed tomography (SPECT) has been available for decades, is the most commonly used test for ischemia worldwide, and is already a highly automated process. Human error is effectively limited to patient positioning, patient movement, and inadequacy of stress. This favors MPI over modalities that rely less on validated automated systems for acquisition and interpretation. Its popularity has waxed and waned according to availability, economics, competition, and clinical utility. Major advances have occurred in each decade: vasodilator stress in the 1970s and SPECT in the 1980s, followed quickly by the technetium-99m agents. Electrocardiographic gating arrived in the 1990s, followed by the wealth of diagnostic and prognostic data published over the next 2 decades. Since 2000, a cardioselective stress agent and solid-state gamma cameras have been added to our armamentarium. Each technological advance has been embraced with enthusiasm by the nuclear cardiology community. However, there are still areas for improvement. One such area is the use of automated computer algorithms to assist with the interpretation of more challenging scans. This software has been available for years and provides semiquantitative and quantitative values that assist both the expert and the novice reader with the more complex scans. These algorithms have always been sold as an assistant to, rather than a replacement for, the interpretation of the MPI scan.
In this issue of iJACC, Betancur et al. (5) should be congratulated for taking on a multicenter study that improves currently available MPI technology and compares it with a form of artificial intelligence employing a deep convolutional learning technique. The logic of applying this technique to MPI is sound and sensible.
In this study, the investigators took 1,638 patients who underwent both MPI and invasive angiography and compared total perfusion deficit (TPD) derived from polar plots against a commercially available deep convolutional neural network. The deep learning network was compared with TPD in terms of per-patient and per-vessel sensitivity for the detection of a 70% diameter stenosis on invasive angiography. All patients were scanned using the latest solid-state cardiac gamma cameras without attenuation or scatter correction. The extra computational time required was negligible. Per-patient sensitivity improved from 79.8% with TPD to 82.3% with deep learning, and per-vessel sensitivity from 64.4% to 69.8%. Both improvements reached statistical significance, confirming the superiority of the convolutional neural network approach over TPD. Interestingly, no clinical data were used in the model, which may be one reason for the modest improvement in sensitivity.
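For readers who want to see what such figures mean mechanically: sensitivity is the fraction of true disease cases a method flags, and a paired comparison of two methods read on the same patients is commonly tested with McNemar's test on the discordant pairs. The sketch below is purely illustrative; the counts are hypothetical and are not taken from the study, whose actual statistical methodology should be consulted directly.

```python
def sensitivity(true_positives, false_negatives):
    """Fraction of diseased cases correctly detected."""
    return true_positives / (true_positives + false_negatives)

def mcnemar_chi2(only_a, only_b):
    """McNemar chi-square statistic (with continuity correction) for two
    methods applied to the same cases: `only_a` cases were detected by
    method A alone, `only_b` by method B alone."""
    return (abs(only_a - only_b) - 1) ** 2 / (only_a + only_b)

# Hypothetical counts, for illustration only.
print(round(sensitivity(823, 177), 3))  # 0.823
print(round(mcnemar_chi2(30, 12), 2))   # 6.88 -> exceeds 3.84, so p < 0.05 on 1 df
```

The point of the paired test is that both methods are judged on the same patients, so only the discordant cases (detected by one method but not the other) carry information about which method is more sensitive.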
Although deep learning outperformed TPD, it would be reasonable to suggest that there is further room to improve sensitivity for coronary artery disease. The use of an angiographic reference standard will generate debate (6). The correlation between epicardial anatomy and cellular perfusion is not perfect (7), and, in general, there is a need to focus on better surrogates or on hard endpoints such as death and myocardial infarction.
This is the first step in a program of research studies to identify the role of deep learning algorithms in MPI. The study hints at further work that may include special populations, for example, elderly, female, diabetic, and left bundle branch block patients. Utilizing raw 3-dimensional data rather than polar plots is also suggested. Refined algorithms will take time to develop and validate, but there is no doubt that this is the direction of travel for MPI. We should expect many more publications examining differing convolutional neural networks with varying clinical and imaging inputs to identify the best sensitivity and specificity within a fully automated system.
This study will raise many more questions for the cardiac imaging community to reflect on. Will these new algorithms replace “experts,” and how will we define an expert in the future? Will referrers trust the results from deep learning, and at what threshold of sensitivity can we routinely automate reporting without any human input?
The innards of the “black box” of these convolutional networks may also need to be explored in detail. Do we need to know how the results are generated, or are we happy to accept the results based on multiple validation trials? Who will be qualified to delve into the inner workings of these software programs, or are they already the intellectual property of their owners? Do we use the results to validate our opinions, or are we to be replaced? Will the regulatory authorities grant a license for deep learning networks to produce a clinically valid report based purely on a complex mathematical algorithm?
Some of these questions are for the future, and some need to be answered now. How do we answer the question of whether deep learning could change our working lives? The answer is that it already has and will continue to do so in ways that may be difficult to predict. However, fear and lack of understanding are no excuse for not engaging and working with these complex algorithms to improve our raison d'être, patient care.
In the meantime, a crash course in understanding deep learning convolutional neural networks may become essential for all aspiring medical students and doctors, especially those with an interest in any form of medical imaging.
∗ Editorials published in JACC: Cardiovascular Imaging reflect the views of the authors and do not necessarily represent the views of JACC: Cardiovascular Imaging or the American College of Cardiology.
Dr. Sabharwal has reported that he has no relationships relevant to the contents of this paper to disclose.
- © 2018 American College of Cardiology Foundation
- Wikiquote. William Osler. Available at: https://en.wikiquote.org/wiki/William_Osler. Accessed March 2, 2018.
- Ting D.S.W., Cheung C.Y., Lim G., et al.
- Betancur J., Commandeur F., Motlagh M., et al.
- Gould L.