Artificial intelligence in radiology: Friend or foe?

Article authors:
Dr Grant Mair and Professor Joanna M Wardlaw, Centre for Clinical Brain Sciences, University of Edinburgh; Professor Phil White, Institute of Neuroscience, Newcastle University

All three authors are academic neuroradiologists specialising in stroke imaging and, within the Real-world Independent Testing of e-ASPECTS Software (RITeS) Collaborative, are testing commercially available artificial intelligence software for stroke.

Radiologists are at a turning point in care delivery with artificial intelligence (AI) software. We are promised greater consistency and precision, less error and delay, but is AI a panacea?

In 2016, Geoffrey Hinton, an AI expert, (in)famously said ‘Stop training radiologists now…’,1 but his vision of comprehensive, AI-enabled radiology has not yet become reality and arguably never will. Concepts of narrow versus general AI and underestimation of the role of radiologists likely explain his misstep. Narrow AI is trained for one task (for example, playing chess) and is often portrayed positively by the media. However, narrow AI is context specific and requires human guidance, such as for image selection. It will fail if asked to perform tasks without training and may struggle when presented with unusual or poor-quality examples of tasks it was trained for (consider a motorway-trained self-driving car used in a busy town centre without modification).

In comparison, general AI would be human-like and include problem-solving, understanding context and independent learning. General AI remains science fiction, usually with terrible consequences for humans (see the Terminator or 2001: A Space Odyssey movies).

Narrow AI could add value at all medical imaging steps: protocolling, scheduling, quality control, triage, imaging/non-imaging data retrieval and display, highlighting abnormalities, quantitatively scoring disease and communicating results. Ken Sutherland, President of Canon Medical Research Europe, estimated that given these needs, the number of diseases per body part and the myriad imaging features per modality, ~100,000 different narrow AI applications would be required to replicate radiologists.2 Since each AI application requires development, testing and integration with healthcare information technology (IT), delivering an AI-driven radiology future will be challenging for several reasons.

The challenges

First, AI learns from example, yet training data are not widely available. Access to medical imaging is rightly controlled. AI development therefore tends toward small, cultivated research datasets rather than routinely collected imaging, trading clinical representativeness for quality. Real-world data are messy and require substantial preparation prior to AI development, but highly selected datasets produce results that are not relevant to average patients. Initiatives like Scottish Medical Imaging will help by offering research safe havens for routinely collected imaging.

Second, some problems solved using AI are redundant, for example identifying large ischaemic stroke lesions on magnetic resonance imaging (MRI).3 To counter wasted effort, the American College of Radiology seeks suggestions for AI imaging methods that would be clinically useful.4

Third, current regulation and testing are inappropriate. The UK Medicines and Healthcare products Regulatory Agency (MHRA) defines software applications as medical devices that require CE marking.5 While CE marking confirms patient safety when products are used ‘… under the conditions and for the purposes intended’, consider the differences between a CE-marked hip prosthesis used in a controlled surgical environment and point-and-click, user-friendly AI software for radiology accessible to anyone using a picture archiving and communication system (PACS). An expert consensus review of current regulation for radiology AI highlighted a lack of incentives for AI to be safety/performance tested independently of manufacturers or under real-world conditions.6 A large systematic review (>20,000 studies considered) assessing the diagnostic accuracy of deep-learning methods for medical imaging found only 82 studies worthy of detailed review; most had poorly reported scientific methods and were rarely externally validated.7 A multiphase process linking AI development and testing, analogous to drug development, has been suggested.6

Finally, there are unresolved issues regarding ethics and accountability (who is responsible if AI is wrong and patients are harmed?), and interpretability and trust (can we believe the results?). A joint statement from seven European and North American societies concludes that radiology AI should be transparent and highly dependable (that is, rigorously tested using agreed methods with openly available results) and that accountability should remain with humans.8 Thus, AI should only assist radiologists to make better decisions. When asked, patients want human doctors even if machines are less prone to error.9

The future

Commercial development of radiology AI has focused on selected body systems/diseases, including computed tomography (CT) chest nodules, breast screening mammography and magnetic resonance (MR) hippocampal volume. In stroke, software aspires to identify hyperacute ischaemia, dense obstructed arteries or haemorrhage on CT, arterial occlusion on CT angiography and viable versus non-viable brain on CT/MR perfusion. This all occurs automatically within minutes of scanning and could be very useful for rapid decision-making, especially with apps instantly conveying images to mobile devices. Unfortunately, testing remains limited.7,10 Nevertheless, use of commercial AI software for patient selection in landmark clinical trials and persuasive salesmanship herald the inexorable introduction of AI to clinical practice.11,12

AI will ultimately outperform humans in many tasks but should be embraced to improve radiology, not feared or avoided. Narrow AI requires human oversight and patients are currently unlikely to accept purely computational decisions. Radiologists should work with developers to ensure software is useful. We should lead efforts to define and conduct testing of AI and ensure appropriate regulation and ethical standards. Radiologists have always existed at the forefront of technology and adapted our practice to suit; AI is no different.


References

  1. www.youtube.com/watch?v=2HMPRXstSvQ (last accessed 5/2/21)
  2. www.nvidia.com/en-us/gtc (last accessed 5/2/21)
  3. www.isles-challenge.org/ISLES2015 (last accessed 5/2/21)
  4. www.acrdsi.org/DSI-Services/Define-AI (last accessed 5/2/21)
  5. Medicines and Healthcare products Regulatory Agency. Guidance: Medical device stand-alone software including apps (including IVDMDs). London: Medicines and Healthcare products Regulatory Agency, 2020. 
  6. Larson DB, Harvey H, Rubin DL et al. Regulatory frameworks for development and evaluation of artificial intelligence–based diagnostic imaging algorithms: summary and recommendations. J Am Coll Radiol 2020; doi.org/10.1016/j.jacr.2020.09.060.
  7. Liu X, Faes L, Kale AU et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health 2019; 1: e271–e297.
  8. Geis, JR, Brady A, Wu CC et al. Ethics of artificial intelligence in radiology: summary of the joint European and North American multisociety statement. Insights Imaging 2019; 10: 101. 
  9. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res 2019; 46: 629–650. 
  10. Mikhail P, Le M, Mair G. Computational image analysis of nonenhanced computed tomography for acute ischaemic stroke: a systematic review. J Stroke Cerebrovasc Dis 2020; 29: 104715.
  11. Ma H, Campbell BCV, Parsons MW et al. Thrombolysis guided by perfusion imaging up to 9 hours after onset of stroke. N Engl J Med 2019; 380: 1795–1803. 
  12. Nogueira RG, Jadhav AP, Haussen DC et al. Thrombectomy 6 to 24 hours after stroke with a mismatch between deficit and infarct. N Engl J Med 2018; 378: 11–21.

Declared interests

Dr Grant Mair: No interests declared. 

Professor Joanna Wardlaw: MRC Neuroscience and Mental Health Board 2018–2021, European Stroke Organisation Conference Planning Group 2021–2022, European Stroke Organisation Guideline Writing Group Small Vessel Diseases, holder of academic research grants from the Research Councils UK, Stroke Association, British Heart Foundation, EU Horizon 2020 grant, Fondation Leducq, Alzheimer’s Society, Alzheimer’s Research UK, Sainsbury’s Foundation. 

Professor Philip White: Chair UK Neurointerventional Group, Member NHS England Neurosciences Clinical Reference Group, Thrombectomy Oversight and Implementation Groups, Intercollegiate Stroke Working Party (RCR Rep), Member British Society of Neuroradiologists Executive and Academic Sub Committees, Member NIHR Hyperacute Stroke Research Centre Oversight Board, educational consultancy work for Microvention Terumo, recipient of grants for research or educational activity from Medtronic, Stryker & Penumbra, member of Editorial Board of Journal of NeuroInterventional Surgery, part of BMJ Group, recipient of British Society of Neuroradiologists President’s Medal 2020.