To see or not to see, that is the question of AI in radiology
By Felipe Kitamura  |  Jul 27, 2023
AI should not make key decisions unsupervised, yet it subtly shapes human decisions, and in radiology this influence often goes unnoticed, notes Neuroradiology Prof Felipe Kitamura, Stanford AIMI scholar, Mayo Clinic visiting professor, and RSNA Machine Learning Steering Committee member.

SAO PAULO - As a radiologist who tests artificial intelligence (AI), I have witnessed firsthand its advent and progression in my field. Most discussions today about AI in radiology fall into the categories of bias, fairness, clinical monitoring, deployment, regulatory affairs, return on investment, and lack of model generalization. These are all undoubtedly worthy subjects, but too little attention goes to how AI advice interacts with human judgment and influences decision-making. With this in mind, radiologists should consider a few key points to optimize their use of AI and improve patient care.

Firstly, one must always remember that AI is a tool, not a crutch. AI systems can provide invaluable support in analyzing complex datasets and identifying patterns beyond the immediate perception of humans. However, these systems should never be treated as a substitute for human critical thinking and professional judgment. Even as AI grows more sophisticated, it is important to approach each case independently and critically, without undue reliance on the advice an AI provides.

A significant finding from recent studies is humans' propensity for cognitive biases - such as anchoring and confirmation bias - which are common when people receive advice, whether from AI or from other humans.1,2 These biases can lead people to pay more attention to information consistent with the advice, potentially blinding them to other pertinent findings. A good strategy to mitigate this is to always conduct a thorough, independent review of a case before considering any advice, whether from an AI or a fellow human. This approach lets people form their initial hypotheses on their own and then assess any advice in light of their own insights.

The studies also highlight an interesting dichotomy between people’s perceptions of AI advice and their actual reliance on it. Experienced radiologists
