The Medical AI Black Box Problem
By Nemanja Kovačev  |  Feb 24, 2022
The ‘Black Box’ problem in healthcare AI arises when an AI decision-maker reaches a decision that neither the patient nor those involved in the patient’s care can understand, because the system itself is not understandable to either of these agents. Dr Nemanja Kovačev investigates the mystery of the ‘Black Box’ and its function.

NOVI SAD, SERBIA - When we say artificial intelligence (AI), we often mean machine learning (ML), a subdomain of AI that uses statistical tools and algorithms to infer patterns from vast amounts of data without being explicitly programmed to do so. Here, the terms AI and ML will be used interchangeably.
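As a rough sketch of what ‘inferring patterns without explicit programming’ looks like in practice, the toy example below trains a classifier on scikit-learn’s bundled breast cancer dataset. The dataset, the random forest model, and every parameter here are illustrative assumptions, not anything prescribed in this article:

```python
# Minimal sketch: a model learns a diagnostic pattern from labeled
# examples rather than from hand-coded rules. Dataset and model are
# illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# No diagnostic rules are programmed; the model infers them from data.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```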

Various ML techniques are already being used, increasingly, as support tools in different fields of medicine: radiology, dermatology, cardiology, endocrinology, and pathology, to name just a few. In addition, numerous startups are currently developing innovative digital health tools based on AI technology.

The conclusions these powerful AI methods can reach are remarkable. In some cases, AI algorithms have proved more accurate than doctors; in others, they have uncovered surprising correlations and/or causalities. So, if all of this is so extraordinary, a more pervasive role for AI in medicine should have been accepted by now, right?

Resistance to the utilization of medical AI is driven both by the subjective difficulty of understanding algorithms (the perception that they are a ‘Black Box’) and by an illusory subjective understanding of human medical decision-making.1 In other words, we have difficulty understanding complex algorithms, and we as humans often think we understand complex systems better than we really do. ML diagnoses, for example, are categorical representations of various medical conditions: although successful at classifying, they are often devoid of clinical context, so clinicians do not trust the judgment made by such a system. We sometimes doubt whether a conclusion is right because irrelevant things appear to have been taken into account, at least from a clinician’s perspective - we are not sure whether an AI algorithm has reached an accurate conclusion based on the proper parts of the data, or whether the conclusion rests on some irrelevant ones.
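To make that worry concrete, here is a hedged sketch of one common, admittedly partial, probe of a ‘Black Box’: permutation importance, which asks how much each input feature actually influences a trained model’s predictions. Again, the dataset, model, and parameters are illustrative assumptions rather than a method endorsed by this article:

```python
# Sketch: probe which features a trained model actually leans on.
# Permutation importance shuffles one feature at a time and measures
# how much held-out accuracy drops; influential features drop it most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=42
)

# Rank features by mean accuracy drop and show the top five.
ranked = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda t: -t[1]
)
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

A clinician could scan such a ranking for obviously irrelevant features, though importance scores alone still do not explain why a particular patient received a particular prediction.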
