AI's Data Problem
By David H. Freedman  |  Aug 23, 2021
US-based hospital network Kaiser Permanente unveiled an experimental AI system two years ago that can do a better job than psychiatrists and therapists at predicting which patients are most at risk of a suicide attempt within three months. The system is ground-breaking, but the field's big challenge remains the same: getting access to far more data than most AI projects can currently obtain.

BOSTON - United States-based hospital network Kaiser Permanente unveiled an experimental artificial intelligence (AI) system two years ago that can do a better job than human psychiatrists and therapists at predicting which patients are most at risk of a suicide attempt within three months. The system, deployed as a pilot project that may eventually be rolled out to all patients at Kaiser Permanente, is based on machine learning (ML) algorithms that were trained by exposing them to detailed patient records, including whether each patient had attempted suicide.
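The training setup described above is supervised learning: an algorithm is shown records paired with known outcomes and learns to associate the two. The sketch below illustrates the idea with a toy nearest-centroid classifier; the feature names and values are invented for illustration and are not Kaiser Permanente's actual data or method.

```python
# Toy supervised "risk" classifier trained on labeled records.
# All feature names and values below are hypothetical.

# Each record: (prior_attempts, depression_score, recent_er_visits)
records = [
    (0, 2, 0), (1, 8, 2), (0, 1, 0), (2, 9, 3),
    (0, 3, 1), (1, 7, 1), (0, 0, 0), (3, 9, 4),
]
# Label for each record: True if the (hypothetical) patient attempted
# suicide within three months, else False.
labels = [False, True, False, True, False, True, False, True]

def centroid(rows):
    # Component-wise mean of a list of equal-length tuples.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

# "Training": summarize each class by the centroid of its examples.
high_risk = centroid([r for r, y in zip(records, labels) if y])
low_risk = centroid([r for r, y in zip(records, labels) if not y])

def predict(record):
    # Classify a new record by whichever class centroid is nearer.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return dist2(record, high_risk) < dist2(record, low_risk)

print(predict((2, 8, 2)))  # a record resembling the high-risk examples
```

Real clinical systems use far richer models and features, but the principle is the same: the labels in the historical records, not hand-written rules, drive the predictions.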

If that ML system is ground-breaking in its capabilities, it may be because the physicians, researchers and programmers who developed it had a secret weapon: Access to nearly 22 million patient records gathered from three different hospital systems over several years.

AI is already beginning to improve medicine and is certain to have an increasing impact on every aspect of human health in the coming years, from diagnosis and treatment to drug development. But often overlooked amid all the anticipation is a big challenge with which the field hasn’t yet come to grips: Creating effective AI systems depends intimately on getting hold of massive amounts of data, far more data than is currently accessible to most AI projects.

At the heart of the problem is the fact that AI systems work not through programmed logic, but by learning through repeated exposure to examples and desired results. Even when the task at hand is a relatively simple one, like enabling a self-driving car to differentiate a stop sign from a lamppost, so-called ML programs usually need to be shown hundreds of thousands of examples before they start to become proficient at doing it on their own. When the task is a subtle, complex one that even human experts struggle with, like diagnosing a disease from a set of symptoms that vary from patient to patient or determi
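Why do more examples help so much? A toy simulation makes the intuition concrete: a classifier that learns a decision boundary from data pins it down more precisely as the number of training examples grows. The setup below is entirely synthetic and chosen only to illustrate the scaling.

```python
# Toy illustration: a learned decision threshold gets closer to the
# true class boundary as the number of training examples grows.
import random

random.seed(0)
TRUE_BOUNDARY = 5.0  # values below 5.0 are class A, above are class B

def estimate_boundary(n):
    # Draw n labeled examples and place the learned threshold midway
    # between the largest class-A value and the smallest class-B value.
    a = [random.uniform(0, TRUE_BOUNDARY) for _ in range(n // 2)]
    b = [random.uniform(TRUE_BOUNDARY, 10) for _ in range(n // 2)]
    return (max(a) + min(b)) / 2

errors = {}
for n in (10, 100, 10_000):
    errors[n] = abs(estimate_boundary(n) - TRUE_BOUNDARY)
    print(f"{n:>6} examples -> boundary error {errors[n]:.3f}")
```

With tens of examples the learned threshold can land well off the true boundary; with thousands it is nailed down almost exactly. Subtle tasks such as diagnosis involve many interacting features rather than one, which multiplies the number of examples needed.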
