Deep generative models deploy in radiology
By Moein Shariatnia  |  Jan 31, 2023
Generative models are hot items in tech news today. Stable Diffusion and Midjourney, which create striking artwork, and ChatGPT, which writes fluent essays from an input prompt, have astounded many. Similar models are also in the works for healthcare, especially radiology, to address real-world clinical challenges. ML developer and medical student Moein Shariatnia sketches a brief history of deep generative models and explains how they work and how they are used in medical imaging.

TEHRAN - Generative models learn data distributions, allowing them to generate similar data samples after training. Until recently, variational autoencoders (VAEs) and generative adversarial networks (GANs) were the two favorite approaches for building generative models. However, diffusion models have recently taken their place, outperforming them in almost every application. VAEs are encoder-decoder models that learn a latent space into which the input data is encoded; sampling from that latent distribution and passing the samples through the decoder generates new data.
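The VAE sampling idea can be sketched in a few lines. This is a toy illustration, not a real model: the "decoder" here is just a fixed linear map, and the encoder outputs (`mu`, `log_var`) are hard-coded placeholder values. The reparameterization step, however, is the standard one used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": a fixed affine map from a 2-D latent space to a 4-D data
# space. A real VAE decoder is a trained neural network.
W = rng.normal(size=(4, 2))
b = np.zeros(4)

def decode(z):
    """Map a latent vector z to a data sample."""
    return W @ z + b

# Reparameterization: sample z = mu + sigma * eps with eps ~ N(0, I),
# using (toy) encoder outputs mu and log_var.
mu, log_var = np.zeros(2), np.zeros(2)
eps = rng.normal(size=2)
z = mu + np.exp(0.5 * log_var) * eps

sample = decode(z)   # a new "generated" data point
```

After training, generating new data only requires the decoder: draw `z` from the latent prior and decode it.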

Different families of generative models in deep learning

Figure 1 | VAE vs GAN vs Diffusion models | Image by the author

GANs consist of two networks trained simultaneously: a generator and a discriminator. The generator tries to turn random noise into new data samples and fool the discriminator into believing these synthesized samples are real, while the discriminator sees both real and generated samples during training and learns to distinguish them, as a binary classifier, without being fooled by the generator. To train diffusion models, random Gaussian noise is added in multiple steps to each image in the training set until that image is no longer recognizable and becomes pure noise. Up to this point no learning occurs, and the process is done with simple code. The diffusion model must then learn to denoise these images and recover the original image.
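The fixed, non-learned forward noising process described above really is just a few lines of code. The sketch below uses a toy 8x8 array in place of an image and an illustrative linear noise schedule (the `betas` values are assumptions, not a schedule from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_diffuse(x0, betas):
    """Corrupt x0 with Gaussian noise, one small step per beta.

    Each step blends the current image with fresh noise; after many
    steps the result is statistically close to pure N(0, 1) noise.
    """
    x = x0.copy()
    for beta in betas:
        noise = rng.normal(size=x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x

image = rng.uniform(size=(8, 8))         # stand-in for a training image
betas = np.linspace(1e-4, 0.5, 1000)     # toy noise schedule
noisy = forward_diffuse(image, betas)    # approximately standard Gaussian noise
```

No parameters are learned here; the model's job is the reverse direction, undoing this corruption step by step.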

This denoising is also done in multiple steps, with each step removing part of the artificial noise added before. In doing so, the model learns the training data distribution and what the data looks like under different amounts of noise.
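A common way to train the denoiser, used in standard diffusion models, is to have it predict the noise that was added and penalize it with a mean-squared error. The sketch below shows one such training comparison; the `predict_noise` function is a hypothetical stand-in for the neural network (here it is untrained and simply predicts zero noise, so the loss is large):

```python
import numpy as np

rng = np.random.default_rng(0)

# One simplified denoising-objective evaluation: corrupt a clean sample,
# then score how well a model predicts the noise that was added.
x0 = rng.uniform(size=(8, 8))        # clean training image (toy)
beta = 0.1                           # noise level for this step
eps = rng.normal(size=x0.shape)      # the actual noise added
x_noisy = np.sqrt(1.0 - beta) * x0 + np.sqrt(beta) * eps

def predict_noise(x):
    """Stand-in for the learned denoiser; a real one is a neural network."""
    return np.zeros_like(x)          # untrained: predicts no noise at all

# MSE between predicted and true noise; training would minimize this.
loss = np.mean((predict_noise(x_noisy) - eps) ** 2)
```

Minimizing this loss over many images and noise levels is what gives the model its sense of the data distribution; generation then runs the learned denoiser repeatedly, starting from pure noise.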

The content herein is subject to copyright by The Yuan. All rights reserved. The content of the services is owned or licensed to The Yuan. Such content from The Yuan may be shared and reprinted but must clearly identify The Yuan as its original source. Content from a third-party copyright holder identified in the copyright notice contained in such third party’s content appearing in The Yuan must likewise be clearly labeled as such.