Can AI Add Value to Radiology? Informatics Experts Share Latest Findings with Overflow Crowd

Thursday, Nov. 29, 2018

Judging by the quality and quantity of papers published in major medical journals – and the overflow crowd attending the session – this year has seen significant advances in imaging informatics, according to two leading experts who spoke Wednesday.

Kahn

At the packed session, Charles E. Kahn Jr., MD, editor of the new RSNA online journal, Radiology: Artificial Intelligence, and deputy editor, William Hsu, PhD, shared some of the most significant studies on informatics published in scientific journals in the last year.

One study highlighting the use of deep learning (DL) for the reconstruction of MRI images described how the method, automated transform by manifold approximation (AUTOMAP), could improve upon the performance of existing acquisition methods.

"This is a unified framework for image reconstruction that exploits the network's inherent ability to compensate for noise and other perturbations and it really goes beyond MRI reconstruction," said Dr. Hsu, an associate professor of radiology at the University of California, Los Angeles (UCLA). "There's a lot of interest in applying similar approaches to reconstruct CT images."

Dr. Hsu also shared results from studies on machine learning (ML) models for the annotation of radiology reports and the use of algorithms to reduce errors due to reader variability that further underscore the great potential of artificial intelligence (AI).

"Can AI add value to radiology?" Dr. Hsu asked. "I think most of us would agree it can. We can enhance diagnostic accuracy, optimize worklists, perform initial analyses of cases in high-volume applications impacted by observer fatigue, extract information from images that are not apparent to the naked eye and improve the quality of reconstruction."

Still, significant challenges remain, including a shortage of quality data, according to Dr. Kahn, professor and vice chair of radiology at the University of Pennsylvania's Perelman School of Medicine in Philadelphia.

"Most people who have done work in this area have discovered that about 70 to 80 percent of the work that you do is not building the model or testing it," he said. "It's curating, cleaning and massaging the data to get it into shape."

Recent studies have shown the potential for DL to address this dearth of quality data. A study shared by Dr. Kahn looked at the potential of institutions to distribute DL models rather than patient data, an approach that would lessen the need for the labor-intensive work of de-identifying images.

"That would solve for many of us the problems that we face in terms of building something like ImageNet with data from each of our institutions," Dr. Kahn said.
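The mechanics of the idea are simple: institutions exchange a model's learned parameters, which contain no protected health information, rather than the images themselves. A minimal Python sketch, using a toy linear model and hypothetical file names rather than the study's actual method, looks like this:

```python
import json

# Site A: a toy "trained model" -- a weight vector and bias standing in
# for a deep network's learned parameters.
model = {"weights": [0.42, -1.3, 0.07], "bias": 0.5}

# Export only the parameters; no patient images or identifiers leave the site.
with open("site_a_model.json", "w") as f:
    json.dump(model, f)

# Site B: load the shared parameters and run inference on local cases.
with open("site_a_model.json") as f:
    shared = json.load(f)

def predict(features):
    """Score a local case with the shared linear model."""
    score = shared["bias"] + sum(w * x for w, x in zip(shared["weights"], features))
    return score > 0

print(predict([1.0, 0.2, 3.0]))  # True
```

A real deep network would serialize millions of weights instead of three numbers, but the privacy property is the same: only the parameter file crosses institutional boundaries.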

Radiology Study on Training Algorithms

New research reached eye-opening conclusions about the optimal number of images needed to train an algorithm. A study in Radiology that looked at the automated classification of chest radiographs found that the DL model's accuracy improved significantly when the number of images used to train the algorithm jumped from 2,000 to 20,000. However, accuracy improved only marginally when the number of training images increased from 20,000 to 200,000.

"That's actually a useful thing, that maybe we don't need to have millions of images in order to train the system," Dr. Kahn said. "Maybe having a modest number would be a good start, along with other approaches that you could perhaps superimpose on top of that."
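The shape of that finding, with accuracy climbing steeply at first and then flattening as training data grows, can be reproduced in miniature with a toy classifier on synthetic data. This sketch illustrates the experimental design only; it is not the study's model or data:

```python
import random

random.seed(0)

def make_case(label):
    # Two fabricated feature values per "image", shifted by class label.
    shift = 1.0 if label else 0.0
    return [random.gauss(shift, 1.0), random.gauss(shift, 1.0)], label

def train(cases):
    # Nearest-centroid "model": the mean feature vector of each class.
    cents = {}
    for lab in (0, 1):
        feats = [f for f, l in cases if l == lab]
        cents[lab] = [sum(col) / len(feats) for col in zip(*feats)]
    return cents

def accuracy(cents, cases):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = sum(1 for f, l in cases
                  if min(cents, key=lambda lab: dist(f, cents[lab])) == l)
    return correct / len(cases)

# Held-out test set, then training sets of increasing size.
test_set = [make_case(i % 2) for i in range(1000)]
results = {}
for n in (20, 200, 2000):
    train_set = [make_case(i % 2) for i in range(n)]
    results[n] = accuracy(train(train_set), test_set)
    print(n, round(results[n], 3))
```

Because the classes overlap, accuracy is capped well below 100 percent no matter how much data is added, which is why the gains from 20 to 200 cases dwarf the gains from 200 to 2,000.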

As for mining the data itself, Dr. Kahn pointed to natural language processing (NLP) as a promising avenue of research. NLP is the overarching term for the use of computer algorithms to identify key elements in everyday language and extract meaning from unstructured spoken or written input.
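As a toy illustration of the idea, and not any system discussed at the session, a few lines of Python with the standard library's `re` module can flag key findings in a fabricated free-text report sentence. The vocabulary list and sample text here are invented for the example:

```python
import re

# A fabricated sentence in the style of a radiology report impression.
report = ("Impression: 1. No acute cardiopulmonary abnormality. "
          "2. Stable 4 mm nodule in the right upper lobe.")

# Hypothetical vocabulary of findings to flag.
FINDINGS = ["nodule", "effusion", "pneumothorax", "consolidation"]

def extract_findings(text):
    """Return each vocabulary term found, with any size measurement just before it."""
    hits = []
    for term in FINDINGS:
        for match in re.finditer(term, text, flags=re.IGNORECASE):
            # Look backward over a short window for a measurement like "4 mm".
            window = text[max(0, match.start() - 20):match.start()]
            size = re.search(r"(\d+(?:\.\d+)?)\s*(mm|cm)", window)
            hits.append((term, size.group(0) if size else None))
    return hits

print(extract_findings(report))  # [('nodule', '4 mm')]
```

Production NLP systems go far beyond keyword matching, handling negation ("no effusion"), uncertainty and anatomic context, but the goal is the same: turning narrative text into structured, queryable data.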

"NLP is using various systems to help mine data out of electronic health records," he said. "Most of the information in electronic health records is text, and a lot of the resultant information is in the form of narrative text."

Links to the papers shared at the session and related studies can be seen online at http://bit.ly/Imaging2018.