RSNA2023 Leading Through Change
Daily Bulletin

Demonstrating How Transparency in AI Can Support Research and Clinical Practice

Tuesday, Nov. 28, 2023

By Nick Klenske

The lack of transparency in radiologic research that uses AI datasets and source code is one reason some radiologists remain wary of AI.



During a Monday session, Eline Langius, MD, a radiology resident and PhD candidate at Isala Hospital in the Netherlands, discussed the effect of this distrust on radiology research and clinical practice.

"When we use AI in clinical practice, there's usually little transparency, meaning we don't exactly know what the algorithm looks like, whether the version we get to use is the same as the one used in the most recent study or what exactly the algorithms are trained on," Dr. Langius explained. "A general lack of trust in AI is a major barrier to its widespread implementation in radiology."

Confident this barrier could be overcome through open access, Dr. Langius led a study evaluating the diagnostic performance and generalizability of the winning deep learning (DL) algorithm of the RSNA 2020 Pulmonary Embolism Detection Challenge.

Published Source Code Enables Research

The RSNA Challenge, which invited researchers to develop machine-learning algorithms to detect and characterize instances of pulmonary embolism on chest CT studies, was unique in that it required competitors to publish their source code.

"This allowed the fog to lift as we were able to see the algorithm, how it worked and, perhaps most importantly, what it worked on," Dr. Langius explained.

With this information in hand, researchers retrained the winning algorithm on the RSNA-STR Pulmonary Embolism CT (RSPECT) dataset. The retrained algorithm was tested at Isala Hospital on multidetector CT (MDCT) images and on spectral detector CT (SDCT) and virtual monochromatic images at the University Medical Center Utrecht, Netherlands.

The output was compared with a reference standard, which included a consensus reading by at least two experienced cardiothoracic radiologists.

The retrained algorithm showed high diagnostic accuracy on MDCT images, with an AUC of 0.96. Performance was somewhat lower on SDCT images, suggesting that additional training on more advanced CT technology could improve the algorithm's generalizability.
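The AUC reported above summarizes how well the algorithm's scores separate PE-positive from PE-negative studies against the consensus reference standard. As a minimal sketch of what that metric measures, the snippet below computes AUC via the Mann-Whitney interpretation (the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one); the labels and scores shown are hypothetical, not from the study.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive case is scored
    higher (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: 1 = PE present per consensus reading,
# scores = the algorithm's predicted probabilities.
labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.92, 0.85, 0.40, 0.10, 0.30, 0.55, 0.05, 0.78]
print(round(auc(labels, scores), 3))  # → 0.938
```

In practice, validation studies such as this one typically use a library implementation (e.g., scikit-learn's `roc_auc_score`), which computes the same quantity.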

But perhaps the study's biggest takeaway concerned the benefits of transparency and open access.

"We were able to obtain this information because of the transparency of the RSNA RSPECT CTPA dataset and the source code of the DL algorithm, neither of which is typically available to radiologists using commercial AI products in the clinical setting," Dr. Langius said.

The Benefits of Transparency and Open Access

According to Dr. Langius, when a radiologist knows how an algorithm is designed, how it works and how it was trained, they will feel more confident in using it.

Another benefit of having access to an algorithm's source code is that it can be trained on local CT pulmonary angiogram data, meaning its diagnostic accuracy can be optimized on specific protocols and scan techniques.

"Reporting large scale results of open access DL algorithms also has the potential to encourage the innovation, competition and collaboration we need to drive developments in this exciting field," Dr. Langius concluded.

Access the presentation, "External Validation of a Deep Learning-Based Model to Detect Pulmonary Embolism on CTPA," (M3-SSCH03-1) on demand at