Researchers in Denmark tested an AI tool that could help diagnose chest X-rays. The focus was primarily on whether AI could rule out disease without increasing the number of critical errors. The research team also wanted to see whether the errors made by AI and by radiologists differed in kind, and whether AI errors were, on average, clinically worse than human errors.
AI tool for research
The AI tool used in the study was adapted in advance to generate what is called an "abnormality probability" for chest X-rays. This probability was then used to calculate specificity (a measure of a medical test's ability to correctly identify people who do not have the disease) at different AI sensitivities.
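The study itself does not publish code, but the underlying calculation can be illustrated simply: choose a probability threshold that keeps sensitivity at a target level (for example 98%) and then measure how many disease-free images fall below that threshold. The following is a minimal sketch in Python, using hypothetical probs and labels arrays rather than the study's data.

import numpy as np

def specificity_at_sensitivity(probs, labels, target_sensitivity=0.98):
    # probs: AI abnormality probabilities; labels: 1 = abnormal, 0 = normal (hypothetical data).
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    abnormal = probs[labels == 1]
    normal = probs[labels == 0]
    # Choose a threshold so that roughly the target share of abnormal images
    # scores at or above it (sensitivity is approximately the target).
    threshold = np.quantile(abnormal, 1.0 - target_sensitivity)
    sensitivity = np.mean(abnormal >= threshold)
    # Specificity: share of normal images correctly left below the threshold.
    specificity = np.mean(normal < threshold)
    return threshold, sensitivity, specificity

# Illustration only, with random toy data:
rng = np.random.default_rng(0)
probs = np.concatenate([rng.beta(2, 5, 700), rng.beta(5, 2, 1200)])
labels = np.concatenate([np.zeros(700, int), np.ones(1200, int)])
print(specificity_at_sensitivity(probs, labels, 0.98))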
AI tool for pathology
The researchers wanted to estimate the proportion of unremarkable chest X-rays for which AI could correctly rule out disease without increasing diagnostic errors. The Danish study found that the AI tool was effective at ruling out disease, with rates of critical errors on chest X-rays that were the same as or lower than those of radiologists. The results were recently published in Radiology.
“We, along with other researchers, have previously demonstrated that AI tools are able to rule out pathologies in chest X-rays with high reliability and thus provide an independent normal report without the intervention of a human radiologist. These AI algorithms miss only a very small number of abnormal chest X-rays. However, before our current study, we did not know what the appropriate threshold for these models was,” said lead author Louis Linde Plesner, from the Department of Radiology at Herlev & Gentofte Hospital in Copenhagen.
Control by radiologists
Two thoracic radiologists, blinded to the AI output, rated the chest radiographs as “remarkable” or “unremarkable” based on prespecified criteria for unremarkable findings. Chest radiographs with findings missed by the AI and/or the radiology report were rated as critical, clinically significant, or clinically insignificant misses by one thoracic radiologist, who did not know whether the error was made by a human or by the AI.
The reference standard classified 1,231 of 1,961 chest radiographs (62.8%) as remarkable (abnormal) and 730 of 1,961 (37.2%) as unremarkable. The AI tool correctly excluded pathology in 24.5% to 52.7% of unremarkable chest radiographs at sensitivities of 98% or greater, with critical error rates lower than those found in the radiology reports accompanying the images.
It was striking that the AI’s errors were, on average, more clinically dangerous for the patient than the radiologists’ errors. “This is likely because radiologists interpret findings in light of the clinical scenario, something AI does not do. So when AI is used to provide an automated normal report, it needs to be more sensitive than a radiologist to avoid lowering the standard of care during implementation,” said Plesner. “This finding is also interesting more generally in an era of AI capabilities that span multiple high-stakes environments and are not limited to healthcare.”
AI tool for lung imaging
Recently, six imaging networks in the UK chose Annalise.ai’s AI tool to support early lung cancer diagnosis. The tool uses AI to detect abnormalities on chest X-rays and supports radiologists in their assessment. In an Annalise.ai study published in July 2021 in The Lancet, the tool was able to identify 124 different findings on chest X-rays, allowing diagnoses such as lung cancer to be made more quickly.