– Applying a deep learning model to standard CT scans could generate higher-quality images, enabling providers to extract better insights and improve diagnosis, according to a study published in Patterns.
Conventional CT scans produce images that show the shape of the tissues within the body, but they don’t offer clinicians sufficient information about the composition of those tissues. Even with contrast agents like iodine, which help doctors differentiate between soft tissue and vasculature, it can still be difficult to distinguish subtle structures.
Dual-energy CT, a more advanced technique, acquires two datasets at different X-ray energies to produce images that reveal both tissue shape and information about tissue composition. However, this imaging approach often requires a higher dose of radiation and is more expensive because of the additional hardware needed.
“With traditional CT, you take a grayscale image, but with dual-energy CT you take an image with two colors,” said Ge Wang, an endowed professor of biomedical engineering at Rensselaer Polytechnic Institute. “With deep learning, we try to use the standard machine to do the job of dual-energy CT imaging.”
The research team set out to produce these more complex images using single-spectrum CT data and a deep learning model. The group used images produced by dual-energy CT to train their model and found that it was able to produce high-quality approximations with a relative error of less than two percent.
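The "relative error of less than two percent" reported for the model's approximations can be made concrete. A common way to compute such a figure is the ratio of the norm of the difference between the predicted and ground-truth images to the norm of the ground truth; the exact metric used in the study may differ, so the sketch below is illustrative, with hypothetical image values:

```python
import numpy as np

def relative_error(predicted, reference):
    """Relative error between a model's approximation and the
    ground-truth image, as a norm ratio (Frobenius norm)."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.linalg.norm(predicted - reference) / np.linalg.norm(reference)

# Hypothetical example: a tiny "ground-truth" dual-energy image and an
# approximation that deviates from it by a uniform 1%
reference = np.array([[100.0, 102.0],
                      [ 98.0, 101.0]])
predicted = reference * 1.01

err = relative_error(predicted, reference)
print(f"relative error: {err:.4f}")  # 0.0100, i.e. under two percent
```

A uniform 1% deviation yields a relative error of 0.01, comfortably inside the two-percent bound the researchers report.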
“We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis,” said Wang, who is also the director of the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer.
The new deep learning method could lead to improved diagnostics and higher-quality, less expensive medical imaging methods.
“Professor Wang and his team’s expertise in bioimaging is giving physicians and surgeons ‘new eyes’ in diagnosing and treating disease,” said Deepak Vashishth, director of CBIS. “This research effort is a prime example of the partnership needed to personalize and solve persistent human health challenges.”
Deep learning continues to demonstrate its potential in the field of medical imaging. Researchers at the Moffitt Cancer Center recently showed that combining deep learning with medical imaging techniques could help determine the best treatment options for patients with non-small cell lung cancer.
The model was able to accurately classify the mutation status of the epidermal growth factor receptor (EGFR) gene, which is commonly mutated in patients with non-small cell lung cancer.
“Prior studies have utilized radiomics as a noninvasive approach to predict EGFR mutation,” said Wei Mu, PhD, study first author and postdoctoral fellow in the Cancer Physiology Department.
“However, compared to other studies, our analysis yielded among the highest accuracy to predict EGFR and had many advantages, including training, validating and testing the deep learning score with multiple cohorts from four institutions, which increased its generalizability.”
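Validating a predictive score on held-out cohorts from other institutions, as Mu describes, typically means computing a discrimination metric such as the area under the ROC curve (AUC) separately for each cohort. The sketch below is a minimal illustration of that idea, not the study's actual pipeline; the cohort names, scores, and labels are all hypothetical:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-based (Mann-Whitney) formula:
    the fraction of (positive, negative) pairs the score orders correctly,
    with ties counting half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical held-out cohorts: (model scores, EGFR mutation labels)
cohorts = {
    "site_A": ([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]),
    "site_B": ([0.7, 0.65, 0.6, 0.1], [1, 0, 1, 0]),
}
results = {name: auc(s, y) for name, (s, y) in cohorts.items()}
print(results)  # {'site_A': 1.0, 'site_B': 0.75}
```

Consistent AUCs across independent cohorts are the usual evidence for the kind of generalizability the quote refers to.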
A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) also recently set out to improve medical imaging with AI techniques. The group developed a machine learning tool that can detect the severity of excess fluid in patients’ lungs, a common warning sign of acute heart failure.
“By learning the association between images and their corresponding reports, the method has the potential for a new way of automatic report generation from the detection of image-driven findings,” said Tanveer Syeda-Mahmood, a researcher not involved in the project who serves as chief scientist for IBM’s Medical Sieve Radiology Grand Challenge.
“Of course, further experiments would have to be done for this to be broadly applicable to other findings and their fine-grained descriptors.”