New framework for super-resolution ultrasound
Researchers at the Beckman Institute for Advanced Science and Technology used deep learning to develop a new framework for super-resolution ultrasound.
Credit: Xi Chen; Matthew R. Lowerison; Zhijie Dong; Nathiya Vaithiyalingam Chandra Sekaran; Daniel A. Llano; Pengfei Song.
Traditional super-resolution ultrasound techniques use microbubbles: tiny spheres of gas encased in a protein or lipid shell. Microbubbles serve as a contrast agent, meaning they can be injected into a blood vessel to increase the clarity of an ultrasound image.
Conventional ultrasound has been commonplace for over 50 years. Super-resolution ultrasound, developed in the last decade, provides a much clearer picture than the traditional method, but it has introduced new challenges: although useful for research and diagnostics, it is much slower to process.
“Conventional imaging cannot differentiate vessels that are too close to each other,” said Pengfei Song, an assistant professor of electrical and computer engineering at the University of Illinois Urbana-Champaign and an author on the paper. “With super resolution imaging, you can actually make that distinction and tell if there are two vessels or a single vessel. You can tell the shape of the vessels, and sometimes there are diagnostic implications, like with cancerous tumors. But clinical translation has been difficult because nobody’s going to wait in a hospital for hours for these images to get processed.
“Ultrasound is expected to be a real-time imaging modality.”
This challenge prompted Song to team up with fellow Beckman researcher Dr. Daniel Llano, an associate professor of molecular and integrative physiology and a neurologist at Carle Foundation Hospital. Together, the researchers tested a new approach to super-resolution ultrasound technology.
Their paper appears in IEEE Transactions on Medical Imaging.
“As engineers, we develop tools that we think will be useful for researchers, but sometimes we miss the mark,” Song said. “This is a case where the user of the technology, like Professor Llano, tells us how we have to improve the technology: make it faster.”
Traditional super-resolution ultrasound techniques produce crisp, vibrant images, but the process is lengthy because it requires a very low concentration of microbubbles. For researchers like Llano, every minute counts.
In response to Llano’s feedback, the Song group returned to the drawing board and decided to revamp the super-resolution technology, forgoing microbubble localization and tracking entirely. Instead of evaluating data frame by frame, the researchers used a holistic approach and evaluated the information spatiotemporally — over space and time. Using an artificial intelligence network, the technology was able to determine the speed of blood flow and convert a blurred image to a clearer one with a high resolution.
“In conventional super-resolution ultrasound, the signal is very blurred out,” said Matt Lowerison, a Beckman Institute Postdoctoral Fellow and an author on the paper. “We have to try and find the center of this very blurred-out point to produce these super localized dots on an image. And then over time, we can slowly start to accumulate these dots into a super-resolved image. But the big limitation is that it takes forever. Our approach, which uses a deep learning network, avoids this whole very expensive process and just produces a super resolution image without having to worry about any of this explicit identification of microbubbles.”
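The paper describes a long short-term memory (LSTM) neural network, but the article does not detail its architecture. As a rough illustration only, the following minimal PyTorch sketch shows the general idea: a network that consumes each pixel's intensity time series from a stack of ultrasound frames and regresses a per-pixel velocity vector, with no explicit microbubble localization or tracking step. The class name, layer sizes, and input format here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' published architecture): an LSTM that
# maps a spatiotemporal stack of ultrasound frames to per-pixel flow-velocity
# estimates, skipping explicit microbubble localization and tracking.
import torch
import torch.nn as nn

class PixelwiseLSTMVelocimetry(nn.Module):
    """Treat each pixel's intensity time series as an LSTM input sequence and
    regress a 2-D velocity vector (vx, vy) for that pixel."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # outputs (vx, vy) per pixel

    def forward(self, frames):
        # frames: (batch, time, height, width) stack of beamformed ultrasound frames
        b, t, h, w = frames.shape
        # Rearrange so every pixel becomes an independent length-t sequence
        seq = frames.permute(0, 2, 3, 1).reshape(b * h * w, t, 1)
        _, (hidden, _) = self.lstm(seq)   # final hidden state summarizes the time series
        vel = self.head(hidden[-1])       # (b*h*w, 2) velocity components
        return vel.reshape(b, h, w, 2)    # per-pixel velocity map

# Toy usage with stand-in data: 8 frames of a 32x32 region
model = PixelwiseLSTMVelocimetry()
frames = torch.randn(1, 8, 32, 32)
velocity_map = model(frames)
speed = velocity_map.norm(dim=-1)         # blood-flow speed per pixel
print(velocity_map.shape, speed.shape)
```

In this toy setup the velocity map carries both speed (the vector magnitude) and direction (the vector orientation), which loosely mirrors the capability the researchers describe; the real network, training data, and loss are defined in the paper itself.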
Because conventional super-resolution ultrasound is so slow, the end product is effectively a still image. With the researchers' new method, blood flow can be visualized in real time.
“To the best of our knowledge, this is the first paper that achieved direct calculation of the blood flow velocity, both speed and direction, using raw ultrasound data without any explicit microbubble localization or tracking,” Song said.
As a result, processing speeds have been reduced from minutes to seconds, and post-processing can be done in real time. The researchers hope that speeding up the higher-resolution technology will make it a useful option for clinicians.
“We’ve done human imaging before with conventional imaging, but it’s challenging,” Song said. “We think that this technique has the potential for super resolution to be finally used in a clinical setting.”
Collaboration between the two research groups was made possible through the shared environment at Beckman.
“Professor Llano’s home department is molecular and cellular biology, so without Beckman this collaboration would not have been possible, because we need to have a common lab space,” Song said. “It’s really the common physical space that made this happen.”
Editor’s note:
The paper titled “Localization free super-resolution microbubble velocimetry using a long short-term memory neural network” can be accessed online at https://doi.org/10.1109/TMI.2023.3251197.
Additional authors of the paper included Xi Chen and Zhijie Dong of the department of electrical and computer engineering and Nathiya Vaithiyalingam Chandra Sekaran of the department of molecular and cellular biology. Sekaran is a current Beckman Institute Postdoctoral Fellow.
Journal: IEEE Transactions on Medical Imaging
DOI: 10.1109/TMI.2023.3251197
Method of Research: Imaging analysis
Subject of Research: Animal tissue samples
Article Title: Localization free super-resolution microbubble velocimetry using a long short-term memory neural network
Article Publication Date: 1-Mar-2023