
New AI method detects uncertainty in medical images

In biomedicine, segmentation involves annotating the pixels of an important structure in a medical image, such as an organ or a cell. Artificial intelligence models can assist clinicians by highlighting pixels that may show signs of a particular disease or anomaly.

However, these models typically provide only one answer, while the problem of segmenting medical images is often anything but black and white. Five expert human annotators might provide five different segmentations, perhaps disagreeing on the existence or extent of the boundaries of a nodule in a lung CT image.

“Having options can help in decision-making. Even just seeing that there is uncertainty in a medical image can influence someone's decisions, so it is important to take this uncertainty into account,” says Marianne Rakic, a computer science PhD candidate at MIT.

Rakic is the lead author of a paper, with others at MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital, that introduces a new AI tool that can capture uncertainty in a medical image.

Known as Tyche (named after the Greek divinity of chance), the system provides multiple plausible segmentations, each highlighting slightly different areas of a medical image. A user can specify how many options Tyche outputs and select the most suitable one for their purpose.

Importantly, Tyche can handle new segmentation tasks without being retrained. Training is a data-intensive process that involves showing a model many examples and requires extensive machine-learning expertise.

Because no retraining is required, Tyche could be easier for clinicians and biomedical researchers to use than some other methods. It could be applied “out of the box” for a variety of tasks, from identifying lesions in a lung X-ray to pinpointing anomalies in a brain MRI.

Ultimately, this system could improve diagnoses or aid biomedical research by calling attention to potentially crucial information that other AI tools might miss.

“Ambiguity has been understudied. If your model completely misses a nodule that three experts say is there and two experts say is not, you should probably pay attention to that,” adds senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH and a research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

Her co-authors include Hallee Wong, a graduate student in electrical engineering and computer science; Jose Javier Gonzalez Ortiz PhD '23; Beth Cimini, associate director of bioimage analysis at the Broad Institute; and John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering. Rakic will present Tyche at the IEEE Conference on Computer Vision and Pattern Recognition, where it has been selected as a highlight.

Addressing ambiguity

AI systems for medical image segmentation typically use neural networks. Loosely based on the human brain, neural networks are machine learning models that consist of many interconnected layers of nodes, or neurons, that process data.

After speaking with collaborators at the Broad Institute and MGH who use these systems, the researchers found that two main problems limit their effectiveness. The models cannot capture uncertainty, and they must be retrained for even a slightly different segmentation task.

Some methods try to overcome one pitfall or the other, but tackling both problems with a single solution has proven especially difficult, Rakic says.

“If you want to take ambiguity into account, you often have to use an extremely complicated model. With the method we propose, our goal is to make it easy to use with a relatively small model so that it can make predictions quickly,” she says.

The researchers built Tyche by modifying a simple neural network architecture.

A user first feeds Tyche a few examples that demonstrate the segmentation task. The examples might include several images of lesions in a cardiac MRI that have been segmented by different human experts, allowing the model to learn the task and see that ambiguity exists.

The researchers found that just 16 example images, called a “context set,” are enough for the model to make good predictions, though there is no limit to the number of examples one can use. The context set allows Tyche to solve new tasks without retraining.
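As a rough illustration of what assembling such a context set might look like in code (the function, data, and sampling scheme here are hypothetical stand-ins, not Tyche's actual interface):

```python
import numpy as np

def build_context_set(images, masks, size=16):
    """Pair up to `size` example images with their expert-drawn masks.

    The resulting (image, mask) pairs would be handed to an in-context
    segmentation model alongside a new target image -- no retraining.
    """
    assert len(images) == len(masks)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), size=min(size, len(images)), replace=False)
    return [(images[i], masks[i]) for i in idx]

# Toy data: 20 grayscale 64x64 "scans" with binary expert masks.
images = np.random.rand(20, 64, 64)
masks = (images > 0.5).astype(np.uint8)

context = build_context_set(images, masks, size=16)
print(len(context))  # 16
```

Because the task is conveyed entirely by the examples, swapping in a different context set (say, lung nodules instead of cardiac lesions) changes the task without touching the model's weights.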

To allow Tyche to capture uncertainty, the researchers modified the neural network to output multiple predictions from a single medical image input and the context set. They adjusted the network's layers so that the candidate segmentations produced at each step can “talk” to each other and to the examples in the context set as data moves from layer to layer.

This allows the model to ensure that the candidate segmentations are all a bit different while still solving the task.
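One simple way to sketch this cross-candidate communication (the mean-summary scheme below is an illustrative assumption, not the paper's exact mechanism) is to append a shared summary of all candidates to each candidate's features, so each candidate "sees" what the others are doing:

```python
import numpy as np

def candidate_interaction(candidates):
    """Let K candidate feature maps exchange information.

    candidates: array of shape (K, C, H, W), one feature map per
    candidate segmentation. Each candidate is concatenated with a
    summary (here, the mean) over all candidates, giving later layers
    the information they need to keep candidates distinct.
    """
    summary = candidates.mean(axis=0, keepdims=True)          # (1, C, H, W)
    shared = np.broadcast_to(summary, candidates.shape)       # (K, C, H, W)
    return np.concatenate([candidates, shared], axis=1)       # (K, 2C, H, W)

feats = np.random.rand(5, 8, 16, 16)   # 5 candidates, 8 channels each
out = candidate_interaction(feats)
print(out.shape)  # (5, 16, 16, 16)
```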

“It's like rolling dice. If your model can roll a two, a three, or a four, but doesn't know you already have a two and a four, then either one might come up again,” she says.

They also modified the training process so that the model is rewarded for maximizing the quality of its best prediction.
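A loss that rewards only the best candidate is often called a winner-takes-all objective; a minimal sketch of that idea (the mean-squared-error choice here is an assumption for illustration) looks like this:

```python
import numpy as np

def best_candidate_loss(candidates, target):
    """Winner-takes-all style loss over K candidate masks.

    candidates: (K, H, W) predicted masks in [0, 1]; target: (H, W).
    Only the candidate closest to the ground truth contributes, so the
    network is pushed to make its *best* prediction good rather than
    averaging all candidates toward a blurry compromise.
    """
    per_candidate = np.mean((candidates - target) ** 2, axis=(1, 2))
    return per_candidate.min()

preds = np.stack([np.zeros((4, 4)), np.full((4, 4), 0.5), np.ones((4, 4))])
target = np.ones((4, 4))
print(best_candidate_loss(preds, target))  # 0.0 -- the third candidate matches exactly
```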

If the user asks for five predictions, they can see all five medical image segmentations Tyche produces, even though one may be better than the others.

The researchers also developed a version of Tyche that can be used with an existing, pre-trained model for medical image segmentation. In this case, Tyche enables the model to output multiple candidates by making slight transformations to the input images.
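A bare-bones sketch of that variant (the flip transformations and the stand-in thresholding model are assumptions; the paper's transformations may differ) would run a frozen single-output model on several perturbed copies of the input and undo each perturbation:

```python
import numpy as np

def multi_candidate(model, image):
    """Produce several candidate masks from a deterministic model.

    Each (transform, undo) pair perturbs the input, runs the frozen
    model, and maps the prediction back to the original frame, yielding
    one candidate segmentation per transformation.
    """
    transforms = [
        (lambda x: x, lambda y: y),   # identity
        (np.fliplr, np.fliplr),       # horizontal flip and its inverse
        (np.flipud, np.flipud),       # vertical flip and its inverse
    ]
    return [undo(model(t(image))) for t, undo in transforms]

# Stand-in "pre-trained model": a simple intensity threshold.
model = lambda img: (img > 0.5).astype(np.uint8)
cands = multi_candidate(model, np.random.rand(32, 32))
print(len(cands))  # 3
```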

Better and faster predictions

When the researchers tested Tyche on datasets of annotated medical images, they found that its predictions captured the diversity of human annotators and that its best predictions were better than those of any of the baseline models. Tyche was also faster than most models.

“Outputting multiple candidates and ensuring they are different from one another really gives you an edge,” Rakic says.

The researchers also found that Tyche could outperform more complex models that were trained on large, specialized datasets.

For future work, they plan to try a more flexible context set, perhaps including text or multiple types of images. In addition, they want to explore methods that could improve Tyche's worst predictions and enhance the system so it can recommend the best segmentation candidates.

This research is funded, in part, by the National Institutes of Health, the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and Quanta Computer.
