When a patient climbs into an MRI scanner, the machine peers inside their body to reveal the complex anatomy within, like the ligaments and tendons in a knee. But in January, before COVID struck, some patients who needed their knee scanned at NYU Langone Health started getting intentionally scanned twice. A scan of a typical human knee takes around 10 minutes, and these subjects, who had consented to taking part in a study, had their joint scanned at the normal speed as well as about twice as fast, with the help of AI. After the coronavirus interruption, the work resumed using one scanner at the hospital.

That initiative is part of an ongoing effort at the medical center, in partnership with Facebook Artificial Intelligence Research, to see if running an MRI machine faster (and grabbing less data in the process) can produce images that are just as good as those made the normal way. Reducing an approximately 10-minute knee scan to about 5 minutes, or shortening the scan time for other body areas, has obvious benefits: A patient could spend less time in a clanging tube (a procedure that demands they hold as still as possible), and hospitals could do more with the expensive, limited hardware they have.

To make this possible, radiologists and computer scientists need to employ artificial intelligence. If they were to run an MRI machine twice as fast as usual and then try to spin the data they collected into an image with the normal method, the result would be unusably bad. Enter AI: Using machine learning to analyze that comparatively scant data and then create a picture produces something that is indeed usable, and that in fact looks better to some radiologists' eyes than the alternative.

The project reported good news last month, when the researchers published the results of another study that aimed to determine whether radiologists could tell the difference between typical MRI images and those made with AI, and whether those scans were diagnostically interchangeable. The results appeared in the American Journal of Roentgenology; last year, Popular Science took a deep, exclusive dive into the process, shadowing a physician who took part in the experiment.

What the study showed was encouraging. Dr. Michael Recht, the first author on the published study and the chair of the radiology department at NYU Langone Health, says that the images created by artificial intelligence (from a slimmer amount of data than is usually gathered) held up well compared to images made via the normal process. “There is no difference in how people read the scans, whether they’re reading the accelerated or the clinical [traditional] sequences,” Recht says. “They’re able to make the diagnosis equally well on either of the scans.”

In fact, he says he would rely on an AI-generated image of a patient’s knee to arrive at a diagnosis—a conclusion that a surgeon may then use when deciding whether or not to operate. “The sequences really are interchangeable, and I’m very, very comfortable using those sequences to make a diagnosis,” he says. Of the six radiologists in the study, only one of them was able to discern whether the scans were made the normal way or with AI.

In this recently published study, patients were not actually scanned twice. Instead, the team took MRI scans of patients' knees, simulated what a faster scan would have captured by stripping out some of the raw data, and then used AI to knit the remaining data into a complete picture.
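For readers curious what "stripping out some of the raw data" means in practice: MRI machines record frequency-domain data (known as k-space), and scanning faster amounts to collecting fewer lines of it. The sketch below is a minimal, hypothetical illustration of that retrospective undersampling idea using NumPy; it is not the NYU/Facebook pipeline, and the specific sampling pattern (keep the low-frequency center, randomly drop other lines) is an assumption chosen for clarity.

```python
import numpy as np

def undersample_kspace(image, keep_fraction=0.5, seed=0):
    """Simulate a faster scan by discarding a fraction of k-space lines.

    Starting from a fully sampled image, compute its k-space (via a 2D
    FFT), zero out some of the frequency-encoding lines, and return both
    the incomplete k-space and a naive inverse-FFT reconstruction. The
    naive result shows the aliasing a learned model would need to undo.
    """
    kspace = np.fft.fftshift(np.fft.fft2(image))
    rng = np.random.default_rng(seed)
    n_rows = kspace.shape[0]
    # Always keep the low-frequency center rows (most image energy),
    # and randomly sample the rest -- a common undersampling pattern.
    center = n_rows // 8
    mask = rng.random(n_rows) < keep_fraction
    mask[n_rows // 2 - center : n_rows // 2 + center] = True
    undersampled = kspace * mask[:, None]
    # Naive reconstruction: inverse FFT of the incomplete data.
    # This is the "unusably bad" image that machine learning replaces
    # with a learned, artifact-free reconstruction.
    naive = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return undersampled, naive
```

In a real training setup, pairs of (undersampled k-space, fully sampled image) like these would be fed to a neural network so it learns to fill in what the shortened scan never measured.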

(First published in Popular Science in September 2020)