Although X-ray vision sounds like a thing of the faraway future, the superpower is already a reality at MIT, and it's only getting stronger.

Last year, the MIT Computer Science and Artificial Intelligence Laboratory unveiled a Wi-Fi-reliant system aptly named “Wi-Vi.” The wireless technology made it possible for mere mortals to track moving objects behind a wall, though at the time it had very low resolution. The researchers’ latest version, however, can detect motions as subtle as a baby’s breathing, and with 99 percent accuracy, no less.

“It has traditionally been very difficult to capture such minute motions that occur at the rate of mere millimeters per second,” said MIT Professor Dina Katabi, director of NETMIT, in a statement. “Being able to do so with a low-cost, accessible technology opens up the possibilities for people to be able to track their vital signs on their own.”

The update, also capable of measuring heart rate, is expected to have implications for everything from health-tracking apps and baby monitors to the military:

As the MIT CSAIL team notes, today’s baby monitors cannot track a baby’s breathing remotely. Their device, by contrast, monitors both breathing and heart rate using an embedded wireless sensor that captures low-power wireless reflections.

On the other end of the spectrum, Wi-Vi could help military or law enforcement officials determine how many people are in a room, as well as how they are moving. Armed with that information, officers could avoid an ambush, and emergency responders could more easily find survivors inside a burning building.

Wi-Vi sends two Wi-Fi radio signals through a barrier and then measures the way those signals bounce back. “As the signal is transmitted at a wall, a portion of the signal penetrates through,” MIT CSAIL explains, “reflecting off a person on the other side.” To monitor breathing, the researchers added a metric that approximates the chest’s subtle motion, then observed and amplified its changes to distinguish the breathing.
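Stripped to its essentials, the idea is: capture a trace of the reflected signal, subtract the static reflection from walls and furniture, and then pick out the tiny periodic component left behind. The sketch below simulates that idea with a brute-force frequency scan over the plausible breathing band. Every detail here — the sampling rate, the signal model, the band limits — is an illustrative assumption, not a description of MIT’s actual algorithm.

```python
import math
import random

def estimate_breathing_rate(samples, fs):
    """Find the strongest periodic component (in Hz) in a reflected-signal
    trace, scanning only the plausible breathing band (~6-60 breaths/min)."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]   # drop the static (DC) reflection
    best_freq, best_power = 0.0, 0.0
    k = 1
    while k * fs / n <= 1.0:                 # upper band edge: 1 Hz
        f = k * fs / n                       # scan at the DFT bin spacing
        if f >= 0.1:                         # lower band edge: 0.1 Hz
            re = sum(c * math.cos(2 * math.pi * f * i / fs)
                     for i, c in enumerate(centered))
            im = sum(c * math.sin(2 * math.pi * f * i / fs)
                     for i, c in enumerate(centered))
            power = re * re + im * im
            if power > best_power:
                best_freq, best_power = f, power
        k += 1
    return best_freq

# Simulate 40 s of a reflection: a large static component, a small periodic
# ripple from chest motion, and measurement noise (all values hypothetical).
random.seed(0)
fs = 20.0                                    # samples per second
breathing_hz = 0.25                          # 15 breaths per minute
trace = [1.0
         + 0.05 * math.sin(2 * math.pi * breathing_hz * i / fs)
         + 0.01 * random.gauss(0, 1)
         for i in range(int(40 * fs))]

print(round(estimate_breathing_rate(trace, fs) * 60))   # prints 15
```

The key point the sketch illustrates is why the static reflection must be removed first: the wall’s reflection dwarfs the chest’s millimeter-scale motion, so it is the *change* in the signal, not its absolute strength, that carries the breathing information.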

The team, composed of Katabi, Fadel Adib and Zachary Kabelac, will continue strengthening the system. Up next? Detecting body silhouettes and emotions, or rather, turning what you see on the silver screen into reality.