UD engineer explores use of computer vision to enable better health care
Google Glass Explorers are using the novel wearable computers for applications ranging from wildlife preservation and museum tours to precision sports training and on-the-go language translation.
For the University of Delaware’s Jingyi Yu, Google Glass is one more device in a “smart health” toolkit that has the potential to profoundly change medical training, diagnostics and treatment.
Director of the Graphics and Imaging Lab in UD’s Department of Computer and Information Sciences, Yu jumped at the opportunity to be a Glass Explorer, joining a select group of people invited “to make, to tinker, to create, to shape, and to share through Glass.”
Yu is now exploring the use of this innovative yet affordable technology to design and fit prosthetics.
For the past several years, he has been “tinkering” with mainstream 3D imaging technologies for health sciences applications, including the use of Microsoft Kinect to provide remote surgical training. Initially launched as an add-on to the Xbox gaming system, Kinect is now changing the game in a broad range of arenas by supporting movement, voice, and gesture recognition.
Google Glass has the potential to change the game even more.
Yu explains that patients are typically measured for lower-extremity braces or orthotics by a technician using a coordinate measuring machine that “pinches” the leg at 20 to 30 points identified by a physician. The machines can cost $50,000 or more, and the procedure takes about a half hour.
Yu is not a medical professional, but when he puts on his orange-rimmed $1,500 Google Glass, he gets the job done in 20 seconds.
‘OK, Glass, record a video’
With the patient sitting on an exam table, Yu starts by asking Glass to record a video. He then moves his head around the patient’s ankle-foot area, capturing the limb from many viewpoints. Computer vision algorithms use the captured imagery to build a 3D model of the lower extremity, which is ultimately sent to a 3D printer to fabricate the custom device.
“Patients and their families can actually do this at home and upload the files to a server for the 3D reconstruction,” Yu says. “Then the model can be sent off to the printer or even back to the patient if he or she has access to a 3D printer. With this approach, we’re eliminating the need for an elaborate laboratory setup, expensive devices and a complex procedure.”
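The article does not detail Yu’s algorithms, but the general recipe behind this kind of reconstruction, matching image features across frames of a moving camera and triangulating them into 3D points, can be sketched in a few lines of Python with OpenCV. Everything below, from the camera intrinsics to the file name, is an illustrative assumption rather than the lab’s actual pipeline; a production system would use many more views, bundle adjustment, and dense surface meshing before anything reached a 3D printer.

```python
# Illustrative sketch only: two-view sparse reconstruction with OpenCV.
import cv2
import numpy as np

# Hypothetical camera intrinsics; a real system would calibrate the Glass camera.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def sparse_points_from_video(path):
    cap = cv2.VideoCapture(path)
    ok1, frame1 = cap.read()
    # Skip ahead so the two views have a usable baseline.
    for _ in range(15):
        ok2, frame2 = cap.read()
    cap.release()
    if not (ok1 and ok2):
        raise IOError("could not read two frames from " + path)

    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

    # Detect and match ORB features between the two views.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the relative camera motion; keep geometrically consistent matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    inliers = mask.ravel() > 0

    # Triangulate matched points into 3D (up to an unknown global scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud

if __name__ == "__main__":
    cloud = sparse_points_from_video("ankle_scan.mp4")  # placeholder file name
    print(cloud.shape[0], "reconstructed 3D points")
```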
Steven Stanhope, who is collaborating with Yu on the project, sees the work as a great example of smart health. Stanhope is director of the BADER Consortium, which is advancing and strengthening evidence-based orthopedic rehabilitation care to improve quality of life for wounded warriors.
“We see the field of orthotics and prosthetics as an absolutely ideal setting for these types of technologies because to attain an optimal level of function for a person, you need a personalized device,” says Stanhope, a professor in the Department of Kinesiology and Applied Physiology.
“That means the size and shape of the device as well as the functional characteristics — how stiff it is, how springy it is, where it likes to bend and where it doesn’t like to bend, and how it’s aligned — should all be customized,” he adds.
“Historically, that’s all been done by hand by clinicians who are remarkably skilled. What we’re doing is looking at smart systems or smart medical approaches to do that in very objective and automatic ways.”
‘OK, Glass, what’s the future look like?’
Yu explains that with its front-facing camera and projection system, Glass lets the user see both the real and virtual worlds at the same time.
“This opens up a vast array of potential applications,” he says. “For example, in a surgical setting, CT and MRI scans can be superimposed onto the Glass image, enabling the user to ‘see beneath the skin’ and eliminating the need to look back and forth between the surgical site and a screen displaying the scan.”
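As a toy illustration of the overlay idea, and not Glass’s actual display API, blending a pre-registered scan slice into a camera frame takes a single OpenCV call. The file names are placeholders, and the genuinely hard problem in surgical augmented reality, registering the scan to the patient, is left aside here.

```python
# Toy sketch of superimposing a scan onto a camera view (assumed inputs).
import cv2

camera_view = cv2.imread("surgical_site.png")       # hypothetical live frame
scan_slice = cv2.imread("registered_ct_slice.png")  # hypothetical aligned CT slice

# Resize the slice to match the frame, then alpha-blend so both stay visible.
scan_slice = cv2.resize(scan_slice, (camera_view.shape[1], camera_view.shape[0]))
overlay = cv2.addWeighted(camera_view, 0.7, scan_slice, 0.3, 0)
cv2.imwrite("augmented_view.png", overlay)
```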
Yu recently took Glass in a new direction when he initiated a collaboration with UD’s Cole Galloway to use the technology to analyze the motion of children for early detection of disabilities and developmental delays.
“Our goal is to automatically analyze how much time a toddler spends moving,” says Galloway, who leads the GoBabyGo project. “This is typically a very laborious and time-consuming process in which an expert observes hours of video and hand-marks it for analysis.”
“Glass could be a real game changer here by speeding up the process and giving us very objective data analysis,” he adds. “The earlier we can detect delays and disabilities, the better our chances of intervening to help kids develop physically, socially, emotionally, and cognitively.”
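One hedged illustration of how such automation might work, though not necessarily the method Yu and Galloway are building, is simple frame differencing: estimate the fraction of a recording in which something in the scene is moving. The thresholds and file name below are assumptions for the sketch; a deployed system would need person tracking to ignore camera shake and other people in view.

```python
# Minimal sketch: fraction of video frames containing motion, via frame differencing.
import cv2

def fraction_of_time_moving(path, diff_threshold=25, pixel_fraction=0.01):
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read " + path)
    prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    moving_frames, total_frames = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        # A frame counts as "moving" if enough pixels changed noticeably.
        changed = cv2.absdiff(gray, prev) > diff_threshold
        if changed.mean() > pixel_fraction:
            moving_frames += 1
        total_frames += 1
        prev = gray
    cap.release()
    return moving_frames / max(total_frames, 1)

if __name__ == "__main__":
    frac = fraction_of_time_moving("toddler_session.mp4")  # placeholder file name
    print("motion detected in {:.0%} of frames".format(frac))
```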
Yu says Google has placed no restrictions on the use of Glass. “They want us to be creative,” he says.
And that’s exactly what Yu is doing.
About Jingyi Yu
Jingyi Yu is director of the Graphics and Imaging Lab in UD’s Department of Computer and Information Sciences, with joint appointments in biomedical engineering and electrical and computer engineering.
Yu has received an Air Force Young Investigator Award, a National Science Foundation Faculty Early Career Development Award, and UD’s College of Engineering Outstanding Junior Faculty Award.
His work on the use of Microsoft Kinect for remote surgical training was funded by the Delaware INBRE Program.
Article by Diane Kukich