At an international health conference this week, scientists with the University of Washington and Microsoft Research will virtually present new technology that allows medical providers to remotely check a patient’s pulse and heart rate.
The tool uses the camera on a smartphone or computer to capture video of a person’s face. That video is analyzed to measure changes in the light reflected by the patient’s skin, which correlate with the changes in blood volume and motion caused by blood circulation.
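The general idea behind this kind of camera-based pulse sensing can be sketched in a few lines: average the brightness of the face region in each frame, then find the dominant frequency of that signal within the range of plausible heart rates. This is a minimal illustration of the concept, not the researchers’ actual method; the function name and parameters are hypothetical.

```python
import numpy as np

def estimate_heart_rate(brightness, fps, lo_bpm=40, hi_bpm=200):
    """Estimate pulse (beats per minute) from per-frame skin brightness.

    `brightness` is a 1-D array holding the mean pixel intensity of the
    face region in each video frame; its periodic component tracks
    blood-volume changes under the skin.
    """
    signal = brightness - np.mean(brightness)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))             # frequency magnitudes
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # Hz per FFT bin
    bpm = freqs * 60.0
    # Only consider frequencies corresponding to plausible heart rates.
    band = (bpm >= lo_bpm) & (bpm <= hi_bpm)
    peak = np.argmax(spectrum[band])
    return bpm[band][peak]

# Synthetic check: a 70 BPM pulse sampled at 30 fps for 18 seconds.
fps, seconds = 30, 18
t = np.arange(fps * seconds) / fps
fake_brightness = 100 + 0.5 * np.sin(2 * np.pi * (70 / 60) * t)
print(round(estimate_heart_rate(fake_brightness, fps)))  # prints 70
```

A real system would first need to locate the face, pick stable skin pixels, and filter out motion and lighting changes, which is where most of the difficulty lies.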
The UW and Microsoft researchers used machine learning and three datasets of videos and health stats to train their system. And as has been the case with various image- and video-related machine learning projects, the technology performed less accurately among people of different races. In this case, the challenge is that lighter skin is more reflective, while darker skin absorbs more light, and the tool needs to perceive subtle changes in those reflections.
“Every person is different. So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in,” said Xin Liu, lead author of the research and a UW doctoral student at the Paul G. Allen School of Computer Science & Engineering.
The researchers came up with a fix: the system requires the user to record 18 seconds of video to calibrate the device before it calculates pulse and heart rate. The calibration phase can adjust for skin tone, the patient’s age (the thin, young skin of babies and kids behaves differently from the aged skin of an older user), facial hair, background, lighting and other factors. The scientists are still working to improve performance, but the strategy greatly increased the system’s accuracy.
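One common way to personalize a shared model with a short calibration clip is to run a few gradient steps on just that user’s data, nudging pretrained weights toward the individual’s skin tone, lighting and environment. The sketch below illustrates that pattern with a simple linear model; it is an illustrative stand-in under assumed names, not the team’s actual architecture.

```python
import numpy as np

def fine_tune(weights, calib_x, calib_y, lr=0.1, steps=200):
    """Adapt a pretrained linear pulse predictor to one user.

    `calib_x` / `calib_y` are features and reference readings from the
    short calibration clip. A handful of gradient-descent steps on this
    tiny dataset shifts the shared weights toward the user's unique
    physiological signature without retraining from scratch.
    """
    w = np.array(weights, dtype=float)
    for _ in range(steps):
        pred = calib_x @ w
        grad = calib_x.T @ (pred - calib_y) / len(calib_y)  # MSE gradient
        w -= lr * grad
    return w

# Synthetic check: recover a "user-specific" model from calibration data.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 3.0])        # this user's ideal weights
calib_x = rng.normal(size=(100, 4))              # calibration-clip features
calib_y = calib_x @ true_w                       # reference readings
adapted = fine_tune(np.zeros(4), calib_x, calib_y)
```

Because only a small adjustment is learned per user, this style of adaptation works even when the original training set is modest.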
The use of calibration to fine-tune performance means that machine learning can be implemented with smaller datasets that might not be perfectly representative of a population.
That’s good news, said Daniel McDuff, one of the co-authors and a principal researcher at Microsoft Research. Smaller datasets better preserve privacy, since fewer people need to contribute information. They also democratize machine learning, making it accessible to a wider range of developers. And they mean that no single entity is left holding massive amounts of information captured in global datasets.
“Personalization is always going to be necessary for the best performance,” McDuff said.
The system also protects private information because it can be run entirely on a phone or other device, keeping the data out of the cloud.
The researchers’ next step, already in the works, is to test the technology in a clinical setting.
Shwetak Patel, a professor in the Allen School and the Department of Electrical & Computer Engineering, was a senior author of the UW research. Patel has been working for many years on technology that turns ordinary smartphones into health monitoring devices. He is the co-founder of Senosis Health, a UW spinoff that was acquired by Google.
The research was funded by the Bill & Melinda Gates Foundation, Google and the UW.
As digital health rides a COVID-fueled wave of popularity, stoked with millions of dollars in new investments, researchers are hustling to develop tech tools that can deliver more robust healthcare in remote settings.
Developments that turn ordinary tech devices into tools for healthcare are well timed to meet the growing demand for telehealth. Amazon last month said that it will expand its Amazon Care remote health service to non-employees, first in Washington state and then nationwide later this year. Seattle telemedicine startup 98point6 raised $118 million in October as its membership service grows quickly amid the pandemic.
A separate group of UW researchers revealed technology last month that uses machine learning algorithms to turn smart speakers into sensitive medical devices that can detect irregular heartbeats.