Apple adds movement disorder API, plus speech, hearing, and vision tests, to ResearchKit framework

By Jonah Comstock
02:57 pm

In addition to a Health Records API and various fitness updates to the Apple Watch, Apple made one more health announcement last week at WWDC: a slew of upgrades to ResearchKit, including hearing, vision, and speech tests; an updated UI; and a new research API for monitoring Parkinson's tremors and dyskinesia.

"One of the identifiable symptoms of Parkinson’s is a tremor and this API monitors for tremor at rest, characterized by a shaking or a trembling of the body when somebody is not intending to move," Gabriel Blanco, an Apple core motion engineer, said in a talk Monday. "Now there are treatments, including medications, that can help control the symptoms of Parkinson’s. However, these very same treatments can often have side effects, such as dyskinesia. One of those that this API can monitor for is a fidgeting or swaying of the body known as choreiform movement."

The API allows always-on, passive monitoring of these movement disorders through Apple's Core Motion framework, giving researchers longitudinal data and trends over time.
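For developers, the sketch below shows roughly how a watchOS app might turn monitoring on and later query the recorded results, assuming the CMMovementDisorderManager interface described in Apple's developer documentation; the method signatures, the seven-day monitoring window, and the result handling here are approximations rather than verbatim sample code from the session.

```swift
import CoreMotion

// Rough watchOS sketch: enable passive movement disorder monitoring and query
// the results later. Signatures are approximate; see Apple's documentation
// for CMMovementDisorderManager.
let manager = CMMovementDisorderManager()

// Keep recording tremor and dyskinetic-symptom results for the next seven
// days (illustrative duration).
manager.monitorKinesias(forDuration: 7 * 24 * 60 * 60)

// Later: fetch everything recorded over the past day.
let now = Date()
let dayAgo = now.addingTimeInterval(-24 * 60 * 60)

manager.queryTremor(from: dayAgo, to: now) { results, error in
    // Each result summarizes how much of its interval showed tremor at rest.
    results.forEach { print($0) }
}

manager.queryDyskineticSymptom(from: dayAgo, to: now) { results, error in
    // Each result estimates whether choreiform (dyskinetic) movement was likely.
    results.forEach { print($0) }
}
```

Because the monitoring is passive, an app only needs to renew the monitoring window periodically and query the accumulated results, which is what makes the longitudinal view possible.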

Parkinson's has been a behind-the-scenes focus for Apple for some time: it was one of the topics Apple met with the FDA about in 2015, and it was the subject of the Parkinson's mPower study, one of the first ResearchKit apps. But apps like mPower relied on discrete tap tests and more generalized movement data to monitor patients, rather than having access to a bespoke movement disorder API.

At the same WWDC session, Apple announced active tasks for ResearchKit and CareKit that allow developers to incorporate vision, hearing, and speech tests. The available vision test is a digital implementation of the Amsler Grid, which can be used to detect symptoms of macular degeneration.
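In code, these tests arrive as new active task factories on ORKOrderedTask. Below is a minimal sketch of the vision piece, assuming a factory along the lines of amslerGridTask(withIdentifier:intendedUseDescription:options:) in the open-source framework; the identifier and description strings are illustrative.

```swift
import ResearchKit

// Minimal sketch: build and present the Amsler grid vision task.
// The factory name and parameters are assumed from the open-source
// ResearchKit project; strings are illustrative.
let amslerTask = ORKOrderedTask.amslerGridTask(
    withIdentifier: "amslerGrid",
    intendedUseDescription: "Checks each eye for visual distortions that can accompany macular degeneration.",
    options: []
)

// Present it like any other ResearchKit task; results come back through the
// usual ORKTaskViewControllerDelegate callbacks.
let taskViewController = ORKTaskViewController(task: amslerTask, taskRun: nil)
// taskViewController.delegate = self
```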

The hearing test is a tone audiometry test designed to emulate the Hughson-Westlake method of hearing testing. The app plays a tone and instructs users to tap when they hear it. Developers can also include a separate task that measures background noise and warns users if the room is too noisy for the audiometry test.
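A sketch of the hearing pieces, assuming the existing toneAudiometryTask factory and a newer ORKEnvironmentSPLMeterStep for the ambient-noise check look roughly as they do in the open-source project; the threshold, sampling values, and strings below are illustrative.

```swift
import ResearchKit

// Ambient-noise check: sample the room's sound level and warn the user if it
// exceeds a threshold before running the audiometry test. Property names and
// values are assumptions based on the open-source project.
let splStep = ORKEnvironmentSPLMeterStep(identifier: "environmentSPL")
splStep.thresholdValue = 45              // illustrative A-weighted threshold, in dB
splStep.samplingInterval = 1             // seconds between samples
splStep.requiredContiguousSamples = 5    // consecutive quiet samples required

// Tone audiometry: plays tones at several frequencies in each ear and asks
// the user to tap when they hear one.
let hearingTask = ORKOrderedTask.toneAudiometryTask(
    withIdentifier: "toneAudiometry",
    intendedUseDescription: "Estimates hearing thresholds across several frequencies.",
    speechInstruction: nil,
    shortSpeechInstruction: nil,
    toneDuration: 20,
    options: []
)
```

A study app could run the noise check as its own short task before launching the audiometry test, or fold the step into a custom ORKOrderedTask ahead of the audiometry steps.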

The speech recognition module prompts a user to recite a sentence, then displays a transcript of that sentence and asks them to correct it. It can collect data on syntactic, semantic, and other linguistic features of speech. A final protocol, Speech In Noise, combines the hearing and speech recognition tests to measure the user's ability to detect and distinguish human speech in a crowded room, which can reveal some types of hearing loss that traditional tone audiometry tends to miss.
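The speech tasks follow the same pattern. The sketch below uses a speechInNoiseTask factory of the kind the open-source framework appears to provide (the speech recognition module has an analogous factory with a longer signature); the identifier and description strings are illustrative.

```swift
import ResearchKit

// Sketch: Speech In Noise active task, which plays sentences over recorded
// background babble and checks how well the user can repeat them back.
// Factory name and parameters are assumed from the open-source project.
let speechInNoiseTask = ORKOrderedTask.speechInNoiseTask(
    withIdentifier: "speechInNoise",
    intendedUseDescription: "Measures how well you can pick out spoken sentences against background noise.",
    options: []
)

let speechTaskViewController = ORKTaskViewController(task: speechInNoiseTask, taskRun: nil)
// speechTaskViewController.delegate = self
```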

The updates to the ResearchKit UI are fairly minor, but one area Apple has substantially improved is the informed consent document, Apple health engineer Srinath Tupil Muralidharan said at the session. The new informed consent module features quicker navigation, real-time annotations, search capability, and the ability to share or save the document.

Twitter: @JonahComstock
Email the writer: jonah.comstock@himssmedia.com
