Biometric remote patient monitoring: what most people aren’t talking about (yet)

NOTE: I wrote this article two years ago (argh!), and I left it sitting in my drafts thinking I’d make a tweak here and there. One of my commitments for 2022 is “work to publish” — meaning I’m re-committing to getting ideas that I’ve been messing with out of my drafts folder and into the world. Even if it’s not perfect.

Since I wrote this post, Elektra Labs has rebranded to HumanFirst, we raised our Series A, and we’ve worked with 22 of the top 25 pharma companies on collecting biometric remote monitoring data. In this post, you’ll find some of the inspiration that sparked our work, and considerations as you adopt biometric remote monitoring in your projects and life.

In Jan 2019, Eric Perakslis, a lecturer at Harvard Medical School and former FDA CIO, published an opinion in BMJ on biometric remote patient monitoring, exploring the question:

How do we protect patient privacy and security … while also capturing the advantages from next generation digital medicine and high-performance AI?

Dr. Califf, the former commissioner of the FDA, tweeted Eric’s BMJ manuscript.

Perakslis highlights that even though the FDA has been clearing a lot of wearables and algorithms, like the ECG app for the Apple Watch — which, notably, can’t be used to make a diagnosis — most digital health software products do not yet have clinical grade accuracy or precision.

For example, the FDA response letter for the Apple device itself identifies a number of risks such as false negatives, false positives, misinterpretation, and potential over-reliance on the product.

So, what does this mean for society? Isn’t it good for innovation that “lower-risk” technological advancements are getting out on the market faster? Perhaps not, especially if these devices come to market before they have time to develop solid, repeatable clinical evidence.

As Perakslis notes, as these products gain adoption, society faces new data privacy and security concerns. Let’s define what he means by security versus privacy:

Privacy refers to an individual’s right to control their information and its use. Security refers to how that information is protected. They aren’t necessarily congruent. (BMJ, 2019)

The security issues of connected devices are relatively well understood. Agencies like the FDA have been proactive on the security front: actively participating in security conferences like DEF CON, and releasing a number of cybersecurity guidances over the past year that have been both forward-looking and supported by the security researcher community.

However, to Perakslis’s point: who is responsible for overseeing the privacy of our digital specimens?

Our healthcare system has strong protections for patients’ biospecimens, like blood or genomic data, but what about our digital specimens? Due to an increase in biometric surveillance from digital tools — which can recognize our face, gait, speech, and behavioral patterns — data rights and governance become critical. (WIRED Op-Ed on digital medicine, 2018)

Biometric monitoring has been a topic of heated discussion within our team at Elektra Labs as we build out a platform to accelerate the adoption of digital biomarkers derived from connected devices and digital health technologies:

  • On one hand, remote biometric monitoring provides an opportunity to transform healthcare by transforming clinical trials (e.g., via decentralized trials) and making personalized care readily available at home.
  • On the other hand, remote monitoring is still surveillance, and determining which trusted party should be privy to digital biospecimen data becomes increasingly important — especially if the data isn’t reliable yet.

Put another way: It’s one thing to track heart rate at home for your own fitness purposes. It’s another thing if John Hancock uses Apple Watch data to determine whether you qualify for life insurance.

What can you learn from an ambulatory, remote biometric sensor? Searching the Elektra Labs Atlas (editor’s note: now HumanFirst Atlas), a whole lot:

Source: ElektraLabs.com Team. Want more examples? Here are more, listed in a Google Doc.

We all know about the Internet of Things (IoT). And now comes the Internet of Bodies (IoB): networks of smart devices attached to or implanted in bodies, which raise a host of legal and policy questions, as Andrea Matwyshyn (@amatwyshyn) described in a WSJ Op-Ed last fall. The rights around our digital specimens fall into a legal grey zone. Because many biometric trackers are not deemed “medical devices” by the FDA, they fall outside the agency’s scope.

If the FDA is not responsible, much of the oversight falls to the FTC, a much smaller agency with limited bandwidth. And because a lot of biometric-derived data isn’t covered by HIPAA, it’s unclear who is left responsible for privacy protections. As a result, individual consumer agreements, like end-user license agreements (EULAs) and privacy policies, become the last line of defense. When was the last time you read one of those?

Matwyshyn said it best: ready or not, the IoB is here. Are courts and regulators ready?

As we enter the digital era of medicine, it’s time to end the era of “move fast and break things” and design intentional systems. And that’s what we’re working on at Elektra (editor’s note: renamed HumanFirst in 2021). Learn more and join the movement.