by Jim Gearhart

John Wilbanks, Informed Consent Revolutionary

When John Wilbanks spoke at PRIM&R’s Advancing Ethical Research Conference last December, there was little public sign that, three months later, Sage Bionetworks’ innovations would show up in Apple’s ResearchKit. Attendees at the Revolutionizing Informed Consent Conference in August had the opportunity to hear more from John about how Sage Bionetworks is using technology to improve research.

Healthcare blogger Luke Timmerman wrote about John’s presentation, as did NWABR (The Northwest Association for Biomedical Research). Here I’d like to share highlights of the panel discussion that followed.

Three other speakers — Charlotte Shupert from Compliance Solutions, Stephen Rosenfeld from Quorum, and Eric Delente, Managing Director of Enforme Interactive — joined John onstage to exchange ideas about the future of technology and informed consent. It was a fascinating discussion that included observations about informed consent, privacy, data protection, and some insider stories about launching the first ResearchKit studies.

Q: Less than 20 percent of the smartphone population owns iPhones, and the ResearchKit studies are available only on iPhone. Does it seem exclusionary to limit participation to people who own iPhones? (asked by Eric Delente)

A: (John Wilbanks) It’s absolutely exclusionary, but we expect Android versions of ResearchKit very quickly. The iPhone was easier to start with, since there were only two versions of the hardware to support, but Sage Bionetworks wants to get to everybody. And while the iPhone user base does skew white, wealthy, and educated, it’s arguable that there’s more diversity among iPhone owners than among clinical research volunteers, who also skew white, wealthy, and educated.

Q: What kind of questions did you hear from the IRBs that reviewed the ResearchKit apps/studies? (asked by Stephen Rosenfeld)

A: (John Wilbanks) For me, going to an IRB is like going to the dentist: You know they’re going to drill, you just don’t know where. Sage Bionetworks was part of four IRB reviews; an independent IRB reviewed and approved Sage Bionetworks’ own apps, and institutional IRBs reviewed the others. The studies presented very little risk to anyone, so the IRBs had few questions about the consents themselves. Most of the drilling was around the tech. They all said we did a good job on the consent; what I didn’t expect was how much they’d zoom in on things like data validation or electronic signatures. Each of the IRBs got cold feet at different points, but since then the feedback has been very good.

Q: How did you actually submit these materials to the IRBs? Most IRBs work on paper packets. (asked by Charlotte Shupert)

A: (John Wilbanks) We provided really long documents full of screenshots with lots of annotations. We decided to meet IRBs where they live, and the IRBs are used to seeing paper. The reviewers saw a copy of every screen that we proposed to create, with cross-references to the study’s full consent form document. That mapping was very important.

Q: What about protecting all of the data that you collected? (asked by Charlotte Shupert)

A: (John Wilbanks) It’s hard to keep out a really well-resourced, really dedicated hacker, but basic industry standards protect the systems from most efforts. Those standards include encrypting data on the phone, encrypting it in transit, using an effective algorithm to create a secure pseudonym, and then separating elements of your system from each other. After all of that come process controls. Most successful cyber-attacks come from effective social engineering: stealing the ID of someone with lots of access and weak security practices. So it’s this mixture of really good technical practice and processes that limit the ability of people to do dumb things, because people doing dumb things is the biggest risk out there. But there’s still this fundamental tension between protecting this data and sharing it for insights.
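
John doesn’t spell out how Sage implements any of these steps, but one of them, the secure pseudonym, is easy to sketch in a few lines of Python. This is purely a hypothetical illustration (the function name, key handling, and participant ID format are mine, not Sage’s): a keyed hash gives every participant a stable pseudonym for analysis, while the key is stored apart from the research data, which is the kind of “separating elements of your system” John describes.

    import hmac
    import hashlib

    def pseudonymize(participant_id: str, secret_key: bytes) -> str:
        # Keyed hash (HMAC-SHA256): the same ID always yields the same
        # pseudonym, but the mapping cannot be reversed without the
        # secret key, which lives outside the research data store.
        return hmac.new(secret_key, participant_id.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    # Hypothetical usage; in practice the key would come from a
    # separately managed secrets store, not a literal in the code.
    key = b"example-key-kept-outside-the-data-store"
    print(pseudonymize("participant-0001", key))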

Q: For the [ResearchKit-based] Parkinson’s study, was there a possibility that a tremor could signal an intent that you really didn’t have? (asked by a member of the audience)

A: (John Wilbanks) Those were essential considerations in designing how we used the hardware. The tasks that used the microphone, the touch screen, or the accelerometer were designed to limit the possibility of false reports. Another step, in a future iteration of the study, will be finding ways to glean useful data from people who signed up but did not engage: Why? What limited them? What was difficult for them?

Q: What happens if, during the trial, you start seeing that someone’s condition is getting worse? (asked by a member of the audience)

A: (John Wilbanks) One of our concerns was not to trigger the FDA’s discretionary review for mobile applications. The minute you return a personalized result, you’re more likely to fall under that banner. What we were doing in this first wave was so novel, so weird, that we didn’t want to take any additional risks. But in the more than 13,000 comments we got during the study, one of the things people want is a news feed. So future iterations may seek some way to return test results.

I’ll share more of the panel discussion in an upcoming blog post.
