Apple recently announced new requirements aimed at anyone using its ResearchKit to create a study app. The requirements should look familiar to those working in research with human subjects, because Apple now requires that any study using the ResearchKit platform have:
- Review and approval by an independent ethics committee, and
- An informed consent process.
I had the opportunity to discuss these changes with Jeremy Block, PhD, MPP, who has special expertise in digital and mobile health technology and human subject protection. Jeremy was chair of the institutional review board (IRB) at the Icahn School of Medicine at Mount Sinai when the school developed one of ResearchKit’s first study apps, Asthma Health. He shared with me some observations about Apple’s new guidelines.
Jim Gearhart: From the beginning, the Asthma Health app had IRB approval and it included an informed consent process. Now, 18 months later, Apple formally says every ResearchKit app needs both. What are your thoughts about that progression?
Jeremy Block: The guidelines in the App Store are pretty straightforward. I’m excited because they clearly are going in the right direction, and they could go further. There is an opportunity here, and I hope the IRB community and Apple both see it as one.
JG: Why do you think Apple introduced these requirements now?
JB: It’s tough to say. Trying to predict Apple is something people get paid to fail at. It’s like weathermen: they’re often wrong, but we still listen to them every day.
We now have two or three dozen ResearchKit applications that have been put out into the world, so we have enough examples to see what people have attempted to do. It makes good sense to look back and ask, How did Apple approach this?
I think we can say that Apple waited, saw what people proposed and did, and then moved forward with some level of deliberation. I infer from this that Apple wanted to give people room to innovate, but also wanted the space to think things through. The guidance reflects that. It's short and relatively open, so it shouldn't stifle innovation; yet it has specific statements that create some boundaries.
My overall impression is that this guidance from Apple is really good, and that it does many things right.
JG: Any initial suggestions on how Apple could do more?
JB: Apple states it could reject an app if there’s physical risk. That’s good; it matches up with what we do in human subject protections and it aligns with the ethical principles enshrined in the Belmont Report.
But when I read that part of the guidance, I think, Only physical risk? The human subject protection community got beyond physical risk almost 50 years ago, when Stanley Milgram's experiments and other abuses informed our thinking.
Luckily for Apple, IRBs have been down this road before. We have been talking about all kinds of different risks for decades, and we could say to Apple, Let's fast-forward and give you some ways to jump-start your program.
In the end, Apple took a position on risk within ResearchKit. It said, If we don’t like it we reserve the right to squash it.
JG: That’s a pretty powerful stick to wield, isn’t it?
JB: It shows that Apple has decided to take responsibility. That's huge. They've said that they can wield that power, and indicated broadly when they will. At the same time, Apple is implying there's a shared responsibility here, which is great.
In a way, Apple is acting like an Institutional Official (IO) at a research hospital or university. I think this is a neat way to think about it, because the research is being done on Apple’s platform, which is kind of like doing the research at an institution. And now, Apple will not let research go forward without an ethics review. But it also can stop research that it concludes should not be done under its umbrella. Similarly, the IO has the authority to say, Maybe the IRB said this is okay, but I don’t want it here.
Congratulations, Tim Cook and Mike O'Reilly; jointly you are Apple's first IOs for research!
More to Come
We'll continue the conversation with Jeremy in an upcoming blog post, where we'll consider how Apple's approach to the oversight of apps compares to the FDA's, and what lessons programmers and IRBs might learn from each other.