About a year ago, STAT News—an impressive team of experienced journalists recruited by The Boston Globe and launched as an online source of the latest medical news—hit the healthcare and research scene with authority.
Dr. Stephen Rosenfeld, Executive IRB Chair at Quorum Review, and I were honored last week when STAT ran an essay we wrote in response to an earlier article titled, “In clinical trials, for-profit review boards are taking over for hospitals. Should they?”
In the essay we discussed how the research community can most effectively gauge how well we do our jobs as IRBs. The original article considered the future of ethical review of research in light of the National Institutes of Health’s (NIH) new rule mandating that, as of May 2017, its multisite studies must rely on a single IRB of record. The article presented some dire scenarios of a future in which independent IRBs subsume the roles and purpose of local IRBs.
Our essay focused on a particular comment in the article, which stated, “Part of the problem with assessing the relative merits of different review boards is that their overall performance is hard to measure.” To us, this inability to measure review quality seemed a more urgent topic than challenging the ownership or affiliation of a given review board. The statement echoes others that crop up regularly in studies of IRB operations. For example:
“No identified published study included an evaluation of IRB effectiveness.”1
“Additional research is needed to understand . . . what quality IRB review is, and how effective IRBs are at protecting human research participants.”2
“Future research is needed to understand how these investments relate to the quality of IRB review and oversight.”3
“Systematic studies demonstrating the degrees to which IRBs in fact reduce concrete harms to subjects are still lacking.”4
The research community has tried to judge the quality of IRBs with such measures as accreditation, numbers of warning letters, and response times, but not with a standardized rubric. We felt this article provided an opportunity to consider what such a rubric might be.
At an AAHRPP conference earlier this year, Dr. Rosenfeld presented the idea of a learning system for IRB reviews. He described a program that gauges review boards on whether they follow precedent, represent ethical norms, and maintain a mechanism for collaboration.
As everyone prepares for the NIH’s single IRB policy—and speculates on the likelihood of a similar policy under a new Common Rule—this question of evaluating IRB reviews takes on greater importance.
These policies will require single IRBs of record to demonstrate they are up to the task of multisite reviews. To join a multisite study, researchers around the country will need to assure themselves and their institutions that the IRB selected for that study can be relied upon. Something like Dr. Rosenfeld’s IRB learning system could help provide those assurances.
To continue this conversation, we have developed a whitepaper that examines how three research institutions work successfully with multiple independent IRBs. If you are attending the PRIM&R AER conference in November, visit Quorum at booth 404 to get a copy. Otherwise, watch for this publication in the coming weeks.
Correction: We’ve revised this article to reflect the correct effective date of the NIH sIRB policy. The policy will be effective May 25, 2017, not March 2017 as stated in an earlier version.
Laura Abbott and Christine Grady, “A Systematic Review of the Empirical Literature Evaluating IRBs: What We Know and What We Still Need to Learn,” Journal of Empirical Research on Human Research Ethics, March 2011, pp. 3–19. University of California Press, http://www.jstor.org/stable/10.1525/jer.2011.6.1.3?origin=JSTOR-pdf
Jeremy Sugarman, M.D., M.P.H., Kenneth Getz, M.S., M.B.A., et al., “The Cost of Institutional Review Boards in Academic Medical Centers,” New England Journal of Medicine, April 28, 2005.