It was months ago, and I’m still haunted by the phrase. I was on a work call where a group of biomedical researchers was discussing ethical issues in engaging sensitive and vulnerable populations in genomic research. (An aside: why is this such an important and timely topic? In genomics, there is a well-warranted call to diversify the participants and genomes that are studied. Yet historical harms perpetrated against racial and ethnic minorities are still fresh in the mind, and may yet be continuing in more subtle and under-examined ways.) A researcher on the call made a comment distinguishing substantive consideration of participant and community wishes and concerns from “procedural ethics,” with the implication that the latter was not really worth talking about further.
Shudder. Procedural ethics? What did they mean? It was not discussed in the moment, but my further reflection (and a quick literature search) led me to understand that the researcher was referring to the standard rules and procedures that investigators must follow to conduct research. Notably, Guillemin and Gillam (2004) contrast procedural ethics with “ethics in practice,” or the day-to-day ethical issues that arise and are generally not addressed or even anticipated by the required (at least for federally funded research) upfront ethical review.
Checkboxes everywhere
My take on procedural ethics is a bit different, in that I experience quite a bit of it in my current day-to-day as a project manager in a large-scale genomic research initiative. Yes, there is the upfront submission of research protocols to institutional review boards (IRBs) for ethical oversight. But there is plenty of ongoing procedural ethics as well. For instance, I request and maintain access to protected genomic and clinical data in federal repositories for our group. I’ll spare you the details, but there are a lot of checkboxes involved. There is also a good deal of procedure on the other end of genomic data sharing: to share data through these repositories, which is also typically required for federally funded research, you need IRB review and sign-off from your institution. It can take months just to get those signatures and – yes – check those checkboxes.
What is the antidote?
It’s no surprise that the original intent and importance of these procedures get buried in a pile of paperwork and forms. And while these requirements are tedious and time-consuming, I’m by no means suggesting we should throw them out the window. (Not least because the stack of papers would likely be big enough to do some real damage to a passerby.) Ideally, policies and regulations are ethically informed and have been designed to protect research participants’ interests and respect the parameters of their consent, so they are important to establish and uphold. (And yes, I know, also to define institutional liabilities if something goes wrong.) We can’t rely on individuals to take the time and effort to independently deliberate about the best way to proceed each time, so we have policies to serve as guardrails. And there will always be some amount of low-level resentment and mental checking out whenever you’re completing a requirement. I’m reminded of my childhood self: I would fully intend to clean my room but, as soon as my mother verbalized the request, stand in livid opposition to the very thought of it.
History matters, per usual
How, then, to get back to the substance underneath the procedure? I’m certain that at least part of the answer is to keep historical context alive. Research ethics procedures have been developed and codified over decades, often in response to specific historical moments – nicely recounted by Fischer (2006).
- In the 1940s, the atrocities of Nazi Germany precipitated the Nuremberg Code, 10 principles for conducting research that included voluntary participation and ensuring that benefits outweigh risks. These principles were reaffirmed in the Declaration of Helsinki in 1964.
- The Belmont Report in the 1970s established basic ethical principles, following US Congressional hearings on the US’s decades-long offense in the Tuskegee Syphilis “Study.”
- Building on the Belmont Report, and adding requirements for IRB oversight and special protections for vulnerable participants (pregnant women, fetuses, prisoners, children), US federal regulations were unified under the Common Rule in 1991.
- More recently in the subfield of genomics, NIH established Genomic Data Sharing policies (initial version in 2007, updated in 2014) which include both requirements to share and rules of the road for those with whom the data is shared.
- From 2011 to 2017, proposed revisions to the Common Rule were considered, including one that would have made it more difficult to do secondary research on deidentified samples. While this provision was not ultimately adopted in the final version, NIH Director Francis Collins and Associate Director for Science Policy Carrie Wolinetz recently argued that it would have helped avoid repeating the “cautionary tale of HeLa cells.”
Underlying this timeline is the fact that real people, bodies, and communities have been deeply harmed by past research. Principles are established and codified in policy and regulation to try to avoid (or at least mitigate) future harm. As seen with the invocations of Tuskegee and Henrietta Lacks in the timeline above, narratives of individuals and communities are woven into the creation (and/or justification) of these policies and procedures.
Reading between the timelines
And while this is the common historical record of research regulations, and a minimal baseline for understanding today’s procedural ethics, it’s still really just the tip of the iceberg. I’m in the middle of educating myself on even more recent history of the human genomics research enterprise (we’re talking 1990s – not deep history here) from Jenny Reardon’s The Postgenomic Condition. The Human Genome Project was marked by fast, furious, and radically open deposition and sharing of human genome sequence data (another set of principles here – the Bermuda Principles).
Once the reference genome sequence was available (a consensus sequence from ~20 anonymous donors), the research community moved on to looking at the genomes of thousands of research participants, in combination with clinical and other non-genetic, personal information. In contrast with Bermuda, these types of data are typically not posted publicly but instead require applications to controlled-access repositories, with agreements to abide by narrower use restrictions (e.g., a participant may have only consented to research related to heart disease).
Starting the large-scale study of the human genome with an intense ethos of openness and then moving to controlled-access models may have twisted us into an ideological pretzel, and could partly underlie the annoyance researchers feel at the regulatory hurdles to share and access genomic data. Even though I was fully aware of these two sides of the genomic data sharing coin, I had never thought about them in tandem until reading Reardon’s text.
Ultimately, procedural ethics is a reality of conducting research. Innervating procedural ethics with deep historical understanding is necessary, though perhaps not sufficient, for keeping those founding principles and cautionary tales alive and well in our minds. And as I’m learning through Reardon’s text, it’s not just the common timeline of research ethics that we must be aware of; we must also understand the subtler, lesser-known stories and the connections among them.