An Eye on the Ethics of Human Research
As technological advances make the ethics of research on human subjects more complex, a University of Virginia law and public health sciences professor is at the forefront of shaping related regulations.
Professor Lois Shepherd was recently appointed to a four-year term on the U.S. Department of Health and Human Services Secretary’s Advisory Committee on Human Research Protections, or SACHRP.
The committee advises the HHS secretary on issues associated with the protection of human research subjects, as well as the role of the Office for Human Research Protections and other offices within Health and Human Services.
Shepherd is the Peter A. Wallenborn, Jr. and Dolly F. Wallenborn Professor of Biomedical Ethics at UVA. She is co-director of Studies in Reproductive Ethics and Justice in the School of Medicine’s Center for Health Humanities and Ethics.
She recently discussed regulatory challenges facing human research and how her scholarship connects to her new role.
What is “human research” and what role does this play in HHS work?
Basically, we’re talking about all kinds of research studying humans — whether it’s a clinical trial involving an unapproved drug or a survey or observational study. But only certain kinds of human subjects research come under HHS oversight: research relating to drugs or devices (which falls under the FDA, an agency of HHS) or research that is federally funded (the Office for Human Research Protections has oversight to protect research subjects in federally funded research).
What protections are human subjects afforded?
One of the most important is that research subjects know when research involving them is being conducted, and that they have the opportunity to agree to participate or not. That’s the requirement of informed consent. It’s the first requirement stated in the famous Nuremberg Code, and it features prominently in the U.S. regulations as well. [The code was established in response to unethical medical procedures run by German doctors in concentration camps and elsewhere during World War II.]
But there are other protections as well. Institutions engaged in regulated research have to have ethics committees, known in the U.S. as institutional review boards. IRBs review the research to make sure that the risks research subjects take on are justified by the important knowledge to be gained from the research and any individual benefit that might accrue to them. They also have to make sure that risks to subjects are minimized. IRBs also make sure research subjects’ privacy and confidentiality are maintained, and that subjects can withdraw from a research study at any time.
What challenges are the advisory committee members tackling?
At the July meeting we spent quite a bit of time on determining the reach of the federal regulations. Questions of whether people are actually engaged in research are not as straightforward as they once were, because so many studies now take place across multiple institutions and involve many people each doing a piece of the research without ever coming into contact with research subjects; they’re just looking at medical records or de-identified data.
There are substantial consequences to determining whether an activity related to research falls under the regulations because if it doesn’t, then the protections for research subjects can be greatly diminished. If the federal regulations don’t apply, then you have to look at state law, and there’s very little specific state law protective of research subjects. In my experience, you can’t rely entirely on researchers determining for themselves what is ethical — there needs to be some legal or regulatory oversight as well.
Also at the July meeting, we spent a lot of time learning about artificial intelligence and the questions it raises for human research protections. The nut of the problem seems to be that the risks of harm from artificial intelligence are not generally risks to the human subjects — the people from whom data is collected to create AI algorithms, for example. The real risk (and potential benefit) is in how AI will be used. But our research regulations were not written in a way that allows IRBs to evaluate these downstream risks. If not IRBs, then who? That’s the conundrum. There are no other existing oversight bodies for this, and the use of AI is growing.
How does this role relate to your own work/scholarship?
I’ve been working in the area of the ethics and law of human subjects research for over a decade now. I’ve taught a number of courses on it — to law students, medical residents and nursing graduate students. My scholarly interest in the subject really deepened in 2014, over one particularly controversial research study involving premature infants and questions of informed consent. The research ethics community split over the ethics (and regulatory compliance) of the study. Since then, I’ve become more and more interested in questions of consent and the relationship of clinician-researchers to patient-subjects. A colleague, Dr. Donna Chen, and I received a National Institutes of Health grant last year to study questions of consent in studies involving clinical care. We’re looking at how the common law treats the requirements of consent and what happens when a clinician takes on the role of researcher.
Founded in 1819, the University of Virginia School of Law is the second-oldest continuously operating law school in the nation. Consistently ranked among the top law schools, Virginia is a world-renowned training ground for distinguished lawyers and public servants, instilling in them a commitment to leadership, integrity and community service.