Thursday, August 2, 2007

IRBs vs. Departmental Review

In comments on this blog's introduction, bioethicist David Hunter of the University of Ulster asked me about my preferred alternative to IRB review, and I mentioned my hopes for departmental review (hopes shared by the AAUP). Lest our conversation get lost in the comments, I am moving it to this new posting:

DAVID HUNTER:

I'd disagree on departmental review being best for two reasons.

1. While a committee needs some knowledge and expertise in the area of the project, too much expertise and it becomes too close to the subject matter. This can mean that it misses significant ethical issues because they are standard practice within a specific discipline. To give one example, psychologists often want to give students part of their grade (10%) for being involved in their research. Most RECs I am involved in don't allow this practice because it is felt to be unduly coercive. I imagine if a REC/IRB were entirely composed of psychologists they might disagree.

2. It is important for a REC to be substantially independent from the researcher, but this doesn't happen in departmental review; instead the REC has an interest in the research being allowed to go ahead.

My university presently runs on a departmental review model, and while I can't name names I have personally seen examples of both of the above issues coming up.

I've written about these problems here:
Hunter, D., 'An alternative model for research ethics review at UK universities,' Research Ethics Review 2, no. 2 (2006): 47-51.
(Which unfortunately isn't available online)

and here: Hunter, D., 'Proportional Ethical Review and the Identification of Ethical Issues,' Journal of Medical Ethics 33 (2007): 241-245.

I certainly agree with you that IRBs shouldn't be dominated by medics and medical concerns, they instead should have a wide range of representation. I'm inclined to think though that the baseline ethical issues are similar and while different rules may be appropriate for different disciplines they flow out of the same background.

In terms of examples here are a few, I can't be too specific with details for reasons of confidentiality.

1. Study of sexual attitudes in school children. Asked very probing questions, as one might expect, but didn't intend to get parental consent to carry out the research. A parallel can be found here: India Research Ethics Scandal: Students made guinea pigs in sex study.
No consideration had been given to what might have been done if there was disclosure of harmful behaviour, etc.

2. Historian was going to a civil-war-stricken country to interview dissidents about the war, and intended to publish identifying comments (without getting consent for this) which were likely to be highly critical of the current regime.

3. Social scientist wanted to understand children's attitudes towards a particular topic. As a blind, so that the participants would not know which questions they wanted the answers to, they proposed to use the Beck Depression Inventory. This contains questions about self-harm and future worth, was potentially very distressing, and was not at all appropriate as a blind.

4. Student wished to conduct interviews with employees of a company on an issue that could significantly damage the company's profitability. No consideration was given to how to best report this information to minimise harm to the company.

I'm inclined to think that any sort of research involving humans can lead to harm, whether that is physical, social, financial, psychological or so on. As such the benefits and the risks need to be balanced, and it needs to be considered how to minimise that harm. That, I take it, is the job of the researcher. However, having sat on RECs for a while, it is a job at which researchers sometimes fail spectacularly, and then it becomes the job of the IRB/REC. The difficulty is: how, without full review by a properly constituted REC, do you identify those applications that have serious ethical issues?

ZACHARY SCHRAG:

Thanks for these examples.

First, let me state that I am primarily interested in projects that fit Pattullo's proposal of 1979: “There should be no requirement for prior review of research utilizing legally competent subjects if that research involves neither deceit, nor intrusion upon the subject’s person, nor denial or withholding of accustomed or necessary resources.” Under this formula, the projects involving children (who are not legally competent) and the project involving undergraduates (whose course credit is an accustomed or necessary resource) would still be subject to review.

That said, I have little confidence that IRBs are the right tool to review such research. As for child research, under U.S. regulations, and, I believe, the rules of most universities, the studies could be approved by three IRB members wholly lacking in expertise on child development. (The regulations encourage but do not require the inclusion of one or more experts when vulnerable populations are involved.) Were I the parent of a child involved in such studies (and I'm proud to say that both my children have furthered the cause of science by participating in language studies), I would greatly prefer that the protocols be reviewed not by a human subjects committee, but by a child subjects committee composed mostly or entirely of people expert in child research.

For the psychology course and the history project, the real question is whether a departmental committee can be trusted to enforce its own discipline's ethical code. The code of the British Psychological Society forbids pressuring students to participate in an experiment. And the ethical guidelines of the Oral History Society require interviewers "to inform the interviewee of the arrangements to be made for the custody and preservation of the interview and accompanying material, both immediately and in the future, and to indicate any use to which the interview is likely to be put (for example research, education use, transcription, publication, broadcasting)." So yes, those sound like unethical projects.

Perhaps some departments would fail to correct these mistakes, just as some IRBs and RECs get them wrong. At some level this is an empirical question that cannot be answered due to the uniform imposition of IRB review. In the U.S., at least one university (the University of Illinois) had a system of departmental review in psychology that worked without complaint until it was crushed by federal regulation in 1981. With the federal government imposing the same rules nationwide, we can only guess about how well alternatives would work.

Moreover, departmental review would allow committees to bring in considerations unknown to more general ethics committees. For example, the British and American oral history codes require attention to preservation of and access to recordings, something that an IRB/REC is unlikely to ask about.

I would also add that something close to departmental review is typical of the standard IRB, i.e., one in a hospital or medical school. It's true that the U.S. regulations require "at least one member whose primary concerns are in nonscientific areas" and "at least one member who is not otherwise affiliated with the institution and who is not part of the immediate family of a person who is affiliated with the institution." But the rest of the members can be biomedical researchers of one stripe or another. If that's good enough for the doctors, how about letting each social science discipline form an IRB of its members, with a community member and a non-researcher thrown in?

Still, if IRBs/RECs limited themselves to holding researchers up to the standards of the researchers' own academic discipline, I wouldn't be complaining.

Where we really disagree, then, is on project 4. You write that a "Student wished to conduct interviews with employees of a company on an issue that could significantly damage the company's profitability. No consideration was given to how to best report this information to minimise harm to the company."

That sounds a lot like this case:

Kobi Alexander's stellar business career began to unravel in early March with a call from a reporter asking why his stock options had often been granted at the bottom of sharp dips in the stock price of the telecom company he headed, Comverse Technology Inc.

According to an affidavit by a Federal Bureau of Investigation agent, unsealed in Brooklyn, N.Y., the call to a Comverse director set off a furious chain of events inside the company that culminated yesterday in criminal charges against Mr. Alexander and two other former executives. Federal authorities alleged the trio were key players in a decade-long fraudulent scheme to manipulate the company's stock options to enrich themselves and other employees.

After the March 3 phone call from a Wall Street Journal reporter, the FBI affidavit said, Mr. Alexander and the other two executives, former chief financial officer David Kreinberg and former senior general counsel William F. Sorin, attempted to hide the scheme. Their actions allegedly included lying to a company lawyer, misleading auditors and attempting to alter computer records to hide a secret options-related slush fund, originally nicknamed "I.M. Fanton." It wasn't until a dramatic series of confessions later in March, the affidavit said, that the executives admitted having backdated options. The trio resigned in May.


That's an excerpt from Charles Forelle and James Bandler, "Dating Game -- Stock-Options Criminal Charge: Slush Fund and Fake Employees," Wall Street Journal, 10 August 2006. As far as I can tell, Forelle and Bandler made no effort to minimize the harms to the companies they studied or the executives they interviewed. Their "Perfect Payday" series won the 2007 Pulitzer Prize for public service.

Your insistence that an interviewer minimize harm is a good example of an effort to impose medical ethics on non-medical research, and a good reason to get RECs away from social science.

12 comments:

David Hunter said...

Thanks for bringing this to the front, Zachary. There is plenty to respond to here, and I'm sure you understand that it may take a bit of time for me to formulate a complete response; as pleasant as discussing issues on a blog is, there are of course other time pressures in academia.

I should have said, by the way, that I don't see myself as necessarily championing the IRB system as it presently stands in the US; instead I'm more generally in favour of the view that all research ought to be reviewed by a substantially independent committee made up of a mix of members from different areas, and from outside academia as well. I don't think the US system necessarily does a good job of meeting that requirement at the moment, nor the British one for that matter. I do agree that IRBs/RECs are often too heavily weighted towards medical research at the moment, and a broader membership would be good.

From what you have said in regards to IRBs, I share your concern. (By the way, thank you for your description of the system in the US; very helpful to someone like me, interested generally in research ethics and research ethics systems but with no experience of them in the States.)

That three members would be all that was needed to sign off on research involving a vulnerable group such as children is astounding. I would have thought that a full committee ought to be required for this. Of course, in the UK even less scrutiny may be required, depending largely on which university you work at. I will describe the present UK research ethics system over at Philosophy and Bioethics later on, so you have a point of comparison.

I think one of the further weaknesses of the American system is that the IRBs are institutionally associated; I am more keen on the system for the review of medical research over in the UK, where the committees are not associated with institutions and can receive applications from researchers anywhere in the country. This reduces the possibility for conflicts of interest.

In regards to whether IRBs or departmental committees will make more mistakes, you do make a fair point: this is an empirical question, and, in part because there is already existing legislation, it is difficult to test. Perhaps cross-country comparisons might help? In any case the data is difficult to assess, because I suspect there would be substantial under-reporting of errors which were made, and it is difficult to assess whether a departmental review board or an IRB made a bad decision; we would need an accurate higher authority to test this.

Let me just check your view: is it that projects meeting these criteria, “research utilizing legally competent subjects if that research involves neither deceit, nor intrusion upon the subject’s person, nor denial or withholding of accustomed or necessary resources,” ought to receive no independent scrutiny, or just independent scrutiny by a departmental review board?

I'll respond to the rest over the next few days.

Cheers
David

Zachary M. Schrag said...

You ask whether I think research involving consenting adults "ought to receive no independent scrutiny, or just independent scrutiny by a departmental review board?"

I would answer that this would depend on the discipline. For oral history, prior review of any kind often makes little sense, because the process is so unpredictable. Rather than project-by-project review, I would want departments to establish standardized ethical and methodological curricula and consent forms. If a researcher, student or faculty, had completed the training and was using a pre-approved consent form, I see no need for review of her particular project. I know that many anthropologists make the same case for their fieldwork.

There may be other kinds of research that fit the exemption but whose protocols are more predictable, and therefore more amenable to prior review. But I don't want to write beyond my expertise.

I do think that cross-national comparisons might be helpful, and I would like to learn more about practices in the U.K. But I am dismayed by the findings of Mark Israel and Iain Hay, in Research Ethics for Social Scientists (2006). They found "angry and frustrated" social scientists in the U.S., Canada, the U.K., New Zealand, and Australia (p. 1), and note that regulations in all of those countries were established "with minimal consultation with social scientists and little recognition that social science is not the same as biomedical research." (143)

ZMS

PS. I should correct my earlier statement about three non-experts being sufficient to approve a project with children. Under U.S. regulations, an IRB need have only five members, and a majority (three) makes a quorum. Approval takes only "a majority of those members present at the meeting," so, in fact, "full committee" review can mean that only two IRB members approve the project. See 45 CFR 46.108.

David Hunter said...

Okay, so it is a discipline-specific thing then. Education might be another example, particularly if the teacher is engaging in "action research", because this involves continuous small changes to your teaching practice depending on what you think is working. Under some (I think overly stringent) forms of review, arguably every time they changed what they did they would need to submit an amendment to the IRB/REC...

You are right, unpredictability makes it harder to pre-review well. What we typically do with interviews is ask for a broad interview schedule, to determine whether the topics that the researcher is interested in are likely to cause distress or potentially be harmful to the interviewee. However, we don't expect the researcher to use only the sample questions they have given us in a rigid fashion; they are just indicators. I can't see why oral history wouldn't fit within this model.

Standardised consent forms would help, but RECs at least are often more interested in the content of the participant information sheet which is harder to standardise except in the broadest sense.

The difficulty with having no review unless the research meets criteria x, y or z is that the researcher has to assess that, and they usually have some interest in avoiding review. I would think it would be possible to at least be deceptive in oral history (misdescribing your project to get people to consent to it, for example), and while I can't imagine it being an intrusion upon the subject’s person... I think that there are other ways to harm people that could come up in oral history.

So while I will accept that pre-review may be difficult, and that perhaps oral history ought to be treated differently in certain respects (i.e. revealing the participant's identity, presuming they consent to this), I don't think this indicates a need for no review; instead it seems to indicate a need to modify the current IRB system, both by increasing the number and range of people on IRBs and by giving them discipline-specific training on ethical norms.

In regards to the practices of research ethics in the UK I have now summarised these here: Research Ethics in the UK: The present "system"

I'm not surprised that research shows social scientists are unhappy with research ethics review, both because it is reasonably new for many of their disciplines, and because most models in most countries were originally set up for biomedical research and are only now changing to encompass more research. But I'm not sure how worried we should be by their unhappiness, many biomedical scientists appear to be unhappy with the ethical review of their work as well. What it seems to show to me is a need to make the system more broad and suited to the needs of different disciplines. The new electronic form used by the NHS RECs in the UK is a step in this direction, since what questions you are asked change depending on filter questions at the beginning, making it more appropriate for different types of research.

I'm surprised that IRBs can be so small; NHS RECs have a quorum of 7 I believe, and 12-18 members, if I recall my standard operating procedures correctly.

I'll take up the broader question of harm in a later comment since that seems to be the turning point here.

Zachary M. Schrag said...

Yes, the question of harm is key. If you accept that a legally competent adult should be permitted to incriminate himself in a public statement--with or without the aid of a researcher--then I see no reason to demand topics in advance.

I'd be interested to learn why you think participant information sheets for oral history would be hard to standardize. While I am frustrated that oral historians have not done a better job of this, I don't think it would be that hard. For example, what do you think of Indiana's standardized oral history consent form?

(Note that in the U.S., regulations require that the project information is included with the consent form, rather than a separate information sheet. I suspect that the British system is easier to understand.)

Finally, I am dismayed by your comment that "I think that there are other ways to harm people that could come up in oral history." Why not identify an existing problem and then come up with a solution, rather than dreaming up hypothetical wrongs?

Zach

David Hunter said...

Alright then,

"Yes, the question of harm is key. If you accept that a legally competent adult should be permitted to incriminate himself in a public statement--with or without the aid of a researcher--then I see no reason to demand topics in advance."

Well there is a difference between being involved in research and making a public statement. Namely the researcher is involved, and they may have moral obligations towards their research participants outside of the normal obligations we have to each other.

I'd still be inclined to ask for broad topics beforehand for at least three reasons:
1. In some locations (Northern Ireland for example) if illegal activity is disclosed, then the researcher is legally obliged to inform the police. Where it is felt this may happen, we usually require something on the information sheet to the effect that this might happen. Regardless of the legal requirements, you may still feel that there are moral requirements in terms of public interest that sometimes require informing the appropriate authorities. However as a researcher your primary duty is to your participant, so it is important they are aware you may report them.
2. It could be that revealing their identity or words will lead to considerable harm to them. Even if they are consenting to that harm, it still may not be morally permissible to bring it about.
3. Their history may be considerably distressing for them to relive, in these cases the researcher has a moral obligation to provide support services, or at least contact details for support services; these are usually on the information sheet.

Sorry, I perhaps wasn't clear (and you are right, by the way: in the UK, as a standard rule, information sheets and consent forms are separated). I don't think oral history participant information sheets are specifically hard to standardise; I think participant information sheets more generally are hard to standardise. This is because what is required to be on them is to some degree project-specific.

Having had a look at the example you give, I think it is all right but not great. I'd like to separate it out into two documents, a consent form and an information sheet, for the sake of clarity, but fair enough, I can't do that in the States. I would also think an invitation paragraph would be in order, along with a description of the project and what it is aiming to achieve. (Perhaps that standardly gets added?) It is hard to give informed consent when you know nothing about the project.

It is written in highly legal language that I wouldn't have thought was appropriate for the general public. Take this, for example: "I can withdraw from the project without prejudice prior to the execution and delivery of a deed of gift, a form of which is attached hereto." So obviously it may need adjusting to make it audience-appropriate.

Depending on the nature of the project, and whether it could be distressing for the participants, I would also add details of appropriate support services (i.e. counselling, the Samaritans and so on).

In regards to your final query, I'm a moral philosopher; my job is to dream up hypothetical possible harms... But seriously, I was just meaning that while the statement you quoted was aimed (one presumes) at physical harms, there is no reason to think that the sorts of non-physical harms possible in research don't also warrant scrutiny.

Part of the overall discussion depends on what duties we think researchers generally have towards their research subjects. I'm inclined to think one of those duties is to minimise harm. I don't think this is a holdover from the biomedical model, since it is a standard feature of moral philosophy. Nor do I think that this is an overriding duty; instead it is a prima facie duty, which can be outweighed by other considerations. These can be, for example, the public interest, consent to the harm or perhaps the value of the research. Nonetheless, whether there are genuine overriding benefits of the research is a judgement I am more comfortable having a properly constituted ethics committee (which I will concede may not describe an IRB) decide.

Zachary M. Schrag said...

Thanks for your reply.

Your first reason for demanding topics in advance is that a researcher may be legally obliged to report criminal activities. That is a good reason to require promises of confidentiality to be tempered by the disclaimer that the researcher may be legally obliged to report criminal activities. But why not just tell researchers that they must include such a disclaimer if their work might reasonably lead a respondent to disclose a crime? Why demand questions that they have yet to form, for a project they are just starting?

Your third reason is that narrators' "history may be considerably distressing for them to relive, in these cases the researcher has a moral obligation to provide support services, or at least contact details for support services." I addressed this argument in my April posting, "The Canard of Interview Trauma." Do you know of any systematic studies of the likelihood that interviews will be "considerably distressing," and that a list of "contact details" is helpful in such cases? And do you see any irony in restricting distressing questions while co-authoring a blog named for Socrates?

But it's your second reason that most concerns me. You suggest that when an interviewer and a legally competent narrator have agreed to the publication of the narrator's self-incriminating statement, an IRB/REC can overrule their moral judgment. It can block publication, and perhaps even the conversation, on the grounds that it is not "morally permissible." That is not ethical review. That is censorship.

You concede that the general duty not to hurt other people must be balanced against other values. I would say that freedom--freedom of speech, freedom of the press, academic freedom--is one of those values, and an important one at that.

William Blackstone advocated far fewer guarantees of press freedom than those enjoyed today by Americans (and, I believe, Britons). Yet even he saw prior restraint as a particularly noxious means of discouraging abuses. As he wrote in his 1769 Commentaries,

"The liberty of the press is indeed essential to the nature of a free state: but this consists in laying no previous restraints upon publications, and not in freedom from censure for criminal matter when published. Every free man has an undoubted right to lay what sentiments he pleases before the public: to forbid this, is to destroy the freedom of the press: but if he publishes what is improper, mischievous, or illegal, he must take the consequence of his own temerity. To subject the press to the restrictive power of a licenser, as was formerly done, both before and since the revolution, is to subject all freedom of sentiment to the prejudices of one man, and make him the arbitrary and infallible judge of all controverted points in learning, religion, and government."

Your argument is that a right enjoyed by "every free man" disappears when that man joins a university, or talks to a university researcher. I don't see why this should be the case.

Finally, you concede that an IRB may not, in practice, be a "properly constituted ethics committee." That's the whole question, and the reason why this thread is entitled "IRBs vs. Departmental Review."

The composition of a review committee determines its legitimacy. Democratically elected legislatures and democratically appointed courts may legitimately set limits on the freedom of a citizen. Members of a profession may legitimately police the conduct of other members. But an IRB, appointed by a single institutional official, lacking a majority of members from the discipline whose work is being judged, lacks any such legitimacy. Having one man as "the arbitrary and infallible judge of all controverted points" is grim. Having a panel of bioethicists in that role is not much of an improvement.

David Hunter said...

Hi Zach

You might be interested in this:
An inside-outsider’s view of Human Research Ethics Review

In regards to 1 you ask:
"But why not just tell researchers that they must include such a disclaimer if their work might reasonably lead a respondent to report a crime?"

We do. It is in the published guidance, and I highlight it in the ethics courses I teach our students. Nonetheless, we still commonly have to request that this be added to information sheets. The problem, as far as I see it, isn't that there isn't good-quality guidance available; the problem is that researchers don't always find it and/or don't always follow it, even if they have good intentions.

"Why demand questions that they have yet to form, for a project they are just starting?"
If this is the case, they shouldn't be in front of a REC/IRB, since it looks like they don't know what their project is yet. But surely this is a mischaracterisation of researchers in oral history; you must have some, maybe broad, ideas of why you are asking a particular person questions. It is true that research can go in new or unanticipated directions, and I can certainly imagine this happening in the context of history, but some things and topics are reasonably predictable.

In regards to 3, I don't know of any systematic studies, although many experienced researchers have given me some anecdotal evidence for this. I'd welcome a full study of it; it might be an interesting project, actually. I will have to look into it. In the absence of evidence, I tend to take a precautionary approach though.

In regards to 2, I think I haven't explained the position very well. Let me try again. It is not that the individual cannot publicly incriminate themselves or say things that might harm themselves. They are welcome to, outside the context of the research. So I am not suggesting that anyone's free speech ought to be restrained. What I am suggesting is that sometimes, (emphasis on the sometimes) researchers ought not be allowed to lead someone into harming themselves, or to provide circumstances in which they may harm themselves. In other words, researchers are being restrained from causing harm, not research participants being restrained from speaking freely.

In regards to your final point, yes, IRBs don't sound ideal; they still may be better than nothing (though that is another, tricky question).

I'm still not sure about this:
"The composition of a review committee determines its legitimacy. Democratically elected legislatures and democratically appointed courts may legitimately set limits on the freedom of a citizen. Members of a profession may legitimately police the conduct of other members."
There is good reason to be worried about closed shop self regulation, namely that it often fails. This is why, in some jurisdictions, professions are regulated at least in part, and sometimes in whole, by outside bodies. I think the profession should be allowed to regulate itself, of course; however, it may still also require outside regulation as well.

"But an IRB, appointed by a single institutional official, lacking a majority of members from the discipline whose work is being judged, lacks any such legitimacy. Having one man as "the arbitrary and infallible judge of all controverted points" is grim. Having a panel of bioethicists in that role is not much of an improvement."
Agreed. Particularly in regards to a panel of bioethicists alone; a worse prospect I can hardly imagine. But that isn't what I'm promoting; I'm keen instead on an IRB/REC which has a wide variety of members from an array of different disciplinary areas across the spectrum, along with lay people and an ethicist or two.

I'm unkeen on this:
"lacking a majority of members from the discipline whose work is being judged"
I think IRBs/RECs are better if they are independent from any specific discipline. So I am inclined to think you should have wide, balanced representation. Not a member of every discipline, because that would make an unfeasibly large IRB/REC; more than 20 members gets absurd fast. I think it is important to have members who are familiar with a wide variety of research methodologies and paradigms, and sympathetic to the issues faced in research in different contexts.

Zachary M. Schrag said...

On your methodological question, you write, "you must have some, maybe broad, ideas of why you are asking a particular person questions." Not at the start of a project, I don't. When I started my work on Metro, I hadn't heard of almost any of the people I ended up interviewing. And since I didn't know whom I would interview, I hadn't written questions for them. Being asked for sample questions is like being asked to list all the books one plans to read for the next four years, and what one expects to learn from them.

This is why standardized consent forms are more appropriate than efforts by IRBs to pry into the details of a project, details unknown to the investigator herself. If you want to say that any consent form promising confidentiality also includes a disclaimer about subpoenas, fine. But that's no reason to demand information a researcher doesn't have.

(By the way, if researchers are ignoring their training and the published guidance you give them, what makes you think they are sticking to their committee-approved protocols?)

But the big question here is not about informed consent, but about the risk-benefit calculus that underlies medical research and is so corrosive of social research. (In Britain's "Research Ethics Framework," it appears as the blanket statement that "Harm to research participants must be avoided.") And it comes down to the burden of proof.

You seem to believe that no research should be permitted until it is proven harmless. And I'll accept that for medical experimentation, for I have no a priori right to cut open my neighbor, irradiate him, or insert foreign substances, devices, or other matter into his body.

But I do think (with Blackstone) that I have the right to ask him the story of his life, and that it is up to the would-be regulators to explain why I should not be allowed to do so. You have yet to do that.

Instead, you hint at "some anecdotal evidence" of traumatic interviews, that "sometimes, (emphasis on the sometimes) researchers ought not be allowed to lead someone into harming themselves," and that "closed shop self regulation . . . often fails." Anecdotal? Sometimes? Often? Frankly, this is all so vague that I don't know what you are talking about. Yet it is on this vagueness that you propose significant restrictions on the freedom of speech.

Historians have been interviewing people for 2500 years. I would really like to learn of some projects you think went badly, and why you think IRB review would have helped them.

David Hunter said...

"On your methodological question, you write, "you must have some, maybe broad, ideas of why you are asking a particular person questions." Not at the start of a project, I don't. When I started my work on Metro, I hadn't heard of almost any of the people I ended up interviewing. And since I didn't know whom I would interview, I hadn't written questions for them. Being asked for sample questions is like being asked to list all the books one plans to read for the next four years, and what one expects to learn from them."

I think you have misunderstood me. I was suggesting you would have a broad idea of the questions you might ask and the sorts of people you might talk to, and why you might talk to them. I don't think you should provide the individual questions you would ask each person who might be involved in your interview, but I still think a broad overview is possible. Is even this not the case?

"By the way, if researchers are ignoring their training and the published guidance you give them, what makes you think they are sticking to their committee-approved protocols?"
Tough question. For the most part the researchers appear to take on board the committee's recommendations quite well; it isn't a case of neglect, but rather that ethics can be a complicated business. So I think most of them will stick to what they have agreed to do. However, this is not always going to be the case. There seem to be two main reactions to this. The first is to provide further oversight and regulation, such as the reporting of researchers to research ethics committees, and random audits. This provides some assurance that the researchers are following what they have said, which broadly was the result of a recent voluntary audit at my own university. The other option is to react to any revealed breaches of ethics with punitive measures, which does happen from time to time. But of course neither measure guarantees that researchers will behave ethically; they just make it more likely.

"But the big question here is not about informed consent, but about the risk-benefit calculus that underlies medical research and is so corrosive of social research."

Well it is a bit more complicated than this. Risk-benefit is one factor; other factors are informed consent, justice and other ethical norms that may be violated in the course of research.

"In Britain's 'Research Ethics Framework,' it appears as the blanket statement that 'Harm to research participants must be avoided.'"
I should note that that is not Britain's research ethics framework, but rather one framework, produced by the Economic and Social Research Council. It was written specifically for the social sciences, I might add, and endorsed by the Arts and Humanities Research Council, which I presume funds much of the historical research not funded by the ESRC in the UK. It has been highly influential on the structure of research ethics committees and their operation in universities in the UK, almost by default. I'll also agree that, like many policy documents, it seems rather more aspirational than realistic in parts, and the quote you have highlighted is a good example of that.

"You seem to believe that no research should be permitted until it is proven harmless."
No, this is neither what I believe nor what I have stated above. I believe that no research should be permitted if it is significantly unethical.

There are two key points there:
1. I think mildly unethical research may be permitted to proceed in some cases.
2. Ethics doesn't merely reduce to avoiding harm; there are other important considerations such as autonomy and justice, among others. Even if research is harmful there may be reasons to let it proceed.

"But I do think (with Blackstone) that I have the right to ask him the story of his life, and that it is up to the would-be regulators to explain why I should not be allowed to do so. You have yet to do that."

Alright, I will give the argument in a straightforward fashion:
Step 1, the general argument:
1. Being a researcher is a profession.
2. Professions carry with them certain ethos and duties dependent on their roles and clients.
3. The specific duties of researchers entail a high level of concern and care for their research participants, since these individuals are for the most part taking the risks involved in research.
Conclusion: Researchers have a professional obligation towards their research participants.

But this doesn't get us to independent review, that takes another argument.

Several arguments for independent ethical review:
A. Past harms
1. Researchers have a professional obligation towards their research participants.
2. In the past researchers have failed, sometimes spectacularly, to uphold that obligation.
3. One way to minimise this is to have independent review of their research.
Conclusion: research should be independently reviewed.

B: Complexity
1. Ethical issues are complex and sometimes difficult to identify, especially if you are close to the subject matter.
2. An independent multidisciplinary group is going to be better at identifying these issues than an individual.
3. Researchers presumably want to avoid being unethical; thus they should seek independent review.
Conclusion: Research should be reviewed by independent committees.

C: Potential Unethical Behaviour
1. Research inherently involves the unknown, and entails potentially unethical behaviour, in regards to respect for autonomy, risk, justice and so on.
2. Since these can be significantly harmful for the research participant or others we ought to aim to minimise these.
3. One means of minimising these is by seeking independent review.
4. Therefore there should be independent ethical review.

Each of those rough arguments seems sound to me, and provides reasons why we might want independent ethical review for all research; I see no need or good reason to make a special exception for oral history. As you have pointed out, it is difficult to anticipate some aspects of the research, but this is not by any means unique to oral history. Likewise, the harms likely to be entailed by oral history research are liable to be lower than in biomedical research; nonetheless, possible harms are still there.

There are two more pragmatic arguments:
Harms to research
1. If unethical research is carried out then this may become publicly recognised.
2. If this is public then this is likely to make people less keen to be involved in any research.
3. Therefore we should welcome independent review to try and minimise this.

Harms to university
1. If unethical research is carried out then this may become publicly recognised.
2. If this is public then this is likely to make people less keen to be involved in supporting our university.
3. Therefore we should welcome independent review to try and minimise this.

"Yet it is on this vagueness that you propose significant restrictions on the freedom of speech."
As I said in the previous posts, I am suggesting no limitations on the freedom of speech. I am proposing limitations on the freedom of researchers but that is a different thing than freedom of speech.

"Historians have been interviewing people for 2500 years. I would really like to learn of some projects you think went badly, and why you think IRB review would have helped them."
I know of none, but then I am not a historian, so I am not sure why you should think that I would be aware of these cases. And it is unlikely that we have records of how all of that research was conducted or of any ethical issues that were raised in that history. Do you honestly think that every history project carried out in the last 2500 years has been ethically excellent? I'd be stunned if history performed that much better than other research areas. As for IRBs helping, no doubt in some cases they wouldn't have, but in others I am sure they would have.

Let me put back the question to you:
Do you think history research can be unethical?

If the answer is yes, then why would you oppose independent review as a means of reducing unethical research?

Zachary M. Schrag said...

Hunter: I think you have misunderstood me. I was suggesting you would have a broad idea of the questions you might ask and the sorts of people you might talk to, and why you might talk to them. I don't think you should provide the individual questions you would ask each person who might be involved in your interview, but I still think a broad overview is possible. Is even this not the case?

Schrag: Not in the work I've done. Right now I'm in the early stages of a book on the history of riot control. I'm still in the nineteenth century, but in 2005 I went through IRB approval so I could interview my brother about his National Guard service during Hurricane Katrina relief operations. At some point I may find some other people to talk to, but that's all I know.

Try this exercise: Get a copy of Studs Terkel's Hard Times or Christian Appy's Patriots. Then list all the topics mentioned in the book, and try to reconstruct all the questions the interviewer asked to get the responses. Then answer these questions:

1. How long did it take you to list all the topics?

2. Could the interviewer have anticipated the topics at the start of his project?

3. Of what use to an IRB would the list of topics be?

Hunter: "By the way, if researchers are ignoring their training and the published guidance you give them, what makes you think they are sticking to their committee-approved protocols?"
Tough question. For the most part the researchers appear to take on board the committee's recommendations quite well; it isn't a case of neglect, but rather that ethics can be a complicated business. So I think most of them will stick to what they have agreed to do. However, this is not always going to be the case. There seem to be two main reactions to this. The first is to provide further oversight and regulation, such as the reporting of researchers to research ethics committees, and random audits. This provides some assurance that the researchers are following what they have said, which broadly was the result of a recent voluntary audit at my own university. The other option is to react to any revealed breaches of ethics with punitive measures, which does happen from time to time. But of course neither measure guarantees that researchers will behave ethically; they just make it more likely.


Schrag: Audits and punitive measures can proceed without IRB review, so long as the standards are defined. That's how we police plagiarism, for example.

Hunter: "But the big question here is not about informed consent, but about the risk-benefit calculus that underlies medical research and is so corrosive of social research."

Well it is a bit more complicated than this. Risk-benefit is one factor; other factors are informed consent, justice and other ethical norms that may be violated in the course of research. "In Britain's 'Research Ethics Framework,' it appears as the blanket statement that 'Harm to research participants must be avoided.'"
I should note that that is not Britain's research ethics framework, but rather one framework, produced by the Economic and Social Research Council. It was written specifically for the social sciences, I might add, and endorsed by the Arts and Humanities Research Council, which I presume funds much of the historical research not funded by the ESRC in the UK. It has been highly influential on the structure of research ethics committees and their operation in universities in the UK, almost by default. I'll also agree that, like many policy documents, it seems rather more aspirational than realistic in parts, and the quote you have highlighted is a good example of that.


Schrag: I'd be very interested to learn who exactly had power in the drafting of the framework. I'm working on the history of recommendations and regulations in the United States; perhaps someone in the U.K. can take up the story there.

It's quite possible that historians and journalists were excluded from the drafting process. Notable is section 2.17:

"Some research poses risks to research subjects in a way that is legitimate in the context of the research and its outcomes. This might arise for two reasons. First, as is recognised elsewhere (see Tri-Council of Canada, 2002), research may be ‘deliberately and legitimately opposed to the interests of the research subjects’ in cases where the objectives of the research are to reveal and critique fundamental economic, political or cultural disadvantage or exploitation. Much social science research has a critical role to play in exploring and questioning social, cultural and economic structures and processes (for example relating to patterns of power and social inequality), and institutional dynamics and regimes that disadvantage some social groups over others, intentionally or not. Such research results may have a negative impact on some of the research subjects. Principles of justice should, however, mean that researchers would seek to minimise any personal harm to such people. Secondly, researchers should also consider how to balance the potential of immediate or short-term risks to research subjects against longer-term gains to future beneficiaries. It is the responsibility of the research proposers to make such a case in detail to an REC."

In other words, you can harm a social group, but must minimize harm to an individual. This sounds like something written by a sociologist, intent on carving out some space for his discipline alone.

Hunter: "You seem to believe that no research should be permitted until it is proven harmless."
No, this is neither what I believe nor what I have stated above. I believe that no research should be permitted if it is significantly unethical.


There are two key points there:
1. I think mildly unethical research may be permitted to proceed in some cases.
2. Ethics doesn't merely reduce to avoiding harm; there are other important considerations such as autonomy and justice, among others. Even if research is harmful there may be reasons to let it proceed.


Schrag: As you know, I am concerned about the protection of autonomy. I don't know what "justice" means in this context, since the above-quoted passage from the Research Ethics Framework is the only use of the term, and it equates justice with minimizing harm.

Hunter: "But I do think (with Blackstone) that I have the right to ask him the story of his life, and that it is up to the would-be regulators to explain why I should not be allowed to do so. You have yet to do that."

Alright, I will give the argument in a straightforward fashion:
Step 1, the general argument:
1. Being a researcher is a profession.
2. Professions carry with them certain ethos and duties dependent on their roles and clients.
3. The specific duties of researchers entail a high level of concern and care for their research participants, since these individuals are for the most part taking the risks involved in research.
Conclusion: Researchers have a professional obligation towards their research participants.


Schrag: I'm with you so far.

Hunter: But this doesn't get us to independent review, that takes another argument.

Several arguments for independent ethical review:
A. Past harms
1. Researchers have a professional obligation towards their research participants.
2. In the past researchers have failed, sometimes spectacularly, to uphold that obligation.


Schrag: I dispute this premise, at least in regard to oral history.

I've been investigating this topic for about three years, and the only case I know of when a historian seriously failed his narrators is that of William Sheridan Allen's The Nazi Seizure of Power: The Experience of a Single German Town, 1930-1935, first published in 1965. Allen interviewed residents of the town of Northeim, Germany, about their lives during Hitler's rise to power. He promised to keep secret the names of his informants and of the town itself, using pseudonyms in the published work. But soon after the book was translated into German, a German magazine disclosed the real name of the town and many of Allen's narrators.

This was clearly a failure, though I don't know enough about the case to pronounce it a "spectacular" failure. But one failure is a rather thin record of abuses. Nor is it clear that IRB review would have prevented this failure. And the victims were, after all, genuine Nazis.

(Where oral historians regularly fail is in their obligation to serve other researchers by archiving their interviews. I still owe George Washington University copies of some of my Metro interviews, more than a year since my book was published. The solution to this problem is more funding for oral history.)

I must leave other disciplines to defend their own records, but I will say this: I wish ethicists would give The Tearoom Trade a rest. However troubling you find Humphreys's methods, they were sufficiently exceptional that they should not be the basis of generalized rule-making.


Hunter: 3. One way to minimise this is to have independent review of their research.


Schrag: I dispute this premise as well. I see no evidence that independent review improves the likelihood that researchers will honor their ethical obligations, and a fair amount of evidence that it reduces that likelihood. For example, in January I noted an IRB's demand that a historian destroy interview tapes. In March I described the ways that IRB-mandated training distorts the ethics of my profession. And I just noted William Burman's finding that IRBs make consent forms harder to read. While I hope to do more systematic research on this issue, there are enough horror stories around to suggest that independent review is at least as likely to do harm as good. I hope you will take the time to read through my half-year of blogging on this subject.


Hunter: Conclusion: research should be independently reviewed.


Schrag. Even if I granted premises 2 and 3, this conclusion would not follow, any more than the conclusion that research should be banned outright. You would need to show that independent review minimizes ethical breaches without unduly harming other interests, such as the researcher's and the narrator's rights to free inquiry and expression. American jurisprudence holds that "even though the governmental purpose be legitimate and substantial, that purpose cannot be pursued by means that broadly stifle fundamental personal liberties when the end can be more narrowly achieved. The breadth of legislative abridgment must be viewed in the light of less drastic means for achieving the same basic purpose." (Shelton v. Tucker, 364 U.S. 479). I consider this declaration not merely a legal ruling, but a moral one, in that it upholds human dignity and freedom.


I will skip the remaining syllogisms because my objections are the same. I don't see evidence that IRB review reliably improves the ethical content of research, but it does reliably delay and distort legitimate inquiry.

Hunter: I am suggesting no limitations on the freedom of speech. I am proposing limitations on the freedom of researchers but that is a different thing than freedom of speech.

Schrag: You want me to get permission before I talk to other people. How is that anything other than a limitation on the freedom of speech?

Hunter: Let me put back the question to you:
Do you think history research can be unethical?


If the answer is yes, then why would you oppose independent review as a means of reducing unethical research?

Schrag: Historians occasionally break all sorts of ethical norms. They plagiarize. They falsify data. They steal books from the library, though not, perhaps, with the rapacity of ethicists. But these breaches are rare enough that the profession handles them as they arise, by shaming or firing the offenders as they are caught. We have not imposed prior review on every publication and every trip to the library. Oral historians deserve the same freedom.

David Hunter said...

Bolding, excellent idea. I'll use it too for quotes. I should also say I am going to go silent for a bit after this post; I am off to a conference tomorrow and won't be back until next week.

Schrag: Not in the work I've done. Right now I'm in the early stages of a book on the history of riot control. I'm still in the nineteenth century, but in 2005 I went through IRB approval so I could interview my brother about his National Guard service during Hurricane Katrina relief operations. At some point I may find some other people to talk to, but that's all I know.
Sounds like a fascinating book. Okay, clearly you aren't at the point of going to an IRB yet. One thing you could do (you certainly can in the British system) is apply with a broad set of questions around your topic, and then apply for a substantial amendment when you want to add new topics and/or people.

You don't need specific questions, and of course RECs/IRBs shouldn't ask you to provide the impossible. But some idea of general themes or anticipated areas of interest is helpful for the REC. That way they can anticipate whether there are likely to be more ethical issues: suppose you were interested in the history of sexual deviancy, for example; there are likely to be a few more issues raised there than in the history of tea cosies. (A British woollen cover for a teapot. Yes, the Brits are mad.)

Schrag: I'd be very interested to learn who exactly had power in the drafting of the framework. I'm working on the history of recommendations and regulations in the United States; perhaps someone in the U.K. can take up the story there.
The framework was developed in a two-year consultation process, from what I have read about it. The main players were of course the Economic & Social Research Council. I expect sociologists did dominate, although the consultation was reasonably wide, and I believe it wasn't just social scientists drafting it. However, I am not sure how much input went into it from a history perspective, simply because I am not sure who funds much of the history research. I would guess some of their funding would come from the ESRC, but the bulk would come from the Arts and Humanities Research Council, which rather than creating its own research ethics framework has endorsed the ESRC REF.

Schrag: As you know, I am concerned about the protection of autonomy. I don't know what "justice" means in this context, since the above-quoted passage from the Research Ethics Framework is the only use of the term, and it equates justice with minimizing harm.
Right, sorry, I should have explained myself more. Most research ethics frameworks buy into something like the four principles approach. On this approach, there are four prima facie moral principles:
1. Respect for Autonomy
2. Respect for Beneficence
3. Respect for Non-maleficence
4. Respect for Justice

None of these principles is pre-eminent; what the outcome ought to be in each situation depends on the situation, and often on balancing between these principles. This view has gained widespread acceptance in regards to bioethics and public policy (even if many of us philosophers have our theoretical doubts about the model), in part because it provides a set of claims that people with wildly different ethical preconceptions can nonetheless agree upon. In this context justice is primarily about the distribution of research and its results. So it isn't about harm or benefit directly; it is about the distribution of that harm or benefit. It is somewhat neglected in research ethics (I think because justice issues tend to point to systemic concerns), but you can see cases of it in complaints about trials of drugs that would be unaffordable in the third world being carried out in the third world (since the research participants gain no long-term benefit from the treatment). You can also see it in concerns about over-researched populations, where the concern is that the burden of research is not being evenly spread.

Schrag: I dispute this premise, at least in regard to oral history.

Right, but I was making a more general argument about research, rather than oral history research.

Schrag: I see no evidence that independent review improves the likelihood that researchers will honor their ethical obligations

There are some obvious explanations for this lack of evidence. The first is that often the outcomes of ethics committee meetings are considered confidential. The second is that measuring this is very difficult. However, while I will happily agree that ethics committees get it wrong some of the time, as per your examples, they get it right a lot more often. While the evidence may not be easily available, each time a consent form was improved by a REC, that was a benefit; each time a potential harm was averted, that was a benefit; and so on. Is there evidence for this? Perhaps not publicly available evidence (another interesting research project, thanks!), but there is a vast amount of anecdotal evidence.

And some evidence can be given to show that committees are more reliable decision makers than individuals (I've recently written on this here: [View PDF], in the Journal of Medical Ethics).

Schrag: You want me to get permission before I talk to other people. How is that anything other than a limitation on the freedom of speech?

It is a limit to the right to research... which is not the same thing as speech even if it happens to involve speaking.

I guess I still am not convinced that there is a significant difference between oral history research and other sorts of research. It is the potential for serious ethics breaches that most bothers me.

Basically, your points in favour of no review, or only departmental review, are:
1. It is difficult to predict the content of the research beforehand.
2. The harms involved are lower than in other areas, especially biomedical sciences.
3. There is little evidence that IRBs/RECs are the best/most proportionate means of minimising ethical breaches.

My responses are:
1. Some rough outline of topic or area must be available even if the specifics are not. So I think the REC can still make some decision.

2. While the direct physical risks to the participants are infinitely lower (it is hard to see how an interview can directly kill you), the indirect physical risks and social risks are still present and potentially quite high. While sometimes it may be justified to let the participants choose to run those risks, I'm not convinced that a researcher, with the obvious conflict of interest that they want to conduct the research, is the best person to make that decision. Furthermore, harms are only one part of the ethical equation; autonomy is also important. While you are right that a common consent form can help with this, the devil is in the details, and without those details being scrutinised, history research could still be done in a deceptive fashion, for example.

3. I agree there is little overt evidence, but I think there are some reasonably obvious reasons for this, as I argued above. I'm inclined to think that, despite it being difficult to measure, even if ethics committees have made some research ethically worse, they have also made some research ethically better; it would take a stunning level of incompetence to have only made things worse. Indeed, it would be amazing if individual researchers could be ethically perfect, but get a bunch of them together into a committee and suddenly they can only make things worse.

I'm all for reform of the current system, and its moving away from a system designed and staffed, for the most part, by medics. I just don't see the need to throw the whole thing away; I'm inclined to tweak it instead.

Zachary M. Schrag said...

It seems that no lack of evidence will dissuade you from conjuring hosts of deceptive historians, and no amount of evidence will persuade you that ethics committees attain "a stunning level of incompetence" frequently enough to keep this blog in business.

The paper on "The experiences of Ethics Committee Members" shows only that the more people who review a project, the more likely they are to demand modifications. It does not show that these are modifications for the better. Too many cooks spoil the broth, and too many inexpert ethical reviewers lead to the kind of absurdity I document.

ZMS