Does peer review work? Thoughts on conference session selection.

So here’s something interesting from my inbox!

Only 8% of members of the Scientific Research Society agreed that “peer review works well as it is” (Chubin and Hackett, 1990, p. 192).

“A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research.” (Horrobin, 2001).

Horrobin concludes that peer review “is a non-validated charade whose processes generate results little better than does chance” (Horrobin, 2001). This has been statistically proven and reported by an increasing number of journal editors.

I got this email from the International Symposium on Peer Reviewing: ISPR being organized in the context of The 3rd International Conference on Knowledge Generation, Communication and Management: KGCM 2009, which will be held on July 10-13, 2009, in Orlando, Florida, USA.

I assume that I got on their mailing list because of my post from almost a year ago, Conference 2.0 – changing how sessions are selected. In that post, I mentioned the KGCM conference and how these experts are investigating how people learn at conferences and are questioning how conference sessions are selected.

In the past year, I’ve read many blogs wondering about conference sessions and the selection process. Many use a form of “peer review”, meaning that people who are presumably peers of the paper author or conference session presenter rate the submission and determine whether or not it gets selected. Academic journals use peer review as a means to claim that their articles are superior to those found in lay magazines, books, or other sources. And many conferences use this same technique to select sessions, whether or not there are academic proceedings that result from the conference.

So this research not only calls into question the methodology of selecting conference sessions, it questions the whole academic tradition of journals representing the highest level of publishing and “truth” about current research in any field. The researchers find that the principle of peer review is well regarded, but that the methodology and implementation are flawed. Doesn’t that sound familiar…

If peer review is a flawed process, what does that say about conference session selection? You have to ask yourself: would a completely open selection process, with submissions and reviews visible to everyone, be more in keeping with the principle of fairness and with making sure best practices and research are shared to benefit all?

Would an open selection process for conference presentations be more or less prone to favoritism, self-promotion, populism, and cronyism than the current process? Is that just throwing the baby out with the bathwater?

If you are interested in joining a conversation about improving conferences, check out this social network created by the Association for the Advancement of Computers in Education (AACE) — Spaces of Interaction. They are having an online conversation Feb 18-20 with interactive live sessions.

And by the way, what does this say about school librarians who insist that students use only peer-reviewed journal articles for their research? Do they have a leg to stand on anymore?

Sylvia

2 Replies to “Does peer review work? Thoughts on conference session selection.”

  1. Hi Sylvia.

    Interesting. One organization with which I work, NYLearns, uses a collaborative peer review process to select instructional resources for publication in its online database. The process is guided by a formal set of review standards, which shape the reviewers’ analysis and assessment; those assessments are then shared with the person who submitted the resource. Comments and reflections go back and forth between the reviewers and the original submitter until a consensus is reached.

    I think this type of guidance and mediation would help; I don’t know, though, whether that is what is currently done. I’ve never been involved with selecting journal articles, but I did run subcommittees for NECC selection a number of years ago, when the process was to gather a local group and go through paper submissions. There was some guidance, but it was basically a question of what the local group found relevant and interesting.

  2. I think a lot of the various processes people use are simply historical artifacts and no one questions them.

    In the previous post I mentioned, I linked to research suggesting that peer review with a basic scoring system (best aggregate score wins) tends to favor bland, homogeneous sessions or papers (there’s a small sketch of this effect at the end of this reply). Using a consensus model might help with that, although a submission that seriously challenges the status quo would have to find a “champion” willing to stand up for it against colleagues saying it’s too radical or out of left field. I think you’d have to give a lot of thought to how free your reviewers are to disagree with each other without fear of being labeled troublemakers.

    Like you say, the criteria are often not well specified: “relevant and interesting” may well get you the latest fads and buzzwords, or whatever the reviewers think is “new”. Unfortunately, not all new ideas are great ideas, and you might leave a lot of the tried and true on the cutting room floor.

    It’s an interesting problem!
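    To make that scoring point concrete, here is a small Python sketch. The numbers and the “champion” rule are purely hypothetical, made up for illustration; they are not taken from the research linked in the earlier post or from any real conference’s process. The point is just that a plain average lets a proposal everyone rates as merely “fine” beat one that splits the panel, while a rule that lets a single enthusiastic reviewer act as a champion can flip that ranking.

        # Hypothetical illustration: "best aggregate score wins" vs. a toy "champion" rule.
        # Ratings and the champion rule are invented for this sketch.

        def mean_score(ratings):
            """The basic aggregate-score rule: highest average rating wins."""
            return sum(ratings) / len(ratings)

        def champion_score(ratings, top=5):
            """Toy alternative: a submission with at least one top rating (a 'champion')
            ranks ahead of any submission without one; ties broken by the average."""
            return (1 if max(ratings) >= top else 0, mean_score(ratings))

        # Reviewer ratings on a 1-5 scale (invented numbers).
        bland   = [3, 4, 3, 4]   # safe, familiar topic: everyone shrugs "fine"
        radical = [5, 5, 1, 2]   # challenges the status quo: two reviewers love it, two reject it

        print(mean_score(bland), mean_score(radical))      # 3.5 vs 3.25 -- the bland one wins
        print(max([bland, radical], key=mean_score))       # [3, 4, 3, 4]
        print(max([bland, radical], key=champion_score))   # [5, 5, 1, 2] -- once a champion counts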
