Flawed Research: The Hidden Weakness of Peer Review & More Falsified Stem Cell Data


The point of this post is not stem cell research; it is the flaws in the peer review process.  However, I need to start with a little timely news and some background first.

In yet another blow to stem cell research, New Scientist reports today that one of the best-known stem cell papers of the past five years, which described adult cells that seemed to hold the same promise as embryonic stem cells, is based on likely flawed (or false) data.

Everyone will recall the recent Korean stem cell-cloning mess. There, Hwang Woo Suk admitted that none of the 11 tailor-made cell colonies he claimed to have created actually existed (and that women were coerced into donating their eggs).  More details of this fiasco are reported here, and the retraction of the paper from the journal Science here. 

The latest stem cell mess seems to be yet another garden-variety data falsification.  The new paper presents data that was later found to have been used, verbatim, in another paper from the same group, but for different cells (coincidence? methinks not!). 

What’s more interesting to docinthemachine here are the inherent flaws in the peer review system.  It’s just like that old bad legal joke: a jury of your peers means a group of people too stupid or lazy to get out of jury duty (a sad commentary on civic duty and the legal system, but hey, that’s another post for the politico-blogs).  Unfortunately, the medical peer review process shares some similarities.  I have served as a reviewer for many journals and have reviewed original research since I was a resident, through fellowship, and on as an assistant professor.  I have several disquieting observations. Since I have an extensive network of colleagues, mentors, and mentees (is that a word?) from many different departments and universities (many of whom have simply sought my advice), the following cannot be taken as representative of any particular person I have worked with.  Sorry, not naming names here. 

Where Peer Review Fails:

1) The trickle-down review.  A journal reviewer is chosen to review submitted manuscripts based on his past academic accomplishments, expertise in the area of research, and reputation.  However, it is commonplace for academic faculty members, division directors, and departmental chairs to pass the reviews on to junior faculty and fellows-in-training.  On the positive side, when done collaboratively this can be an excellent learning opportunity for the junior person.  They do extensive research on the topic and prepare an exhaustive report for the senior faculty member, who sits down with them and prepares the final review, teaching all along the way.  Yes, but we’re not in Kansas anymore, Toto!  On the bad side, the lazy senior faculty member passes it on to a junior (lacking the expertise or knowledge) who does his best but does an inadequate job.  The lazy mentor simply says thanks, signs his name, and passes the crap off as his own.  Yep, seen it happen. 

2) The lazy reviewer.  A journal submission goes to more than one reviewer, and often the reviewers see each other’s comments.  On many occasions I have completed my review with a dozen points for the author to fix, clarify, or amend, sometimes including very serious research flaws.  Heck, I take the job seriously.  All the more reason I am shocked to see the other reviewer chime back with 2-3 lines of comments, at least one of which is about spelling or grammar.  They just did not give it much effort.  This reminds me of the couples who come in where the wife speaks little English and the husband translates.  I ask a question; he translates.  She goes on for 5 minutes, passionately answering him; she sounds pretty concerned to me.  He turns to me and replies, “she says no.”  Something is getting lost in the translation, and it’s not accidental.  I remember a seminal paper I wrote along with co-authors who are world-famous pioneers and leaders in reproductive medicine.  This thing was infamous in my circles since it went through more than 35 major revision drafts over 7 years before my mentor judged it complete.  (For those interested, it was a reappraisal of the basic theory of the role of estrogen in human follicular and egg development, refuting classical theories: “Are Estrogens of Import to Primate/Human Ovarian Folliculogenesis?” from Endocrine Reviews, which can be found here.)  Well, I remember my shock when the reviewer’s comments came back with pages of suggestions!  This guy did MASSIVE research, double-checking and commenting on minutiae in the paper, and did 1000% of his job as a reviewer (we went on to address all of his concerns). 

3) Bad reviewer choice A.  Some journals have a shortage of good reviewers.  As a result they have a second-tier group who just don’t pass muster in terms of qualifications.  This happens more with very clinically oriented journals.  A related issue arises in some of the newer surgical journals.  Many of the leading surgeons on the “cutting edge” (sorry) are amazing pioneers but not necessarily researchers.  Many of the leading academics are amazing researchers but not pioneers in these fields.  Therefore the reviewer tends to be only half of what’s needed. 

4) Bad reviewer choice B.  The reviewer gets a paper to review that he is really not an expert on.  In the best case he taps out and declines the review.  In the worst case he agrees to do the review but misses a lot.  I have seen this one myself, getting calls: “Hey Steve, you’re an expert on so-and-so. I got this paper to review but don’t know anything about this. Can you give it a look?”

5) The editor overruling the reviewer.  The reviewer finds errors and recommends rejection. The editor accepts the paper anyway, and it goes on to be revealed as a bad study.  Seen it happen.

I even remember a teaching conference we ran briefly when I was at Yale called “bad research papers.” We would pull papers that really had flaws and have the fellows comment on what was wrong, to teach methodology and so on. 

All in all, the medical literature is a living, breathing organism.  That’s why internet research by inadequately trained or educated readers can be so faulty.  You can find SOME paper to support either side of an argument or hypothesis.  A real expert knows them all and weighs the bunch on their merits and methodology to try to come to the best conclusion.  A single paper taken out of the context of the entire literature is just a piece of the puzzle.  If you don’t know research methods and the other studies, you are being set up to draw incorrect conclusions.  This is one of the problems in the courtroom, where in expert testimony papers are weighed equally.  It’s the reason why I never draw conclusions in areas where I am not an expert. 

In a sense this is the basis of the trend towards evidence-based medicine.  Instead of relying on just an expert’s or committee’s opinion, research is judged and ranked according to its quality, conclusions are drawn, and then the results are applied to the individual patient.  It’s a mantra for the modern specialist!  You can read a nice piece on EBM from the BMJ here, including this definition:

Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice. Increased expertise is reflected in many ways, but especially in more effective and efficient diagnosis and in the more thoughtful identification and compassionate use of individual patients’ predicaments, rights, and preferences in making clinical decisions about their care. By best available external clinical evidence we mean clinically relevant research, often from the basic sciences of medicine, but especially from patient centered clinical research into the accuracy and precision of diagnostic tests (including the clinical examination), the power of prognostic markers, and the efficacy and safety of therapeutic, rehabilitative, and preventive regimens. External clinical evidence both invalidates previously accepted diagnostic tests and treatments and replaces them with new ones that are more powerful, more accurate, more efficacious, and safer.

For those wishing to go deeper into this area, check out the extensive resources at UW EBM here and the categories of research levels of evidence here. 

You can read my thoughts about the future of electronic medical research and publishing here.

MORE: There is a great discussion of the flaws of the peer review process in the thread at Slashdot.  Obviously, overt fraud and falsified data are at the top of the crap heap.
