By Michael Hoenig - New York Law
Journal - June 9, 2014
In my column last month, "'Unreliable'
Articles, 'Trial By Literature' Revisited,"1
I reported on the increasing phenomenon of experts testifying
about the content of all kinds of published articles they had
not authored and results of research in which they had not
participated. It is as if the hearsay article is "testifying"
via the conduit of the expert. One problem is the inability of
judges or fact-finders to truly gauge the reliability of the
hearsay. Another is the unavailability of the author to be
cross-examined, a trial mechanism that helps to test
credibility, identify weaknesses and expose unreliability of
content, methodology and opinions.
In civil litigation, the party's
challenge is not always to prove that the opponent expert is
lying. Testimonial probity is not always weighed in a
black-and-white contrast. Shades of gray often permeate the
qualitative fabric of expert testimony. Which witness is more
convincing? Which testimonial position is more logical, more
reliable, more persuasive? An expert can testify impressively
and his qualifications may be excellent. He may even believe
what he is saying. But if he relies upon and spouts what is
essentially "junk science," extracted from someone else's
writing, the testimony is junky nonetheless.
One method of revealing junky
testimony (or its limitations) is to demonstrate unreliability
or lack of trustworthiness of the hearsay. But if opposing
counsel fails to mount an incisive challenge to the hearsay, the
testifier could get a free ride. Similarly, if courts willy-nilly
infer reliability of the hearsay simply because it was
published, the courts are ignoring realities of the publishing
marketplace, hampering the justice system in its search for the
truth and defaulting on their judicial gatekeeping task.
Often, proponents of such
hearsay testimony invoke the magic words "peer review" to
sanctify the article with a kind of hallowed status that would,
they hope, dispense with the need to prove the article's
reliability. Some courts buy it. After all, in Daubert
v. Merrell Dow Pharms.,2 the U.S.
Supreme Court, in elaborating the gatekeeping task, said that a
"pertinent consideration" in determining whether a theory or
technique is scientific knowledge that will assist the trier of
fact "is whether the theory or technique has been subjected to
peer review and publication." But, as last month's column showed
and as prior articles revealed,3 peer review is
riddled with problems, qualitative shortcomings, unjustified
conclusions, false findings, and misleading information. Indeed,
last month's column reported on Harvard science journalist John
Bohannon's experience in submitting for publication a hoax
science paper containing grave errors to hundreds of
"open-access" scientific journals. In Bohannon's sting
operation, more than half the journals accepted the paper. Some
60 percent did not conduct peer review. Of the 106 journals that
did conduct peer review, some 70 percent accepted the
error-ridden paper! That column also identified additional sources
documenting peer review's frailties.
This writer received many
comments in the wake of last month's column. Not surprising!
"Trial by literature," unreliable hearsay, junk science and
expert testimony are pivotal challenges at the core
of complex litigation. In particular, I want to share with
readers two sources on peer review cited to me by a superbly
skilled litigator conversant with intensive
"trial-by-literature" wars. Together with the information
sources cited and discussed in my prior articles, these
additional resources should equip the serious litigator with
enough positive and negative information on peer review
practices to help mold strategic thinking and critical advocacy.
As I wrote last month, "article worship" presents dangers that
threaten the reliability standard as never before.
One resource recommended to me
and not mentioned earlier is a fine law review article by
University of Miami Law Professor Susan Haack titled, "Peer
Review and Publication: Lessons For Lawyers."4
Published in Spring 2007, this article developed from a talk
Haack gave at the National Institute of Justice Conference on
Science and the Law in September 2005, in Florida. More about
some of Haack's important observations later.
A second resource is the recent
development and launch by several concerned scientists of a new
post-publication peer review system called "PubMed Commons,"
housed on the often-accessed National Center for Biotechnology
Information (NCBI) biomedical research database. PubMed Commons
was announced on Oct. 22, 2013. It allows users to comment
directly on any of PubMed's 23 million indexed research
articles. For a description of this new PubMed venture, the
reader can consult Aimee Swartz's October 2013 article in The
Scientist Magazine, called "Post-Publication
Peer Review Mainstreamed."5
Swartz observes that studies
have shown that peer review "does not elevate the quality of
published science" and that many published research findings
"are later shown to be false." Part of the problem (also
addressed in my May column) is that published works need to be
exposed to comment, critiques, refutations, and identification
of errors and weaknesses. Although, nominally, scientists are
welcome to challenge published findings, typically by
contacting the authors directly or by writing a letter to the
journal's editor, these are lengthy processes whose results "likely will
never be heard or seen by the majority of scientists." Thus,
"most scientists do not participate in formal reviews."
Although a small number of
scholarly journals have launched online fora for scientists to
comment on published materials, there is inconvenience to
scientists in commenting journal by journal. If one wants to
comment on a paper in the journal "Nature," one has to go to the
Nature site, find the paper and comment. If it is a PLOS paper,
one has to go to the PLOS site. These efforts are major
investments of time, particularly when people may never see the resulting comments.
The several scientists behind
development of PubMed Commons were frustrated by the
inefficiencies and the unavailability of published criticism of
scientific and technical papers. The Commons "allows users to
comment directly on any of PubMed's 23 million indexed research
articles." Such an organized post-publication peer review system
could help "clarify experiments, suggest avenues for follow-up
work and even catch errors," said Stanford University's Rob
Tibshirani, one of the Commons developers and a professor of
health research and policy, and of statistics. Such post-publication
review, Tibshirani said, "could strengthen the scientific process." Approximately 2.5 to
3 million people access the online resource each day.
Researchers thus may have a resource to check out whether
published papers have stimulated objections or controversy or,
perhaps, have been retracted.
Haack's 2007 law review article
elaborates many of the limitations inherent in peer review
identified in my columns. Her article sketches the origins of
and many roles peer review now plays; the rationale for
pre-publication review and its shortcomings as a quality control
mechanism; the changes in science and scientific publication
that have put the peer-review system "under severe strain";
recent examples of flawed or even fraudulent work that passed
peer review; and the role peer review ought to play in courts'
assessments of "reliability."
Some persons are "tempted to
exaggerate" regarding the virtues of pre-publication peer
review. Instead of viewing it as a "rough-and-ready preliminary
filter," some consider it a "strong indication of quality." But,
in reality, "the system now works very imperfectly."6
Peer review cannot be expected to "guarantee truth, sound
methodology, rigorous statistics, etc." Scientific editors have
stressed that they and their reviewers "have no choice but to
rely on the integrity of authors."7 I would add that
when the author is not present to testify and be cross-examined,
the testifier's parroting of the hearsay can create a
testimonial integrity gap that should signal gatekeeping courts
to be cautious.
Citing a noted editor, Haack
describes the review process roughly like this: An editor
classifies submitted articles into self-evident masterpieces, obvious
rubbish, and a remainder needing careful consideration; the last
group is the large majority. The editor then chooses one or two
reviewers to examine each selected paper, typically armed with a
checklist covering aspects of style, presentation and
certain kinds of obvious error. The reviewers are given a time
limit—often no more than two weeks—to respond with their
assessment and recommendation. Reviewers "spend an average of
around 2.4 hours evaluating a manuscript." Many journals don't
check the statistical calculations in accepted papers and
reviewers are in no position to repeat authors' experiments or
studies, which ordinarily have taken a good deal of time and/or
money. Acceptance rates vary. Where the acceptance rate is low,
most of the rejected papers submitted to the "most desirable"
journals eventually appear in some lower-ranked publication. A
paper "may have been rejected by ten or twenty journals before
it is finally accepted."8
With more and more papers
submitted to more and more journals, the quality of reviewers
and the time and attention they can give to their task "is
likely to decline." Prestigious journal editors have expressed
major concerns. Richard Smith, former editor of the British Medical Journal (BMJ), wrote that
peer review is "expensive, slow, prone to bias, open to abuse,
possibly anti-innovatory and unable to detect fraud." Drummond
Rennie, of the Journal of the American Medical Association (JAMA)
wrote: "there seems to be no study too fragmented, no hypothesis
too trivial, no literature citation too biased or too
egotistical, no design too warped, no methodology too bungled,
…no argument too circular, no conclusion too trifling or too
unjustified, and no grammar or syntax too offensive for a paper
to end up in print."9
Haack advises: The fact that a
work has passed pre-publication peer review is "no guarantee
that it is not flawed or even fraudulent; and the fact that it
has been rejected by reviewers is no guarantee that it is not an
important advance." Publication does, in the long run, make the
article available for the scrutiny of other scientists. This
increases the likelihood that eventually any serious
methodological flaws will be spotted.10 Haack's
discussion of how this all impacts upon the quest for
"reliability" in the courtroom is too lengthy to review here.
But she posits a "whole raft of questions" lawyers should ask
that might throw light on the significance of publication in a
peer-reviewed journal.11 These can be of value to
litigators preparing to challenge the hearsay article.
The courts' task is to gatekeep
expert testimony to ensure that scientific evidence is relevant
and reliable before it is ruled admissible. Anything less
obscures the search for the truth and distorts the justice
system. Somehow, the professionally-reliable-hearsay
exception—permitted only because it is supposed to be
trustworthy—has morphed into a "trial by literature" stampede in
which expert testifiers use all manner of hearsay articles to
quote or to bolster their testimony. Often, the magic words
"peer review" are flashed as a talismanic
Sometimes, the tactic works. The
search for the truth deserves better, however. Litigators have
been furnished with abundant information about peer review's
shortcomings. Armed with such knowledge, they can fashion
compelling advocacy. A battle over reliability can dictate the outcome.
Michael Hoenig is a member of Herzfeld & Rubin.
1. M. Hoenig, NYLJ, May 12,
2014, p. 3 (also available on LEXIS).
2. 509 U.S. 579, 593 (1993).
3. See M. Hoenig, "Testifying
Experts and Scientific Articles: Reliability Concerns," NYLJ,
Sept. 16, 2011, p. 3 (citing prior articles on experts' use of
unreliable hearsay and scientific papers questioning the
reliability of biomedical articles, and reporting serious
shortcomings even in those that were peer reviewed);
"Gatekeeping of Experts and Unreliable Literature," NYLJ, Sept.
12, 2005, p.3. The articles are also available on LEXIS.
4. Susan Haack, 36 Stetson L.
Rev. 789 (Spring 2007).
5. Aimee Swartz,
"Post-Publication Peer Review Mainstreamed," The Scientist Oct.
6. Haack, supra n. 4, 36 Stetson
L. Rev. at 796.
7. Id. at 797.
8. Id. at 800.
9. Id. at 802.
10. Id. at 808-809.
11. Id. at 814-815.