By Jason Chin, adjunct professor
This commentary was first published in The Hill Times, Monday Nov. 30, 2015.
A recent large-scale study has found that a great deal of science admissible in Canadian and U.S. courts is unreliable.
Brian Nosek of the University of Virginia and his colleagues recently attempted the Herculean task of determining whether modern psychological science is reliable. To do so, they tried to replicate the results of 100 psychology studies already published in prestigious peer-reviewed journals.
The collaboration’s findings, published in the journal Science and now widely covered in the media, were disturbing. Fewer than half of the studies Nosek’s team redid produced the same results as the originals, despite copying the original works’ methodologies. Many think the problem is just as extensive in other areas of science.
The ramifications of Nosek’s work go well beyond science. Every day, Canadian courts rely on accurate science to resolve disputes fairly. For example, courts use science to answer questions ranging from whether video games cause violence to whether DNA links a suspect to a crime scene. You could even say the problem is more immediate in law: science self-corrects, but defendants often get just one kick at the can in situations that are literally life and death. So what does it mean for law if the majority of science’s findings are unreliable?
Back in 1993, the U.S. Supreme Court addressed this problem in its landmark decision in Daubert v. Merrell Dow, holding that judges were not scrutinizing science closely enough. Under the old standard, scientific evidence was admissible in court if it had reached “general acceptance” in the scientific community.
Daubert changed the game. When confronted with scientific evidence, trial judges are now tasked with a gatekeeper role: the trial judge independently evaluates the science behind the evidence before it reaches the jury or affects the ultimate decision. He or she is specifically directed to ask (1) how the finding has been tested, (2) what its error rate is, and (3) whether it has been published in a peer-reviewed journal.
Canadian courts have readily adopted the Daubert standard, and the trial judge’s gatekeeper role has only grown in this country. Indeed, in a 2015 Ontario Court of Appeal decision, Chief Justice George Strathy strengthened the Ontario position: “There has been growing recognition of the responsibility of the trial judge to exercise a more robust gatekeeper role in the admission of expert evidence…”
So what does it mean for the law that more than 50 per cent of peer-reviewed and published findings are irreproducible? It means that the current U.S. and Canadian position on scientific evidence does not go far enough. Yes, judges must go beyond mere acceptance of a finding and evaluate the scientific procedure used. But the procedures judges now examine—testing, error rates, peer review—all failed to prevent Nosek and colleagues’ startling findings.
Last year, in an article published by the American Psychological Association (APA), I suggested that in the courtroom, where life and death are literally on the line, we should apply science’s best practices rather than the ones that produced the current crisis. Given what we know now, those recommendations are even more important. These practices are not my own – they derive from the work of watchdog scientists and professors who have long warned about what a study like Nosek’s might find.
Three of these recommendations can be easily implemented now. First, progressive academic journals are beginning to require that researchers “pre-register” their studies. That is, researchers must publicly record the parameters of their experiments in advance, which ensures they don’t engage in the misbehavior of stopping data collection as soon as a publishable effect is found (i.e., moving the goalposts). Judges should ask whether the research offered to the court was pre-registered, and if it wasn’t—why not?
Second, judges should ask expert scientific witnesses whether the research they are relying on has been replicated. Replication is the foundation of good science, and a failure to replicate is a serious red flag.
Finally, judges should ask expert witnesses about any unpublished research that either confirms or casts doubt on the research at issue. A major contributor to the current problem is that a great deal of high-quality research never sees the light of day because journals prefer novel, high-impact findings.
Despite the alarmist tone of much of the coverage of Nosek’s work, many within the sciences take solace in knowing that capable researchers are working on solutions to the problem. It is time for legal scholars and practitioners to follow suit to ensure our legal disputes are resolved with the best science has to offer.
Jason M. Chin is an adjunct law professor at the University of Toronto and an Ontario lawyer. He authored Psychological Science’s Replicability Crisis and What It Means for Science in the Courtroom.