- Published: November 19, 2012
When people imagined the “lie detectors” of the future, they probably did not picture an enormous brain scanner. Yet the closest thing we currently have to a lie detector is an MRI machine. In one fMRI-based technique, called the Guilty Knowledge Test, subjects are placed in the scanner and asked either relevant questions that test their truthfulness or neutral questions. If a subject exhibits significantly different neural activity for the relevant questions than for the neutral ones, that suggests he or she has “guilty knowledge” about whatever the question was probing. Although critics tend to focus on the technology’s susceptibility to “false positives” (detecting a lie where there is none), we should be just as worried about the opposite, false negatives: failing to detect a lie, and thereby finding the guilty innocent. For instance, Ganis et al. (2011) used fMRI to detect deception with 100% accuracy until subjects employed simple countermeasures, associating a covert action (such as clandestinely moving the left index finger) with the irrelevant stimuli. Accuracy then fell to only 33%, hardly enough to inspire confidence in the reliability of the technology outside the laboratory.
There have already been a number of court cases in which defendants have argued for the admission of brain-based lie detection as proof of their innocence. In the landmark case United States v. Semrau, Dr. Lorne Semrau appealed his conviction for health care fraud to the U.S. Court of Appeals, arguing that exculpatory evidence, namely the results of an fMRI lie-detection test, should have been admitted at trial. The court rejected the fMRI evidence and affirmed Semrau’s conviction because the technology had not been sufficiently tested in “real-world” settings and was not yet deemed reliable. Similarly, in a recent Maryland murder trial, the judge ruled that brain-imaging evidence was inadmissible. Gary Smith, accused of murdering a fellow Army Ranger in 2006, sought to introduce results from an fMRI test, in which he was asked questions such as “did you kill Michael McQueen?”, to prove his innocence, but the judge excluded the evidence. Again, the technology was deemed not well enough understood and of dubious reliability.
Although courts’ rejection of these new technologies currently averts the danger of false positives and false negatives, these cases, and the broader push for brain-based lie detection, may still be causing problems by tainting the credibility of neuroscience as a whole. There are many ways in which neuroscience could valuably inform the legal system, such as identifying neural bases of criminal behavior and offering new rehabilitative strategies. Lie detection, however, is probably not the right avenue for wielding that influence, because the technique is simply too unpredictable and not well enough understood. It is also somewhat suspicious that defendants such as Semrau and Smith sought out these brain-based tests and then argued to admit them as proof of their innocence, when they could easily have learned how to fool the system. Unfortunately, technologies like these may be turning judges away from admitting other neuroscience-related evidence that is better corroborated and could have greater societal impact.
Ganis, Giorgio, Peter Rosenfeld, John Meixner, Rogier Kievit, and Haline Schendan. "Lying in the scanner: Covert countermeasures disrupt deception detection by functional magnetic resonance imaging." NeuroImage. 55.1 (2011): 312-319. Web. 19 Nov. 2012.
Laris, Michael. "Debate on brain scans as lie detectors highlighted in Maryland murder trial." Washington Post [Washington DC] 26 Aug 2012. Web. 19 Nov. 2012.
McCalla, Jon Phipps. United States v. Semrau. United States Court of Appeals. Washington DC, 2012. Web. 19 Nov. 2012.
Shen, Francis, and Owen Jones. "Brain Scans as Evidence: Truth, Proof, Lies, and Lessons." Mercer Law Review 62.1 (2011): 861-883. Web. 19 Nov. 2012.