Deception Research at Michigan State University


Dr. Timothy R. Levine, Professor, levinet@msu.edu, https://www.msu.edu/~levinet/

Dr. Steven A. McCornack, Associate Professor, mccornac@msu.edu

Dr. Hee Sun Park, Associate Professor, heesun@msu.edu, https://www.msu.edu/~heesun/


Please contact one of us for article reprints or if you have questions. Both media and student inquiries are welcome.



For 40 years now, the Department of Communication at Michigan State University has been the home of innovative, social scientific research on human deception and deception detection. Early deception research at MSU is summarized in Gerald R. Miller and Jim Stiff’s (1993) book Deceptive Communication. This page focuses on the work of the current generation of deception researchers at MSU.


On this page you can read about truth-bias, Information Manipulation Theory (IMT), the veracity effect, the probing effect, sex differences in lie acceptability, how people really detect lies, and much more. We have summarized the highlights of the last 20 years of deception research at MSU and we offer some sneak previews of things to come.


Deception research is a contentious area of research, and there are clear camps, each championing different theoretical perspectives and different research methods. The current MSU approach is programmatic, theory-guided, predominantly experimental research with a deep concern for ecological validity and a strong preference for simple, elegant experiments that produce findings that replicate. Theoretically, we are highly skeptical of nonverbal leakage as the primary mode of deception detection and of Interpersonal Deception Theory (IDT) as a depiction of typical interactions involving deception. Our research can thus be understood as a counterpoint to, and to some extent a reaction against, these views.


Our current view is that, in the realm of deception, verbal content in context is much more important than nonverbal behaviors. All communication, and especially deception, takes place in a larger physical and social context. Communication, taken out of context, is often misleading. Because deception research usually ignores context, many findings do not generalize to communication in non-research settings. Our goal is a more context- and culture-sensitive understanding of deception and deception detection.


Levine is currently developing “Truth-Bias Theory” as an alternative theoretical perspective. Levine has also started a new program of deception research looking at cross-cultural deception, especially in Islamic countries. Park is currently interested in cross-cultural differences in deception, especially why, in the IMT paradigm, Asians tend to rate “honest” messages as less honest. McCornack continues his long-standing interest in deceptive message production.

 

Currently, deception research at MSU is funded by the National Science Foundation with Levine as PI.


This page summarizes some of the many findings and ideas that have come out of our lab.


Current NSF Sponsored Research


Our NSF sponsored research is part of a larger, multi-university, multi-national team, and the title is Collaborative Research: Interactive Deception and its Detection through Multimodal Analysis of Interviewer-Interviewee dynamics (SBE0725685).


The MSU team's role was to add to a database of videotaped truths and lies that could be used as the data in cue studies and as stimulus materials in detection studies. Using a variation on the Exline procedure, we videotaped interviews with subjects who had the opportunity to cheat on a trivia game to gain a cash prize. Lies in this situation are both unsanctioned (subjects decided for themselves whether or not to cheat, and if they cheated, whether or not to lie about it) and high stakes (the subjects were students who were suspected of cheating in federally funded research at a major university). While our initial proposal was to create 100 new tapes, we have now created more than 300. Preliminary results suggested that questioning strategy plays an important role, so the additional data involved experimentally varying the nature of the questioning in the interviews.


To date, all the tapes have been digitized and edited for use in subsequent studies. As of 3/09 approximately 170 tapes have been transcribed for linguistic analysis and analysis of nonverbal differences between truths and lies is well under way. The tapes have also been used as stimulus materials in more than a dozen detection studies (so far) in our lab at MSU.


The corpus of tapes created is probably the highest quality deception stimulus material in current existence due to the absolute certainty of ground truth, the lack of sanction, and the stakes connected to the situation.


Look for some really cool findings about the impact of questioning style on truth-bias and accuracy in the near future.


A list of the papers using various versions of the cheating data is provided below. Thanks, NSF!


Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confessions and denials: An initial test of a projected motive model of veracity judgments. Human Communication Research, 36, 81-101.


Levine, T. R., Kim, R. K., & Hamel, L. M. (2010). People lie for a reason: An experimental test of the principle of veracity. Communication Research Reports. (Accepted for publication).


Levine, T. R., Serota, K. B., & Shulman, H. (2010). The impact of Lie to Me on viewers’ actual ability to detect deception. Communication Research. (Accepted for publication).


Levine, T. R., Shaw, A., & Shulman, H. (2010). Increasing deception detection accuracy with direct questioning. Human Communication Research. (Accepted for publication).

 

Levine, T. R., Shaw, A. J., & Shulman, H. (2010). Assessing deception detection accuracy with dichotomous truth-lie judgments and continuous scaling: Are people really more accurate when honesty is scaled? Communication Research Reports. (Accepted for publication).


Levine, T. R., Park, H. S., & Kim, R. K. (2009). The essential role of motive in deception message production and detection. Proceedings of Hawaii International Conference on System Sciences, 41.


Ali, M., & Levine, T. R. (2008). The language of truthful and deceptive denials and confessions. Communication Reports, 21, 82-91.


Kim, R. K., & Levine, T. R. (2008). Effects of suspicion on deception detection accuracy: A reconceptualization and replication of McCornack and Levine (1990). Presented at the annual meeting of the National Communication Association, San Diego.


Levine, T. R., & Kim, R. K. (2007). (In)accuracy at detecting true and false confessions and denials. Presented at the annual meeting of the International Communication Association, San Francisco.


Levine, T. R., Kim, R. K., & Hamel, L. M. (2007). People lie for a reason: An experimental test of the principle of veracity. Presented at the annual meeting of the National Communication Association, Chicago.


Levine, T. R. (2006). Recent findings in deception detection. Presented at the annual meeting of the National Communication Association, San Antonio.


Levine, T. R., Kim, R. K., Park, H.S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine’s probability model. Communication Monographs, 73, 243-260.


The Veracity Effect


The veracity effect (Levine et al., 1999) is now our most cited finding according to Google Scholar. The veracity effect refers to the fact that in most deception detection studies, sender veracity (i.e., whether the source is honest or lying) is the single best predictor of detection accuracy. Because message judges are usually truth-biased (see below), judges tend to get honest messages correct more often than lies. Hence, sender veracity affects judge accuracy. A primary implication of the veracity effect is that research needs to report accuracy for truths separately from accuracy for lies, because averaging accuracy across truths and lies can be misleading. Our lab has now replicated the veracity effect more than a dozen times, including several times with the new cheating tapes (see table below). The veracity effect is a very robust finding. Following the publication of our paper, most labs now report truth and lie accuracy separately.


Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125-144.


Truth-Bias and the Veracity Effect in Our Studies

 

Study                                                              % Judged Honest (Truth-Bias)        Truth Accuracy           Lie Accuracy

McCornack & Levine (1990)                         72%                81.8%                         31.3%

Levine et al. (1999), Study 4                          68%                68.5%                         37.5%

Levine & McCornack (2001), Study 1           72%                75.0%                         31.0%

Levine & McCornack (2001), Study 2           69%                76.7%                         39.2%

Levine & McCornack (2001), Study 3           56%                56.8%                         44.1%

E. Park, Levine et al. (2002)                           66%                67.0%                         37.0%

Levine et al. (2005), Study 1                          63%                56.3%                         38.6%

Levine et al. (2005), Study 2                          62%                66.4%                         43.0%

Levine et al. (2005), Study 3                          62%                66.4%                         43.2%

Levine et al. (2006)                                        66%                67.1%                         34.3%

Levine et al. (2008), Study 1                          68%                74.2%                         37.7%

Levine et al. (2008), Study 2                          70%                62.9%                         22.5%

Levine et al. (2008), Study 3                          70%                73.8%                         32.3%

Levine et al. (2009)                                        72%                74.5%                         37.7%

Levine et al. (2009a)                                       61%                69.1%                         46.4%

Levine et al. (2009b)                                      60%                65.4%                         45.9%


Note that truth-bias and truth accuracy are greater than 50% in every single study, and lie accuracy never exceeds 50%.


The Park-Levine Probability Model


The Park-Levine (2001) probability model follows directly from the veracity effect. If veracity impacts accuracy, then different truth-lie base-rates will predictably yield different accuracy rates. Specifically, the model predicts that overall detection accuracy is a predictable linear function of honesty base-rates such that:


Accuracy = P(H | T) × P(T) + P(~H | ~T) × P(~T)

That is,

 

the observed total accuracy will be truth accuracy multiplied by the proportion of messages that are true, plus lie accuracy multiplied by the proportion of messages that are lies, where the proportion of true messages equals one minus the proportion of lies.
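To make the arithmetic concrete, here is a minimal sketch in Python (our illustration, not the authors' code) that computes the predicted overall accuracy across honesty base-rates, using illustrative truth- and lie-accuracy values in the neighborhood of those in the table above:

# Minimal sketch of the Park-Levine prediction: overall accuracy as a linear
# function of the honesty base-rate. The truth/lie accuracy values below are
# illustrative, not results from any particular study.

def predicted_accuracy(truth_accuracy, lie_accuracy, p_true):
    """Accuracy = P(correct | truth) * P(truth) + P(correct | lie) * P(lie)."""
    return truth_accuracy * p_true + lie_accuracy * (1.0 - p_true)

truth_acc, lie_acc = 0.70, 0.40  # illustrative values
for p_true in (0.0, 0.25, 0.50, 0.75, 1.0):
    print(f"P(truth) = {p_true:.2f} -> predicted accuracy = "
          f"{predicted_accuracy(truth_acc, lie_acc, p_true):.2f}")

With these illustrative values, predicted accuracy rises linearly from 40% when every message is a lie to 70% when every message is honest, passing through 55% at a 50/50 base-rate.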


The model was tested by Levine et al. (2006). Base-rate accounted for 24% of the variance in accuracy scores, the a priori linear contrast accounted for 95% of the explained sums of squares in accuracy, the slope and the y-intercept of the line best fitting the data were predicted by the model to within 1%, and raw accuracy was predicted to within 3%. The paper won the 2007 Franklin H. Knower Article Award from the Interpersonal Division of the National Communication Association. Plans are underway to test the model for both conversational participants and observers as part of an undergraduate honors research seminar for Fall 2009.


Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68, 201-210.


Levine, T. R., Kim, R. K., Park, H.S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine’s probability model. Communication Monographs, 73, 243-260.


A Few Prolific Liars


Most deception researchers believe that most people lie daily, and the DePaulo et al. diary study finding that people lie between once and twice daily is often cited. We (Serota, Levine, & Boster, 2010) surveyed a representative sample of 1,000 Americans and asked them how many lies they told in the past 24 hours. While we found an average of 1.65 lies per day, the distribution was highly skewed, with 60% of the sample reporting telling no lies at all. Nearly a quarter of all reported lies were told by the top 1% of liars, and nearly half of all lies were told by the top 5% most prolific liars. Reanalysis of the DePaulo student data also showed a highly skewed distribution. Highly skewed distributions make averages misleading. Based on these data, we conclude that most lies are told by a few prolific liars, and most people try to be honest most of the time.
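As a toy illustration (the numbers below are constructed to roughly match the reported summary statistics, and are not the actual survey data), the following Python sketch shows how a distribution this skewed can produce a mean of 1.65 lies per day even though the typical respondent reports telling no lies at all:

# Toy illustration of a heavily skewed lie distribution (constructed numbers,
# not the Serota, Levine, & Boster data).
from statistics import mean, median

# 1,000 hypothetical respondents; 60% report telling zero lies.
lies_per_day = [0] * 600 + [1] * 60 + [2] * 190 + [4] * 100 + [10] * 40 + [41] * 10

total = sum(lies_per_day)
top_1pct = sum(sorted(lies_per_day, reverse=True)[:10])   # 10 most prolific
top_5pct = sum(sorted(lies_per_day, reverse=True)[:50])   # 50 most prolific

print("mean lies per day:  ", mean(lies_per_day))     # 1.65
print("median lies per day:", median(lies_per_day))   # 0
print("share of all lies told by the top 1%:", round(top_1pct / total, 2))
print("share of all lies told by the top 5%:", round(top_5pct / total, 2))

The mean says "people lie more than once a day," while the median respondent tells no lies at all, which is the sense in which averages mislead here.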


Serota, K. B., Levine, T. R., and Boster, F. J. (2010). The prevalence of lying in America: Three studies of reported deception. Human Communication Research, 36, 1-24.



The McCornack and Parks Model of Relational Deception


If you are new to deception research, try answering this question before you read on. Are people better at detecting lies from people they know? Is it the case that the better you know someone, the more likely you are to know when they are lying?


In a now classic study, McCornack and Parks (1986) modeled how relational closeness impacted detection accuracy. Prior to McCornack and Parks, most researchers believed that knowledge of the other person would surely enhance accuracy and that deception detection research involving strangers would not apply when the people knew each other. McCornack and Parks reasoned that the better you know someone, a) the more you believe that you can tell when they are lying to you, and b) the more you tend to trust them and become truth-biased. McCornack and Parks, in fact, coined the term “truth-bias.” Further, they thought that overconfidence and truth-bias would reduce accuracy. Specifically, their model predicts that confidence and truth-bias mediate the relationship between relational closeness and accuracy such that:


Relational Closeness –(+)→ Confidence –(+)→ Truth-Bias –(–)→ Accuracy


McCornack & Parks’ data were consistent with the model, and the model was later replicated by Levine and McCornack (1992), who showed that it held across levels of aroused suspicion. Further evidence consistent with the model has been obtained from subsequent meta-analysis, and thus the model is empirically well established.
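A toy sketch (ours, not the original analysis) can show the logic of the truth-bias path. Under the simplifying assumption that a judge simply says “truth” with a probability equal to his or her truth-bias, regardless of what the partner actually says, greater closeness buys more confidence and more truth-bias, and accuracy on the partner's lies drops accordingly:

# Toy sketch of the McCornack-Parks logic, assuming (for illustration only)
# that judgments are independent of actual message veracity.

def expected_accuracy(truth_bias, p_lie=0.5):
    """Expected accuracy for a judge who calls 'truth' with probability truth_bias."""
    truth_accuracy = truth_bias        # believed honest messages are scored correct
    lie_accuracy = 1.0 - truth_bias    # believed lies are missed
    overall = truth_accuracy * (1.0 - p_lie) + lie_accuracy * p_lie
    return truth_accuracy, lie_accuracy, overall

# Hypothetical truth-bias levels rising with relational closeness.
for label, truth_bias in (("stranger", 0.55), ("friend", 0.65), ("partner", 0.85)):
    t, l, o = expected_accuracy(truth_bias)
    print(f"{label:8s} truth acc = {t:.2f}, lie acc = {l:.2f}, overall = {o:.2f}")

In this toy version, the cost of closeness shows up most clearly in accuracy on lies, which is consistent with the veracity effect discussed above.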


Here is a small bit of deception trivia. McCornack and Parks (1986) was Steve McCornack’s undergraduate thesis at the University of Washington. Not bad for undergraduate work, huh?


McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication yearbook 9 (pp. 377-389). Beverly Hills, CA: Sage. 


Levine, T. R., & McCornack, S. A. (1992). Linking love and lies: A formal test of McCornack and Parks' model of deception detection. Journal of Social and Personal Relationships, 9, 143-154.


How People Really Detect Lies


Most deception detection experiments have people watch someone who is either lying or telling the truth and then judge the message for veracity. The judgments are then scored for accuracy. Implicit in this method is the premise, rooted in leakage theory, that people detect lies at the time the lie is told, based on the verbal and nonverbal behavior of the message source. Hee Sun Park thought this situation was atypical of what people really do outside the deception lab. So, we tested her idea with a very simple study we call “how people really detect lies.” The results were telling.


We asked approximately 200 subjects to complete a brief open-ended survey asking them to recall the last time they caught someone lying to them. We asked them to describe what happened, and we coded the responses into a set of lie discovery methods. What we found was that less than 2% of the reported lies were detected at the time of the telling based purely on source verbal and nonverbal behaviors. Most lies were detected well after the fact. People used information from third parties and physical evidence to catch liars. Some liars later confessed or let the truth slip out. Sometimes the lie was simply inconsistent with prior knowledge.


None of these common discovery methods are available to judges in deception detection experiments. Perhaps people are not very accurate in deception experiments because deception detection experiments incorrectly presume that deception is detected based on real-time nonverbal leakage. That is, while deception detection experiments make sense from a leakage or IDT perspective, these theories and the methods that follow from them fail to accurately capture the ecology and process of deception detection.

 

Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrara, M. (2002). How people really detect lies. Communication Monographs, 69, 144-157.


Information Manipulation Theory


Information Manipulation Theory (McCornack, 1992; McCornack et al., 1992) explains deception as covert violations of Grice’s conversational maxims. Most deception is accomplished verbally through some combination of a) the omission of information, b) the falsification of information, c) ambiguity in how information is presented, and/or d) strategic evasion away from sensitive information. These four methods correspond to Grice’s maxims of quantity, quality, manner, and relation, respectively. According to IMT, in order to make sense of what others say, we need to presume that others communicate cooperatively, following the maxims. Exploiting this presumption enables deception. There is a nice description of the theory on wikipedia.org. More academically targeted work on IMT is listed below.


McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16.


McCornack, S. A., Levine, T. R., Torres, H. I., Solowczuk, K. A., & Campbell, D. M. (1992). When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59, 17-29.


Grice, P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.


Levine, T. R. (1998). Modeling the psychometric properties of information manipulation ratings. Communication Research Reports, 15, 218-225.


Yeung, L. N. T., Levine, T. R., & Nishiyama, K. (1999). Information manipulation theory and perceptions of deception in Hong Kong. Communication Reports, 12, 1-11.


Lapinski, M. K., & Levine, T. R. (2000). Culture and information manipulation theory: The effects of self construal and locus of benefit on information manipulation. Communication Studies, 51, 55-74.


Levine, T. R. (2001). Dichotomous and continuous views of deception: A reexamination of deception ratings in information manipulation theory. Communication Research Reports, 18, 230-240.


Levine, T. R., Lapinski, M. K., Banas, J., Wong, N., Hu, A. D. S., Endo, K., Baum, K. L., & Anders, L. N. (2002). Self-construal and self-other benefit as determinants of deceptive message generation. Journal of Intercultural Communication Research, 31, 29-48.


Levine, T. R., Asada, K. J., & Massi, L. L. (2003). The relative impact of violation type and lie severity on judgments of message deceptiveness. Communication Research Reports, 20, 208-218.


The Probing Effect


The probing effect refers to the finding that, relative to a source who has not been specifically questioned, the mere questioning of a source makes it more likely that they will be believed. That is, source question answering increases receiver truth-bias. It is usually presumed, of course, that asking probing questions of a source (or witnessing a source being questioned) would make judges more accurate. Research finds, however, that questioning per se has little or no impact on accuracy but instead increases truth-bias.


While the existence of the probing effect is accepted, the explanation for why the probing effect happens proved controversial and led to one of our first battles with IDT researchers. IDT explains the probing effect in terms of sender behavioral adaptation (the BAE, or Behavioral Adaptation Explanation). Probed sources were argued to strategically adjust and present themselves as more honest. We found that the probing effect held when controlling for sender behavior, and thus reasoned that the explanation must reside in receiver cognition. It ended up taking us a decade to publish our findings (the original conference paper was presented in 1991 and eventually published in 2001 after many revisions and rejections). The 1996a paper was actually a letter to the editor of Communication Monographs arguing with reviewers after the paper was rejected for the first time. When that letter failed to persuade the editor of CM that our arguments had merit, we submitted the argument letter to HCR as a paper.


Levine, T. R., & McCornack, S. A. (2001). Behavioral adaptation, confidence, and heuristic-based explanations of the probing effect. Human Communication Research, 27, 471-502.


Levine, T. R., & McCornack, S. A. (1996a). A critical analysis of the behavioral adaptation explanation of the probing effect. Human Communication Research, 22, 575-589.


Levine, T. R., & McCornack, S. A. (1996b). Can behavioral adaptation explain the probing effect? Human Communication Research, 22, 603-612.


Truth-Bias


We define truth-bias as the tendency for message judges to infer honesty independent of actual message veracity. We score truth-bias simply as the percentage of judgments that are honest (i.e., the number of messages judged as honest over the total number of messages judged). In every deception detection experiment we have conducted over the past 20 years, truth-bias has been greater than .50. Simply put, people tend to believe more than they disbelieve, and this is one of the most stable and important findings in deception research. We believe that deception research has not fully come to grips with the implications of truth-bias.
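For concreteness, here is a minimal sketch (ours, using hypothetical judgments) of how truth-bias, truth accuracy, and lie accuracy are scored from dichotomous truth-lie judgments when ground truth is known:

# Minimal scoring sketch: truth-bias is the proportion of "truth" judgments;
# truth and lie accuracy are computed separately, per the veracity effect.

def score(judgments, veracity):
    """judgments and veracity are parallel lists of 'truth' or 'lie'."""
    n = len(judgments)
    truth_bias = sum(j == "truth" for j in judgments) / n
    truth_acc = (sum(j == v for j, v in zip(judgments, veracity) if v == "truth")
                 / veracity.count("truth"))
    lie_acc = (sum(j == v for j, v in zip(judgments, veracity) if v == "lie")
               / veracity.count("lie"))
    return truth_bias, truth_acc, lie_acc

# Hypothetical judgments of four honest and four deceptive messages.
judgments = ["truth", "truth", "truth", "lie", "truth", "truth", "lie", "truth"]
veracity  = ["truth", "truth", "truth", "truth", "lie", "lie", "lie", "lie"]
print(score(judgments, veracity))   # truth-bias .75, truth accuracy .75, lie accuracy .25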


Note that calling truth-bias a “bias” does not necessarily imply an error in judgment. A person who believes everything they hear is still “truth-biased,” yet makes no errors if all incoming messages are honest. Biases typically evolve as biases precisely because they lead us to a correct inference more often than not. What makes truth-bias a bias is not inherent inaccuracy but instead insensitivity to reality.


McCornack and Parks (1986) originally coined the term to refer to the tendency to believe a romantic partner. Over time the meaning has changed to refer to a more general tendency to believe.


Although people are, on average, truth-biased, truth-bias is far from constant. Thus, a major focus of our research has been on understanding what influences truth-bias. Truth-bias has both an individual difference component and a situational component, although the situational component appears considerably stronger. Early on in our work, we proposed that there were stable individual differences in the tendency to believe others, and we created a scale called the GCS (Generalized Communicative Suspicion scale) to capture this individual difference (Levine & McCornack, 1991; McCornack & Levine, 1990a; see below). Situationally, people are relatively more truth-biased in face-to-face interaction than when the communication is mediated, when they are communicating with relationally close others rather than strangers, and outside the deception lab rather than when they are in deception studies (being a subject in deception experiments typically primes suspicion). Truth-bias is reduced when there is reason to infer that the other has a motive to lie (Levine, Park, & Kim, 2009) and when information from third parties suggests that deception is likely (McCornack & Levine, 1990a).


Truth-bias likely stems from how people mentally represent true and false information (Gilbert, 1991) and from the fact that humans are social and need to communicate (Grice, 1989). In deception detection experiments, truth-bias leads to the veracity effect (Levine et al., 1999) and underlies the Park-Levine probability model (Park & Levine, 2001). We believe that even though truth-bias makes us vulnerable to deception, it is highly adaptive, and this is the key premise behind the forthcoming Truth-Bias Theory.


Gilbert, D. T. (1991). How mental systems believe. American Psychologist, 46, 107-119. 


Grice, P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.


Levine, T. R., & McCornack, S. A. (1991). The dark side of trust: Conceptualizing and measuring types of communicative suspicion. Communication Quarterly, 39, 325-340.


Levine, T. R., Park, H. S., & Kim, R. K. (2009). The essential role of motive in deception message production and detection. Proceedings of Hawaii International Conference on System Sciences, 41.


Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125-144.


McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication yearbook 9 (pp. 377-389). Beverly Hills, CA: Sage.


McCornack, S. A., & Levine, T. R. (1990a). When lovers become leery: The relationship between suspicion and accuracy in detecting deception. Communication Monographs, 57, 219-230.


Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68, 201-210.


A Few Transparent Liars


A new idea of Levine’s might be called “a few transparent liars.” Deception detection experiments consistently report slightly better than chance accuracy. An interesting question is why accuracy is almost always above chance but never, ever very much better than chance. What state of affairs might explain a) above-chance accuracy, b) a low detection ceiling, and c) highly consistent findings with no major moderators? This is exactly what all the recent meta-analyses find. One state of affairs that would produce such outcomes is if there were a few transparent liars. Imagine that 90% of the population could lie seamlessly, but 10% of people were really lousy liars and everyone could catch them. Judges would be at chance level when judging most people and near perfect when judging the leaky minority, producing a stable 55% accuracy (i.e., .90 × .50 + .10 × 1.00 = .45 + .10 = .55, or 55%). To the extent this idea has merit, accuracy is more a function of variance in senders than in judges. It is unclear if the idea will pan out, but the few-transparent-liars idea is consistent with the results of all the recent meta-analyses and it resolves what are otherwise theory-inconsistent anomalies in the literature.
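Here is a small sketch (ours) of the same arithmetic: if judges are at chance on most senders and near perfect on a small leaky minority, overall accuracy lands slightly, but only slightly, above chance:

# Sketch of the "a few transparent liars" arithmetic. The proportion of
# transparent senders and the per-group accuracies are illustrative assumptions.

def expected_accuracy(p_transparent, acc_transparent=1.0, acc_opaque=0.5):
    """Weighted average of near-perfect accuracy on transparent senders and
    chance accuracy on everyone else."""
    return p_transparent * acc_transparent + (1.0 - p_transparent) * acc_opaque

for p in (0.00, 0.05, 0.10, 0.20):
    print(f"{p:.0%} transparent senders -> {expected_accuracy(p):.0%} expected accuracy")
# 10% transparent senders reproduces the 55% figure in the worked example above.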


Levine, T. R. (2010). A few transparent liars: Explaining 54% accuracy in deception detection experiments. Communication Yearbook, 34. (Accepted for publication).



Truth-Bias Theory

 

Levine has been developing a new theory of deception and deception detection that will be called Truth-Bias Theory. It integrates the findings summarized on this page. So far (as of 3/09), the theory has only been presented in a few talks around campus at MSU. A preview, however, is in press and will appear in Knapp & McGlone’s new book. Levine will likely seek to publish the theory as a university press book and summarize it in journal article format. Stay tuned!


Levine, T. R. (2009). Some considerations for a new theory of deceptive communication. In M. Knapp & M. McGlone (Eds). The Interplay of Truth and Deception. Routledge. (Forthcoming).


Dichotomous Versus Continuous Scoring of Accuracy


An ongoing matter of disagreement between MSU researchers and IDT researchers concerns how to score accuracy in deception detection experiments.


IDT researchers claim that the poor accuracy evident in the literature is, in part, a function of how perceptions of deception are measured and how accuracy is scored. Specifically, they attribute low, slightly better than chance accuracy to dichotomous truth-lie scoring of deception to create a percent correct accuracy value. Further, IDT researchers claim that their research proves that people can detect deception better when continuous scaling is used; i.e., judges rate honesty on a 0 to 10 scale.


We find this claim ludicrous.


Our view is that:


A) Both types of measures find similar results and lead to the exact same conclusions. Regardless of how deception is measured, people are statistically significantly better than chance at distinguishing truths from lies, people are usually truth-biased, and because of truth-bias, they tend to get the truths right and the lies wrong (the veracity effect). Both types of measures consistently find that people rate honest messages as more honest than lies, but this does not mean that they detect deception when it is present (see the short sketch after this list).

B) Continuous scaling of honesty in deception detection experiments confounds perceptions of deceptive intent with perceptions of factual accuracy, perceptions of moral condemnation, and the degree of confidence in the judgment. Thus, when a subject rates a message as a 6 on a scale of 0 to 10, we don’t know if they think the message is partly true, if they think it is a lie but not a bad one, or if they are simply not sure what it is.

C) When accuracy is calculated based on continuous measures, it is less clear what counts as accuracy. The ease of interpretation of percent correct scores is a clear plus.
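The sketch below (ours, with made-up ratings) illustrates point A: the same set of continuous honesty ratings supports the same conclusions whether it is summarized as mean ratings or dichotomized into truth-lie judgments and scored as percent correct:

# Illustrative comparison of continuous and dichotomous scoring on the same
# hypothetical 0-10 honesty ratings (made-up numbers).
from statistics import mean

honest_ratings = [8, 7, 6, 9, 5, 7]   # ratings given to actually honest messages
lie_ratings    = [6, 7, 4, 8, 6, 3]   # ratings given to actual lies

# Continuous view: honest messages are rated as somewhat more honest than lies.
print("mean rating for truths:", mean(honest_ratings))          # 7
print("mean rating for lies:  ", round(mean(lie_ratings), 2))   # 5.67

# Dichotomous view: call anything above the scale midpoint a "truth".
def judge(rating):
    return "truth" if rating > 5 else "lie"

truth_acc = sum(judge(r) == "truth" for r in honest_ratings) / len(honest_ratings)
lie_acc = sum(judge(r) == "lie" for r in lie_ratings) / len(lie_ratings)
print("truth accuracy:", round(truth_acc, 2))   # 0.83
print("lie accuracy:  ", round(lie_acc, 2))     # 0.33

Either way the story is the same: modest discrimination, truth-bias, and the veracity effect.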


We have empirically demonstrated the first of these points in a study:


Levine, T. R., Shaw, A. J., & Shulman, H. (2010). Assessing deception detection accuracy with dichotomous truth-lie judgments and continuous scaling: Are people really more accurate when honesty is scaled? Communication Research Reports. (Accepted for publication).


A New Look at Suspicion


In our first deception detection experiment together, we looked at the effects of induced suspicion on deception detection among college dating couples (McCornack & Levine, 1990a). At the time, previous studies had found little or inconsistent effects of suspicion. We expected nonlinearity. We thought that a little suspicion might reduce truth-bias and increase accuracy, but too much suspicion might create lie-bias and reduce accuracy. Thus, we predicted a sweet spot where moderate levels of suspicion would yield higher accuracy than either high or low suspicion. Just as we expected, we found the highest accuracy (65%) in the moderate suspicion condition. Contrary to our reasoning, however, we found no lie-bias. Hence we could not explain the downturn under high suspicion.


Rachel Kim (Kim & Levine, 2008) recently replicated the suspicion study. She did not find a peak in accuracy at moderate suspicion. Instead, she found that suspicion reduced truth-bias, increasing lie accuracy and reducing truth accuracy, which generally canceled out. Thus, suspicion had little impact on overall accuracy, but it did reduce (though not eliminate) the veracity effect.


Kim, R. K., & Levine, T. R. (2008). Effects of suspicion on deception detection accuracy: A reconceptualization and replication of McCornack and Levine (1990). Presented at the annual meeting of the National Communication Association, San Diego.


Emotional Reactions to Discovered Deception


So, if people do detect deception, then what? We answered some of this question in our study of emotional and relational reactions to discovered deception. We found that relational closeness, suspicion, lie importance, and information importance all impacted how hurtful the discovery of a lie is. The paper also found evidence for what we called “the stewing effect”: when people suspect a lie about something important, the more they suspect the lie, the more upset they are when their suspicion is confirmed. However, when it came to the question of whether or not the lie broke up the relationship, the only meaningful predictor was what the lie was about. It wasn’t the lie per se but what the lie hid that sank relationships.


McCornack, S. A., & Levine, T. R. (1990b). When lies are discovered: Emotional and relational outcomes of discovered deception. Communication Monographs, 57, 119-138.


Sex Differences in Lie Acceptability


While researching emotional reactions to discovered deception, we found that women tend to value honesty more and see lying as generally less acceptable. We published these findings in CQ.


Levine, T. R., McCornack, S. A., & Avery, P.B. (1992). Sex differences in emotional reactions to discovered deception. Communication Quarterly, 40, 289-296.


The Lie Acceptability Scale Revised


In order to study sex differences in emotional reactions to discovered deception, we needed a measure of individual differences in the extent to which people see lying as acceptable. So, back in the late 1980s, we created the lie acceptability scale. We, however, never got around to publishing it. So, recently we dusted off the scale, updated the wording, collected some validity data, and published the revised lie acceptability scale.

 

Oliveira, C. M. & Levine, T. R. (2008). Lie Acceptability: A construct and measure. Communication Research Reports, 25, 282-288.


A Placebo Control for Nonverbal Training Studies


One of our favorite studies is our bogus training study. If you have read the rest of this page, you will know that we are skeptical of nonverbal leakage as a mechanism for deception detection. Well, if we are right about the anemic nature of leakage, then the nonverbal training literature poses something of a problem for us. The thing is, studies that train people in reading nonverbal deception cues typically find that training improves accuracy. The gains in those studies are usually small (4% on average), but if leakage is useless, nonverbal training too should be useless. So, why does nonverbal training usually improve accuracy if nonverbal behaviors are red herrings? Our thinking was (and still is) demand effects. Training is usually tested against a no-training control, and it’s possible that the small gains are simply a function of demand and increased attention.


The typical solution to demand artifacts is to include a placebo control. So we trained some subjects in cues that should be valid, we trained other subjects in cues that should have no utility (i.e., a placebo or bogus cue control), and we had a standard no-training control. Who did best? The placebo group! This was another study that was hard to get into print (rejected at the Journal of Nonverbal Behavior and at Communication Research), but it is so cool.


Levine, T. R., Feeley, T., McCornack, S. A., Harms, C., & Hughes, M. (2005). Testing the effects of nonverbal training on deception detection accuracy with the inclusion of a bogus training control group. Western Journal of Communication, 69, 203-218.


Norms vs Expectation Violations


Another personal favorite of ours is a variation on Charlie Bond’s “fishy looking liar” study in JPSP. Bond argued that any behavior that violated an expectation would be sufficient to provoke an attribution of deception. He showed people videos of weird acting and normal acting people and found that the weirdos were seen as less honest. While it is the case that weird behavior is unexpected, it is also the case that it is just plain weird. What we did was unconfound norms and expectations in an (anything but simple) 2 × 2 experiment crossing norms and expectations. Subjects interacted with one of four confederates who were either honest or lying and who either acted weird (e.g., following an invisible insect around the room with their eyes or sporadically modulating their speaking volume) or not. Prior to the actual interaction, subjects were either led to expect weird behavior or not. So we had expected-weird, unexpected-weird, expected-normal, and unexpected-normal conditions. When the results were all analyzed, there was a big fat main effect for oddity. Weird acting people were seen as less credible than people who were not acting weird, regardless of expectations and expectation violations. We still laugh when we read over the methods. Special thanks to John Banas and Norman Wong, who were exceptionally effective at acting weird.


Levine, T. R., Anders, L. N., Banas, J., Baum, K. L., Endo, K., Hu, A. D. S., & Wong, N. C. H. (2000). Norms, expectations, and deception: A norm violation model of veracity judgments. Communication Monographs, 67, 123-137.


GCS: Generalized Communicative Suspicion


Previously, we mentioned the GCS. Basically, it is a reverse-scored trait truth-bias scale. The citation is provided below.


Levine, T. R., & McCornack, S. A. (1991). The dark side of trust: Conceptualizing and measuring types of communicative suspicion. Communication Quarterly, 39, 325-340.


Bibliography of MSU Deception Research


Burgoon, J. K., & Levine, T. R. (2008). Advances in deception detection. In. S. Smith & S. Wilson (Eds.). New Directions in Interpersonal Communication. Sage.


Lapinski, M. K., & Levine, T. R. (2000). Culture and information manipulation theory: The effects of self construal and locus of benefit on information manipulation. Communication Studies, 51, 55-74.


Levine, T. R. (1998). Modeling the psychometric properties of information manipulation ratings. Communication Research Reports, 15, 218-225.


Levine, T. R. (2001). Dichotomous and continuous views of deception: A reexamination of deception ratings in information manipulation theory. Communication Research Reports, 18, 230-240.


Levine, T. R. (2008). Deception. In W. F. Eadie (Ed.) 21st Century Communication. Sage.


Levine, T. R. (2008). Deception Detection. In W. Donsbach (Ed.) The International Encyclopedia of Communication (vol III). Blackwell.


Levine, T. R., Anders, L. N., Banas, J., Baum, K. L., Endo, K., Hu, A. D. S., & Wong, N. C. H. (2000). Norms, expectations, and deception: A norm violation model of veracity judgments. Communication Monographs, 67, 123-137.


Levine, T. R., Asada, K. J., & Massi, L. L. (2003). The relative impact of violation type and lie severity on judgments of message deceptiveness. Communication Research Reports, 20, 208-218.


Levine, T. R., Asada, K. J. K., & Park, H. S. (2006). The lying chicken and the gaze avoidant egg: Eye contact, deception, and causal order. Southern Communication Journal, 71, 401-11.


Levine, T. R., Feeley, T., McCornack, S. A., Harms, C., & Hughes, M. (2005). Testing the effects of nonverbal training on deception detection accuracy with the inclusion of a bogus training control group. Western Journal of Communication, 69, 203-218.


Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confessions and denials: An initial test of a projected motive model of veracity judgments. Human Communication Research, 36, 81-101.


Levine, T. R., Kim, R. K., Park, H.S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine’s probability model. Communication Monographs, 73, 243-260.


Levine, T. R., Lapinski, M. K., Banas, J., Wong, N., Hu, A. D. S., Endo, K., Baum, K. L., & Anders, L. N. (2002). Self-construal and self-other benefit as determinants of deceptive message generation. Journal of Intercultural Communication Research, 31, 29-48.


Levine, T. R., & McCornack, S. A. (1991). The dark side of trust: Conceptualizing and measuring types of communicative suspicion. Communication Quarterly, 39, 325-340.


Levine, T. R., & McCornack, S. A. (1992). Linking love and lies: A formal test of McCornack and Parks' model of deception detection. Journal of Social and Personal Relationships, 9, 143-154.


Levine, T. R., & McCornack, S. A. (1996a). A critical analysis of the behavioral adaptation explanation of the probing effect. Human Communication Research, 22, 575-589.


Levine, T. R., & McCornack, S. A. (1996b). Can behavioral adaptation explain the probing effect? Human Communication Research, 22, 603-612.


Levine, T. R., & McCornack, S. A. (2001). Behavioral adaptation, confidence, and heuristic-based explanations of the probing effect. Human Communication Research, 27, 471-502.


Levine, T. R., McCornack, S. A., & Avery, P.B. (1992). Sex differences in emotional reactions to discovered deception. Communication Quarterly, 40, 289-296.


Levine, T. R., Park, H. S., & Kim, R. K. (2009). The essential role of motive in deception message production and detection. Proceedings of Hawaii International Conference on System Sciences, 41.


Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125-144.


McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16.


McCornack, S. A. (1997). The generation of deceptive messages: Laying the groundwork for a viable theory of interpersonal deception. In J. O. Greene (Ed.), Message Production (pp. 91-126). Mahwah, NJ: LEA.


McCornack, S. A., & Levine, T. R. (1990a). When lovers become leery: The relationship between suspicion and accuracy in detecting deception. Communication Monographs, 57, 219-230.


McCornack, S. A., & Levine, T. R. (1990b). When lies are discovered: Emotional and relational outcomes of discovered deception. Communication Monographs, 57, 119-138.


McCornack, S. A., Levine, T. R., Morrison, K., & Lapinski, M. (1996). Speaking of information manipulation: A critical rejoinder. Communication Monographs, 63, 83-91.


McCornack, S. A., Levine, T. R., Torres, H. I., Solowczuk, K. A., & Campbell, D. M. (1992). When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59, 17-29.


McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication yearbook 9 (pp. 377-389). Beverly Hills, CA: Sage. 


Miller, G. R., & Stiff, J. B. (1993). Deceptive Communication. Sage.


Oliveira, C. M. & Levine, T. R. (2008). Lie Acceptability: A construct and measure. Communication Research Reports, 25, 282-288.


Park, H. S., & Ahn, J. Y. (2007). Cultural differences in judgment of truthful and deceptive messages. Western Journal of Communication, 71(4), 294-315.


Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68, 201-210.


Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrara, M. (2002). How people really detect lies. Communication Monographs, 69, 144-157.


Serota, K. B., Levine, T. R., and Boster, F. J. (2010). The prevalence of lying in America: Three studies of reported deception. Human Communication Research, 36, 1-24.


Yeung, L. N. T., Levine, T. R., & Nishiyama, K. (1999). Information manipulation theory and perceptions of deception in Hong Kong. Communication Reports, 12, 1-11.