Tech Insight: Do People Lie More Online?

In this article, we look at whether there is evidence to suggest that people lie more online, and what the message is for businesses.

Lying 

Just 3 years after the World Wide Web was introduced to the public domain, University of Virginia psychologist Bella M. DePaulo and colleagues (DePaulo, Kashy, Kirkendol, Wyer, and Epstein) published a report revealing that most people, on average, tell one or two lies per day. The report also showed that, although people are dishonest in about 30 per cent of their social interactions each week, they perceive different levels of dishonesty: people generally don’t regard their lies as serious, don’t plan their lies much, and don’t worry about being caught.

The Truth Default Theory 

Levine’s Truth Default Theory (TDT) from 2014 gives more context to how we generally judge truth, deception, and deception detection.  The main message of TDT is that when we communicate with other people, we tend to believe them, and the thought that maybe we shouldn’t does not even come to mind.  TDT also says that, although this ‘truth-default’ makes us vulnerable to deception, certain “triggers” can break us out of our default-to-honest mindset and enable lie detection. In other words, if a person’s suspicions rise to a level that they cannot explain away or rationalise, they can snap out of the truth-default.

Lying + Technology : Hancock’s Feature-Based Model 

The first prominent research to investigate whether there was a connection between deception rates and technology was carried out in 2004 by researcher Jeff Hancock.  Hancock’s research was based on reports of social interactions by his colleagues and 28 students and, together with researchers Jennifer Thom-Santelli and Thompson Ritchie, led to the development of the “feature-based model.”

The model was designed to show how the design of communication technologies affects lying behaviour. The results showed that people lie most on the telephone and least in emails, and that lying rates in face-to-face and instant messaging interactions are approximately equal.  It was concluded that the design features of communication technologies (e.g., synchronicity, recordability, and copresence) affect lying behaviour in important ways and that designers must consider these features when issues of deception and trust arise.

Revisited With Updated Findings 

Since the study, the world of social media and smartphones has matured somewhat, and Hancock revisited the relationship between deception and design just as this was happening.  The new findings and observations led to the following updated points:

– People tell the most lies per social interaction over synchronous, distributed, and recordless media (the phone, video chat).

– People tell the fewest lies per social interaction via email, although the differences across the forms of communication are small.

– Lying rates are also associated with ‘aversive personality traits’ and with antisocial and relational deception motives.

– While media options have evolved, technological design features often remain stable and continue to predict deception rates.

Online Dating Lies 

Another Hancock study (Hancock, Toma, and Ellison, 2021) looked at the world of online dating and found that:

– Deception is frequent, but the magnitude of the deceptions is usually small, and deceptions differ by gender. Also, 81 per cent of people lie about at least one of these variables: weight (the most frequently lied-about attribute), followed by height, and, least of all, age.

Social Media and Presenting An Imprecise Image 

A Custard.com study found that people commonly “lie” by presenting an image of themselves and their lives that is imprecise or less than comprehensive, thereby leading the viewer to believe falsehoods. For example, only 18 per cent of men and 19 per cent of women report that their Facebook page displays “a completely accurate reflection” of who they are, and one-third of people tend to share only the “non-boring” aspects of their lives and are not as active as their social media accounts suggest.

Social Media and Accountability 

Although deception for self-presentation can bring the reward of appearing more positive (self-oriented lies), many professional activities are now conducted online (e.g., displaying resumes on LinkedIn). The public nature of resumes and the accessibility of profiles to colleagues and friends on social networking websites make people more accountable for information shared online.  This can make people less comfortable lying on some social media to friends and colleagues, many of whom would be able to spot the deception.

Fake News, Disinformation, and Misinformation 

One key area that has proven difficult for social media businesses to manage has been the spread of lies online in the form of fake news.  Ofcom figures show that 4 out of 10 UK adult internet users don’t possess the skills to critically assess content online. Also, many young people have social media as their main source of news, thereby making them more vulnerable to the effects of online lies. Measures taken to help reduce the damaging effects of this problem include fact-checking services for social media and government strategies to help people to spot disinformation (e.g., the UK’s Online Media Literacy Strategy from the Department for Digital, Culture, Media, and Sport – DCMS).

What Does This Mean For Your Business? 

Truth Default Theory may be one explanation as to why we can be deceived (i.e., most of us assume a person is being honest until proven otherwise). Studies such as Hancock’s, which look at connections between deception rates and technology, appear to show that people don’t lie that much more online: there are only small differences in lying rates across media, and people are less likely to lie in emails.  Also, people may be less likely to lie where they will clearly be held accountable and where lies could be spotted and negatively affect how others view them.

For businesses, getting the truth (e.g., from employees, job applicants, customers, and other stakeholders) is important because business continuity, marketing, security, and indeed all operations rely on trust and truth. The message from many of these studies, however, is that although it’s tempting to believe that technology facilitates deception, the relationship between deception and technology is not straightforward, and deception is much more complicated than that. There is no single cue that always predicts deception, but intuition still matters. As Hancock has said, “The idea (with spotting online deception) is to pay attention to how you’re feeling about things, and that if something doesn’t feel quite right or is too good to be true, it probably is.” 

Where important information and declarations are required, businesses should, therefore, ask for (and check) backup evidence, make it clear that checks and accountability are in place to deter lying in the first place, and perhaps design steps into systems that build in a human ‘feelings’ reality check.

The message to businesses involved in communication technologies is to consider how synchronicity, recordability, and copresence (factors that affect lying behaviour) could be used and arranged to minimise the opportunity for deception.

Tech News : Disinformation or Misinformation?

The new Online Media Literacy Strategy from the Department for Digital, Culture, Media, and Sport (DCMS) is aimed at supporting 170+ organisations to improve media literacy rates in the UK, and thereby help young people to spot disinformation.

As an aside, misinformation is information that is simply wrong, inaccurate or misleading (without necessarily having any intention to propagate the misinformation) whereas disinformation is a subset of it, i.e. information that is deliberately wrong, inaccurate or misleading.

As an aside to the aside, mistrust and distrust are roughly the same in meaning (i.e. not to trust someone or something), although, according to Dictionary.com, distrust implies having evidence to support that feeling.

Disinformation Problem

The Strategy, which was promised in the government’s online harms white paper, is intended to help tackle the problem that many young people in the UK are not able to distinguish between disinformation/misinformation and truth in what they read online.  For example:

Ofcom figures show that 4 out of 10 UK adult internet users don’t possess the skills to critically assess content online.

National Literacy Trust research figures show that only 2 percent of children have the skills they need to identify misinformation, half of teachers (53.5 percent) think that the national curriculum doesn’t educate children with the literacy skills they need to identify fake news, and 2 in 5 parents (39 percent) don’t watch, listen to, or read news with their child at home.

Pandemic Highlighted Problem

The fact that many young people may have been deterred from accepting the COVID-19 vaccine, and/or have believed misinformation and conspiracy theories about the origins and causes of the pandemic, has highlighted the problem. For example, popular stories believed by some in the UK, highlighted in University of Cambridge research (Oct 2020), include that:

– COVID-19 was engineered in a Wuhan laboratory (22 percent believed it).

– The pandemic is “part of a plot to enforce global vaccination” (13 percent).

– 5G telecommunication towers worsen COVID-19 symptoms (8 percent).

Who and Why?

Back in October 2020, Cambridge’s Winton Centre for Risk and Evidence Communication, working with the UK Cabinet Office on the ‘Go Viral!’ project, studied correlations between certain beliefs, demographic categories, and the perceived reliability of misinformation. The researchers discovered that:

– High levels of trust in science equate to low levels of susceptibility to false information (across all nations).

– Better numeracy skills are a predictor of greater resistance to misinformation.

– Being older is linked to lower susceptibility to COVID-19 misinformation.

– Identifying as more right-wing/politically conservative is associated with a higher likelihood of believing COVID-19 conspiracies.

– With COVID-19, a small (one-seventh) increase in how reliable misinformation is perceived to be is linked to a much larger (23 per cent) drop in the likelihood that the person will agree to get vaccinated.

Ultimately, as summarised by the Minister for Digital and Culture, Caroline Dinenage, last week: “False or confused information spread online could threaten public safety and undermine our democracy.”

Training Trainers

The newly announced strategy aims to train a wide variety of UK organisations to teach others to gain a better understanding of the online world and how to critically analyse the content they see, thereby helping them to spot misinformation.

Criticism and Challenges

Criticism of the strategy includes that:

– It is possibly an opportunity missed and is less of a strategy and more a shopping list of useful actions that mirror what’s gone before rather than charting new directions (says LSE’s Professor Sonia Livingstone).

– The strategy appears to blame the user for the problems of the digital world.

– The strategy may be weaker than it could be because it is linked to the Online Safety Bill and so focuses on reducing consumer harms rather than addressing the breadth and depth of the media literacy agenda.

Challenges for the strategy include:

– Exposure to misinformation and disinformation can be influenced by changes to algorithm design and content feeds, thereby meaning that tech companies have a part to play.

– Motivations for believing (and wanting to spread) misinformation are varied and can be complicated; anti-vaxxer and ‘cult’-type mentalities and attitudes are therefore difficult to break down and challenge, even with well-meaning teaching.

What Does This Mean For Your Business?

In terms of tackling health emergencies effectively, education and tackling misinformation are vital. Many young people have social media as their main source of news, so giving many other organisations the means to educate young people in how to critically evaluate what they read is well-meaning and could have value for young people and society as a whole going forward, which in turn will have value for businesses. However, social media and other platforms use algorithms that influence what is presented to young people, which means that tech companies have an important role and responsibility in tackling the problem. The problem of misinformation is being tackled to a degree on social media using, for example, fact-checking and curated news services, but the issue of misinformation is wide, and it is debatable how much of an effect the new strategy will have upon it.  One of the strengths of the new strategy, however, is that it leverages the power of many other trusted organisations to help deliver it.