Looking at the public side of things, in other words, us, that's just it: we all participate in creating content and spreading information. We react when a headline provokes us, sharing it with our networks. We document the world around us, posting it to social media and news outlets. And to a degree, we filter this information through our own biases and perspectives.

Can you say that you have never posted information online that presents you in a better light? A more flattering profile picture? A suggestion that something you were doing was more fun than it really was?

How much of what we add to the information space as user-generated content contributes to disinformation?

In fact, we distort reality in many ways.

Several psychological studies published in the past few years suggest that lurking on Facebook can cause depression. The problem is that following the seemingly perfect lives of friends leads users to compare a friend's travels, successes and great-looking photo montages with their own humdrum daily existence. In posting to a following or an audience, people tend to self-select, putting their best moments forward. In wanting to present a better picture than reality (and who can blame us for wanting to do so?), we inadvertently distort reality as a result.

Another example is Instagram, a picture-sharing site that is all about cropping what you capture down to a perfect square and applying a photo filter. If ever there were a site for framing and distorting imagery, this would be it.

[Image: a photograph from Chompoo Baritone's series showing what lies outside the Instagram crop]

Twitter restricts what can be posted down to 140 characters. What is gained in brevity, a tweet will naturally lack in context. (This description was 128 characters, a single tweet; it couldn't tell you why there is a restriction, or what happened before or after.)


And sites such as LinkedIn are all about presenting ourselves as hireable and professionally desirable. While it is advisable to stick as close to the truth as possible, users are unlikely to air their dirty professional laundry publicly.

This subtle, perhaps even unconscious, deceptive participation by all of us in the creation of this “Age of (Dis)information” might also explain why people are not as bothered by misinformation as we might expect.

In a study entitled “Lies, Damned Lies, and Viral Content,” fellow Canadian Craig Silverman found that misinformation actually spreads at greater rates than corrective content. A news article from NationalReport.com, for example, claimed that a Texas town had been quarantined due to an Ebola outbreak. This was not true, yet it was shared 339,000 times. The corrective information debunking the claim was shared at one third of that rate. What's more, the people who spread the rumour in the first place are not likely to be those exposed to the correction.

Other studies have confirmed this. So long as the information shared conforms with the poster’s existing beliefs, it doesn’t seem to matter as much to them that the content be accurate. One of my favourite examples of a person’s nonchalance on discovering they had spread disinformation comes from a BBC Trending video covering the 2014 Gaza conflict.

People's political beliefs are strong motivators for commenting online. In ongoing research, we used Netvizz to pull 999 posts and their associated comments from the main RT Facebook Page. In identifying and analysing the top 51 commenters across those posts, we found that 85% displayed an obvious political motive for commenting on the Page. Only 12% were clearly associated with a public group whose expressed aim was to spread what they claimed was corrective information. The vast majority had an aim and a message in posting comments. This goes beyond concerns over astroturfing, the use of fake accounts to espouse points of view: apparently, real people assist willingly.
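For readers curious about the mechanics, here is a minimal sketch of how that kind of commenter tally could be done, assuming a Netvizz-style tab-separated export of the Page's comments. The file name and column names are hypothetical, not the study's actual data format, and the motive classification itself was done by hand.

```python
# A minimal sketch, not the study's actual code: tally the most active
# commenters from a Netvizz-style export of Facebook Page comments.
# The file name and column names (e.g. "author_name") are assumptions;
# check the headers of your own export before running.
import csv
from collections import Counter

COMMENTS_FILE = "rt_page_comments.tab"   # hypothetical export covering 999 posts
TOP_N = 51

counts = Counter()
with open(COMMENTS_FILE, newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    for row in reader:
        counts[row["author_name"]] += 1   # one tally per comment by this author

# Print the most active commenters; motive labels ("political", "corrective",
# etc.) were assigned manually in the research described above, and figures
# like 85% come from labelling each of these accounts and dividing by TOP_N.
for author, n_comments in counts.most_common(TOP_N):
    print(f"{author}\t{n_comments}")
```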


This is not to discount the use of astroturfing online. The Brits are doing it. The Americans want to do it. Everyone says the Russians are doing it.

Beyond governments, companies are posting fake reviews to push their products – and slam those of their competitors. Organisations besieged by unfavourable reputations, such as Fox News, attempt to bolster support through bogus accounts saying nice things about themselves online. Law enforcement agents are using false social media identities to befriend targets and gather intelligence on them. Some journalists have adopted alternate online personalities to engage and investigate ISIS recruiters.


The rate of online manipulation is fast turning the Internet into a virtual labyrinth of distorting mirrors.

And this sort of user-generated content is beginning to influence perspectives and offline actions. Recent studies analysing the impact of online content on shopping habits confirm this view. A Forrester study from late 2014 indicates that herd mentality, a cognitive bias that encourages us to adopt the thinking or beliefs of the majority, applies to user-generated content and purchasing: 76% of those surveyed were more likely to buy a product if many positive reviews were available online. Such peer review could have bigger consequences: as one 2012 Pew Research Center study demonstrated, social media is being used to influence voting in elections, mostly via a person's own contacts.

In part we are being led, but we are also leading ourselves astray into an Age of (Dis)information where reality is more what we want it to seem than what it is. The implications for mental health and the fabric of society will be grave as people lose their anchors and sense of grounding in reality.

This article is part of The Age of (Dis)information series.


About Author

La Generalista is the online identity of Alicia Wanless – a researcher and practitioner of strategic communications for social change in a Digital Age. Alicia is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. With a growing international multi-stakeholder community, the Partnership aims to foster evidence-based policymaking to counter threats within the information environment. Wanless is currently a PhD Researcher at King’s College London exploring how the information environment can be studied in similar ways to the physical environment. She is also a pre-doctoral fellow at Stanford University’s Center for International Security and Cooperation, and was a tech advisor to Aspen Institute’s Commission on Information Disorder. Her work has been featured in Lawfare, The National Interest, Foreign Policy, and CBC.