There is an elephant in the room – and it is persuasion. Until we come to grips with its role in Western society, we cannot tackle issues associated with the shaping of the information environment.

Something has been bothering me a lot lately. Let’s call it my elephant in the room. It’s a big elephant. This elephant is there at government hearings grilling tech company representatives (some more successfully so than others). The same elephant haunts announcements promising to address disinformation. And there, behind the often sensationalist media coverage showcasing evidence of attempts to shape the information environment, lurks that very elephant, unmentioned amidst all the accusations and blame.

This elephant in the room did not start out big. It has grown over time, mostly from a nagging unease stemming from our seeming inability, as a society, to fully appreciate the ‘other’ side of what the evolution of information communication technologies (ICTs), and the borderless flow of information they afford, has brought with it.

The understanding driving me to say we have an elephantine problem emerged when research I conducted led me to predict Trump’s win back in January 2016. While my prediction was based on Ellul’s concept of sociological propaganda, my subsequent research led me to a model for how the information space was manipulated to help him. I have since found other patterns and models for shaping the information space. And while the technology that facilitates these efforts is new, signs of this uncomfortable discovery manifested themselves much earlier, around 2007, when I began to worry about how divergent Canadian perspectives were becoming – particularly between rural and urban populations, on issues like immigration – and how that divergence would start to play out online. Given the recent provincial election in Ontario, and some of our analysis of online communities and discourse related to the campaigns, I think my worries were justified.

I say we have a problem based on years of research, analysis and practical experience – and that problem is not technology.

The internet is but a mirror and we do not like what we see.

Undoubtedly, the rapid developments in ICTs have transformed all aspects of human existence. Our lives are intertwined and dependent on ICTs to such a degree that many of us would be left quite at a loss should gadgets we use stop functioning. ICTs have become so pervasive we barely notice them anymore.

We now live in what information philosopher Luciano Floridi refers to as “hyper history”, where “ICTs and their data processing capabilities are the necessary condition for the maintenance and further development of societal welfare, personal well-being, as well as intellectual flourishing.” Indeed, we live inside an information environment: there is the physical side of it with hardware and infrastructure; an informational component, comprising all the data, software, algorithms and content; and there is the cognitive – how we perceive, consume and engage with this space.

Yet when it comes to conceptualising this space, we lack an equivalent information ecology – a study of how this information environment works. More attention goes to the technical trees than to the informational forest. Our understanding is framed by that imbalance, as are our approaches to dealing with challenges within it. We do not even have a universal lexicon to describe the issues we face today, with many words being thrown around and nothing really sticking: are we talking about information operations, propaganda, social engineering, public relations, fake news, misinformation or disinformation, information warfare or active measures? Take your pick. It’s dizzying and confusing, especially for media and lay people.

Never mind the difficulties in applying these words to practical policy. How does one discern something so philosophically challenging as truth from falsehood at scale? Measure intent of actors? Disentangle the messy throng of digital humankind when the internet has collapsed our historic senses of time and geography? What is foreign? Do diaspora have the right to influence politics in the country from which they came? What about migrants? Who are legitimate actors in public debate? What is legitimate public debate, for that matter?

And yet, in the fear that is spreading about the future of democracy and threats of undue influence (whatever that might mean), there is no shortage of blame and simplified calls to action: “the governments should regulate the tech companies!”; “the tech companies should police their users!”; “we should do back whatever is done against us – so, more propaganda!” In all of this, I cannot help but think we might be missing something: are we fundamentally approaching the problem in the wrong way?

Technology might have brought us to this point where we now must look at ourselves, our society, in this digital mirror. But it is not to blame. These issues are not new. Persuasion, manipulation, deception, abuse – none of these are new. Humans have been doing and suffering from these things forever. What is new is the scale, speed and reach of such things. If anything, ICTs have only amplified our pre-existing issues – and as such, no technical solution can truly fix them.

Moreover, technology changes rapidly, as we are now seeing. Strategically speaking, looking for a technical solution to today’s challenges is short-sighted. Yet there is one constant we seem to continue to ignore: behind the infrastructure, the data and the perceptions, there are people.

Half of the population in the U.S., U.K., and Canada graduated high school before the web was invented. While everyone wants to believe themselves impervious to persuasion, most of these adults have little hope of understanding, on their own, how the information environment is manipulated and how it affects them.

Filtering and blocking people into protection is not just impractical in a digital age, it is dangerous. This is the informational equivalent of obsessive disinfectant use – what will happen to that bubbled audience when “bad” information inevitably comes through? To say nothing of the consequences for democracy in such a managed information environment. After all, blocking access to information only makes it more coveted through the scarcity effect. And given that the people exposed to misinformation are seldom those who see the corrective content, questions remain about the utility of efforts to dispel and counter such campaigns. If we want to protect democracy, we must make citizens more resilient and able to discern good information from bad.

Of course, to do this, we in liberal democracies will have to come to terms with something we have avoided for a while: how much persuasion is acceptable, and where are the lines? What is good versus bad information, and who decides that? Who can legitimately use persuasion, and when? This is the elephant in the room – persuasion. And it is not enough to say that when we use persuasion it is acceptable, but when an adversary does so it is not.

Until we in the West come to terms with our awkward relationship with propaganda and persuasion, we cannot effectively tackle issues associated with the manipulation of the information environment. For far too long our aversion to persuasion has made us hypocrites, trying to disguise attempts at influencing other populations with various euphemisms (which also might explain why words are failing us now in describing the situation).

In an Information Age where most things have a way of coming out, that is an extremely disadvantageous position to be in during an informational conflict. Every time a Western military or government is caught being deceptive or misleading, this will be used against us, further eroding public faith in the established order. We need to operate on the assumption that ultimately every action will become public knowledge at some point – and ask how it will be viewed then.

Furthermore, our attempts to euphemise propaganda are not lost on countries like Russia – it does not matter what we call it; what matters is how they perceive it and, in turn, how that perception is fed back to domestic journalists, activists, academics, and self-interested politicians who will use revelations of hypocrisy for their own gains. What’s more, as we live in democracies, these voices cannot be silenced; in many cases they are legitimate and domestic, and raising such criticism is their right. And to that end, no one can influence a population like one’s own – which brings us back to those bigger unanswered questions: where are the lines in this grey sand, and who should draw them?

And therein lies the real problem. It is not a matter of technology, but one of understanding. What we need now more than anything is to come together as a community and provide the understanding our society so badly needs to be resilient in an information age. This can be achieved by:

  • Finding the right words. We need a lexicon that is flexible enough to operate in the grey area that is the information environment – words that go beyond the operating constrictions of doctrine into the space of what is possible. Our existing terminology tends to be dichotomous (truth versus fiction) or to draw distinctions based on intent (misinformation versus disinformation). Such distinctions are impossible to detect and assess at scale, and as a result become meaningless for policy development in an Information Age. Until we develop such a flexible lexicon, we have little hope of creating policy to address this problem.
  • Identifying measurements of effect. While case studies undoubtedly have their place, such research must go beyond finding evidence of activity aimed at shaping the information environment to analysing how (if at all) such efforts achieved effect. Anecdotes are great, but what do they mean in terms of the bigger picture? It is not enough to find proof of efforts to shape the information environment. To truly understand and assess risk from such threats, we must find ways of measuring effect. This is extremely difficult when the actor behind such efforts and their intent are both unknown, as is often the case in this “grey area” space. Quantifying likes, shares and reach says nothing of effect – millions might have been exposed to a promoted post, but that indicates little in terms of changed behaviours or perspectives. Without such assessment of information campaigns, we run the risk of responding reactively to deliberate provocation rather than to real threats, with serious consequences for escalation in an informational confrontation.
  • Answering the bigger grey area questions. We need lines in the sand around persuasion, its acceptable use, and by whom. This reckoning above all else must be honest and sweeping. Such an effort is going to come down to a community charge. Politicians are not going to lead it: frankly, any come-to-Jesus moment around the deliberate persuasion of populations, accelerated by technology, will be too much of a threat for most political operators. But that doesn’t mean the rest of us – academics, those on the front lines, and other experts – cannot come together.

In fact, it is down to us. If you are reading this, chances are you are the brain trust. Will you acknowledge the elephant in the room? How will you help solve this problem? And perhaps more importantly, which organisations will demonstrate their commitment to tackling this problem strategically? Before we can have solutions, we need understanding – and we need it now.

Header Image: Canadian propaganda poster from the Second World War issued by the Director of Public Information discouraging people from wasteful spending.

About the Author

La Generalista is the online identity of Alicia Wanless – a researcher and practitioner of strategic communications for social change in a Digital Age. Alicia is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. With a growing international multi-stakeholder community, the Partnership aims to foster evidence-based policymaking to counter threats within the information environment. Wanless is currently a PhD Researcher at King’s College London exploring how the information environment can be studied in similar ways to the physical environment. She is also a pre-doctoral fellow at Stanford University’s Center for International Security and Cooperation, and was a tech advisor to Aspen Institute’s Commission on Information Disorder. Her work has been featured in Lawfare, The National Interest, Foreign Policy, and CBC.
