“How can we be at war, if we don’t even know what we are talking about?”

“We’re at war and we’ve got absolutely no idea,” proclaims a recent New York Times opinion video series snappily entitled Operation Infektion, which documents Russian disinformation. Yet behind the flashy production and the alarmist statement hides a wider academic problem of confused terminology and a tendency to conflate activity with effect. How can we be at war if we don’t even know what we are talking about? The media can hardly be expected to cover a topic such as the use of information in conflict accurately if the research it draws upon is itself confused and misleading. Given how little headway seems to have been made in deepening understanding of the information environment and how it is shaped, it is time to take a step back, assess whether the topic has been approached in the right way – and explore alternative methods for analysing it.

A confusion of terms

Academic discourse about the use of information against a target audience can sometimes seem like the heated debate in Lewis Carroll’s Alice’s Adventures in Wonderland that follows the Mad Hatter’s riddle about the similarities between a raven and a writing desk. Instead of answering the question, the characters argue over the meaning and order of words, ultimately drawing a blank. Indeed, reviewing the literature that analyses the deliberate shaping of the information environment for strategic purposes is equally unsatisfying, with a confusion of competing concepts and, ultimately, popular assumptions implying that taking action is the same as having an effect.

Given the Times documentary’s proclamation that we are at war, let’s start with the concept of information warfare – which currently has no universally accepted definition.

Information warfare

The idea that information can be used in conflict, however, is not new. Sun Tzu noted that the “skillful leader subdues the enemy’s troops without any fighting,” demonstrating “supreme excellence.” Machiavelli outlined how men “discontented and desirous of change…are able to open the way to you for the invasion of their country and to render its conquest easy,” but warned of how “fickle” was the “multitude, and that while it is easy to persuade them of a thing, it is hard to fix them in that persuasion.” Clausewitz also discussed the application of information before and during a conflict. Whereas he defined war as “an act of force to compel our enemy to do our will,” some researchers have posited that he addressed the use of information in conflict when discussing the moral and political aspects of war. And indeed, when the Prussian military theorist is read in the context of accusations around election interference and regime change, what Clausewitz described as “direct political repercussions, that are designed in the first place to disrupt the opposing alliance, or paralyse it, that gain us new allies, favourably affect the political scene” remains apt. Yet none of these great strategic thinkers left us with the terminology to describe the shaping of the information environment, and arguably it was not their focus.

As a term, “information warfare” is problematic, lacking in what John Gerring refers to as conceptual goodness. For Gerring, a good concept is one that is familiar and resonates with a target audience, is short and coherent, distinguishes itself from similar ideas while drawing on shared definitions, and ultimately has both theoretical and field utility. Yet information warfare meets few of these criteria. While it might be short, it is not well understood or differentiated from other similar terms, and its utility in both theory and practice is thus debatable.

Information warfare is a combination of two terms, both with much-debated definitions. As Ventre notes, the definitions of both “information” and “warfare” vary depending on the school of thought addressing the topic. While writing about cyber warfare, Betz could just as well be describing information warfare as:

“a portmanteau of two concepts, cyberspace and war, which are themselves undefined and equivocal; it takes one complex non-linear system and layers it on another complex non-linear system. And…as a result, it does not clarify understanding of the state of war today; it muddies waters that were not very transparent to start with.”

Indeed, what is information? Some philosophers view it as “well-formed, meaningful and truthful data”, implying some inherent veracity in such content. Others, particularly those drawing from the field of computer science, view any sort of data as information, regardless of its accuracy. Ulrich sees information as data that has acquired “context-dependent meaning and relevance”, a point Carsten Stahl echoes in noting that “information as meaningful data needs to have meaning to (human) agents.” Given this lack of clarity, the term “information” hardly lends more definition to the concept of information warfare.

Where definitions of information warfare do exist, they range from a focus on information as the core objective, “worthy of conquest or destruction,” to the aim of “discrediting or even destruction of an opposing ideology.” Long before the internet, Marshall McLuhan wrote “real, total war has become information war. It is being fought by subtle electric informational media – under cold conditions, and constantly. The cold war is the real war front – a surround – involving everybody – all the time – everywhere.” Martin Libicki focuses more on the technology driving information warfare, suggesting it “covers the physical or electromagnetic destruction of adversary command-and-control systems (commanders, command centres and links to the field), the jamming of radio-electronic and radar links, the destruction or spoofing of sensors, and lately, illicit access to an adversary’s networks and computers.” While Libicki notes that other forms of information warfare, such as economic, can be executed by actors beyond militaries, the bulk of his work focuses on how militaries conduct it against targets such as adversarial forces or their political leaders, rather than civilian populations – an understanding of who engages in such conflict that is echoed by Mariarosaria Taddeo. American Air Force Colonel Richard Szafranski pushes the boundaries further, describing information warfare as “a form of conflict that attacks information systems directly as a means to attack adversary knowledge or beliefs.”

Perhaps as a result of this lack of consensus and the broad range of definitions (not to mention scope), both NATO and the U.S. have opted not to include a definition of information warfare in their official dictionaries. The Russian Ministry of Defence, on the other hand, defines information warfare as:

“an open and acute clash between states, in which the resistance of the adversary is suppressed by the use of hazardous influence on their information sphere, destruction or disruption of the normal functioning of information and telecommunications systems, preservation of information resources, gaining unauthorised access to them, as well as having a massive informational-psychological impact on the personnel of the Armed Forces and the population of the adversary in order to destabilise the society and state.”

The omission of information warfare from key Western military dictionaries might also speak to the unanswered question of whether the concept is warfare at all.

Reading military historian Hew Strachan’s work, it would seem that information warfare – particularly as the Times documentary describes it – is not war. Drawing on philosophy and practical ethics, Strachan suggests “war has five constituent elements”, including: (1) “the use of force”, as well as the threat to do so, (2) in a reciprocal dispute between sides, with (3) “a degree of intensity and duration to the fighting” and (4) combatants serving the public, not acting as private citizens, while (5) “not fighting for its own sake” but for an “aim, often normatively defined in political terms, but perfectly capable of being more narrowly and militarily defined, for example as the pursuit of victory.” Strachan then turns to international law, where “war has to be declared, and once it is declared its belligerents acquire legal rights as well as being subject to legal obligations,” which in turn leaves out conflicts where war was never openly waged or where there is no state to wage war against. Looked at in this context, how can a war be waged without the attacked party even knowing it needed to defend itself, as the Times video series suggests?

To address some of the conceptual challenges related specifically to the role of information in warfare, some analysts have proposed distinctions with terms such as “cyber-” or “information-enabled” warfare. Still others have pushed for quite different concepts.

If at first you don’t succeed, define, define again

Recognition of the role of information communication technologies in conflict has led to many new terms that are often difficult to distinguish from information warfare. Much of this discourse focuses on the changing character of war. For some analysts, this is centred on the growing role of narrative and the ability of information to be dispersed to international audiences quickly, engaging people in war in novel ways.

Former commander of the UK Joint Forces Command, General Sir Richard Barrons, argues that “important as bombs and missiles are, the synchronised and constant manipulation of all forms of communication: political; diplomatic; state, commercial and social media; paid-for influence; and expert cyber intrusion is now a daily part of how states compete, confront and conflict.” Likewise, his colleague General Rupert Smith believes war has shifted from being industrial to what he calls “war among the people,” a new type of conflict “in which the people in the streets and houses and fields – all the people, anywhere – are the battlefield.” Similarly, Lind et al. describe the changing conflict as “the fourth generation battlefield”, which “is likely to include the whole of the enemy’s society.”

For Smith, advances in international media coverage made war a theatre that many people watch and experience from the comfort of their living rooms. While the 24-7 news cycle certainly increased coverage of conflicts, this concept dates back to McLuhan, who noted that through broadcast media “the living room has become a voting booth. Participation via television in Freedom Marches, in war, revolution, pollution, and other events is changing everything.” This idea was echoed by Lind et al., who suggested “television news may become a more powerful operational weapon than armored divisions.” In these concepts, spectator civilians judge what happens in a war based on the media’s portrayal of events, which in turn can be framed by the conflict’s “producers”, those engaged in the hostilities. As Betz notes, this created a “telescopic lensing effect of the high-tech and trans-national media which can turn minor blunders, fleeting errors of judgement and isolated acts of indiscipline into acts of strategic consequence once reserved for general officers.” Information communication technologies have closed “the gap between the grisly reality of war and the public’s awareness of it at home.”

For William Lind and his military co-authors this is a fourth generation of warfare, which they argue has as its goal “collapsing the enemy internally rather than physically destroying him,” by targeting things such as popular support for a conflict and culture, much as the Russians have defined it. For Lind et al., this changed form of warfare is marked by a lack of definition, blurring war and peace and potentially “having no definable battlefields or fronts. The distinction between ‘civilian’ and ‘military’ may disappear. Actions will occur concurrently throughout all participants’ depth, including their society as a cultural, not just a physical entity.”

Arquilla and Ronfeldt refer to a similar concept in defining netwar:

“information-related conflict at a grand level between nations or societies. It means trying to disrupt, damage, or modify what a target population knows or thinks it knows about itself and the world around it. A netwar may focus on public or elite opinion, or both. It may involve public diplomacy measures, propaganda and psychological campaigns, political and cultural subversion, deception of or interference with local media, infiltration of computer networks and databases, and efforts to promote dissident or opposition movements across computer networks.”

In his work On Political War, Paul Smith examines a sweeping array of examples showcasing information’s role in conflict from ancient history to the Cold War. Smith chooses to describe this phenomenon as political warfare, or “the use of political means to compel an opponent to do one’s will,” which “may be combined with violence, economic pressure, subversion, and diplomacy, but its chief aspect is the use of words, images, and ideas, commonly known, according to context, as propaganda and psychological warfare.” Many of his examples, from Macedonian and Roman use of coins to Napoleon’s imperial myth-building and Goebbels’s innovative use of broadcasting, are classic cases of propaganda, as featured in Jowett and O’Donnell’s comprehensive study of the topic. While Smith attempts to distinguish between political warfare (that which the West might legitimately engage in) and propaganda, which he defines as “political advocacy aimed abroad with hostile intent,” by the close of his book there appears to be little distinction left when he asserts that:

“Political war usually has as its object the destruction of a social order, and the elimination or forcible reorientation of large classes of people. Even in its most restrained forms, as practiced by states acting under the restrictions of rules of just and limited war, it presumes loss of life. Carried out by millenarian movements from motives of race or class hatred, it almost inevitably requires physical elimination of whole categories of human beings, in all age groups and of both sexes. Political war is one of the most destructive and bitter forms of combat, and it remains one of the least successfully regulated. It is a lethal weapon.”

Yet propaganda itself is a much-contested concept. At its simplest, propaganda has been described as the use of persuasive information to manipulate a target audience into a behaviour desired by the propagandist. Concepts of propaganda, too, are being revisited in an Information Age, ranging from computational propaganda, “the assemblage of social media platforms, autonomous agents, and big data tasked with the manipulation of public opinion,” to network propaganda, “the ways in which the architecture of a media ecosystem makes it more or less susceptible to disseminating” persuasive messaging, and participatory propaganda, the co-opting of target audiences through online communities to adopt, adapt, and further spread such messaging.

The difficulty in defining propaganda stems in part from its complicated relationship with liberal democracies, given that popular opinion is expected to influence political decision-making and the act of manipulating opinion calls into question the agency of voters. As John S. Dryzek explains, there is a long history in political theory “from Plato to Habermas which equates rhetoric with emotive manipulation of the way points are made, propaganda and demagoguery at an extreme, thus meriting only banishment from the realm of rational communication.” Such concerns were not assuaged by early pioneers in the field of public relations either, who, fearing how easily public opinion could be swayed, saw propaganda as an acceptable tool for the management of popular views. As such, in the English-speaking world propaganda is most often used in the pejorative, as a slur to denote what an adversary does, while influence efforts at home are differentiated by other terms such as public affairs or public relations, and those aimed at persuading external audiences are called public diplomacy or information operations. This, of course, is all in the eyes of the beholder. Propaganda is an agnostic tactic. While the U.S. distinguishes between informative and influence activities, in practice this amounts to an exercise in semantics that fails to convince adversaries that PR and public diplomacy are not persuasive in intent. These euphemistic forms of propaganda are used by many NATO members in diplomacy programs that spread democracy, which Russia has consistently viewed as a threat to its sovereignty.

This ambiguity in conflict has elsewhere been described as a gray zone, where actors engage coercively and aggressively while deliberately remaining “below the threshold of conventional military conflict and open interstate war.” The approach is to use “unconventional tactics from cyberattacks, to propaganda and political warfare, to economic coercion and sabotage, to sponsorship of armed proxy fighters, to creeping military expansionism” to achieve an aim without provoking an armed response. For some, this gray zone is novel in warfare, particularly given the role of the internet, but as Hal Brands notes, “there is a long history of actors seeking to derive the benefits of war, without incurring the costs and risks of overt aggression”, pointing to confrontations during the Cold War and Iranian use of terrorism in the 1980s.

Similar to the gray zone, Russian military thinkers refer to “gibridnaya voyna” (hybrid war), which, according to Fridman, is not war at all but is marked by an absence of armed conflict while still destabilising the adversary. For Russians, such ambiguous tactics are not the remit of the military but of the civilian leadership, whereas Western thinking views them as an armed forces function. Either way, without the use of armed force between two states, it returns us to the question of whether this is in fact war or not. Indeed, the Russians also speak of an information struggle, which has as its objective to obtain and damage adversarial information resources, while protecting one’s own, by influencing information and mass media “to shape the moral values, motives, and behaviour of the individual, collective and social consciousness in order to affect the armed forces and the people.” And it is this thinking that far more resembles what both Russia and the West accuse each other of doing of late, specifically using information communication technologies to facilitate regime change, engaging in so-called hybrid warfare, weaponizing information, and interfering in elections.

These reciprocal accusations might be instructive in how concepts around the shaping of the information environment for strategic purposes develop. Lind et al. suggest the future of war might no longer be defined by the West, as it has been for the past half-century, but by more Eastern traditions. Likewise, McLuhan expected connectivity to bring Western thinking more in line with Eastern: “The contained, the distinct, the separate – our Western legacy – are being replaced by the flowing, the unified, the fused.”

The confines of case studies

This is perhaps even more poignant in light of Rupert Smith’s warning that “war is an imitative and reciprocal activity”, with opponents engaged in extended conflict often becoming increasingly similar, copying and reflecting each other’s tactics. This is alarming given some calls in the West to counter unwanted shaping of the information environment by foreign actors with solutions resembling the approaches of the very entities they fear – namely increased regulation of access to information, censorship, and reciprocal information manipulation. In aiming to mitigate informational threats, the West might inadvertently become more similar to Russia than intended. This is even more likely to be the case if tactics (in the form of anecdotes and case studies) are the focus of research and analysis on this topic.

Indeed, the politicisation of concepts such as hybrid warfare, by both Russia and the West, helps both sides gain support for defence spending allocations by positioning the other as a threat in this sphere. Fridman argues that in the West this politicisation of Russian concepts related to hybrid warfare came, in part, from analysts originating from Central and Eastern Europe, who, as funding for Russia-related academic programs was cut back, replaced the now rarer Western specialists of the region. Given their regional history, this led to an exaggeration of the Russian threat. Likewise, however, Russia’s positioning of Western efforts to exert influence in its near abroad as a pressing existential threat also helps inflame Western fears that Russia, in turn, is a threat to the West. This escalation of narratives seems to encourage, and mutually prove, the idea that each is out to get the other. This politicisation should also raise questions, though, about recent information campaigns, particularly regarding their purpose and aims. All might not be as some want it to seem.

In their comprehensive analysis of the media ecosystem before and after the 2016 U.S. presidential election, Benkler, Faris and Roberts conclude “that the crisis is more institutional than technological, more focused on U.S. media ecosystem dynamics than on Russia, and more driven by asymmetric political polarization than by commercial advertising systems.”

That is not to say that Russians were not engaged in consistent efforts to influence the information environment around the 2016 U.S. election. However, as Benkler et al. found in analysing the spread of content related to the Pizzagate conspiracy, accounts associated with Russian attempts to influence U.S. politics were “more consistent with a background presence and continuous engagement of active Twitter accounts that were also involved in Russia campaigns than with a coordinated campaign to influence Wikileaks or Alex Jones to publicize this particular story, or to amplify it to a degree not otherwise consistent with background attention among right-wing conspiracy minded Twitter users looking for such stories to amplify.” The very identification of Russian efforts was somehow equated with their effectiveness, as if by merely existing they had shaped target audiences’ perspectives. As Benkler et al. write:

“Evidence of sustained effort is not the same as evidence of impact or prevalence. It would be profoundly counterproductive to embrace the narrative that we can no longer know what is true because of Russian bots, sockpuppets, or shady propaganda. Indeed, having us adopt that attitude would mark a remarkable success for the Russian effort: the success in denying democracy one of its core pillars – the capacity to have a public debate based on some sense of a shared reality and trust in institutions.”

Indeed, the authors draw several conclusions from their study, including that research on the manipulation of the information environment must be done cautiously, avoiding the assumption that activity automatically translates into effect.

Case studies looking at instances of shaping the information environment for strategic aims are fraught with problems. As Schroeder and Ling note, “constructivist theory, although it provides case studies and analyses of various individual aspects of ICTs and social change, is limited by the fact that these are invariably bound to particular contexts or issues, which makes them difficult to evaluate across different cases or at a more general level.” In the context of assessing Russian activity, some case studies look at the phenomenon in isolation, failing to put it into wider context and potentially making an activity seem more important than it really is. The Guardian, for example, has been tracking British media mentions of Russian fake Twitter accounts, without cross-referencing what that count (100) means compared to how many times social media is cited as a source overall. Others equate activity with effect. One study that looked at Russian Twitter accounts previously identified as fake concluded that “Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination.” While the report looked at the account activity of these fake Twitter users, it did not assess the effect of that activity on the audiences it might have reached. Many make assumptions about the overall aim of the actor behind a campaign, but the very nature of this sort of informational conflict or struggle is, as is well noted, ambiguous. As Thomas Rid notes, “cyberaggressors may act politically, but in sharp contrast with warfare, they are likely to have a strong interest in avoiding attribution. Subversion has always thrived in cyberspace because preserving one’s anonymity is easier to achieve than ironclad attribution.” If the culprit is not forthright in their actions, there can be little expectation that their strategic aim will be more openly stated – and without a clear aim it is impossible to measure the effectiveness of an information campaign.

Perhaps the difficulty in defining how the information environment is shaped for strategic purposes arises because we have been looking at it the wrong way – focusing on more traditional models of conflict, with known belligerents in obvious struggles, and applying that frame to analysis. What might be required is a giant step back from the specificity of war and conflict towards a more flexible framework that can assess a given example of how the information environment is shaped without having to know the culprit or their motives first. This should certainly be done before making any more proclamations, whether in popular media or otherwise, that “we are at war”.


Header Image: Second World War poster by John M. Gilroy in the UK Ministry of Supply

About Author

La Generalista is the online identity of Alicia Wanless – a researcher and practitioner of strategic communications for social change in a Digital Age. Alicia is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. With a growing international multi-stakeholder community, the Partnership aims to foster evidence-based policymaking to counter threats within the information environment. Wanless is currently a PhD Researcher at King’s College London exploring how the information environment can be studied in similar ways to the physical environment. She is also a pre-doctoral fellow at Stanford University’s Center for International Security and Cooperation, and was a tech advisor to Aspen Institute’s Commission on Information Disorder. Her work has been featured in Lawfare, The National Interest, Foreign Policy, and CBC.
