Social media companies are better positioned than governments to meet the enforcement challenges posed by influence operations that aren’t aligned with hostile states but still cause harm.

By Elise Thomas, Natalie Thompson, and Alicia Wanless for the Carnegie Endowment for International Peace

Influence operations are organized attempts to achieve a specific effect among a target audience. To achieve these effects, a variety of actors—ranging from advertisers to activists to opportunists—employ a diverse set of tactics, techniques, and procedures to affect the decisionmaking, beliefs, and opinions of a target audience.

Yet much public discourse has failed to paint a nuanced picture of these activities. Media coverage of influence operations tends to be negative, stoking fears about how such operations might undermine the legitimacy of liberal democracies. But the purpose behind a campaign must be considered when assessing its effects. In electoral settings, influence operations can refer not only to coordinated state efforts to influence a foreign election but also to positive advocacy campaigns, such as those designed to encourage people to vote. Governments and industry actors face growing pressure to do something about malign influence operations, but these campaigns must be clearly understood before they can be addressed effectively.

In reality, influence operations are neither inherently good nor bad; it is up to societies themselves to decide what conduct and responses are and are not acceptable. Judging whether a given influence operation is acceptable is highly ambiguous because the motives of the actors behind it are so hard to ascertain. This analysis examines a case study involving an online influence operation originating in Israel and targeting audiences in a host of countries, including Australia, Canada, the United Kingdom, and the United States. The operators of this content mill push highly politicized online content to foreign audiences, at least in part for apparent financial gain.

This study explores the difficulties in making simple assessments about influence operations. The case study serves as a basis for analyzing publicly available information about social media community standards and government legislation aimed at countering influence operations, in order to identify gaps in and challenges for the solutions proposed so far. The study ultimately points to the need for clearer consideration of what constitutes acceptable persuasive techniques and tactics of online engagement, and it highlights the lack of clear existing guidelines for policy development.

KEY TAKEAWAYS

  • Influence operations defy easy categorization. They often fail to fit neatly into boxes outlined by individual policies or legislation. They are run in a complex environment where actors overlap, borders are easily crossed and blurred, and motives are mixed—making enforcement challenging. In this case study, actors share highly politicized online content but also appear to benefit financially from their actions, making it difficult to ascertain whether their motives are primarily political, commercial, or both.
  • Relevant policies by social media platforms tend to be a patchwork of community standards that apply to individual activities of an influence campaign, not the operation as a whole. Policies published by social media companies often focus on individual components of influence operations, attempting to neatly categorize and distinguish actors (foreign versus domestic), motives (political influence versus profit), activities (including misrepresentation, fraud, and spamming behavior), and content (such as misinformation, hate speech, and abuse). This piecemeal approach to enforcement raises questions about whether officials within social media platforms fully understand how influence operations work—and how such campaigns are more than the individual behaviors that compose them.
  • Social media networks have more opportunities to counter influence operations through their platform policies than governments do with existing legislation. Social media companies have implemented various policies to govern how their platforms are used, providing opportunities for combating influence operations. They also have greater access to information about how their platforms are used, as well as domain-specific expertise that allows them to create more tailored solutions. Fewer avenues exist for countering such influence operations through government-led legal mechanisms. This is not only because of the relative paucity of laws governing online activity but also because law enforcement requires attribution before it can act, and such attribution can be difficult to establish in these cases. As a result, governments have generally done little to help private industry actors determine what kinds of influence operations are unacceptable and should be combated. In the absence of such guidance, industry actors are de facto drawing those lines for society. Governments could do more to help guide industry players as they determine the boundaries of acceptable behavior—by participating in multi-stakeholder efforts, some of which have been set up by think tanks and nonprofits, and by considering legal approaches that emphasize transparency rather than criminalization.
  • The influence operations uncovered by media scrutiny are not always as easy to counter as those writing about them might hope. Savvy influence operators understand how to evade existing rules, ensuring that their activities and content do not breach known policies or legislation. Media coverage that showcases examples of influence operations seldom explains whether and how the operators violate existing platform policies or laws. This matters because distasteful influence operations do not always overtly violate existing policies or laws—raising questions about where the lines are (and should be) between what is tolerable and what is not, and who should be drawing those lines. Even when existing policies clearly do apply, these questions persist. Stakeholders should more clearly assess what constitutes problematic behavior before rushing to demand enforcement.

Header Image: Soviet anti-superstition propaganda poster from 1970: “This man is a scientist, but he blindly believes in all signs.”
Source: Soviet Visuals

About Author

Alicia Wanless

La Generalista is the online identity of Alicia Wanless, a researcher and practitioner of strategic communications for social change in a digital age. Alicia researches how we shape—and are shaped by—a changing information space. With more than a decade of experience researching and analyzing the information environment, with a focus on propaganda and information warfare, Alicia conducts content and network analysis and has developed original models for identifying and analyzing digital propaganda campaigns. She is currently a PhD researcher at King’s College exploring alternative frameworks for understanding the information environment.
