Computational propaganda involves the “use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks” (Woolley & Howard, 2018). While propaganda has existed throughout human history, the rise of digital technologies and social media platforms has brought new dimensions to this practice.
These technologies allow bad actors to deliver their messages to more people than ever before, regardless of geographical location. The anonymity enabled by many platforms has led to the proliferation of fake and automated accounts, which can be used to amplify misleading information or silence opposition. Finally, the monitoring and profiling of social media users by tech firms enables bad actors to target messages to audiences who might be particularly susceptible to disinformation campaigns. This online landscape presents a real challenge to organizations working on pressing social, political, and environmental issues.
Who takes part?
See our 2019 report, the Global Inventory of Organised Social Media Manipulation, on the use of social media manipulation by governments and political parties.
Strategies & Targets
While computational propaganda varies across political and cultural contexts, certain strategies are common. For example:
- Amplifying misleading messages through the use of bots or paid commentators
- Hiring trolls to debate, harass, or bully genuine social media users
- Purchasing advertisements and using analytics to target citizens with disinformation
These strategies are often used with the aim of:
- Widening pre-existing divisions within society
- Influencing the outcome of elections
- Distracting people from potentially damaging news
- Creating confusion and an environment of distrust towards institutional actors
Computational propaganda has been employed by a range of actors, including government organizations, political parties, companies, terrorist groups, and ad-hoc groups of private citizens.
How have governments tried to address the problem?
- Computational propaganda is not just a technical problem. Disinformation campaigns often exploit pre-existing social issues, such as distrust in the government or resentment towards immigrants.
- These campaigns cause real harm to individuals, society, and democratic systems. People from vulnerable groups tend to be disproportionately targeted by disinformation.
- Counter-strategies from governments, industry, and civil society must contend with the speed and sophistication of computational methods, along with complex social and historical contexts.
To find out more about the issue of computational propaganda, see our curated list of resources.