The Just Security Podcast

The Spread of Political Propaganda on Encrypted Messaging Apps

Just Security Episode 86

During this year’s election season in Mexico, propagandists leveraged a new mass-broadcasting feature on WhatsApp, called “channels,” to impersonate reputable political news outlets and pump out misleading information. Thousands of miles away, Telegram users in Hungary leveraged the app’s forwarding bot against LGBTQ+ and pro-democracy civil society organizations, portraying them as “Western-controlled” ahead of European Union elections. 

Messaging platforms such as WhatsApp, Telegram, and Viber have become highly influential tools for manipulating and misleading voters around the world. 

In fact, a new report, “Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation,” examines how political propagandists have refined a digital “broadcasting toolkit.” The toolkit is a set of tactics for reaching large swaths of voters directly on their phones using narratives tailored to resonate with their specific interests and viewpoints. 

What are some of the most common tactics in the “broadcasting toolkit”? How can users and messaging platforms respond to the spread of propaganda and disinformation? 

Joining the show to discuss the report’s key findings are two of its authors, Mariana Olaizola Rosenblat and Inga Trauthig. 

Mariana is a policy advisor on technology and law at the New York University Stern Center for Business and Human Rights. Inga is the head of research for the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.

Paras Shah: During this year’s election season in Mexico, propagandists leveraged a new mass-broadcasting feature on WhatsApp, called “channels,” to impersonate reputable political news outlets and pump out misleading information. Thousands of miles away, Telegram users in Hungary leveraged the app’s forwarding bot against LGBTQ+ and pro-democracy civil society organizations, portraying them as “Western-controlled” ahead of European Union elections.  

Messaging platforms such as WhatsApp, Telegram, and Viber have become highly influential tools for manipulating and misleading voters around the world. In fact, a new report, “Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation,” examines how political propagandists have refined a digital “broadcasting toolkit.” The toolkit is a set of tactics for reaching large swaths of voters directly on their phones using narratives tailored to resonate with their specific interests and viewpoints.  

What are some of the most common tactics in the “broadcasting toolkit”? How can users and messaging platforms respond to the spread of propaganda and disinformation? 

This is the Just Security Podcast. I’m your host, Paras Shah.  

Joining the show to discuss the report’s key findings are two of its authors, Mariana Olaizola Rosenblat and Inga Trauthig. 

Mariana is a policy advisor on technology and law at the New York University Stern Center for Business and Human Rights. Inga is the head of research for the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.

Mariana, Inga, welcome to the show. Congratulations on the report. It really is so helpful. It contains so much useful information from around the world. And congratulations as well to the NYU Stern Center for Business and Human Rights and the UT Austin Propaganda Research Lab. 

So, the report draws on interviews with political propagandists in 17 countries and includes a survey of more than 4,500 messaging app users. What are your key findings from the report?

Mariana Olaizola Rosenblat: Thanks, Paras. So, our top line finding is that political propagandists are leveraging specific features, design elements, of encrypted messaging platforms to manipulate voters during elections. Now, the dissemination of political content, including manipulative content, on messaging apps has been known for some time, and Inga and her team have been at the forefront of analyzing this phenomenon for several years. What this new study of ours does is unearth how this happens: what the mechanics of propaganda dissemination on these apps are, and what effect it has on voters. 

So, on the propaganda supply side, if you will, our main finding is that propagandists have developed sophisticated techniques over the years that consist of combining different, as I said, design elements and features of the apps to achieve content virality, which is something that is not typically associated with messaging apps. And on the demand side, our main finding is that these tactics do sway a significant portion of voters or constituents. 

So, I'm happy to dive into each of those, and Inga can provide some illustrations. But the other thing I'll say, just by way of introduction to this conversation, is that we lay out definitions for certain key terms, and if I could, I'll just mention a few. So first, what are encrypted messaging platforms? We use this term, and it's appropriate to use, I think, in some cases, but I would say it's a misnomer in some situations. First, because many so-called encrypted messaging platforms or apps are not completely encrypted, that is, end-to-end encrypted. And second, at this point, with the exception of Signal, they're about much more than just messaging. And we can talk about what has driven that, mainly the platforms' business models. 

And then another key definition is propagandist. What do we mean by that? We lay out the definition in the report: it refers to individuals or groups that work to leverage media and communications in purposeful efforts to manipulate public opinion, particularly during elections and other events of civic significance. So, we don't use it in a positive sense. And we also define propaganda, which we use in the sense of negative propaganda: attempts to influence an audience through content that is false or misleading, and/or by employing tactics that are manipulative or inauthentic. We then provide definitions for each, but I won’t go on — just to say this is how we should understand these terms as we use them.  

Paras: Thanks for that overview. And in the report, you do talk about a number of these causal mechanisms, right, about how propaganda is spreading on these encrypted messaging apps. One of these sets of tactics is what you describe as a broadcasting toolkit. What are some of the main features of the toolkit, and how are propagandists using them?  

Inga Trauthig: Yeah, so we need to give credit where credit is due. The term broadcast toolkit is something that an Indian consultant, a political strategist who works in political communication, explained to me that he uses. And Mariana and I realized that the term broadcast toolkit really encapsulates the different tactics that we are observing, and that we're hoping, you know, will be countered better in the future, so that they can't be used for manipulation.

And the broadcast toolkit, it's not one static thing that you switch on or off; instead, it's an evolving system that propagandists are building to manipulate public opinion via encrypted messaging apps. And roughly speaking, it has two layers. The first one, the most important one, is your infrastructure, what we also call the networks of distribution. So, either you create new groups, where you add members and voters in a certain region, for example, or of a certain segment of society, and then you use these groups to distribute your propaganda. Or, you try to infiltrate existing groups, put propaganda into those already existing groups, and sway public opinion. Or you collect phone numbers in order to message people individually. So, that's the first layer of the broadcast toolkit: you need your network of distribution. 

And then the second layer is to rely on those app features and use them to best effect for manipulating public opinion. So, forwarding is a very important function of messaging apps that propagandists use. Cross-posting. Cross-posting is a bit of a technical term, which basically just means you create content on one platform and then you make sure it spreads on a different one. So, it might be a post or video on X that you then put into your WhatsApp group, and, you know, it takes off from there. Status updates, or stories, are another important feature. Channels and channel feedback loops, where you create something in a channel, then forward it to an individual chat or to a group, and that way the content just spreads by itself. And also the use of bots. 

So, the broadcasting toolkit is this system that is constantly evolving, and that propagandists use to manipulate public opinion via messaging apps. 
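
To make the scale of this system concrete, here is a toy model, in Python, of the channel feedback loop Inga describes: one channel post forwarded into groups, whose viewers forward it onward to individual chats. Every number is an illustrative assumption, not a figure from the report; the point is only how the layers multiply reach.

    # Toy model of a "channel feedback loop": a single channel post is
    # forwarded into groups, and a fraction of viewers forward it onward
    # to individual chats. All parameters are illustrative assumptions.
    CHANNEL_SUBSCRIBERS = 1_000
    GROUPS_SEEDED = 50         # groups the channel post is forwarded into
    MEMBERS_PER_GROUP = 200    # assumed typical group size
    FORWARD_RATE = 0.05        # fraction of viewers who forward onward
    CONTACTS_PER_FORWARD = 10  # individual chats each forwarder reaches

    def estimate_reach() -> int:
        """Rough fan-out estimate across the three distribution layers."""
        group_reach = GROUPS_SEEDED * MEMBERS_PER_GROUP
        viewers = CHANNEL_SUBSCRIBERS + group_reach
        individual_reach = int(viewers * FORWARD_RATE * CONTACTS_PER_FORWARD)
        return viewers + individual_reach

    print(estimate_reach())  # 16,500 people reached from one channel post

Under these assumptions, a single post reaches more than sixteen times the channel's own subscriber base, which is the kind of virality not usually associated with messaging apps.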

Paras: You know, it sounds benign when you think about it in abstract terms, right, forwarding, groups, and similar tactics. But when you dive into specific examples, and the report draws on elections this year from different regions of the world, from Africa, Asia, Europe, and North America, that's where you really see how these tactics are operating on the ground. Can you give a couple of examples of how you've observed these tactics in action?

Inga: I'm going to start with India, because in India the propaganda ecosystem on encrypted messaging, especially WhatsApp, is really far developed. So, also this year, we've seen several examples of propagandists and manipulative actors trying to copycat official party groups and portray themselves as such, but spewing out really hateful content and misleading information, often targeting Muslim minorities in the country. Another concrete example is Mexico, which also had a presidential election this year, where the verification feature, being a verified news outlet with a channel, or a verified business account, was exploited. Very reputable news outlets were basically taken over by propagandists, who could then put out really harmful disinformation while pretending to be that reputable news outlet. So those are some concrete examples. 

And I also remember some of the interviews we did in Bolivia, which actually has national elections next year. Folks that we spoke to were very concerned, saying that on WhatsApp there's been so much hate created lately, and so many government agencies and departments are starting their own channels or groups. But with that come a lot of manipulative actors who try to copy them, get verified, and then, for instance, put out information saying that certain tribal minority groups in Bolivia get way more social security or social benefits than other parts of the population. So, that's already creating polarization right now, a year in the lead-up to the elections.

Paras: 2024 is the year of elections. Over a billion people globally have already voted this year. The report has a number of recommendations that are targeted at these messaging platforms. In concrete terms, what are the steps they can take to address the spread of propaganda, misinformation, and disinformation on their platforms? 

Mariana: Sure. There's a number of things they can do. Before laying out some steps, I just want to step back and remind everybody that encrypted messaging apps are tricky because they're a double-edged sword, or a sword and a shield, if you will. Messaging apps like WhatsApp, Viber, and Telegram provide us with clear benefits; especially for activists who are at risk of state repression, they provide a crucial mechanism to communicate, coordinate, and mobilize. But often, because these apps are secure, they serve as channels for bad actors to disseminate harmful information and sometimes mobilize for violence. 

So, our main question and consideration going into the recommendations was: how can platforms add friction and mitigate harmful electoral propaganda without undermining encryption? Because end-to-end encryption, again, is a vital function, a vital tool for democracy activists. So, the main condition in all our recommendations is that nothing can compromise encryption. Starting with that, there are still many things that platforms can do. First, to counter automated dissemination of information and the creation of third-party accounts under false pretenses, and to basically combat phone farms and trolls, they should establish strict account creation limits. A platform like Telegram allows someone, with one device, to create an indefinite number of accounts, because you can actually buy and sell usernames on a platform that Telegram owns. On the other side of the spectrum, Signal allows for the creation of one account per device. That's it. And so, you're likely to see less use of that platform for propaganda. And then in the middle are WhatsApp and Viber, which allow for, I think, up to two accounts per device, which seems like a good compromise. But propagandists have told Inga and her team, time and time again, that they're able to circumvent those limitations. So, our recommendation for platforms is: establish a strict account creation limit, both in terms of the number and the pace of account creation, and make sure to close any technical loopholes that can be exploited. 
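
As a rough sketch of what limits on both the number and the pace of account creation could look like on a platform's side, here is a minimal example in Python. The two-account cap and one-day cooldown are assumptions chosen for illustration, not any platform's actual policy, and a real system would also need device signals harder to spoof than a simple identifier.

    import time
    from collections import defaultdict

    # Illustrative limits (assumptions, not any platform's real policy):
    MAX_ACCOUNTS_PER_DEVICE = 2          # hard cap on total accounts
    MIN_SECONDS_BETWEEN_SIGNUPS = 86400  # pace limit: one signup per day

    class AccountCreationLimiter:
        """Enforces both a total-count cap and a pace (cooldown) limit
        on account creation per device identifier."""

        def __init__(self):
            self._signups = defaultdict(list)  # device_id -> timestamps

        def can_create_account(self, device_id: str) -> bool:
            history = self._signups[device_id]
            if len(history) >= MAX_ACCOUNTS_PER_DEVICE:
                return False  # count limit reached
            if history and time.time() - history[-1] < MIN_SECONDS_BETWEEN_SIGNUPS:
                return False  # signing up too quickly
            return True

        def record_signup(self, device_id: str) -> None:
            self._signups[device_id].append(time.time())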

Then the second thing they can do, which again is very actionable, very concrete: they should do robust vetting of channels, business accounts, and other premium accounts that are able to disseminate content to large numbers of users at once. So, channels are unlimited broadcasting functions, and business accounts on WhatsApp can also reach an indefinite number of users if the business pays. That's fine if the platform wants to enable those features, but their policies say those are not available for political campaigns. And yet, through the field research, we know that propagandists are able to set up business accounts and channels, as I was saying, under false pretenses, and are basically easily able to game the system there. 

Third, the platforms should invest in user-driven fact-checking tools, like tip lines and other functionalities, that help users figure out whether they're receiving a piece of misinformation or disinformation. And I'm happy to talk about what that would entail. Those tools should be user-friendly, rather than cumbersome, which is the way they are now. Fourth, the platforms should be transparent with users as to which chats or features are actually protected with end-to-end encryption and which are not, so that users have a realistic expectation of privacy in each type of chat they engage in. 

And fifth, and probably most important, the platforms should stop trying to be everything: to provide messaging and also social media functionalities and also be payment systems, and then try to put end-to-end encryption on all of it, or just some of it, but in a confusing way. They should really just be clear and bifurcate their different services: provide messaging and protect it with end-to-end encryption, as makes sense. And then provide social networking functionalities, but on a different surface, being clear that that's not protected with end-to-end encryption, because in those situations it makes sense to have robust content moderation. So, those are not all, but some of the recommendations that we put forward for platforms. 

Paras: Thanks. And I did want to follow up on that point about tip lines. Where have you seen that be successful? And really, how can the platforms make those as user friendly as possible?  

Inga: So, in order for them to be as user-friendly as possible, there needs to be collaboration between the companies behind the messaging apps and, usually, the civil society organizations or news organizations that offer tip lines. For instance, here in the U.S., one tip line is from Factchequeado, which mainly serves Latino communities in the U.S. But one thing I want to point out about tip lines, which is actually really important and one of the reasons that we, all of the report's authors, recommend them, is that they start with the user, right? So, it's not breaking encryption or anything. The users themselves on the encrypted platform decide, oh, I would like to have this checked by a tip line. So it's this type of bottom-up effort that preserves encryption. But, at the same time, in our survey, very few of the respondents said that they actively use tip lines. Yet when they were asked — Mariana can maybe clarify afterwards — about certain tools for fact checking, or just verification or double-checking a source, that could be available to them right on the encrypted messaging apps, a lot of them said they would find that really helpful. So, I think there's a lot of room for improvement here for cross-sector collaboration, so that users on the messaging apps feel more empowered. Mariana, do you want to add the exact survey data?

Mariana: Yes, happy to, and that's absolutely right. So, in our survey, we asked respondents who had encountered political content on messaging apps: have you ever used a tip line? And only seven percent of them, across the nine countries where we deployed our survey, said that they had ever contacted a tip line. And actually, the percentage was lower than that, because when we asked them which one, and they gave us their response in open-ended form, we went in and looked at the tip lines they mentioned, and they weren't actual tip lines. They were mostly just names of other applications, or Google, something like that.  

A tip line is an in-app service that civil society and fact-checking organizations provide. They basically set up an account within the messaging app, and users can choose, on their own initiative, to send that account a piece of content they encountered and ask: has this been fact-checked? Is this true? Is this false? And the civil society or fact-checking organization will say, we have fact-checked this and it's false, or, this is true. And again, only seven percent or fewer of our surveyed users said they had ever used such a service. But when we asked, would you find such a service useful, 83.4 percent said they would. So, a huge disparity.

And the same was the case with user reporting, by the way. Very few users, I think more than seven percent but still few, said that they had used the reporting features of the apps that enable that. And a lot, maybe something like 90 percent — I have to look at the actual number — but huge majorities said they find it important to have an option to report problematic content they encounter on the apps.

And further, the experts we talked to, who are looking at this issue as well and have examined tip lines, mentioned that to contact a tip line on an app like this, you have to go through a series of steps that, as a user, are really difficult, especially if you're a busy person. You have to know which tip lines exist, or which civil society orgs have tip lines, add their phone number to your contacts, and then remember each time to find them in the directory and ask them to verify. Whereas the apps could provide an easy-to-use function where, next to the content, when you tap the three dots, or however you can click on the content or the image, it provides you with an option: do you want to contact a tip line? And then gives you a directory of options for which tip lines you want to contact, which orgs you trust. So, there are ways that the apps can make this much better and much more functional. And that's our recommendation. 
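
To illustrate why this flow doesn't touch encryption, here is a minimal sketch, in Python, of the receiving side of a hypothetical tip line. The database, the exact-match hashing, and the reply text are simplifications invented for the example; real tip lines rely on fuzzier matching and human fact-checkers. The key property is that the service only ever sees content a user deliberately chose to forward.

    import hashlib

    # Hypothetical store maintained by a fact-checking organization:
    # hash of normalized content -> published verdict.
    FACT_CHECK_DB: dict[str, str] = {}

    def normalize(text: str) -> str:
        """Lowercase and collapse whitespace so lightly edited copies of
        a viral message still match the stored entry."""
        return " ".join(text.lower().split())

    def handle_tip(message_text: str) -> str:
        """Reply to a user who voluntarily forwarded content to the tip
        line; nothing else in their chats is visible to the service."""
        digest = hashlib.sha256(normalize(message_text).encode()).hexdigest()
        verdict = FACT_CHECK_DB.get(digest)
        if verdict is None:
            return "We haven't fact-checked this yet; it has been queued for review."
        return f"This content has been fact-checked. Verdict: {verdict}."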

Paras: Yeah, really tangible steps there. And the report also has recommendations targeted at policymakers, civil society organizations, and researchers. What can they do?  

Mariana: Yeah, for governments, our main recommendation, and I'll say it emphatically, is: do not impose legal obligations on platforms that undermine encryption. Again, going back to why end-to-end encryption is so important for the exercise of human rights: at least for the purposes of mitigating disinformation, it's not necessary to break encryption. So, governments should stop. There's a trend among governments to demand that online platforms track and remove certain content that is illegal or harmful, but, applied to encrypted messaging spaces, that would require platforms to break encryption. Governments like Brazil, India, Indonesia, and the U.K. have recently passed or put forth laws that impose client-side scanning, source tracing or traceability, or just vague requirements that the platforms track and take down content, and that would sharply discourage companies from enabling end-to-end encryption. 

So rather than passing those regulations, governments should focus on transparency requirements — requiring that messaging apps, just like other platforms, release content-neutral information about their operation, their policy enforcement systems, how their business models play into which features they roll out, and so forth. And we have a second recommendation for governments: support local organizations that work with communities to enhance their media literacy. That's mostly a supportive role for government, but these organizations definitely need resources and tangible support. 

Paras: Yeah, thanks for that. Again, very practical and important steps that governments can also take.  

What are some of the gaps in our current understanding of how disinformation is spreading on these messaging applications, based on the research that you've done so far? And how can researchers, in follow-up work, continue to navigate these challenges and address these gaps? 

Inga: I'm going to start on a positive note, because over the last, I would say, handful of years, the research on encrypted messaging apps, some with more and some with less encryption, and how they contribute to political developments has actually increased. It started in the security studies field, especially terrorism studies, where, you know, the Islamic State has been so present on Telegram for several years now. But it has moved into the political communication space, where we are interested in how political opinion is formed and influenced and also manipulated. 

And at the beginning, messaging apps were basically out of the picture. It was like, okay, you have Facebook, you have Twitter at the time, Instagram, now TikTok as well, all of these big platforms. That's where all the political manipulation and influence happens, because that's where you have the viral features and the huge audiences, and messaging apps are really only for communication among family and friends, and, yeah, maybe you come across politics occasionally. And I think with this report, just by providing the data from the survey and the interviews, we are really managing to underline how crucial messaging apps have become in political opinion formation, and how they can be used to manipulate voters before elections. 

So, I think that is really important to acknowledge, and quite a lot of work has happened in this research space over the last years. I hope this report underlines how much focus needs to go into these apps as well. Now we come to how to research this, and that's quite hard, because the apps are end-to-end encrypted, or partially end-to-end encrypted, or mainly end-to-end encrypted, depending on which app you look at. That makes research on them harder, especially ethical research, because you often can't scrape large amounts of data. And what existed in the past, like the Twitter APIs that really helped researchers track behavior and patterns on platforms, doesn't exist anymore. So, you need to be more sensitive in your research, and it's much harder to collect data.  

So, one way to tackle this is with qualitative research — to actually do interviews, where, instead of gaining access to a specific group chat as a researcher yourself, someone tells you about that group chat, what they've seen, how it functions, and maybe does a screen walkthrough. Another part is the survey that Mariana and her team rolled out, where you survey the users and that way gain insight. 

And then there are other, more quantitative ways to gather data from those encrypted messaging apps, but that is a bit controversial. Some people treat groups on WhatsApp as public as long as you can find a link to the group online. According to my research ethics, that's not a public group, because most members probably don't think that there are researchers in the group, for instance. So, with those messaging apps increasing in their political importance, researchers, the academic space, we need to figure out what is ethical in terms of how to research these spaces, which are still seen by many people as private and protected, which de facto sometimes are end-to-end encrypted, and where you can't actually get at that type of data unless you're very intrusive.

Paras: Yeah, a lot of tricky and thorny issues there, but a lot of important research also to be done. Zooming out and looking ahead, what are the one or two biggest trends that each of you is watching for? 

Inga: I think the first thing that comes to my mind, just, you know, related to current political developments, is whether Telegram is going to change its behavior somehow, because Pavel Durov was arrested in France. As far as I'm aware, he still has to stay in France and has some pending legal proceedings, although I didn't check today what the latest is. But you know, Telegram has been, to put it bluntly, the worst offender among messaging apps in terms of how it allows for political exploitation. So, is there a way to rein in Pavel Durov, the founder, and Telegram in the future? That might have some implications. And other messaging apps and other company owners will be watching, right, what's going to happen to Pavel Durov. So, that is on my mind. 

And the second big trend that I think we're going to continue watching, Mariana and our teams, is feature bloat: how so many new features are added to these messaging apps, how almost all of these features are exploited by manipulative actors, and how many of them actually relate to the companies' incentives to make more money. So, is the revenue side going to completely trump trust and safety, and are we going to see more exploitation because of that in the future?

Mariana: Yeah, I'm going to be watching that. Closely related to the Durov arrest: certainly, other platform founders and CEOs are watching what happens to him, but bad actors are as well, and they may decide, as often happens, to migrate to another platform that they now consider more secure, given that Telegram might be compromised in their eyes. So that's one thing to watch.

Another, for me, is how regulators enforce these emerging laws around safety. On paper, in some cases, like the U.K. Online Safety Act, they require platforms to monitor and take down content that is illegal, and also content that is harmful to children, and if applied to messaging apps that are end-to-end encrypted, that's impossible without breaking encryption. There's been a lot of debate, a lot of discussion on that, but there's been no enforcement yet. We'll see what the enforcement agency comes out with in terms of guidance for messaging platforms; I don't think I've seen any yet. So, the broader debate is how to constantly balance safety and privacy where they conflict. You know, how do you make that trade-off? 

Paras: Yeah, certainly a lot to watch for. Is there anything that we haven't touched on yet that you'd like to add?  

Inga: So, something that we haven't mentioned in this podcast yet, but that I think is worth mentioning, and also a place to watch, is the integration of AI into messaging apps, which we've also been looking at, because generative AI is very quickly being integrated into the space of political propaganda. And we definitely have seen, and have concerns about, the AI-created content that is spreading in messaging apps as well. It has a lot to do with the cross-posting and cross-platform communication that I mentioned earlier. But there are also AI-specific features that have started to be rolled out on the messaging apps, like AI chatbots that you can invite into your group chat on Telegram. Bots have always been a huge thing, so AI will definitely exacerbate trends there. 

So yeah, I think that is something I want to highlight. Messaging apps are part of the social media ecosystem. They have huge political importance, according to our report, and AI is a disruptive force in this space. And yeah, we should be watching how the apps are managing that integration and how the propagandists are going to pick up AI tools and use them in messaging apps. 

Paras: Yeah, we'll be tracking all these trends at Just Security. Mariana, Inga, thank you again for joining the show. 

Inga: Thank you so much for having us. 

Mariana: Thank you, Paras, thank you so much.

Paras: This episode was hosted and produced by me, Paras Shah, with help from Clara Apt. 

Special thanks to Mariana Olaizola Rosenblat and Inga Trauthig. 

You can read all of Just Security’s coverage of disinformation and technology, including Mariana and Inga’s analysis, on our website. If you enjoyed this episode, please give us a five-star rating on Apple Podcasts or wherever you listen.
