Dr. Sander van der Linden, professor of social psychology at the University of Cambridge and director of the Cambridge Social Decision Making Lab, discusses the prospect of reaching psychological herd immunity to propaganda and misinformation, the challenge of trying to debunk disinformation without repeating it, and the capacity of cognitive dissonance to overcome facts. He also discusses incremental design changes that social media platforms could make to reduce the spread of misinformation and polarizing content.
Transcript:
0:00: Grace Lovins:
With us today, we have Dr. Sander van der Linden, professor of social psychology at the University of Cambridge and director of the Cambridge Social Decision Making Lab. He's also one of the foremost researchers in building resistance to persuasion through psychological inoculation. Dr. Van der Linden, thanks so much for joining us today.
0:17: Dr. Sander van der Linden:
Yeah, pleasure to be here.
0:21: Grace Lovins:
In your latest book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, you explore the prospect of building psychological herd immunity to propaganda and misinformation. Can you discuss what herd immunity is in this context?
0:35: Dr. Sander van der Linden:
Yeah, so if you think about the idea of a psychological vaccine, right, that you expose people to a weakened or inactivated strain of a piece of misinformation, or the techniques that are used to produce misinformation, and people can build up cognitive resistance over time, then you have individual immunity. And if you look at some of the literature that goes back to the sixties, from a social psychologist named Bill McGuire, he was very much thinking of this at the cognitive level: that you can pre-expose people to a weakened dose of a propaganda technique and inoculate them beforehand. But I think it was only much later that people started thinking about, well, isn't the ultimate purpose of this metaphor to achieve herd immunity, just like you do with real vaccines, right? The whole idea is to get enough people vaccinated so that the virus no longer has a chance to spread. And I think that's where we want to go with herd immunity.
And psychological herd immunity, I think, can mean multiple things in this context. I think it's a bit ambitious to say that at a global population scale, when 90% of the population is vaccinated against propaganda techniques, then we're going to have herd immunity against misinformation. I think that's perhaps a bit optimistic, but you can think of this in terms of communities, whether online or offline, that if enough critical mass in a given community structure is vaccinated at a certain level, then there might be enough resistance to slow the spread of misinformation or disinformation. And that's, I think, what we mean with psychological herd immunity.
And some colleagues like Josh Compton, who's at Dartmouth, have worked on the idea of word-of-mouth, so people could spread the vaccine by word-of-mouth. The way we've been thinking about doing it is through, you know, entertaining games or videos or conversations that people have. But the idea here is that you actually pass it on. I think initially in the literature, what's sometimes referred to as post-inoculation talk, which is the idea that people have conversations about the inoculation, was very much focused on the fact that it would actually strengthen the resistance within the individual. So if you're inoculated and you're talking about what you've been inoculated against, you're strengthening your own resistance.
The way we were thinking about it, though, is the passing-on process. So if you are passing it on to other people, how much do those people remember, and how much does the next person remember, and the next person, and how does it spread within a network or community? And that's kind of the herd immunity that I'm thinking about.
And we've done some computer simulations, too, to see what's possible. So this is theoretical, of course, but let's say you have a structure of some social network, then you have broadcasters of misinformation, and then you inoculate people. And there are different scenarios of how you can inoculate the population: you could disperse it over time slowly, or you could load it upfront. What we found over hundreds of simulations is that inoculation helps either way, but it's not as effective if it's a drip going slowly throughout the population. It's going to be most effective when you heavily vaccinate people upfront, in advance of a massive disinformation campaign. It kind of makes intuitive sense, but that's also what the models show.
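To make that setup concrete, here is a minimal sketch in Python of the kind of simulation being described, comparing an upfront inoculation schedule with a slow drip. The network model, spread probability, dose counts, and number of steps are illustrative assumptions, not the lab's actual parameters or code.

```python
# Toy simulation: misinformation spreading on a network, with two
# inoculation schedules ("upfront" vs. "drip"). All parameters are illustrative.
import random
import networkx as nx

def run(schedule, n=1000, n_broadcasters=5, p_spread=0.05,
        vaccinated_total=300, steps=50, seed=0):
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n, 3, seed=seed)   # rough stand-in for a social network
    state = {v: "susceptible" for v in g}           # susceptible / believing / inoculated
    for b in rng.sample(list(g), n_broadcasters):
        state[b] = "believing"                      # persistent broadcasters of misinformation

    # Inoculation schedule: all doses at the start, or the same total spread over every step.
    per_step = {0: vaccinated_total} if schedule == "upfront" else \
               {t: vaccinated_total // steps for t in range(steps)}

    for t in range(steps):
        # vaccinate some still-susceptible nodes this step
        candidates = [v for v in g if state[v] == "susceptible"]
        for v in rng.sample(candidates, min(per_step.get(t, 0), len(candidates))):
            state[v] = "inoculated"
        # misinformation spreads from believers to susceptible neighbours
        newly = [u for v in g if state[v] == "believing"
                 for u in g.neighbors(v)
                 if state[u] == "susceptible" and rng.random() < p_spread]
        for u in newly:
            state[u] = "believing"

    return sum(1 for v in g if state[v] == "believing")

print("upfront:", run("upfront"), "believers")
print("drip:   ", run("drip"), "believers")
```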
And then how do you do this in practice? That's what we've been focusing on: trying to partner up with organizations that can really scale this and get the vaccine out to as many people as possible on social media. And that's been the focus of some of our later work.
4:14: Grace Lovins:
You explained how trying to debunk misinformation can often create a stronger memory association with the misinformation in question. How can we get around this?
4:23: Dr. Sander van der Linden:
Yeah, I should caveat this a little bit by saying that, you know, there's a lot of talk about fact-checking and debunking backfiring, and it's important to distinguish the different forms of backfire that can happen here. I think one of the things that people have been worried about for a long time is the worldview or political backfire effect: that if the fact check is not politically congenial, people will dig their heels in deeper and so on. And it seems that that's true for some extremely motivated people, but overall it's not a huge concern for most people. Most people, when they're exposed to a fact check, may disregard it, but actually doubling down on their beliefs is an extreme response. So you don't see that in most people, only in people who are extreme.
However, another type of backfire effect is this idea that when you repeat the misinformation, you can inadvertently strengthen people's associations with it. And that's something I've been slightly more intrigued by, because it has this practical element: how are you going to debunk something without repeating it?
And the classical finding for this is that they give people a story in the lab, usually about some event: there was a fire, or an airplane crashed, and this was the cause. And then later they say, oh, actually that wasn't the cause; the cause was unknown, or this is the actual cause. And then they ask people a bunch of questions: why did this happen? How did the plane crash? What started the fire? And then you notice people are still giving you the wrong explanation that they heard the first time, and they disregard the correction that you gave them in between. And that's what we call the continued influence of misinformation.
And that happens for a couple of reasons. If you look at the neurological research on this, there's not much, but there are a few studies, and they have competing accounts. One is about integration: people fail to integrate the correction into their mental model of how something works. This is an integration error and it has to do with memory. You know, your memory is like a spider web of links, it's kind of like a social network, right? You have links and nodes. Nodes are the concepts, so let's say vaccine. And then you have live vaccine, inactivated, autism, side effects, immunity. And there are all kinds of links and they're all related to each other. And the more often you repeat something, the stronger these links become and the easier they are to activate.
But then when you're trying to undo a link, that's where it becomes really difficult, because when you undo one link, you may not realize that there are 30 other ones that are still active. And it becomes this game of whack-a-mole, where you're trying to undo all of these links and people are forming new links all the time. And when you repeat information, it can strengthen them.
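As a loose illustration of that associative-network picture, here is a toy sketch in Python. The concepts, the link-strengthening rule, and the suppression factor are invented for illustration and are not a cognitive model from the research.

```python
# Toy associative memory: concepts are nodes, repeated co-mention strengthens links,
# and "correcting" one link leaves the rest of the cluster intact.
from collections import defaultdict
from itertools import combinations

links = defaultdict(float)              # (concept_a, concept_b) -> association strength

def repeat(*concepts, boost=1.0):
    """Every repetition strengthens every pairwise link among the mentioned concepts."""
    for a, b in combinations(sorted(concepts), 2):
        links[(a, b)] += boost

def correct(a, b, suppression=0.8):
    """A correction weakens one specific link but touches nothing else."""
    key = tuple(sorted((a, b)))
    links[key] *= (1 - suppression)

# A myth repeated three times builds a dense little cluster...
for _ in range(3):
    repeat("vaccine", "autism", "side effects")

correct("vaccine", "autism")            # debunk one link

for pair, strength in sorted(links.items()):
    print(pair, round(strength, 2))
# ('autism', 'side effects') and ('side effects', 'vaccine') stay at full strength:
# the whack-a-mole problem described in the interview.
```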
Another competing account is what we call the selective retrieval account. The misinformation and the correction are stored concurrently in your memory, so people activate them both. And the idea is that the misinformation is being suppressed by the correction, but sometimes there's an error there and people fail to suppress the misinformation. And so there's a retrieval error: people are not retrieving the correction.
Anyway, long story. But for me, regardless of who's right about these accounts, I think both are probably true, and they can happen differently for different people. I think the moral of the story is the old saying: when a jury hears something, in a trial or in the media, that they weren't supposed to hear, you can't unring a bell. And I think the same is true for debunking.
And that brings us to the practical recommendation, which is your question; this was a long-winded way of actually answering it. You need to make the correction as prominent as possible relative to the misinformation. Because regardless of whether people fail to integrate it or fail to retrieve it, it's all happening because the correction is not prominent enough. So we need to stop repeating the misinformation and make the correction the headline and the salient part.
Now, some of my colleagues will have a slightly different opinion. We just did a consensus report for the American Psychological Association trying to hash this out, and I compromised on the idea that it's probably okay to repeat noninfluential parts of the misinformation once, when you're doing an extensive debunking of it. But it shouldn't really be much more than that, and you should try to wrap it in the truth. This is what's also called the "truth sandwich": you start with the facts, then you explain why the myth is misleading rather than repeating it, and then you end again with the facts. So you layer it, you use the truth to wrap around the lie so that it doesn't escape. That's kind of the idea. Now, experimental evidence for this is still evolving, but I think it's kind of a foolproof way of doing it: if you start and end with the facts, it's very hard for the misinformation to escape. And so that's the recommendation.
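For anyone drafting corrections in a structured or automated way, the truth-sandwich ordering could be templated roughly as follows; the wording and the example claim are placeholders, just to show the fact, caution, explanation, fact structure described above.

```python
# Minimal template for the "truth sandwich": lead with the fact, briefly flag the myth
# without restating it in detail, explain why it's misleading, and close with the fact again.
def truth_sandwich(fact: str, myth_label: str, why_misleading: str) -> str:
    return "\n\n".join([
        fact,                                                 # the headline is the correction
        f"A common claim about {myth_label} is misleading.",  # flag the myth, don't amplify it
        why_misleading,                                       # explain the error or rhetorical trick
        fact,                                                 # end on the fact
    ])

print(truth_sandwich(
    fact="Vaccines are rigorously safety-tested and side effects are closely monitored.",
    myth_label="vaccine ingredients",
    why_misleading="The claim cherry-picks scary-sounding chemical names while ignoring dose and context.",
))
```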
9:51: Charlotte Jones:
Can you explain how cognitive dissonance may incline people to unconsciously reject accurate information?
9:58: Dr. Sander van der Linden:
Yeah, so what's interesting is that Leon Festinger, one of the original psychologists who studied cognitive dissonance, has this great example in the book he wrote in the sixties, where he infiltrated a cult. It was kind of a semi-religious cult, and they were convinced that aliens were going to come down to visit earth and sort of declare the apocalypse. And they had a whole movement that revolved around this. And Festinger had some hypotheses. He was like, well, they have a set date by which this is supposed to happen, so let's infiltrate this cult and do a little pre-post experiment to see what happens with people's beliefs once the key date passes. What's going to happen when they notice that the aliens are not coming?
And so he thought, well, they must be changing their beliefs, right? Now they're confronted with a piece of evidence that contradicts their prior beliefs. They'd been preparing all this time, they believed in something, now it's not happening, so they must be changing their beliefs. But that's not actually what he found. When the date came and went, they were all gathered outside, and when they noticed that there were no aliens coming down, they actually doubled down on their beliefs. They created a new kind of pathway: what was actually happening was that the aliens were giving them a second chance to save the earth. So they just went in a slightly different direction.
And so he hypothesized that what's happening here is that people feel this intense dissonance when they're confronted with information that doesn't jibe with what they already believe. So either people have to spend a lot of effort trying to change their beliefs and their identity and everything that's wrapped up in it, or they can just selectively expose themselves to information that agrees with what they want to believe and reject the information they're confronted with. And so I think the idea of cognitive dissonance is that when people are confronted with information that challenges their core beliefs, that's uncomfortable, that produces dissonance. And so people are more likely to reject it and accept information that furthers what they already want to believe.
And when you take that example into the political sphere, you see that all the time: people believe statements that reinforce the party line or their worldview or their spiritual beliefs, or whatever it may be. And if they come across information that challenges that, they're more likely to want to exclude that information from their diet and seek out information that confirms what they want to believe, to avoid that dissonance.
12:48: Charlotte Jones:
What would the ideal design of a social media platform look like in relation to stopping the spread of dis- and misinformation?
12:55: Dr. Sander van der Linden:
Yeah, that's a great question. If I had a good answer to that, I'd probably be making a lot of money trying to sell it to a social media company, to better the world. But we had a conversation with Meta and Instagram a while ago, and that was really interesting. They said, okay, you guys are always saying we shouldn't maximize for engagement, but what should our algorithms be doing then? And that's kind of the question. You know, I brought my whole lab down to the Meta headquarters here in London, where I'm based. And it's a tough question.
So they said, okay, listen, we understand that if you maximize engagement, misinformation might be more likely to go viral. But one interesting insight they had was: a lot of our stuff is based on people's behavior. People tell us what they want. We look at people's behavior and that's what goes into it. You know, people yelling at each other, people posting misinformation and other people liking it. If that produces engagement, that's what people are doing, that's what their behavior is telling us.
But then we also talked about how people's behavior in the moment isn't always representative of what people really want. If you look at survey data and you ask people, they say, I don't want to waste my time in an echo chamber, I don't want to be exposed to hateful content and disinformation. People don't want that stuff if you ask them. And what's so interesting is that when the platforms survey people and ping them about their experience, you know, after they've been yelling at someone for 20 hours or something like that, they actually report that they don't like it. So their behavior is saying one thing, but then people reflect and say, actually, I don't want that.
And I think it's easy for people to get wrapped up in these things online. So how do you change that? One suggestion we had was: can't you optimize for educational content, accurate content, you know, motivate accuracy? And even within my group, somebody said, well, that's nice in theory, but who's going to make money out of optimizing for educational content? Nobody's going to watch that all day long, and people are going to opt out of it, and then there's no social media platform, because only a small percentage of the population wants to watch educational videos all day. And I think that's a fair point. So what do you do then?
So another idea was, well, the algorithm could have more signals. You could use that survey data as inputs for the algorithm, to calibrate it according to what people prefer, what they say they prefer, not just their passive behavior.
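As a rough sketch of what "more signals" could mean in practice, here is one hypothetical way a ranking score might blend observed engagement with survey-derived satisfaction; the field names, weights, and penalty are assumptions for illustration, not anything a platform actually uses.

```python
# Hypothetical ranking score: blend in-the-moment engagement with what users say
# they want when surveyed. Fields and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float    # 0..1, e.g. click/comment probability
    predicted_satisfaction: float  # 0..1, modelled from "was this worth your time?" surveys
    predicted_unreliable: float    # 0..1, probability the content is misleading

def rank_score(post: Post, w_engage=0.4, w_satisfy=0.5, w_penalty=0.6) -> float:
    """Higher is shown first; stated-preference satisfaction outweighs raw engagement,
    and likely-unreliable content is penalised."""
    return (w_engage * post.predicted_engagement
            + w_satisfy * post.predicted_satisfaction
            - w_penalty * post.predicted_unreliable)

feed = [Post(0.9, 0.2, 0.7),    # outrage bait: high engagement, low reported satisfaction
        Post(0.5, 0.8, 0.05)]   # slower but satisfying content
print(sorted(feed, key=rank_score, reverse=True))
```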
But here was another idea: what if you give people credibility and trustworthiness ratings on their accounts? If you are an account that produces a lot of unreliable content, then you get downgraded. And this creates a reputational incentive for people to stop sharing misinformation, because you don't want to be downgraded. It's like your Uber rating, right? Somebody told me this recently, I think it was a colleague who said he was taking a ride share, and the driver started talking about vaccine conspiracies, you know, that the government's behind it all and that gargling with lemon is going to fix everything. And he said, you know what? I debunk stuff for a living, but I'm not going to say anything here because I want to keep my Uber rating. So I think there is something to this: people want to protect their reputation, sometimes even just to avoid conflict. And so giving credibility ratings to accounts, I think, could be useful.
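One way to picture the credibility-rating idea is a per-account score that drifts down when the account shares content flagged as unreliable and recovers otherwise, with low scores reducing reach rather than removing speech. The update rule and numbers below are invented for illustration.

```python
# Hypothetical per-account credibility rating, ride-share style: sharing content flagged
# as unreliable lowers the score, and low scores reduce how widely posts are shown.
class Account:
    def __init__(self, name, credibility=4.5):
        self.name = name
        self.credibility = credibility      # displayed 0..5, like a ride-share rating

    def record_share(self, flagged_unreliable: bool):
        # Exponential moving average toward 0 (unreliable) or 5 (reliable).
        target = 0.0 if flagged_unreliable else 5.0
        self.credibility += 0.1 * (target - self.credibility)

    def reach_multiplier(self) -> float:
        """Posts from low-credibility accounts get down-ranked rather than removed."""
        return max(0.1, self.credibility / 5.0)

acct = Account("example_user")
for flagged in [True, True, False, True]:
    acct.record_share(flagged)
print(round(acct.credibility, 2), round(acct.reach_multiplier(), 2))
```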
Another idea that's related is to use filters for polarizing rhetoric or toxic content. Of course, the questions they had for us to think about were: who determines what's polarizing, and who determines what's toxic? I actually think at a basic level there are very simple things you could define as polarizing rhetoric that are not political. Like if you use "they" words more or "we" words more: "we" signals in-group, "they" signals out-group. There are ways you could create these filters that are not that objectionable. And for toxic content, I think we can all agree that name-calling and trolling and all of these things are toxic.
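A very crude version of that "we" versus "they" signal can be written in a few lines; the word lists and the threshold below are placeholders, and real polarization or toxicity classifiers would be far more involved.

```python
# Crude polarizing-rhetoric signal based on in-group vs. out-group pronouns.
# Word lists and threshold are placeholders, not a validated classifier.
import re

WE_WORDS = {"we", "us", "our", "ours"}
THEY_WORDS = {"they", "them", "their", "theirs"}

def outgroup_ratio(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    we = sum(w in WE_WORDS for w in words)
    they = sum(w in THEY_WORDS for w in words)
    total = we + they
    return they / total if total else 0.0

def flag_polarizing(text: str, threshold=0.75) -> bool:
    """Flag posts whose group language is overwhelmingly about 'them', not 'us'."""
    return outgroup_ratio(text) >= threshold

print(flag_polarizing("They always lie and they want to take what's theirs and ours."))
```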
Of course, they say, well, okay, if you make it optional, most people who engage in this behavior are not going to turn on that filter. So there are a lot of practical challenges in how you would actually do this. But some of the ideas were: creating filters for toxic or polarizing content, using credibility ratings on accounts to discourage the sharing of misinformation and hence promote the sharing of accurate information, and maybe getting rid of the ad model. That's a huge problem, though. Then you get initiatives like asking people to pay $8 for a blue tick, right? And there are a lot of people who really don't want to pay for social media. So that's another factor: if they're not going to have ads and engagement, then how are they going to make money?
And so I think there is a solution, but it's definitely not going to be an easy one. Of course, in theoretical terms, the easy answer is to incentivize accuracy, which is also what I wrote about in the book. And some ways to do that are leveraging reputational concerns, using filters, and down-ranking unreliable content. Here's something more controversial: there are research studies showing that de-platforming super-spreaders works remarkably well at reducing the spread of their content. And there are big differences between Europe and the United States in how comfortable people are with de-platforming and limiting speech.
Personally, I think there are side effects. If you look at the de-platforming of Trump, for example, misinformation on Twitter did go down; research studies show that. But then he set up his own social network, and now you have a whole group of people who are perhaps more radical than they would have been if they had stayed on Twitter. So is de-platforming the ultimate solution? It's not an easy answer.
And so I think a lot of people want to protect freedom of speech and use softer measures. But at the end of the day, regulation is obviously going to play a role in holding social media accountable. You can say a lot about fines, but, you know, they just pay the fines, right? Germany has a law against this sort of stuff, and they just pay the fines because it's nothing to them. And so what kind of regulation would facilitate better social media?
I will say, when I first learned about social media when I was younger, I thought it was a great idea: connecting people all around the world, right? I think there is huge potential in social media. We've just gone down a path that's not at all what people had hoped for, maybe not even what the creators had in mind. And we need a better model, that's for sure. I'll stop talking there, because, you know, I don't have the ultimate answer, but those are some ideas.
20:21: Grace Lovins:
Okay. It looks like we're out of time for today. We've been talking to Dr. Sander van der Linden, director of the Cambridge Social Decision Making Lab at the University of Cambridge and author of Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity. Dr. van der Linden, thanks so much for joining us today, and we do hope to see you again soon.
20:39: Dr. Sander van der Linden:
Perfect. Yeah, I look forward to it, and thanks again. Very nice meeting you all.