Dr. Filippo Menczer explains how the natural structure and dynamics of social media inevitably lead to echo chambers.

By David Madden
01/06/2022 • 09:04 AM EST

Dr. Filippo Menczer, professor of informatics and computer science and director of the Observatory on Social Media at Indiana University, discusses how the fundamental design of social media makes it all but inevitable that users will migrate into groups that become highly homogeneous echo chambers. Within these often isolated communities, disinformation can quickly go viral, and even when true and accurate information is out there, the natural structure and dynamics of the platform make it unlikely that the communities that need it most will ever see it.

0:00: David Madden:

With us today, we have Dr. Filippo Menczer. Dr. Menczer is a distinguished professor of informatics and computer science at Indiana University. Thank you so much for joining us today.

0:10: Dr. Filippo Menczer:

Thank you for having me.

0:12: David Madden:

You've written that one of the first consequences of the "attention economy" is the loss of high-quality information. Can you elaborate on that process and how it happens?

0:23: Dr. Filippo Menczer:

Yes, absolutely. This idea of the attention economy, we have to attribute it to Herbert Simon, who was an incredible economist. He noticed that generating information was getting cheaper and cheaper, so he predicted that in the future the scarce good would not be the information itself, as it has been throughout history, but our capability to pay attention to it. And this has become true as our technology has advanced and the cost of producing information has decreased. We've seen benefits of that, the "democratization of information," where everybody can become a producer of information.

But in the last few years, we've also started to realize the possible negative consequences of that, because we are so flooded with information and we don't have good ways of selecting and filtering so that we can devote our attention to quality information, for example to things that are true or useful. It used to be that newspaper editors played that role for us, and we had a number of trusted editors and intermediaries. Now, that wasn't necessarily ideal; it was very much an elite activity, and a few people basically dominated the space. But now, along with the good of democratization, we also see that most people cannot tell quality information apart.

And so, especially for people who access news through social media, what they see, instead of being determined by professional editors who devote their lives to figuring out what is true, what is accurate, and what is important, is mediated by algorithms that are often designed with goals different from exposing us to useful information, namely to make money by keeping us on platforms. It is also mediated by our friends, and our friends are definitely not expert editors. We are not expert editors. We are often driven by all kinds of cognitive and social biases that decide what we pay attention to, like what makes us mad. That's not necessarily what informs us best.

So all of these different factors now account for what we pay attention to, and we've even developed models showing that in an environment in which you can only pay attention to a fraction of what is out there, inevitably some things will go viral. This is an environment like a social media ecosystem, where you have a network of people connected to each other, sharing information. If you had infinite attention and could look at everything and decide what to pay attention to, then the world would be good, but in reality we can only pay attention to a very small fraction. In this case, some things are still going to get a lot of attention and go viral, but those things are not necessarily good things. Even if we have no notion of quality, some things at random will go viral, and there's zero correlation with quality.

We then developed additional, slightly more realistic models, where we imagine that people actually can tell the quality of the different pieces of information to which they're exposed, and they preferentially share things that are higher quality. But even in that situation, when we have information overload, when the amount of information that we can pay attention to is a small fraction of what's out there, there is still very, very low correlation between popularity and quality. In other words, a lot of junk goes viral, and this is just an inevitable consequence of the attention economy: the fact that it is so cheap to produce information, and we can't possibly process all of it. So inevitably we are going to be exposed to a lot of junk, and that's how the attention economy creates a vulnerability for us.
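The kind of model Dr. Menczer describes can be sketched in a few lines. This is a minimal illustration in the spirit of those models, not the Observatory's actual research code; the creation rate, feed size, and audience size are arbitrary assumptions. With `discern=0`, quality plays no role in the dynamics, so any correlation between popularity and quality is pure noise; raising `discern` models agents who preferentially share the higher-quality items they see.

```python
import random

def simulate(n_agents=50, n_steps=2000, attention=5, discern=0.0, seed=1):
    """Toy attention-limited sharing ecosystem. Each step, a random agent
    either creates a meme with a random intrinsic quality or reshares one
    from its feed; feeds hold only `attention` items, so most memes are
    forgotten quickly and only a few accumulate many shares."""
    rng = random.Random(seed)
    feeds = [[] for _ in range(n_agents)]
    quality = {}   # meme id -> intrinsic quality in [0, 1]
    shares = {}    # meme id -> total times posted/reshared
    next_id = 0
    for _ in range(n_steps):
        agent = rng.randrange(n_agents)
        if rng.random() < 0.3 or not feeds[agent]:
            meme, next_id = next_id, next_id + 1       # create a new meme
            quality[meme] = rng.random()
            shares[meme] = 0
        elif rng.random() < discern:
            meme = max(feeds[agent], key=quality.get)  # share the best item seen
        else:
            meme = rng.choice(feeds[agent])            # share a random item seen
        shares[meme] += 1
        for follower in rng.sample(range(n_agents), 5):
            # limited attention: keep only the most recent memes
            feeds[follower] = (feeds[follower] + [meme])[-attention:]
    return quality, shares

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0
```

Running it with `discern=0.0` and correlating each meme's quality with its share count gives a value near zero, matching the point above: with limited attention, popularity need not track quality at all.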

4:56: David Madden:

At the Observatory on Social Media, you explore the emergence of these echo chambers. Can you kind of explain how that happens and the experiment you used to study it?

5:08: Dr. Filippo Menczer:

Yes. So, this is an important topic, because one of the vulnerabilities of social media platforms is that they encourage, or accelerate, the formation of echo chambers, which are groups of users where one is mostly exposed to opinions similar to one's own. This is likely to reinforce existing beliefs and biases and decrease the chances that one is exposed to information from different points of view, which is often the information that fact-checks the misinformation one is exposed to within one's own group.

So, whether you're conservative or liberal, you may be in a filter bubble or an echo chamber where you're exposed to a lot of information that is maybe not false but misleading, or that reinforces your existing biases. And you're unlikely to see information that would check that, that would put it in a broader context, that would help you see things in a more objective way.

So, it turns out that very, very basic mechanisms, part of our own cognitive biases and part of the natural things you can do on all social media platforms, make it basically inevitable that we end up in groups that are highly homogeneous and somewhat segregated from other groups.

So, we've developed a model that demonstrates this. Again, this is a model where we simulate an online social network where people share information with each other. You can imagine that the information you share is correlated with your own beliefs, and that you have some chance of being influenced by what you observe, by what is shared by your friends, as long as it is close enough to your own opinions. If you see something that is completely different from your opinions, perhaps it will influence you less. This is based on actual experiments, so it is a well-understood effect.

And the other basic ingredient, present on all platforms, is the fact that we can not only select whom we follow or friend but also unselect them, right? If somebody you follow, or are friends with, posts something you find very offensive, with a single click you can mute them or unfollow them. In fact, this is what platforms encourage us to do. If you see something you think is offensive and you try to report it on Twitter or Facebook, the first thing they'll say is, well, you can just unfollow this person, or you can mute this person. And you have to do several clicks to say, no, no, no, that's not what I want to do, I want to report that they posted something dangerous or harmful.

So, platforms definitely make it very easy, and even encourage us, to unfollow people we disagree with. And in simulations, you can see very easily that with these two basic ingredients, people very quickly sort themselves into groups that are very homogeneous and segregated, in the sense that you basically have no chance of seeing different opinions. Even if you start from a situation of very heterogeneous opinions, with people connected across different opinions, with these two ingredients all of that very quickly goes away and you end up with completely separated and completely homogeneous groups.
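The two ingredients just described, influence from close-enough opinions plus unfollowing distant ones, can be sketched as a toy simulation. This is an illustrative bounded-confidence model with rewiring, not the published model; the function name and all parameter values are assumptions chosen for the sketch.

```python
import random

def echo_chamber_sim(n=60, k=6, eps=0.3, mu=0.3, steps=20000, seed=7):
    """Each of n agents holds an opinion in [0, 1] and follows k others.
    Seeing a close-enough opinion (within eps) pulls you toward it by a
    factor mu; a far one triggers an unfollow plus a new random follow."""
    rng = random.Random(seed)
    opinion = [rng.random() for _ in range(n)]
    follows = [set(rng.sample([j for j in range(n) if j != i], k))
               for i in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.choice(sorted(follows[i]))
        if abs(opinion[i] - opinion[j]) < eps:
            opinion[i] += mu * (opinion[j] - opinion[i])   # social influence
        else:
            follows[i].discard(j)                          # unfollow
            candidates = [c for c in range(n)
                          if c != i and c not in follows[i]]
            follows[i].add(rng.choice(candidates))         # follow someone new
    # segregation measure: mean opinion distance along surviving links
    dists = [abs(opinion[i] - opinion[j])
             for i in range(n) for j in follows[i]]
    return opinion, follows, sum(dists) / len(dists)
```

Random initial links connect opinions about 1/3 apart on average; after the simulation, the mean distance along links collapses, i.e., the network has sorted itself into homogeneous, segregated groups, exactly the outcome described above.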

And there's a lot of work in this area. This is one model; other people have developed similar models that make different kinds of assumptions. For example, you might not even know somebody else's opinion, but simply decide to follow or unfollow people based on the information they share. So it's pretty inevitable. And in fact, empirically, when we actually go and get data from social media platforms and measure how the structure of the social network is connected to people's opinions, we find very, very strong homophily. Homophily is a technical term; it means "love of the same." It means that in a network you're most likely to be connected to people who are similar to you. That's the clear mark of these community structures, where each community is very homogeneous. We find this very strongly on multiple platforms, and it has held as we have measured it over the last 10 years or so. It's been invariant. So, that's a natural consequence of the fact that we access information through social media.
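The homophily measurement mentioned here can be illustrated by comparing the fraction of same-label links against what random mixing would give. This is a hypothetical minimal measure for illustration; real studies use more refined statistics such as assortativity coefficients.

```python
from collections import Counter

def edge_homophily(edges, label):
    """edges: list of (u, v) pairs; label: node -> group label.
    Returns (observed, baseline): the observed fraction of links that
    connect same-label nodes, and the fraction expected if links were
    formed ignoring labels. observed >> baseline indicates homophily."""
    same = sum(label[u] == label[v] for u, v in edges)
    observed = same / len(edges)
    counts = Counter(label.values())
    n = sum(counts.values())
    baseline = sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
    return observed, baseline
```

On a toy network of two tight-knit groups with a single bridge link, the observed fraction far exceeds the random baseline, which is the signature of the community structure described above.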

10:21: David Madden:

Kind of going along the lines of echo chambers, we talk a lot about social diffusion. I was just curious how social diffusion impacts the information that people see on their social media platforms.

10:35: Dr. Filippo Menczer:

Well, this is the basic idea of social media: you share stuff and your friends see it. And if your friends are similar to you, then social diffusion means that most of the information to which you are exposed online tends to conform with your existing opinions, and it's unlikely to challenge them. And we have these very strong cognitive biases. We evolved in tribes, where members of our tribe are friends and everybody else is the enemy, and this unfortunately finds reinforcement in social media platforms. So our own tendency, as we were talking about a moment ago, to put ourselves into echo chambers has as a consequence that we have a very biased view of the world. Even if somebody posts some false claim, and maybe that claim is fact-checked quickly, and there exists information out there showing that it is in fact not true, we are very unlikely to see that fact-check. That fact-check is probably spreading through a different echo chamber, and we just don't see it.

So even if true and accurate information is out there, the natural structure and dynamics of social diffusion make it unlikely that we are exposed to it. Furthermore, within a social community there are lots of triangles; in other words, friends of my friends are likely to be my friends. We have common friends, and this means that as soon as two or a few people share something within one of these tight communities, many, many people receive multiple exposures to it. In other words, the structure of the community in social diffusion means that you will immediately see many people sharing the same thing. And one of the interesting features of opinion dynamics is that we are more likely to accept a fact, or imitate a behavior, or believe something, when it comes from multiple sources. This also comes from ancient social and cognitive behaviors, right? If you see everybody start running, you start running too. You don't know why, but you assume that people are not crazy, so there must be a reason.

This is the same on social media. If you see that a million people are sharing a video, you want to watch that video; you can't help it. And if all your friends are saying that David is a murderer, you think there must be something to it. But in fact, within these tight communities, you see that many people are sharing something simply because of the structure of the network itself. These people may not be independent of each other. There may be one source, but because a few people have shared it into this tight community with lots of triangles, you all of a sudden see it coming from many, many different sources. But these sources are not independent. Whereas if you see zebras running in the forest, they're probably all independent; maybe somebody has seen a lion. But here it's possible that you just have the appearance of a widely held opinion in your tribe, and you have very strong pressure to adopt that opinion, even though in reality it may have come from just one or two sources. So, those are additional vulnerabilities that come from the dynamics of information diffusion in social networks.
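The illusion of many independent sources can be made concrete with a tiny sketch. Here, as a hypothetical scenario, one account posts an item and each of its direct followers reshares it once; an observer embedded in triangles then counts several "sources" that all trace back to a single origin, while an observer down a chain sees none.

```python
def apparent_sources(adj, seed_node):
    """adj maps each node to its set of neighbors. One account posts and
    each of its neighbors reshares once. For every node that hasn't
    shared, count how many of its own neighbors it saw sharing: the
    number of *apparently* independent sources, even though everything
    traces back to one origin."""
    sharers = {seed_node} | set(adj[seed_node])
    return {v: sum(1 for u in adj[v] if u in sharers)
            for v in adj if v not in sharers}
```

In a small graph where nodes 0, 1, 2, 3 form triangles and node 4 hangs off a chain, seeding at node 0 makes node 3 see two sharing neighbors while node 4 sees none: same single origin, very different social pressure.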

14:46: David Madden:

Great, great. You also spend a lot of your time researching social bots on social media. I was just kind of curious if you could explain what dangers social bots present to social media users.

14:59: Dr. Filippo Menczer:

Yes. Social bots are basically more or less automated accounts. Some of them are perfectly innocuous and even helpful, but the kind of social bots that are dangerous, and that we study, are the ones that impersonate humans. They're used as sock puppets, or as ways to amplify information. So you can pay to get a lot of fake accounts to follow you and create the appearance that you're popular, or to retweet your content and create the appearance that the content is spreading virally.

And these social bots are dangerous because they can manipulate our opinions. They can leverage those cognitive and social biases I was talking about a minute ago, where if you get the impression that a lot of people are talking about something, there must be something there. Well, you can create that impression by just paying somebody, or writing some code, that creates the appearance that a lot of people are talking about something, when in fact there may be a single entity that controls all of those accounts. And in so doing, you can literally manipulate people.

This was something that was discovered in the early days of social media. We found the first instance of a social bot in 2010, during the midterm elections, and we coined the term "social bots." We did not know what it was. We were just surprised: we were mapping these diffusion networks, seeing who was retweeting whom, and we thought there was an error in our code when we saw two nodes with a link between them that was huge. It was occupying the whole screen, because the thickness of an edge in this network represented the number of retweets between two accounts. We thought it was an error in the code, so we looked at it, and there were these two accounts retweeting each other tens of thousands of times. So then we asked, okay, who are they? What is this? And we realized that they were doing this automatically; there is no way a human could be doing that. And they were just promoting a particular political candidate. So we called them bots. They're automated, so they're social bots.
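Spotting an anomalously heavy retweet edge, like the pair of accounts just described, can be illustrated by weighting edges of the diffusion network and flagging outliers. The median-multiple threshold here is an arbitrary illustrative choice, not a published detection method, and the function name is hypothetical.

```python
from collections import Counter

def flag_heavy_edges(retweets, factor=10):
    """retweets: list of (retweeter, original_author) pairs.
    Builds a weighted retweet network and returns the account pairs
    whose edge weight exceeds `factor` times the median weight --
    the kind of screen-filling edge described in the anecdote above."""
    weights = Counter(retweets)                 # (pair) -> retweet count
    ordered = sorted(weights.values())
    median = ordered[len(ordered) // 2]
    return [pair for pair, w in weights.items() if w > factor * median]
```

With mostly one-off retweets plus one pair of accounts retweeting each other 50 times, only that suspicious pair is flagged.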

Since then we've been studying this a lot, and we also develop machine-learning tools to detect when an account is likely automated. That doesn't mean it is autonomous; it doesn't mean you have a bot there acting in a completely autonomous way. There is usually a human, or an organization, behind it. The automation is in the use of the application programming interfaces provided by the platforms to control these accounts. You can write a few lines of code and have a thousand accounts post something or retweet something, even though the post, or the retweet, or the choice of the content being spread comes from a human. We still call them bots because you are using technology, using programming, to create the false appearance of many people pushing some opinion.

And as we have developed better machine-learning tools to detect these bots, the people who want to abuse platforms have become smarter and developed better algorithms. So over the years we've seen bots become more sophisticated and also less automated, with more humans controlling them. At this point we also see a lot of these false coordinated networks that are driven by real people. In the developing world there is a lot of very cheap labor, so you can actually hire an army of people and pay them very small amounts of money to post on your behalf, or to create the impression that something is happening. Or you can use hacked accounts, or you can get supporters to post on your behalf. We've seen apps by political action committees that ask their supporters to give them their username and password, their credentials for social media systems, and then these apps post automatically on behalf of these people. So basically these people become social bots: they give permission to an entity to control an army of accounts. These are all things that, in our opinion, create a very strong vulnerability, because they have the potential to manipulate what we see.

20:01: David Madden:

You also talked about how adding friction to social media interactions can kind of limit disinformation on these social media platforms. Can you just elaborate a little bit more on this concept of adding friction?

20:16: Dr. Filippo Menczer:

Yeah. The concept of friction comes directly from the idea of the attention economy and information overload that we talked about earlier. Because we can't handle all the information we see, we are more likely to share things that are low quality, even if we would rather not, and that contributes to lowering the overall quality of information in the system. So the natural countermeasure would be to decrease the amount of information to which we are exposed, so that we would have more time to digest it.

And friction is a way to do that. You could trace this through every new technology over the years, going all the way back to the printing press, but even if we just talk about the last 20 years, with the web, then blogs, then wikis, then social tagging, and eventually micro-blogs and the social media platforms we have today: every step has made it easier to post something, to share some information. From having to create an entire website, to just having to create an account, to typing text with a little bit of markup, to just typing text, to just typing on your phone, and eventually just one tap on a screen. And right now we can share something with hundreds of people or more.

So every step has decreased this friction, has decreased the cost of producing information. Given that some of the consequences have been negative in the ways we have been discussing, the natural counteraction would be: well, let's go back a little bit and increase the friction. Let's imagine that people have to pay some kind of cost, not necessarily monetary, to produce something. Then you would think twice before you share something, right? Think of the old expression "my 2 cents." We used to have to pay for a stamp in order to send a message to one person. Now, basically for free, we can send something to millions of people.

So let's reintroduce the idea of a stamp. It could be something that says: before you share, are you really sure you want to share this? Or maybe solve this little puzzle, or pay 1 cent, or simply: look, you've already posted 10 things today, wait until tomorrow. If you really want to post this, come back in an hour and push another button. There are so many different ways you could add friction. And the theory here is that doing this would decrease the overall volume and give people a way to think more meaningfully about what they really want to share. How many times do you go back to something you shared an hour later and realize, oh, maybe I shouldn't have shared that? Sometimes just thinking about it for a minute or an hour might let you consider more carefully the possible consequences of producing, sharing, or amplifying a piece of misinformation, or a piece of information.
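The friction mechanisms listed here, a daily quota plus a cooling-off confirmation, can be sketched as a small client-side policy object. This is purely hypothetical; no platform's real API or policy is implied, and the class and method names are invented for the sketch.

```python
import time

class PostingFriction:
    """Hypothetical friction policy: a daily share quota plus a
    cooling-off delay, so a share only goes out if the user comes
    back and confirms it after `delay` seconds."""

    def __init__(self, daily_quota=10, delay=3600, clock=time.time):
        self.daily_quota = daily_quota
        self.delay = delay
        self.clock = clock          # injectable for testing
        self.posted_today = 0
        self.pending = {}           # post id -> time it was queued

    def request_share(self, post_id):
        if self.posted_today >= self.daily_quota:
            return "quota reached: try again tomorrow"
        self.pending[post_id] = self.clock()
        return "queued: confirm after the cooling-off period"

    def confirm_share(self, post_id):
        queued_at = self.pending.get(post_id)
        if queued_at is None:
            return "nothing queued"
        if self.clock() - queued_at < self.delay:
            return "still cooling off"
        del self.pending[post_id]
        self.posted_today += 1
        return "shared"
```

The injectable `clock` makes the cooling-off behavior easy to simulate: a share requested now cannot be confirmed until the delay has elapsed, and once the quota is exhausted, further requests are deferred to the next day.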

It might also create time for platforms to fact-check, so that maybe an hour later there might be a label that says: by the way, this has been fact-checked by this fact-checker, and it turns out that this information is disputed. Or you might find out later that you were retweeting an account that the platform has since suspended, because it turned out to be an inauthentic account.

So, these are all different ways in which we believe that adding friction might help. Now, the problem, of course, is that it might affect the bottom line of platforms. And so the question is whether self-regulation is possible here, or whether we need some kind of government regulation to create incentives for platforms to do this.

24:41: David Madden:

Well, it looks like we're out of time for today. Once again, we've been talking to Dr. Filippo Menczer, who is a distinguished professor of informatics and computer science at Indiana University, and also director of the Observatory on Social Media. Thank you for joining us today.

24:54: Dr. Filippo Menczer:

Thank you very much.