The Just Security Podcast

An Innovative Lawsuit Links Social Media Companies to Mass Shootings

Just Security Episode 76

In November 2021, a teenager in rural Texas downloaded the video game Call of Duty: Modern Warfare and quickly became obsessed. He began to research weapons from the game, including a military-grade assault rifle. The company that manufactures the weapon used Instagram to market it. 

The teenager spent hours on Instagram, using 20 different accounts to browse the app. He learned more about the gun, and saved every dollar he could to pre-order it. 23 minutes after he turned 18 years old, he purchased the weapon. A few days later, on May 24, 2022, the teenager walked into Robb Elementary School in Uvalde, Texas, and used the gun to kill 19 fourth-graders and two teachers.  

Now, two years after the massacre, the families of those killed are suing Instagram and Activision Blizzard, the company that publishes Call of Duty. The novel lawsuit faces many legal hurdles – among them is Section 230, a federal law which significantly shields social media companies from liability for third-party content posted on their platforms. 

How might this long-shot lawsuit impact who can be held responsible for mass shootings? And what are its potential implications for Silicon Valley in other contexts?  

Joining the show to discuss the case and its potential impact on legal efforts to hold social media companies liable through the court system is Paul Barrett.  

Paul is the deputy director and senior research scholar at the NYU Stern Center for Business and Human Rights.

Show Notes: 

  • Paul M. Barrett (@AuthorPMBarrett)
  • Paras Shah (@pshah518)
  • Paul’s Just Security article “Can Families of Mass Shooting Victims Hold Social Media Companies Responsible for Violence?”  
  • Just Security’s Section 230 coverage
  • Just Security’s Big Tech coverage
  • Just Security’s Domestic Extremism coverage
  • Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)

Paras Shah: In November 2021, a teenager in rural Texas downloaded the video game Call of Duty: Modern Warfare and quickly became obsessed. He began to research weapons from the game, including a military-grade assault rifle. The company that manufactures the weapon used Instagram to market it. 

The teenager spent hours on Instagram, using 20 different accounts to browse the app. He learned more about the gun, and saved every dollar he could to pre-order it. 23 minutes after he turned 18 years old, he purchased the weapon. A few days later, on May 24, 2022, the teenager walked into Robb Elementary School in Uvalde, Texas, and used the gun to kill 19 fourth-graders and two teachers. 

Now, two years after the massacre, the families of those killed are suing Instagram and Activision Blizzard, the company that publishes Call of Duty. The novel lawsuit faces many legal hurdles – among them is Section 230, a federal law which significantly shields social media companies from liability for third-party content posted on their platforms. 

How might this long-shot lawsuit impact who can be held responsible for mass shootings? And what are its potential implications for Silicon Valley in other contexts?  

This is the Just Security Podcast. I’m your host, Paras Shah. 

Joining the show to discuss the case and its potential impact on legal efforts to hold social media companies liable through the court system is Paul Barrett.  

Paul is the deputy director and senior research scholar at the NYU Stern Center for Business and Human Rights.

Paul, welcome to the show. It's great to have you here to discuss this interesting and important case.

Paul Barrett: Glad to be here. Thanks for having me. 

Paras: Sadly, as we've seen in the wake of so many mass shootings across the country, there's often litigation brought by the survivors and their families to try to hold individuals and institutions accountable. There are suits against gun makers, and there are cases against local officials. This case is different. So, could you get us started by describing the theory of this case? What are the families of these victims alleging?

Paul: Sure, it's a complicated piece of litigation, so it takes a minute or two to unpack it. The families of the 19 children and two teachers who were killed in Uvalde are suing three defendants. In California state court, they are suing Meta and its Instagram platform. And they are also suing Activision, the gaming company, in connection with Call of Duty, a military-themed game that the shooter in the Uvalde massacre was fond of. Back in Texas, where the company is based, they're also suing Daniel Defense, which made the AR-15-style military rifle that was used in the massacre. 

I suppose we're primarily interested today in the lawsuit, the claims against Instagram and Meta, at least in part because they provide something of a template that could be used in other litigation against social media companies generally, where plaintiffs who claim they've been harmed by use of social media are attempting to hold the social media companies responsible for that harm. In this case, the plaintiffs are making a very obvious and basically explicit effort to circumvent the federal law known as Section 230, which, since the 1990s, has protected internet intermediaries — now including social media companies — from most civil liability related to claims concerning content posted on the platforms by third parties. Section 230 has been interpreted by the federal courts very broadly and has protected major social media companies from a wide variety of litigation, often filed by quite sympathetic plaintiffs, like the families in this case, and in other cases, families and friends of terrorism victims and others. 

And to try to get around Section 230, to make Section 230 irrelevant to this case, the plaintiffs' lawyers have come up with an innovative theory, which is: we are not suing about the content in question, we are not suing about the material that the shooter was able to find and that he used to school himself on the weaponry in question, or any other type of content. Instead, we're suing under a product liability theory: that the design of the platform itself and the conduct of the company that owns and operates the platform were negligent, that the design was defective, and that the defendant should be held liable under state tort law, that is, personal injury law. And that, according to this theory, makes the whole focus on the substance of the content, and the role of Section 230 in protecting against liability, irrelevant. So that's the basic theory here.

Paras: Right. And I want to unpack that point a little bit, because, as you said, they're alleging a broad pattern of conduct here. And they almost say that it could have been any young man; it happened to be this shooter who was targeted by these companies and who saw the content on these websites. But really, this is about a system and how that system is operating. So, could you tell us a little bit more about what they're saying there?

Paul: Sure. You're correct that the attempt to frame this social media platform, and by extension other social media platforms, as being defectively designed really does go beyond providing a tutorial on the use of AR-15-style rifles. Instead, it's a broad theory that says these platforms are designed to addict users, particularly younger users who may not have the type of judgment, or even the emotional or intellectual development, that older users might have; to addict them to using the platforms obsessively and to absorbing the ideas and the facts and the arguments that they encounter on these platforms, to their own detriment. So, this is actually a theory that is not unique to this lawsuit, and that's why this lawsuit is potentially quite important. Similar arguments are being made by other plaintiffs against Meta and other social media platforms. 

For example, you have school districts all around the country that are filing lawsuits against major Silicon Valley tech companies on this theory that the platforms are defective products. And the defect, these claims allege, is that they have this addictive tendency: the algorithms are set up to prioritize sensational material, material designed to provoke emotional reactions, particularly negative emotional reactions, and this has a harmful, and predictable, effect, particularly on teenage users. So, we're at an interesting inflection point in efforts to figure out who should be held responsible for harm that flows from use of social media platforms. These platforms obviously have constructive uses. They have benign uses, but there are also side effects from using them. And one horrible side effect, allegedly, is that an impressionable young man can be led astray, and in his obsession with guns and violent video games, will get it in his head that the thing to do is to acquire one of these guns himself and go into a school and shoot everybody in sight. Another harm that's much broader in application, or frequency, is that some subset of teenagers who use social media obsessively come to the point where they think their bodies are unattractive, or that they are unpopular with their schoolmates, to such an extreme degree that they end up with depression and even suicidal thoughts, and in some rare, very sad cases actually attempt suicide. 

So, you've got these different efforts to see whether the court system can be used to hold the companies that own and operate social media platforms responsible for the harms that allegedly flow, not necessarily on purpose but at least inadvertently, from use of their platforms. 

Paras: Cases like this, product liability cases and personal injury cases, really turn on facts. They require specific factual allegations and factual details. And the complaint here really does a remarkable job marshaling evidence from the shooter's internet browsing history, from his phone, and from his social media accounts to paint a very vivid picture of how he learned about this weapon and the social media ecosystem around the gun industry. What are some of the facts that stood out to you? 

Paul: Yep, it's a very good point. And you're right that, through a legal lens, the facts are crucial in injury cases. I think they're crucial in another, broader sense as well, before I get to the direct answer to your question, which is that you need compelling facts to popularize or socialize innovative claims of harm in court, because the larger popular attitude toward a given set of facts is also relevant here. If popular attitudes change in connection with a controversial product, whether that product is tobacco and cigarettes, or painkillers and opioids, or, going way further back in time, the use of asbestos in insulation, and those products come to be seen as ominous and threatening, that shifts the risk calculation surrounding the lawsuit. And suddenly plaintiffs can strike fear, potentially, in the hearts of the companies that are being targeted. 

Those companies, and their insurance companies, not incidentally, have a new calculus as to what could happen if such cases ever ended up in front of juries. And so, you're absolutely right that having compelling facts is really the price of admission to getting such innovative cases off the ground. So, what facts struck me? Really two categories. One was the behavior of the shooter, this, at the time, 18-year-old man: facts that were offered to back up the contention that social media platforms, or at least in this case Instagram, have this addictive effect on many users, and particularly younger users. So, the complaint describes what it called an unhealthy, likely obsessive relationship that the shooter had with Instagram; for example, that he had at least 20 different Instagram accounts that he controlled, and that he sometimes opened the app more than 100 times in a single day. And frequently, he was searching for information about firearms, and weapons more generally. Specifically, he was interested in AR-15-style rifles, which are these large-capacity, military-style, semi-automatic rifles that are used ubiquitously by the US military, much of law enforcement, and now, sadly, by many mass shooters. So, the portrait that was painted was of this fairly isolated young man who, once he had entered into that environment, behaved in a way that really did smack of addiction, of being kind of out of control. So that's one thing that was really striking to me. 

The second thing was how and why he encountered so much information, not neutral information but encouraging, promotional information, about firearms, and particularly these military-style firearms. And that goes to a set of facts surrounding Meta and Instagram's policies about advertising firearms on the platform. In other words, why was there so much content about firearms available to this young man? It turns out, as the complaint describes, and this part is not disputed, that Instagram, like the rest of Meta, has a ban on advertising that promotes the sale or use of weapons, ammunition or explosives, including firearms. But then it turns out that ban is applied in a very peculiar way, in that it prohibits only paid firearm advertisements. So gun companies like Daniel Defense, in this case, and pro-gun influencers, people who go on and provide seminars on how to use guns, and who, by the way, are sometimes actually paid by the gun companies to do so, can post so-called organic content. They're not paying to be on the platform; they just open an account and start talking, whether it's the gun company talking in a favorable way about its own product, or these influencers going on and saying, “Hey, here are, you know, the three best AR-15s on the market today,” and then demonstrating them with videos or what have you. 

And this is the reason why all this material, this promotional material, is available: there's this gaping loophole. And then there's the further fact that the gun industry is very much conscious of this. It's not really surprising when you think it through, but the degree to which the gun industry actually explicitly exploits the loophole was, to me, kind of breathtaking. There was information in the complaint about a marketing agency. It wasn't named in the complaint, but it was quite easy to figure out who it was. It is a company called Fidelitas, which is hired by manufacturers of various sorts, including manufacturers of guns, and which has on its website, for example, an article entitled “Five Ways Firearm Brands Can Advertise Online Without Google Ads and Facebook Ads.” That last part, without Google ads and Facebook ads, means how you can advertise online without paying for the advertising. And the advice specifically points gunmakers to this loophole. It says there are some major loopholes, and it calls them loopholes explicitly, in the advertising regulations for Facebook and Instagram. And Fidelitas goes on to say that gun manufacturers' own organic posts, and gun reviews by influencers, are allowed, as long as they do not link to pages where guns are sold, or where the prices are discussed, that kind of thing. 

But of course, you don't need to discuss the precise price of a firearm in order to promote its use and make it attractive, in an ominous and potentially quite dangerous way, to a young man who is, as I say, isolated and may have all kinds of strange ideas himself. And the complaint included all kinds of text from these organic posts by Daniel Defense, with their come-ons of the sort that one can easily imagine would be appealing to a young man who's obsessed with firearms, military activity, and so forth. 

Paras: Thanks for that overview. So, as you note, the complaint sets out this ecosystem of information that the shooter was exposed to, or allegedly exposed to. And this case is still a legal longshot; there are a lot of different reasons for that, but one of them is Section 230, which you have mentioned. It's a federal law that was passed in the 1990s, and some people have called it the Magna Carta of the internet; it shields social media platforms from liability for the content that is posted by third parties. So, how have we seen courts grapple with and implement Section 230?

Paul: In the main, courts have read Section 230, which is a relatively succinct piece of legislation, quite broadly, and have really promoted the spirit behind the law, which Congress made explicit in the framing language explaining the policy behind it: to promote business on the internet. From the perspective of the mid-1990s, commerce on the internet was nascent. Companies were small, not particularly influential or powerful, and did not yet play a huge role in society; you're talking about message boards and the like. The courts took this idea that the internet was a creative environment for promoting commerce and expression and so forth, and they ran with it. And over time, the law has become, as judicially interpreted, a very broad shield. 

Now, that is not to say that it is a foolproof protection for social media platforms. For example, there are statutory exceptions written into the law itself. There are exceptions for a variety of types of civil claims that are allowed: for example, a civil claim related to a violation of federal criminal law, or a civil claim related to a violation of intellectual property rules or wiretapping laws; all of those lawsuits would be allowed. In 2018, Congress added another exception for allegations in lawsuits that stem from online sex trafficking. So, there are those exceptions. Courts have also, in past cases, allowed lawsuits against social media platforms to go forward where the court has determined that it really was the conduct of the platform, not the nature of the content, that the plaintiff was complaining about. So, for example, there's a well-known case from the U.S. Court of Appeals for the Ninth Circuit, based in San Francisco, from 2019, involving a platform called homeaway.com, which was accused of violating an ordinance in Santa Monica, California, barring unlicensed home rentals. And the Ninth Circuit said this lawsuit may proceed toward discovery and trial, because the plaintiff — in this case a municipal plaintiff — was saying that its law was violated by a company that provided a short-term home rental service in violation of local law. It wasn't about the content on the platform. Similarly, in another Ninth Circuit case, the appeals court said that an allegation against a different platform, having to do with housing, could go forward, because the allegation was that the platform was violating anti-discrimination laws by soliciting the gender or family status or sexual orientation of users who were seeking to use the website. So that wasn't about content; that was about the conduct of the platform. And so, there's the question here in the Uvalde case, and in another pending case in New York State having to do with another mass shooting, which we can talk about later if you want. 

The question is: do these lawsuits fall, in the court's eyes, into the category of being about the conduct of the social media company, in how it designed its product and then sort of managed its product, as in connection with the advertising rules? Or, in the end, is it really about the content itself? Is it about the content having to do with the guns and the violent video games, which would trigger Section 230 and potentially kill the lawsuit before it gets very far along? So that's the big question concerning Section 230. But as you said, this is by any definition a long-shot lawsuit, and there are other potential hurdles that the plaintiffs will have to overcome. But they are hurdles; they're not absolute obstacles or 10,000-foot walls that are impossible to penetrate. 

Paras: There’s also been a lot of discussion around reform of Section 230. Last year, the Supreme Court heard a pair of cases challenging Section 230, but it actually didn't reach the merits; it punted the cases on other grounds. And Justice Kagan famously said at oral argument that the nine justices are not the greatest experts on the internet. At the same time, there have also been efforts to reform or even repeal Section 230 in Congress. What do you see next for Section 230? 

Paul: Yeah, there's been a sort of rolling debate about Section 230 in Congress for, I would say, at least half a dozen years, if not longer. Literally several dozen pieces of legislation have been proposed, and none of them have gone very far. They would do everything from tinkering at the very margins of Section 230 to curtail its reach, at one end of the spectrum, all the way over to just repealing Section 230 altogether. None of these bills have made much progress, even though there is widespread unease in Congress, and possibly at the Supreme Court as well, about the way the courts have interpreted Section 230 and about the practical effects it has had when used to protect these very powerful social media companies. But despite the fact that there's a rough consensus that Section 230 is problematic, the motivations that potential reformers have in approaching the task of curtailing the law are so diametrically opposed that there hasn't been much progress. By that, I mean Democrats and liberals have generally approached the Section 230 debate saying that it is inhibiting platforms from doing more content moderation, from removing or down-ranking harmful content of various sorts, because there's little incentive for them to do so; they're not going to be held liable in court. 

So, they ignore their responsibility to police their platforms with the vigor that they should. On the other side of the political aisle, Republicans and other conservatives say that actually, what's wrong here is that Section 230 is encouraging the platforms to take down too much content; they are censoring us on partisan grounds, censoring conservatives, and therefore Section 230 should be curtailed. Well, when two opposed political groups want a similar outcome, the curtailment of a law, but have very, very different motives, it actually becomes very difficult for them to agree on precisely how to proceed. And perhaps for that reason, we've actually seen no legislative progress worth mentioning on this front. And as you said, the Supreme Court seemed to be teeing up the question of the scope of Section 230 in a case, a pair of cases actually, involving Google and YouTube, and Twitter (now X, of course), in the last Supreme Court term, but in the end those cases fizzled out. The very most recent proposal for reforming Section 230, which comes from critics, is actually a bipartisan effort, so it has attracted a little bit of attention because of that. This is an idea for legislation sponsored by Cathy McMorris Rodgers, a Republican from Washington State and the chair of the House Energy and Commerce Committee, and the ranking member of that committee, Frank Pallone, a Democrat from New Jersey. And they have proposed a law, with a very interesting structure, that would repeal the statute altogether in late 2025, unless the industry collaborates with lawmakers in limiting its reach in ways that are not quite clear, at least not publicly clear yet. 

So, the idea is that this would require Big Tech, as these lawmakers see it, and others to work with Congress over the next year and a half or so to enact a new legal framework that would encourage more content moderation while still preserving free speech — coming up with a new balance, in other words. And that's the idea from McMorris Rodgers and Pallone. It was introduced in the spring, and there's no evidence it's going to make any progress during this chaotic presidential election year. And I think we just can't tell what might happen in 2025 and beyond because of the unpredictability of the overall political situation in the country. We will have such a different political setup if we have a Democratic president versus Donald Trump as president that, to my eye, it's impossible to predict what's going to happen on Section 230 in the future.

Paras: Yeah, very difficult, if not impossible, to read those tea leaves. Zooming back in on this particular case around the Uvalde massacre: the case is still in the early stages, and the defendants will likely file a motion to dismiss. What are you looking for next? And if it proceeds, how might it impact Silicon Valley in other areas? 

Paul: Right, well, you're absolutely right about the procedural posture, and the thing to look for is whether, in the California case, which is in state court there, the state court judge looks at these facts, looks at the theory, the defective design and corporate misconduct theory, and then compares that to California state product liability law. And whether that judge says, “Yes, this strikes me as a case about a product, and whether and how that product was designed, and therefore I'm going to let these plaintiffs move ahead past the motion to dismiss, which I will deny here, and let them begin discovery so they can gather facts about just how this product was designed.” 

And crucially, and this is a new factor I want to put into the mix here, they can also gather information about whether they can really connect the defective design and the corporate misconduct to what happened in this case. In other words, there's a big question here about proximate cause. It's intertwined with everything else, of course, but it's a separate challenge that the plaintiffs have. 

Paras: Yeah, that's such a good point. And proximate cause is a legal concept: you have to be able to connect the particular harm, the injury that a plaintiff has suffered, to the defendant's conduct. In product liability and personal injury cases, causation is often where defendants attack the theory the most, because there could be all kinds of intervening factors that end up causing an injury and that aren't directly linked to the defendant's conduct.

Paul: Their complaint, as elaborate as it is, and as well-constructed as I think it is, does not really offer evidence, or even an indication, of how they're going to prove that link. The shooter in this case was staring at this particular platform, Instagram, night after night after night, where all of this gun-related material is available, and they can show that he looked at the material. But how are they going to prove, by a preponderance of the evidence, by 51 percent likelihood, that it was looking at that material that caused him to buy the firearm, and with the firearm to go and kill these innocent people? 

So, that's the challenge after the motion to dismiss, that's the challenge in discovery, and then you'll have another procedural round, doubtless a motion for summary judgment, which will be based on a much richer factual record, based on probably a year or more of discovery going in both directions, with the defendants asking questions as well. And then the judge will rule on that summary judgment motion, and that will probably be decisive. If the judge allows the case to get past that motion for summary judgment, then the case becomes a real threat to the defendants. And then you would see, I think, settlement negotiations begin in earnest, because then — as wealthy as Meta is, and as thorough as its insurance coverage no doubt is — the prospect of a potential trial, with this kind of evidence being put in front of a jury of laypeople, would be a huge risk for the company. So that's looking at the Uvalde case. Meanwhile, there's a parallel case pending in New York State involving, tragically, another mass shooting that occurred in the same month, May of 2022. This is the massacre that took place in a supermarket in Buffalo, New York. And similar plaintiffs are making a similar claim against Google, the owner of YouTube, saying in that case that the shooter there learned about the firearm he used by looking at YouTube videos. Moreover, they are saying that there was the same type of addictive behavior, and they add a layer that actually isn't really present in the Uvalde case, which is fascinating. And that additional layer has to do with ideology, because they argue that the shooting in Buffalo was ideological in nature. That shooter was going after Black people; he went to a grocery store patronized primarily by African Americans, and he shot African Americans. 

And he went into that with a dedication to a racist conspiracy theory, the so-called Great Replacement theory, which is a false, made-up notion that white people are being replaced in American society and other societies by immigrants and non-white people. And the allegation in the Buffalo case is that the shooter learned about all of this on YouTube, and that therefore YouTube is a defective product that spreads these dangerous extremist ideologies, which, but for YouTube, this young man would never have encountered, and that provides tutorials on using potent firearms. And in the New York case, the plaintiffs have already gotten past a motion to dismiss. 

So, they are actually one step ahead of the Uvalde case. And it's not a precedent; the New York state judge is interpreting New York law and the facts in front of him. But it is, roughly speaking, a model that shows it's not impossible to get a lawsuit like this aloft. So, in coming months or, because of how slow the system is, years, we're going to see whether these lawsuits can overcome the obstacles they face and whether they become new templates for holding social media companies responsible. 

Paras: Yeah, that case in Buffalo is also another important one to watch. Is there anything that we haven't touched on yet that you'd like to add?  

Paul: Well, the only remaining point that I think is potentially interesting, though not necessary, is the identity of the lawyer in the Uvalde case. Because it's this interesting guy from Connecticut who had a big breakthrough, not against a social media company but against a gun manufacturer, coming out of the Sandy Hook mass shooting case. That lawyer figured out how to maneuver around a very different federal liability shield that was seen as being very, very protective of gun companies. And he ended up with a $73 million settlement, essentially the first of its kind, from a major gun company, Remington Arms. And again, it's not a neat analogy, because we're talking about a different kind of law. But in broad terms, the statute that protects the gun industry, the Protection of Lawful Commerce in Arms Act, is kind of the Section 230 for the gun companies. And this lawyer figured out a way to get his case around it and keep the case aloft long enough that the gun company's insurance companies said, we're going to settle this because we can't let this go to trial. And clearly, he's using that model, very broadly speaking, in approaching Section 230 as he goes after the social media industry. 

Paras: Yeah, very interesting precedent there and a lot to watch for in this space, both on this case and on Section 230. Paul, thanks again for joining the show.

Paul: My great pleasure. Thank you for having me. 

Paras: This episode was hosted and produced by me, Paras Shah, with help from Audrey Balliette and Harrison Blank. 

Special thanks to Paul Barrett.  

You can read all of Just Security’s coverage of technology, social media platforms, and content moderation, including Paul’s analysis, on our website. If you enjoyed this episode, please give us a five-star rating on Apple Podcasts or wherever you listen.
