
The Just Security Podcast
Just Security is an online forum for the rigorous analysis of national security, foreign policy, and rights. We aim to promote principled solutions to problems confronting decision-makers in the United States and abroad. Our expert authors are individuals with significant government experience, academics, civil society practitioners, individuals directly affected by national security policies, and other leading voices.
The Just Security Podcast: Regulating Social Media — Is it Lawful, Feasible, and Desirable? (NYU Law Forum)
2025 will be a pivotal year for technology regulation in the United States and around the world. The European Union has begun regulating social media platforms with its Digital Services Act. In the United States, regulatory proposals at the federal level will likely include renewed efforts to repeal or reform Section 230 of the Communications Decency Act. Meanwhile, states such as Florida and Texas have tried to restrict content moderation by major platforms, but their laws have been met with constitutional challenges.
On March 19, NYU Law hosted a Forum on whether it is lawful, feasible, and desirable for government actors to regulate social media platforms to reduce harmful effects on U.S. democracy and society with expert guests Daphne Keller, Director of the Program on Platform Regulation at Stanford Law School’s Cyber Policy Center, and Michael Posner, Director of the Center for Business and Human Rights at NYU Stern School of Business. Tess Bridgeman and Ryan Goodman, co-editors-in-chief of Just Security, moderated the event, which was co-hosted by Just Security, the NYU Stern Center for Business and Human Rights and Tech Policy Press.
Show Notes:
- Tess Bridgeman
- Ryan Goodman
- Daphne Keller
- Michael Posner
- Just Security’s coverage on Social Media Platforms
- Just Security’s coverage on Section 230
- Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)
Tess Bridgeman: Welcome to the Just Security Podcast. Today, we're bringing you a special episode taped on March 19 at the NYU Law Forum on Regulating Social Media to Reduce Harmful Effects on U.S. Democracy and Society. We asked our expert guests — Daphne Keller, Director of the Program on Platform Regulation at Stanford Law School's Cyber Policy Center, and Michael Posner, Director of the Center for Business and Human Rights at NYU Stern School of Business — when is it lawful for government to regulate social media platforms, and, equally important, when is it feasible and desirable to do so? The live event was moderated by my Co-Editor-in-Chief, Ryan Goodman, and me, Tess Bridgeman.
Hello everyone, and welcome to the NYU Law Forum. Today the forum is co-hosted by Just Security, Tech Policy Press, and the NYU Stern Center for Business and Human Rights. I'm Tess Bridgeman, Co-Editor-in-Chief of Just Security, and a Senior Fellow and Visiting Scholar here at the Reiss Center on Law and Security at NYU.
Ryan Goodman: Hi everyone, and I'm Ryan Goodman. I'm the other Co-Editor-in-Chief of Just Security, a professor of law here at NYU Law School and Co-Director of the Reiss Center on Law and Security. So, today we're here because it's a pivotal moment for technology regulation in the United States and around the world. The EU is still getting up to speed on regulating social media platforms based on perceived social harms that they cause. And in the United States, there's a dearth of regulatory efforts at the federal level, and a dearth of any expectation that Congress will be able to legislate anytime soon, though there are still continued discussions about efforts to repeal or reform Section 230 of the Communications Decency Act, which provides immunity to the companies for some of their conduct. And at the state level in the United States, we have laboratories of experimentation, in which Florida, Texas, California, and Utah are trying different approaches to restrict content moderation or address other social harms caused by the major platforms.
So, the topic of today is, is it lawful, feasible and desirable for government actors to regulate social media platforms to reduce harmful effects on democracy in the United States? Is it lawful, feasible and desirable for government actors to do so? The structure is loosely a conversation with two experts. Tess and I will be the moderators, and the two experts will be coming from different perspectives that will examine various aspects of existing debates over whether and how to regulate social media platforms. But we'll also, I'm sure, identify large points of agreement.
We're delighted to welcome to the law school, first from across the country, Daphne Keller. Thanks, Daphne, for being with us. She's the Director of the Program on Platform Regulation at Stanford Law School's Cyber Policy Center. And then we're also delighted to welcome from down the road, but also teaching at the law school, Michael Posner, the Jerome Kohlberg Professor of Ethics and Finance at the NYU Stern School of Business and the founding director of the Center for Business and Human Rights at Stern.
Tess: So, just to ground where we start off. A few observations — and the panelists will speak to this at greater length from their own perspectives — but what are some of the harms that we're talking about when we think about social media platforms and the way users engage with them? I think we can loosely group them into some specific harms and some general ones, the specific ones including election disinformation, health misinformation, terrorist recruitment and organization online, dehumanizing propaganda with real-world consequences for physical safety, and targeted harassment campaigns, among others. General harms include things like sowing distrust of our institutions, our democracy, and the objective of truth itself. These are things that we can and will debate in terms of the nature of the harm, the extent of the harm, and the causal relationships that lead to those harms.
But just to ground this discussion, those are some of the motivating problems that lead us to this conversation today. So, of course, there's deep disagreement as well in terms of what to do about those harms. Some say aggressive government regulation is needed. It's overdue, or at any rate it's due now. Proposals for what that should look like, of course, take many forms, which we'll discuss. Others say the cure would be far worse than the disease, and would give far too much power to the government to stifle speech or to direct what kind of speech is acceptable. It's hard to decide which is worse — concentrating power in the hands of the platforms or of government actors when it comes to First Amendment-protected activity in particular, and that's one of the things we'll take up today.
So, as Ryan alluded to, so far the U.S. Congress hasn't really weighed in meaningfully in this debate since it passed Section 230 of the CDA almost 30 years ago. But there has been action at the federal level. We have seen very recently the FTC filing lawsuits against most of the major platforms. The current House Judiciary Committee Chairman Jim Jordan has subpoenaed Alphabet, Amazon, Apple, Meta, Microsoft, Rumble, TikTok and X, seeking each company's communications with foreign governments regarding their compliance with so-called foreign censorship laws and regulations. The White House, of course, has very recently set to work dismantling government agencies responsible for detecting and mitigating foreign influence in U.S. elections, including through online influence.
Meanwhile, across the pond, in the EU, we have had a series of regulations that are still getting off the ground, and we'll perhaps discuss how much impact they've had, chiefly the Digital Services Act, which produces at least a modicum of transparency. And as Ryan also mentioned, states are experimenting a bit more than the federal government has been able to, with laws that take a number of forms, several of which are constitutionally suspect, so we'll see which ones stick. Meanwhile, independent agencies and regulators in some jurisdictions have maintained their independence, while in others, they appear to be overtly politically motivated actors. And companies, for their part, are currently dialing back many of the content moderation rules and policies that they had had in place — dismantling fact-checking operations, scaling back community standards on hate speech and misinformation, to name just a few.
So that is my opening diagnosis of the problem, just to give us something to start from. And maybe first, I'll turn to you, Mike, to give us your views. What is your diagnosis of the problem and what are the harms you're most concerned about? And then finally, who is most responsible for them, in your view?
Michael (Mike) Posner: Great. So, first of all, welcome everybody, and I want to thank you, Ryan and Tess, and Just Security, and also Tech Policy Press. We're delighted to be co-hosting this with you, and Daphne, to be with you on the stage.
I want to just try to do a couple of things which probably seem irrational or insane in light of where we are in our country today, in the world. As Tess laid out, we have a crisis in our democracy which is more profound than anything I've experienced, I think any of us have experienced, in our lifetime, and maybe close to in our country's history. There is a rejection of norms. There's a rejection of institutions. I saw an article this morning quoting a statement by Peter Thiel, who's one of the intellectual authors of the libertarian, Silicon Valley approach to life. And he juxtaposes freedom and democracy. And Peter Thiel’s sense of freedom, which is what we're now living through, is that you dismantle government, you basically eviscerate government. You get rid of regulators. You get rid of any kind of bounds on behavior, guardrails.
And we're seeing it play out in all sorts of ways. I don't need to spell out every one of the particulars, but the thing that's most troubling to me is that we've become so divided as a society, so at each other's throats and in each other's faces. We can't agree on common facts, and we are becoming more and more radicalized. And I worked in a Democratic administration, so I'm not exactly neutral on this, but I see the current president and those around him as a monumental threat to our democracy. This is a five-alarm fire. And so, then the question is, what does this have to do with technology? Facebook and Google didn't create the polarization, the differences — we've always been, you know, divided politically and otherwise socially. But for sure, it's exacerbating it, and it's exacerbating it not just subtly, but dramatically. People now live online. They spend hours and hours, and it's not an accident that people are becoming more radicalized. They're becoming less able to sort fact from fiction, because they're being driven in directions that reinforce their prejudices.
And social scientists have known for a long time that what brings people online and keeps them there is emotion, and negative emotions trump positive emotions, and at the top of the list are hate and fear. We have a president who got to be where he is because he is brilliant at basically cataloging and reinforcing people's hate and fear. And not only are the internet companies, the social media companies, allowing that and watching it, their business model — I'm in a business school — the business model of Meta, of Alphabet, of ByteDance, is engagement. Everything is dependent on it — these are advertising companies, and what advertisers want is eyeballs on their site, often and for a long time. And the advertisers may not be paying attention to every detail, but they want to make sure that there are a lot of eyeballs viewing their ads, and they want data about who it is, so they can target those ads. That's the business model, pure and simple. There may be other things going on, but we can't talk about this without understanding how much the business model of the big tech companies undermines the health of our democracy.
So, I start from a premise that says, we need to challenge the business model by figuring out ways to disrupt it. We have a set of problems here. One is, obviously we're talking about regulation in the age of Trump. That seems like an oxymoron. We're going the other way. The second problem which we're going to struggle with in this conversation, in every conversation, is that unlike drugs or airplanes or securities, where we have regulation, this industry deals with facts, with information, with speech. And we have something called the First Amendment here, and the international treaties on human rights have similar provisions, a little less rigorous, that say you can't regulate speech. And so, we have to figure out — the question is, lawful, feasible, desirable? Absolutely desirable. We need regulation, and we need it badly. We're starting to see little inklings of that, but we need to be bolder in saying this industry needs to be regulated very strongly. Is it feasible? It depends how we do it, and is it lawful? It has to be lawful.
Now, there are some kinds of speech that are already prohibited. You can't promote child pornography or incite imminent violence. There is a range of things that the law says you can't do, and so it's easy for the platforms to deal with it, to take it down, or do what they have to do under the law. The problem here is that most of what concerns me is not those things. It's the content, the disinformation, the things that Tess listed, which are lawful but are overwhelming our sense of truth and our ability to talk about issues in a sensible way, and the companies are part of the problem there.
Two other things I would say quickly. I understand, and we've had discussions a lot about, can regulators be trusted? In 1906, the U.S. created something called the Food and Drug Administration. The Food and Drug Administration now has 18,000 employees. Are any of them politically suspect? Yeah. Do they make bad decisions? Yeah, they sometimes are too slow in getting a drug to market. Sometimes they're too slow in figuring out that Purdue Pharma is basically creating an opioid crisis. It took them 100 years, 103 years, to regulate tobacco advertising. When they created the FDA in 1906, nobody was thinking about regulating tobacco.
We have an agency called the FAA, the Federal Aviation Administration. When they tell Boeing your 737 actually has a defect and you have to ground those planes, does Boeing like the FAA? They hate them. Industries hate regulators, but what we need is that tension between the public interest and the corporate business model of making as much money as you can, as fast as you can. And so, it's not at all surprising that the big tech companies hate the idea of regulation. They hate it with a passion, and there are lots of conversations we're going to have about the particulars. But at the end of the day, I start from a deductive approach. We need to regulate. We need to regulate with lots of people who know the industry, and we will figure out as we go what it is that needs to be regulated and that will be feasible, lawful and desirable. But we have to start with the premise that we can't leave things as they are. We have a democratic crisis, and the tech companies are right in the middle of that, and they're not responding in a way that they need to.
Tess: Yeah, I won't press you quite yet on what that regulation should look like, but we'll come back to that. First, I want to turn to Daphne and ask largely the same question posed to Mike. What do you diagnose as the set of problems we're facing? And, you know, have governments fared any better in this equation, in terms of, are they responsible actors, can they be trusted, and what role do they play in the harms and in addressing them?
Daphne Keller: Yeah, so maybe I should start by saying I was a platform lawyer for 10 years. I worked at Google from 2004 to 2014, and I wound up as the legal director in charge of web search, which is a product that tries to give users what they want using algorithms and is monetized by ads. So, I have some loyalty to at least that version of algorithmic goods delivered to users.
And a big part of what I loved most about that job was dealing with incoming demands from governments who wanted us to make things disappear from web search, and sometimes the law is, you do have to make things disappear from web search. And you know, the law for that is one thing in the U.S. and a different thing in the EU. And we complied with those laws, but we also got a lot of demands that seemed inconsistent with international human rights, that seemed inconsistent with the rights of Internet users to speak and access information around the world. And I was lucky that I was there at a time when Google was willing to throw a, like, kind of economically irrational amount of money into actually litigating those things and making a big deal about it, and trying to use public processes through laws and courts to make the law get better.
So, maybe that experience with governments informs my take. If Mike is the “platforms are scary” side of the debate, I will take the “governments are scary” as my position, and I think there are a number of reasons why we should be more concerned about state power. And my point is not that they can't make laws. They already make laws. They should make more laws. There are a bunch of platform regulations we desperately need, like better — any decent federal privacy laws — but governments are constrained by constitutional protections like the First and Fourth Amendment, and that's a really, really good thing, and we should pay attention to why that's a good thing.
So, what makes government scarier than platforms? First of all, these days, lots of platforms seem to just be proxies for the government anyway, so the government kind of has both of those sets of scary things to its advantage. Second, governments have what Louis Althusser called repressive state apparatus. That means they have men with guns, they have jails, they have airplanes, like, there is a whole set of threats from government that don't and hopefully never will have any analog in the perfectly real, perfectly worthy of response threats that we have from platforms.
So, all over the world, we have lawmakers who want to constrain the power of platforms for all the good reasons that Mike has just outlined, and the way they want to do it is by empowering themselves to set the speech rules instead. There are so many laws being passed in the U.S. and abroad that effectively take state preferences about what legal speech from internet users is good or bad, and direct platforms to go and carry out those state preferences amongst legal speech as part of the platforms' legal obligations. And the kind of cycle of how we get to those laws is, you start with pundits and politicians saying platforms need to be accountable or responsible — words that we probably all agree with, but we all have a really different idea of what that means, or what that law would look like. And then you get to a stage with legislators, and the current round of legislation has a lot of words like duty of care, risk mitigation, avoiding harms, and all of those things could be the building blocks of a totally legitimate law that isn't about speech, but they are also being used as the building blocks of laws that are about speech and that do effectively put the state in charge of making these decisions about, again, like, your lawful speech, mine, your mom's.
Again, I'm not here to talk about the platform's First Amendment rights, although we can if you want later. I want to emphasize that we're seeing these from the left and the right both, in the U.S. and abroad. So, the U.S. Republican version of this that we saw coming from Texas and Florida last year was the government stepping in and saying, hey, there's some speech we like, and we want you to carry it. Like, we are going to set new rules, compelling platforms to leave up things that might be hate speech or electoral disinformation, et cetera that the platforms at the time were prohibiting. And that went to the Supreme Court last year, and the court said, no. You cannot bring in this state preference about speech. That violates the First Amendment.
From more Democratic legislatures in the U.S., we're seeing a lot of laws like California's age-appropriate design code, which was largely struck down by the Ninth Circuit in August or September. That was a law that said, platforms, you need to protect children by avoiding harms to them, and it sort of packaged that with some rules that looked like they were about privacy, or even legitimately were about privacy, but the upshot of the main requirement of the law was, go out there and police content differently, police user speech differently, so that kids aren't seeing certain kinds of lawful but definitely harmful speech, like pro-anorexia content. And the Ninth Circuit saw right through that, and was like, this is obviously a law that regulates speech, and, you know, we can go see if there are any parts of it that don't regulate speech, and those can still stand, but they struck down that part of the law under the First Amendment.
There's a third kind of First Amendment question here that's gotten short shrift in the litigation so far. This is because the litigation, shocker, is being funded by platforms, right? So, they're going to court, or their trade associations are going to court, and making arguments based on the platforms' First Amendment rights, which were the rights at issue in the Texas and Florida case last year. But if lawmakers tell platforms they have to avoid certain kinds of content, and platforms respond, as we know from stacks and stacks of studies, by erring on the side of taking down anything that might get them in trouble, that means the affected people are — again, you, me, internet users, and we won't even know, because the platform's not going to say, oh, this is because of a change that I made to my machine learning models in order not to get in trouble in California — they're just going to say, oh, this violates one of our policies. And so unfortunately, when users have tried to get involved in these cases and raise exactly that claim, the courts just haven't given it enough attention. They actually got dismissed on standing in Utah, which I'm very mad about, but that is another of the sort of latent issues that we need to pay attention to.
So, kind of to get back to the regulatory proposal, and then I'll stop talking soon. We need to look closely at the actual laws, at the actual mandates, to distinguish what is a law that's doing something perfectly legitimate, like regulating privacy, competition, consumer protection, and what is a law that is going beyond what the government is allowed to do under the First Amendment. So, I'm not anti-having a regulator. Maybe we should have a regulator, but I'm definitely anti-a regulator that has an open-ended mandate to go out and enforce, you know, some kind of vague standard where we don't know yet what it is that we're asking them to do. Yeah, and I would actually question the idea that industries hate regulators. I don't think anyone loves the FCC as much as big incumbent telcos love the FCC, right? Like, the relationship between regulators and big incumbents is often quite comfortable and is terrible for competition, for new kinds of platforms and innovative user tools, user controls, user-empowering systems — the kinds of things we should all want become harder to come along if you have that kind of entrenched regulator-platform dynamic.
I don't want to point too much to our current crisis as evidence of what we should do all the time. Obviously, we can't point at what's going on now and say, okay, we can't have laws, we give up, nobody in government should ever have power — that doesn't work. But we're seeing some regulators with kind of open-ended mandates doing some pretty bonkers stuff right now. So, at the FCC, we have Chairman Brendan Carr, who has gone after the three big TV networks because they did things like let Kamala Harris be on Saturday Night Live. At the FTC, we have a new investigation into platform censorship being driven by their new chair. Congress came scarily close in the past couple of years to giving the FTC some kind of authority to decide what content is harmful to children. It took transgender teenagers taking time off from their days and going to Capitol Hill to point out the problems with that, which is, everyone has a different idea of what's harmful to children. And if some people are in power, it's going to be that children can't see information about, let's say, guns. You know, a Democratic FTC might think that. If other people are in power, it's going to be information about reproductive health and about transgender identity. That is not something that is safe to put with any FTC. This isn't just a matter of this administration and the threat it poses. It's a matter of the First Amendment being there in the first place to ensure that the government power we enable is properly constrained and doesn't even make room for that kind of abuse.
Ryan: So, I want to drill down, Daphne, kind of as you were inviting, into what that regulation actually is. Maybe starting with you, Mike — that's basically the question. What space do you think there is for regulation, and what is the content of the regulation? Let me go big and then go small. Just in terms of big, I think so far in the conversation, if I understand both of your perspectives, you're working within the boundary of accepting current First Amendment limitations. Both of you seem to do that. And when you say things like legal but awful, the legal dimension is set by the First Amendment. And so, in this conversation and in this question, I'm assuming that you don't want to — nobody right now is trying to run afoul of the First Amendment — but later, either in our questions that Tess and I might ask, or questions from the audience, I think there's this latent question, which is, wait a minute, is the First Amendment the right guardrail? Can we maybe re-envision the First Amendment in some ways or another? That might be a separate question. But within the First Amendment, one thing that we've talked about among the four of us before, in kind of setting up this conversation, is transparency. So, is that a space in which there's some overlapping agreement between the two of you, or still disagreement? And just starting with you, Mike, what would be the regulation in the space of transparency, assuming that transparency can try to get at some of the problems that we've identified?
Mike: Yeah, so I should just say, coming into that, I totally agree with Daphne on the dangers of government regulating content, especially the, you know, the Florida and Texas laws that essentially tell companies that you can't take down conservative content, or the Indian or the Turkish laws of the last few years, where Modi or Erdogan are basically trying to silence their opposition.
On the question of transparency, one of the things that's really striking here is that even though these are companies that are based on information and the wide dissemination of information, with billions of users, they're quite opaque in terms of how they're operating. They're opaque about the algorithms, which, I guess, are the secret sauce. They're opaque about even their revenue systems, their operations. Meta was saying, still says, they have tens of thousands of content moderators. How many of them get a paycheck from Meta? How many of them are outsourced to Accenture, working in India or the Philippines without training? On a whole range of things, we will be better off, and we'll be better able to figure out how to regulate, if we have a better sense of how these companies operate.
So, my sense is there's really a great benefit in starting — and this is some of the early laws we're seeing in the EU, we're seeing it in some U.S. states, some other countries — with the premise that these information companies should be more transparent about how they operate, giving a better sense of the mechanics of what they're doing: what they say they're doing, is it what they're actually doing? We can benefit by having a much better sense of that, a look under the hood. And again, regulated industries wind up being industries where somebody's asking those kinds of questions. Those kinds of questions aren't being answered, because there is no one in the U.S. federal government, no agency, that's regulating in a big way.
Daphne: So, I think of myself as a big transparency proponent. I've testified to Congress about transparency on my own behalf at Stanford, not for Google. I've filed numerous really wonky things with the European Commission about how transparency should work under the Digital Services Act, and a lot of what I'm focused on is how to make it actually functional, like, how to make it produce the information that we need to solve real-world problems, instead of, you know, spending tons and tons of labor and, like, legislative capacity and political will and then getting the wrong thing. And my sort of big hobby horse here is scraping, the kind of data collection that people like Julia Angwin use, that ProPublica uses to do research — including research on platforms — to find out what platforms are doing. I think it's really, really important that that kind of gatekeeper-less data collection be lawful and widely done.
And also, I think I agree with Mike on what the main purpose is, or the purpose I'm most interested in, which is to make better laws, right? Like, if we don't know what's going on, we're going to pass a bunch of dumb laws, like we've been doing — or like we've sometimes been doing; I'm actually a fan of about half of the DSA, maybe more than half. But I agree with that purpose. However, as with everything, you know, you can't just say transparency. You have to dig down on what exactly the laws are and what the mechanics are. And there are three big issues I think you run into there, and two of them are constitutional. One is, are you doing something in the name of transparency that compromises the privacy of internet users or makes them more vulnerable to government surveillance? So, this is, for example, if you have mandated researcher access to Facebook posts that were privately shared. You know, now these researchers at some university somewhere in Europe are seeing your cousin's announcement about an illness or a breast-feeding photo or, you know, something that she never intended to go to that audience. There are also pretty serious concerns, raised by the Center for Democracy and Technology and the ACLU, that the protection users have from law enforcement demanding that data gets eroded once the platform has given it to researchers, that it's harder under ECPA, the relevant statute, and the Fourth Amendment to keep things back from state surveillance once you go down that road. So, that doesn't mean you can't do it. It means, try to write the law in a way that doesn't raise those problems, right? Pay attention to them.
The second barrier — and this is something I have a big article about — is, I think there are legitimate First Amendment concerns that a lot of transparency laws, like the ones that Texas and Florida enacted, are basically mechanisms for the enforcers, which in those cases are the state attorneys general, to strong-arm platforms into adopting their preferred editorial policies. And I think that there's a category of transparency laws that doesn't have that problem so much. If it's, you must allow scraping, you must have an API for researchers, et cetera, then what your enforcer does is come along and say, you implemented this wrong, you're not filling out all the fields. They have technical enforcement power. If the rule is, you must accurately describe your policies, well — is anyone a parent in this room? Your kid's too young — but argumentative children, who you may have, or who you may have one day been in your past, will find the edge case to fight on every single thing. And that comes up constantly with content moderation, given the scale. Every day, thousands, tens of thousands, massive numbers of examples come up where you could argue about what the outcome should be under the policy, and therefore argue about whether the policy was accurately stated or inaccurately stated. And, you know, many of them are culture-war flashpoint issues. So, anything where what the enforcer does is come along and say, you didn't apply that right, you didn't describe it right, so you shouldn't have taken this down because you hadn't disclosed this rule. That's the category that worries me in particular.
And then my last concern is the one in a way I almost care about most. It has nothing to do with the Constitution. It's just waste. I hate the waste of putting a bunch of effort into transparency that's not going to serve the societal goals and the human goals it was supposed to be for. So, I really think putting in the time and effort to do a better job is important, and frankly, the First Amendment kind of helps here. If we have some tests that courts apply to say, you need to narrowly tailor this mandate so it actually does its job with a minimum of collateral damage, that’s a good outcome.
Tess: Can I ask a speed-round follow-up? Both of you agree on a number of things in terms of the value of transparency. We also, I think, agree the nitty-gritty really matters. But one of the things that you both said is that the purpose of transparency, in part, is to allow us to then make better laws. So, I'm hoping each of you could maybe give us an example: if you had that information, what does one of those better laws look like?
Daphne: I have one. Okay, so there is this idea that I have contributed to — I am part of the problem — that a really important correction for platforms removing the wrong things is to allow users to appeal and seek review. And that's true, right? Like, I think in many cases, especially if it's a state-mandated takedown, users absolutely should have rights to appeals.
However, if you talk to lots of content moderators, they will tell you that you will find more errors looking at a random sample of content moderation decisions than looking at the ones that got appealed. And, you know, there's research suggesting that men are more likely to file appeals than women, so I think you can imagine a lot of societal power divides that are going to shape who uses appeals. Also, if what you're worried about is people being able to see war crimes footage that gets uploaded, and researchers and human rights advocates being able to see it, the consumer, the reader, the listener is who has the bigger investment. Relying on someone in a war-torn country to file an appeal as your only remedy is not going to solve that problem.
So, this is an example where I think if we had data confirming or falsifying this idea that a random sample would be better than relying so heavily on appeals, then laws like the DSA, which has multiple layers of appeals, might be drafted better to put the resources into other forms of transparency and other mechanisms.
Tess: Like what?
Daphne: Like the Lumen Database. So, one of the main problems for researchers in this area is they don't know what content got taken down, and they can't go look and see it because it's gone. But for Google web search takedowns, and also, previously, for Twitter takedowns, a lot of that content is sitting on a third-party website. And so, the Lumen Database, which sits at the Berkman Center at Harvard, is a collection of all of these takedown demands, and it says what the, you know, what the legal basis was. And then there's the URL, and researchers can go actually look at the URL, because usually those websites are still up, and say, oh, you know, there's a 20 percent false accusation rate in this data set. There's a 30 percent over-compliance rate from platforms in this data set. There's a pattern of bias in enforcement. There are all kinds of questions that you can try to answer if anybody who's, you know, interested in finding things out has access to that kind of information.
Mike: I would go back to something I said earlier and just expand on it a little bit. We don't have enough of a sense of how the companies are making decisions. And again, I want to be real clear: not only do I think the government ought not to be getting involved in content moderation relating to speech, to the specifics, I don't even think the government can regulate how the companies themselves regulate content, because that gets close to the line. But it's totally appropriate to get more information about how they're actually operating. What are their — who's doing the content moderating, how much money is being spent on it, what kind of training is going on? There are a range of things, and there's at this point very little trust in the companies from the public, and I think the opaque nature of the way they operate, the opaque nature of what the engineers are putting into those algorithms — I made an assertion 20 minutes ago that the business model is driving extremism, because that's what people want. I don't know if I'm right or wrong. I think I'm right, but I'd like to have a better sense. I'd like to have more data on what exactly their algorithms are based on, and are they really — are they pushing kids to be more addicted? Are they pushing people to go to the dark places on the web where they're being — their worst fears are being reinforced? We need to know that, and that will help us understand what's exactly going on and what needs to be addressed and what the companies themselves need to address.
Ryan: Daphne, maybe to start with you on this. My understanding is that both of you think that middleware is a bit of a solution to the problems that we identified at the outset. So, can you just describe what middleware is, and then how it works in your mind — what would bring about a greater use of middleware among the public? Is that private industry incentives? Is it public education in the first place? Is it a relationship with the transparency that you're seeking, where if things are more transparent, there might be a greater demand for middleware? Is that the dynamic as to how to get there, and what exactly are you trying to get at by that? And so, if you could just start with a definition of what you mean by it and then get into it.
Daphne: So, I kind of clump together middleware and interoperability, which are similar concepts. And Mike Masnick has written about this really well under the title of protocols, not platforms. I used to call it magic APIs. That didn't catch on for some reason. But my Stanford colleague Francis Fukuyama and a group of people at Stanford called it middleware, and that's caught on more. And actually, my amicus brief in the Moody case, the Texas and Florida case, was on behalf of Francis Fukuyama, which would have entertained my college self, arguing that middleware would be a less restrictive means. If it is true that these lawmakers' goal is to protect democracy and have diversity of voices, there's a better way to do it that doesn't burden speech as much.
And the middleware version is roughly like Bluesky, or what Bluesky is supposed to be as it evolves, which is a kind of hub-and-spoke model. You have a platform at the center that hosts all of the data, and if there's, say, a legal takedown requirement for them to remove child abuse material, copyright-infringing material, whatever that is, that happens at the center. But everything that's left that's legal, or that as far as anyone knows is legal, is available to third parties to come along and offer competing content moderation services. So, a user could say, I want the Disney ranking for YouTube videos, but I want a feminist organization to give an overlay demoting the bad princesses, or whatever it is. It could be about ESPN. It could be about your political affiliation. It could be anything. And the idea is, you can preserve the sort of network-effects value of everybody being on the same service and being able to contact each other, but get rid of centralized power over content moderation and make that a competitive landscape, where users can choose what they want.
The interoperable version of it is more like Mastodon, a federated system. It doesn't have a centralized control node at the middle in the same way. But you asked about, sort of, how to get there. I have a list of barriers and things to resolve, but I think you want to know about legally how to get there? Okay. So, there are a range of perspectives on this. Francis Fukuyama's perspective is, use competition law. Like, go in there and have a mandate to force interoperability, or to, you know, to force the mechanism to be there to allow this to happen. At the other extreme, there are some people who say market forces are making this happen already, like, look at Bluesky, look at Mastodon. We can just sit back and wait for it to happen. And I think I'm kind of in the middle. There are a lot of legal barriers to building this stuff right now. There used to be a company called Power Ventures that built an interface where you could pull in all your social media feeds in one user interface and push things out. And it sounds really useful. I wish we had it now, but they don't exist because Facebook sued them and got them shut down under an anti-hacking law, the Computer Fraud and Abuse Act, which has kind of nothing to do with any of the goals we're talking about here. But there are, you know, half a dozen laws like that that are on the books that make it legally risky for anyone to invest in building middleware. The fact that people still do it anyway, that Block Party, for example, which was a tool for blocking people on Twitter, exists anyway, I think, is a really positive sign, but taking down those barriers would be a really important step.
Mike: If I can just jump in. Yeah, I agree, Daphne, with what you're saying, most of what you're saying. I do think at the end of the day, you know, I'm not from the technical world, but there are people always coming to me and saying, oh, we need to educate consumers. We're kind of dumb about what we're doing every day. And I think the idea of this effort to look at the design and give people choices is based on the notion that we're the product. And a lot of people just get online and they do whatever they're going to do without recognizing that the things that are coming their way have been chosen by the companies and their machines.
And so, to me, in simple terms, I would like to see more options, so that people begin to realize it doesn't have to be this way. Some people are going to not care. Some people are going to maybe even go to a darker place. But a significant number of people, I think, if they're informed that you have a choice of how the algorithm sends stuff your way, are going to opt for something other than what they're getting now. And to me, that's a good thing. And the fact is, the companies — again, we're talking about a handful of companies — have been so resistant to opening up the space for these middleware companies to come in and offer a service. Again, the marketplace: you pay 5 dollars a month, and you have a middleware company that says, these are your choices. That ought to be an attractive option. The companies themselves are not so thrilled with it, the big social media companies, because right now it's working for them.
Daphne: Actually, can I? So, I have a piece that's, like, of the moment for that, and we talked about this on the phone. It used to be that when I had this middleware conversation with smart people, a lot of people whose values are broadly liberal, my friends in San Francisco, would say, wait a minute. What I don't like about this is that users can choose the all-hate-all-the-time version of Facebook, or the all-lies-all-the-time version. Like, people will wind up in echo chambers. And I would rather, like, at least now, Mark Zuckerberg is prohibiting that stuff, so we need Mark on that wall. Like, we would rather have the centralized control under one corporate monarch than the chaos and bad speech that could come with middleware.
And I feel like we're in a moment right now where that was already changing because of the success of Mastodon and Bluesky, because of these real-world proofs of concept that you can do this. But also, the idea that we're better off with Mark in charge has, I think, really faltered. So, I think that objection to middleware is, I assume, going away.
Mike: I would just say, if I can say one sentence here: you know, we can talk about Bluesky or Mastodon, or we can talk about Parler and Truth Social. Facebook has three billion users. YouTube has two and a half billion users. Instagram has two billion users. WhatsApp has two billion users. That's the game. It's those four companies, and add TikTok, with a billion users. I'm not saying the other things don't matter. I want to get the big platforms, where everybody's spending time, to behave better. Those are the companies that are driving the train, and those are the ones we need to be really focused on. We're not going to solve everything. There's nothing perfect here, but we've got to be attentive to what it's going to take to make Facebook a more responsible social media platform.
Tess: I would agree with that in part. And I would also push you on whether quantity is the right metric in terms of some of the harms we're looking at, especially to our democracy, where some small platforms might have outsized influence. But instead of me pushing you further, I think it might be time for us to open up to the audience for Q&A, who have all been waiting very patiently, and the floor is yours.
Audience Member 1: Thank you so much for this conversation. So, I studied platform regulation, from Italy, so I don't really agree with, like, Keller's framing of the problem, of, like, platforms as passive entities, somehow. I think one of the real novelties of platforms, in comparison to normal companies, is, like, an unprecedented regulatory independence, and, like, they have a lot of agency and ways, you know, they own the software infrastructure, and they can kind of counter the law with that. So, that's something that I think should be mentioned. And the second thing is, like, when we talk about content moderation, I don't think it's really about content moderation actually, you know? Like, when you make laws, you cannot — some people say you're actually scratching the surface. And there's, like, a growing body of literature with arguments, I think, in the U.S., that says, well, you know, there is a history of regulated industries in the U.S., and maybe we should interpret these not as companies, but as, like, planetary-scale, for-profit administrative authorities. And the solution to that is really the public utility doctrine. So, like, that will lead to kind of a separation between ownership and control of the platform. And so, I want to know your thoughts about that.
Daphne: Thank you. I understood the first question, and I might need a clarification on the second one. So, if I gave the impression that I think platforms are passive, I hereby heartily retract that. You know, certainly at Google, we were very actively trying to decide how to rank search results. It would be really useless if we did not do that. And, you know, in the U.S., Section 230, which is this much-criticized immunity for platforms, was enacted with the goal of getting platforms to go out there and moderate content, to have the kind of agency that you described. And I think it's been quite successful in that, and indeed, in the Digital Services Act, which the EU adopted a couple of years ago, they added an article, I think it's Article Six, to try to make clear, like, yeah, go out and moderate content. We want you to do that. That's not going to threaten your immunity in cases that are about illegal content.
So, the idea that platforms have agency, I think, is, like, very much baked into the law here, baked into, I mean, obviously, everything they do. But also, when I talk about them being proxies for state power, you're not a very useful proxy for state power if you don't have any agency, right? If YouTube just passively showed you every upload in chronological order, then it wouldn't be very useful for states to come along and ask them to do something different. I am being, to be clear, a little bit flippant about the proxies-for-state-power thing, and this is very much driven by Mark Zuckerberg's statement a month and a half ago where he said, oh, I got bullied so much by the Biden administration. That is nonsense. He has good lawyers. He made his choices, and now I'm going to use my masculine energy to set the policies I really want, which happen to be the anti-immigrant and anti-trans policies that the new incoming president wants. That's the kind of proxy for state power I have in mind.
Audience Member 2: So, Daphne, you talked a little bit about how, you know, we shouldn't be mandating risk assessments and audits until we have a risk assessment or audit framework. And I think the challenge I've seen, sort of looking at the history of the financial sector and even the FDA, is that you kind of have to regulate that and kind of say there needs to be one to kind of push the standards bodies and push the orgs to create one, and it sort of has to be this iterative thing. So, I guess, how are you balancing that? I mean, are you struggling with some of the legislation that's coming out around that because there isn't a perfect standard yet, but it's a bit of a Catch-22, domino thing?
Daphne: Yeah, so, I'm not necessarily against risk assessments and audits. You know, to be clear, I'm against laws that say you must mitigate risks, and the risks are the following: you know, suicidal ideation, health, you know. I'm not coming up with the list. But where the state enumerates the list of which kinds of lawful but dangerous and harmful speech the platform has to suppress in the name of risk mitigation, that, I think, as courts have held a bunch of times, is a U.S. First Amendment problem.
But the risk assessment and mitigation model that's in the Digital Services Act, in Articles 34 and 35, is one I used to be pro and have been backing away from. Basically, when the DSA was pending, the explanation of the drafters, who I think are straight shooters, was: this is about assessing systems. There is no government authority involved in this process that would allow us to come along and say, oh, your content moderation now creates too much risk to democracy because of this lawful but harmful disinformation, so take that down. So, their position was, this is an authority that in no way lets us tell you how to moderate content.
And that sounded fine to me — maybe, well, whatever, there might be some waste involved. But then, once the law was passed, a lot of people who are actually complying with the law and just reading it were like, no, I think what this says here is that they can tell you to take down lawful but harmful content because of the risk that it's creating. And so, there's this open question in European law right now about what the answer to that is. And my concerns with it — I could go into great detail about the laws on that, but I won't — my concerns with it depend on the answer to that question.
Mike: Let me just jump in on that question in a different way. Lawmaking and regulation are an iterative process. To make smart laws, you need to know what challenges you're dealing with. And so, risk analysis in the first instance is, companies are going to tell you what they tell you. Some of that will be illuminating or illustrative. They may not tell you the things that are really the greatest risks, and so a regulator has to be, on the one hand, mindful of not overdoing it and having companies spend lots of time filling out forms that don't get you anywhere. I'm very sympathetic to companies who say this becomes an administrative nightmare. But at the same time, if you're really going to be smart about developing a sensible regulatory framework, you need to know what you're dealing with. You need to know what you're up against. So, day one, you can't know what you're actually going to be doing. You need to go out and gather some data from the companies and others, assess what's really important, then go back to the companies and say, these are the things we're hearing about, we need more from you, and at some point, that turns into a smart regulation.
So, I believe that the system, if it's done right, can really make a difference here. And it's a number of things we're talking about, in combination, but with the notion that there's a serious regulatory body, where people get up every morning and say, this is my job. I come out of this industry. I know this industry. I know all about drugs, I know all about airplanes, I know all about social media and tech, and I'm going to help figure out what sensible regulation looks like.
Audience Member 3: Thank you all for being here today. We didn't really get to the question of, like, is there space to re-imagine the First Amendment, or any of the amendments? I think there's a sense that the United States is very individualistic, and that comes out a little bit in the way that we talk about individual rights. The Second Amendment is so important that my guns are more important than children dying in schools, or, like, my right of free speech is more important than, like, democracy at stake. So, I think now is a good time. It's a sensitive topic, but I think now's a good time to be thinking about, like, are we prioritizing our individualistic rights over community?
Mike: So, the United States is at one end of the spectrum on issues of speech. We have some very narrow limitations, but by and large, the First Amendment, as interpreted over 250 years, says speech is more important than preventing, let's say, racial or gender discrimination.
The international law of human rights takes a somewhat different view. It says that if something is going to promote hostility, discrimination or violence, states can control that. You're balancing rights, and we are at one end of the spectrum. I could debate either side, but if you're in the United States, the First Amendment is what it is. And so, it's sort of hard to say, well, we're going to do something different, until somebody decides to amend the First Amendment or interpret it differently. But the question is, in my classes, I actually have a role play where I do that, you know, because the rest of the world looks at us and says, why are you letting all these really horrendous people out on the streets demonstrating against, you know, Jews or Black people or gay people? That's our system. That's our First Amendment, and the value of it is we do allow diversity of ideas. That's the theory from 240 years ago. But the rest of the world thinks we're a little nuts, honestly.
Daphne: So, I spent a lot of time working in other countries, and maybe I've kind of gone native. I don't know, a lot of their speech laws seem kind of okay, right? Like, if somebody in the U.S. litigated and litigated to the Supreme Court and got them to reverse things that they've said pretty recently and move the needle so that we could prohibit more hate speech, would that be so bad? That sounds potentially fine. Move the needle on what counts as incitement to violence, or what kind of speech counts as, I was going to say, defamation. I don't want to change that one, actually. But, like, there are all of these lines between prohibited speech and First Amendment-protected speech that the court could move, and that wouldn't necessarily be bad.
But if your theory of change is, let's litigate cases to this Supreme Court until it changes, like, 100 years of First Amendment precedent, that's going to go really slow, and that's maybe an important career for a bunch of people. But, a) I don't think there's any scenario where that kind of litigation could have changed the political moment we're in now, because so much of the speech that's harmful to democracy is definitely in that legal category, and you'd have to move the needle so far to change that. The other thing is, unless that happens, unless we move the needle on what is actually prohibited or prohibitable under U.S. law, then all the talk about changing Section 230 to solve those problems is misplaced, because an immunity doesn't matter if, at the end of the day, nobody's actually liable anyway.
And so, I think there are a lot of misconceptions. Politicians routinely think speech is illegal that actually isn't; there are studies on this in Denmark, but I'm confident it's true in the U.S. as well. And so, they think if we get rid of immunity, then platforms will have to take down that speech. This is actually something The New York Times had to run a retraction on a couple of years ago, saying, oh, we said that Section 230 is the reason hate speech is online. Actually, that's the First Amendment. Like, there are just a million examples of that that we have to take seriously in order to figure out a meaningful way forward.
Mike: Can I just add a PS to what I said earlier? We're talking about the First Amendment because it's about what the government cannot do. There is absolutely nothing to prevent a private company called Meta, or another one called Alphabet, which owns YouTube, from taking down speech that is derogatory towards gay people or Hispanics or Jews or Black people or women. The internet is awash in horrendous content; it's part of what I talked about at the beginning. Our society is polarized. We're at each other's throats, and all of those prejudices are just swimming around, and there's nothing to prevent those companies from saying, we're going to make the judgment ourselves. The First Amendment doesn't apply to us. We want to have a healthy internet. We want to be doing what's in the public interest, and having all that stuff on there is not actually helping our society. They could do that, but they're not, because their business model says to them that's part of what people are going to see.
Daphne: How does a regulator solve that, if the regulators still can't make them change that?
Mike: It's for the companies to behave responsibly, and they're behaving totally irresponsibly and have been in the face of everything we see going on in our society. It's really a scandal.
Audience Member 4: Most of the conversation about potential regulation is focused on content moderation and the boundaries of the First Amendment, and we haven't spoken very much about the property vector: the fact that the platforms just declared that all of the data from our interactions online is theirs, and that they can use it any way they want. And I wonder if it's just too late, whether that property grab was so early and so fundamental that it's too late to talk about it now. But it seems like when we talk about the business model, they're grabbing all of that for free, not just what happens online, but also often offline, through other apps and other tracking things like Nest and Google Street View. It seems like that's at least as important a part of the secret sauce as the algorithms themselves, and could that be a potential place to regulate?
Daphne: I mean, I think it's definitely not too late. I'm trying to think when California passed its big new privacy laws, maybe 2020 or something? There is all the room in the world for America to do better on privacy and data protection for internet users in ways that go directly to the point that you just made. We just can't muster the political will to do it at a federal level.
Mike: And in part, again, the companies are not particularly eager to have that kind of regulation. They're selling our data. They're making money on the fact that they know a hell of a lot about us. And they're going to the advertisers and saying, we can target groups and we know all kinds of things about them. That's part of the business model. So, it's totally within the companies' prerogative to say, we are going to be more careful and we're going to support national privacy legislation.
Daphne: They do.
Mike: Well, they do and they don't.
Daphne: So, I come from an intellectual property background, and I think treating information as property gets really kind of dangerous really fast, because it works differently, right? Like, it can be shared without taking it away from someone. It's iterative, and we stand on the shoulders of giants, et cetera, et cetera. So, I am suspicious of property as the regime, but, you know, that's a, like, legal academic answer. If the question is, can you give users rights to say what can be done with their data when they give it to the company? Yeah, we can do that.
Audience Member 5: I just want to take up my Italian friend's invitation in two ways. One, I think you asked about public utilities, and you and Daphne were going back and forth on this. One thing to look at would be the Moody decision and the way in which the court talks about the common carrier argument the state made. So, you may be interested in reading that, though it's going to be really hard in the U.S. to advance such a claim. But I want to invite the panelists to talk about other forms of regulation. You talked about transparency. You have a question here about data protection, or property. What other kinds of reforms do you think are achievable? And I'm thinking of structural ones, since some of you have been talking about incentives.
Mike: So, one example: there are some procedural safeguards. Companies say, you know, we have terms of service. They throw out all kinds of things that sound terrific, but you actually have no idea what they mean. And so, it makes sense to begin to look at some of those assertions and evaluate how they're actually doing. Again, more information about that, a more in-depth look at the assertions and promises they're making, and whether there are procedural safeguards that can make sure those things actually have meaning.
I think there's actually a fair number of things in that category, and there are some laws we haven't talked about. My colleague Mariana has identified 25 laws, in various U.S. states, in various countries, and in the DSA, where things like that are being tested. We're at an early stage. I can't say this strongly enough: we're in the early days of a still pretty embryonic industry. You know, these companies are 30 years old or less. This industry has grown like crazy, and now we have to get our arms around it. And some of the states and some of the foreign governments and the EU are starting to tinker and look for ways. One of those ways is these procedural safeguards.
Daphne: So, I actually, I wrote an op-ed in The Hill encouraging Congress to emulate the Digital Services Act in several respects. One is having different rules for the very biggest companies. I think we're on the same page there. Another is being sure to differentiate between different parts of the technical stack on the internet. So, you're not passing a law with Facebook in mind but then it applies to Cloudflare. You know, looking at that and making different rules that make sense for different technologies.
And then another is the procedural stuff: if you're enforcing speech rules, be clear what they are, give notice to affected users, have appeals, but maybe not as many appeals as the Digital Services Act. But all of that is different from enforcing terms of service, if what we mean by that is enforcing the content moderation rules that the platform said it was going to enforce. That has the same problem I was talking about with transparency. And actually, a district court in California just issued a ruling making this same point, saying that if you take the inevitably ambiguous-at-the-margins rules in platform speech policies, and then you have a government enforcer who can come along and quibble about them in front of a court, that creates, I think it said, unlimited discretion for the state to prefer whatever interpretation of homophobia it wants, or whatever interpretation of Indigenous it wants, et cetera, et cetera.
I actually testified on this question in a district court in Jerusalem in 2012, which was a really interesting experience. I mention it just to illustrate how long this question has been around: can you just use contract law? My answer, and that of the district court in California, is no, not if that brings in state interpretation of speech rules.
Ryan: Great. Thank you very much, and please join me in thanking the panel.
Tess Bridgeman: This episode featured a recorded discussion from NYU Law's March 19, 2025 forum, titled Regulating Social Media: Is it Lawful, Feasible, and Desirable?, co-moderated by me, Tess Bridgeman. This episode was produced by Maya Nir with help from Clara Apt. Special thanks to our guests, Daphne Keller and Michael Posner, and our NYU Law colleagues, Michael Orey and Ian Anderson. You can read Just Security's coverage of social media platform regulation on our website by following the links in this show's notes. If you enjoyed this episode, please give us a five-star rating on Apple Podcasts or wherever you listen.