The Just Security Podcast

How Should the World Regulate Artificial Intelligence?

February 2, 2024 | Just Security | Episode 54

From products like ChatGPT to resource allocation and cancer diagnoses, artificial intelligence will impact nearly every part of our lives. We know the potential benefits of AI are enormous, but so are the risks, including chemical and bioweapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems. 

Policymakers have taken steps to address these risks, but industry and civil society leaders are warning that these efforts still fall short. 

Last year saw a flurry of efforts to regulate AI. In October, the Biden administration issued an executive order to encourage “responsible” AI development. In November, the U.K. hosted the world’s first global AI Safety Summit to explore how best to mitigate some of the greatest risks facing humanity. And in December, European Union policymakers reached a deal imposing new transparency requirements on AI systems. 

Are efforts to regulate AI working? What else needs to be done? That’s the focus of our show today. 

It’s clear we are at an inflection point in AI governance – where innovation is outpacing regulation. But while States face a common problem in regulating AI, approaches differ and prospects for global cooperation appear limited. 

There is no better expert to navigate this terrain than Robert Trager, Senior Research Fellow at Oxford University’s Blavatnik School of Government, Co-Director of the Oxford Martin AI Governance Initiative, and International Governance Lead at the Centre for the Governance of AI. 

Show Notes: 

  • Robert Trager (@RobertTrager)
  • Brianna Rosen (@rosen_br)
  • Paras Shah (@pshah518)
  • Just Security’s Symposium on AI Governance: Power, Justice, and the Limits of the Law
  • Just Security’s Artificial Intelligence coverage
  • Just Security’s Autonomous Weapons Systems coverage
  • Music: “The Parade” by “Hey Pluto!” from Uppbeat: https://uppbeat.io/t/hey-pluto/the-parade (License code: 36B6ODD7Y6ODZ3BX)
  • Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)

Paras Shah: From products like ChatGPT to resource allocation and cancer diagnoses, artificial intelligence will impact nearly every part of our lives. We know the potential benefits of AI are enormous, but so are the risks, including chemical and bioweapons attacks, more effective disinformation campaigns, AI-enabled cyber-attacks, and lethal autonomous weapons systems. 

Policymakers have taken steps to address these risks, but industry and civil society leaders are warning these efforts still fall short. 

Last year saw a flurry of efforts to regulate AI. In October, the Biden administration issued an executive order to encourage “responsible” AI development. In November, the U.K. hosted the global AI Safety Summit to explore how best to mitigate some of the greatest risks facing humanity, and in December, European Union policymakers reached a deal imposing new transparency requirements on AI systems. 

Are efforts to regulate AI working, and what else needs to be done? That’s the focus of our show today. 

Welcome to the Just Security Podcast. I’m your host, Paras Shah. Co-hosting with me today is Brianna Rosen, Just Security Senior Fellow and Strategy and Policy Fellow at the University of Oxford. 

Brianna Rosen: Thanks, Paras, for having me on the show. It’s clear we’re at an inflection point in AI governance, where innovation is outpacing regulation. But while states face a common problem in regulating AI, approaches differ and prospects for global cooperation appear limited. 

That’s what we’re discussing today, and there’s no better expert to navigate this terrain than Robert Trager, Senior Research Fellow at Oxford University’s Blavatnik School of Government, Co-Director of the Oxford Martin AI Governance Initiative, and International Governance Lead at the Centre for the Governance of AI. 

Welcome, Robert. 

Robert Trager: Thanks guys, so glad to be here. 

Brianna: Robert, a patchwork of AI regulation is emerging, from the EU AI Act to the U.S. executive order to frameworks from China, Singapore, the OECD, G7 and beyond. Various approaches have emerged not only at the national level, but also at sub-state levels and across different sectors. For example, last year, more than 25 states introduced AI-related legislation in the U.S. alone. Can you give us an overview of the AI governance landscape right now? Are areas of consensus emerging? Or is there a risk of regulatory fragmentation, and what would be the implications of that?

Robert: Yeah, thanks for that question. There's really a lot to say there; there's so much that is going on. In a way, you know, AI is such an amorphous term, and it always has been, and it's almost impossible to say, like, what's everything that's happening. One of the things that's been sort of a focus recently is regulation of what sometimes goes by the name of frontier AI, which is very much a contested term, or some version of advanced AI, which people define as highly capable AI models that perform a wide variety of tasks and match or exceed the capabilities of present-day advanced systems. So, you know, that's one area that's sort of receiving a lot of attention. And some people don't like the term frontier AI; you might think about advanced AI or something like that. But there's a ton of activity just around that. 

There are various national processes. One of the things that the AI Safety Summit in the U.K. did is to really normalize the discussion of security risks when it comes to advanced systems. That was something that some people were worried about beforehand, but not everybody was worried about, and some people still aren't worried about that. But at least, I think that those discussions have been normalized. And with that goes a variety of international processes and a variety of national processes.  

So at the national level, you have various attempts to impose some form of regulation. And in many cases, really, it's still voluntary regulation. The leading industry players are being asked to self-regulate, in effect, and we can talk about how effective that is or isn't, as the case may be. There's, of course, the EU AI Act, which is moving beyond that a little bit, although, even though it imposes obligations on the providers of AI systems, in many cases the enforcement is voluntary, and checking to see if they've actually met requirements is actually voluntary. So, we have a variety of national processes. Also, in China, we've had a few laws passed there that relate to AI; actually, China was one of the earliest adopters of binding regulation. And then we have the executive order and things surrounding it in the United States, which, again, is not a law, it's an executive order. So, it doesn't have the force of law, but it's leveraging some laws and the power of the executive in order to place some disclosure and other requirements on AI firms. 

And then there are international processes, and there's a variety of these in the G7 and the Hiroshima Process. There's a lot of discussion at the OECD, and there's the Council of Europe formulating some principles, which, when adopted, would have the effect of law. And the UN, of course, is now getting into the game and has the high-level advisory body, and that's been very high profile. It's clear that Secretary-General Guterres is very interested in AI, he wants to do things on AI, he has said as much, and what that will be is still very much uncertain from a governance perspective. 

So the high-level advisory body has actually released its preliminary report. That preliminary report defines some essential functions that governance at the international level should fulfill, but it hasn't converged on mechanisms of governance for doing those things. So exactly what governance should look like is still very much under discussion. I think I covered a lot of what's going on in the world, but not everything. There are so many things that are happening.

Paras: Yeah, that's a lot to cover, there's a lot happening. Where do you see the fault lines in these different approaches? Where are they diverging the most?

Robert: You know, I think it's actually too soon to say how much they are diverging. In terms of governance, where the rubber really hits the road, we don't really know exactly what they're going to do. And at the level of principle, there's extraordinary convergence. Most of the statements that we've had so far have been about principle, and now we're just at this kind of precipice moment, which is very difficult for member states and international actors. And we've seen that before. I mean, if you think about climate change, you know, there was a similar moment in the climate change movement, where people realized we needed to move from sort of the principle of, this is happening and we need to do something about it, to, well, what are we really going to do exactly about it? And that transition is really hard. It's really hard for states to actually reach consensus on what they should do. And so that's when there's going to be more divergence. 

But right now, we haven't even moved to that place. I mean, where you can see the beginnings of divergence is some states will want things to be regulated through the UN, that will seem like a good idea to some states. Some states will want broader fora that include them, other states will want narrower fora that include them and exclude some other states. So, that's the kind of thing that you can see already. 

And so right now, for instance, there is a push to do some regulation through the UN, or regulation may be too strong a word, actually, some governance, let's say, through the UN, but some member states are pushing back against doing some of those things through the UN. So, that's where you start to see the divergence. But that, I think, is not out in the open yet, because the discussions, really, in terms of practice, are not out in the open in many of those cases.

Brianna: Another key question concerning this issue of regulation and incentivizing compliance is this idea of regulatory capture, which we know is a huge problem in this space, where a handful of these major tech companies control the data and the algorithms that are governing these products. How can governments ensure that there's more transparency in this space, and ensure that the companies that are developing these technologies are not the ones who are writing the rules of the road? What role do government, the public, civil society, and academia have to play in preventing this type of regulatory capture? 

Robert: Yeah, I think that's a great question. It's definitely the right question to ask. Regulatory capture is a real thing. It's happened, obviously, it's happened so much, and in many different industries, and industries have a lot of information. In this case, there's a huge imbalance in information between industry and government. And so, governments have to go to industry to learn things. And that gives the opportunity for industry to have a huge amount of influence on the political and regulatory process, that and other, more direct forms of influence that industry has. 

So, I think we have to ask this question. I think that when it comes to big tech, if you will, there are some market dynamics which also can lead to industry concentration. And so, it's even a heightened danger, in this case, that we could end up with a real concentration of power that would not be in the interests of people overall. I think we also, however, shouldn't assume that any call for regulation represents industry capture.  I think there are some actors who are calling for regulation for reasons that have to do with the bottom line at their firms. I really do. I think that's, you know, how they're thinking about it. Or maybe they're thinking, well, you know, we're going to get some form of regulation. So, let's make sure it's a form of regulation that we think is a little better than what we might get otherwise. I think that's going on. 

But I also think there are actors that really feel the need for regulation. And, I think some of these folks, you know, even if they trust themselves, they think that they can do it, you know, they can create a product and make it safe. And, you know, people who lead companies tend to be sort of confident folk. And I think that they have a lot of confidence, often, in themselves, although not always, but they're worried about the other company. And they feel like, well, we better have regulation so that I don't have to worry about what those other companies might do. That's a long way of saying, I think that we should be worried about it. But I think that we can't be so worried that we don't take proposals for regulation seriously.

Brianna: We've spoken elsewhere, Robert, about the summit after the summit in the U.K. context and what we should be looking at next. But regardless of whether you think that the U.K. AI Safety Summit was a success or a failure, it does seem like we're in the cycle of summit after summit after summit on AI, with a Korea summit coming up in May 2024 and the one in France coming up in November. I'm wondering, in your view, what should the next global AI summits seek to achieve, and what can policymakers do now to lay the groundwork for success? So, we're not simply coming up with a laundry list of broad principles, but actually starting to operationalize those principles in a meaningful way, in the ways that you've described earlier? 

Robert: Yeah, well, one of the funny things about having summit after summit is that there's kind of a good side to it, and a not so good side to it. The good side is that when you feel like you didn't achieve everything that you wanted to in one summit, you're hopeful, because you think, oh, well, I have another summit to do it. And it's only in six months. So that's great. The problem with that, too, is that there's a lot less pressure to achieve what we need to achieve in any summit, because political leaders can kick the can down the road. 

So, it's great that we have this summit process. I think it can play a useful role because it is a place where sets of actors are coming together that probably wouldn't be coming together in any other fora. And so, I think it does offer something different from the other fora that are out there, and in that sense is useful.

In terms of what the next summit should try to achieve — one understanding of what was achieved in the U.K., I think there were a few different things that were achieved. But, at the end of it, there was a sort of agreement on the need for global rules of some kind. But very little discussion, actually, of what those global rules should be, and how we could have some kind of international architecture to instantiate them, and so that's something for future summits. And that is, again, a really challenging thing for future summits, something that I think, you know, if you talk to the policymakers in Korea or in France — the two hosts of the two upcoming summits — they would be a little bit nervous about, because it's very, very hard to have those conversations and come to those concrete resolutions. 

But I think there's actually a lot of other things that they can also do that would be very useful, some of which involve, kind of, doing an update over what was done in the last summit. So, for instance, at the end of the last summit, there was the agreement to have Yoshua Bengio do a State of the Science report, sort of, you know, where's the science? What risks does that imply, from a kind of consensus scientific perspective, similar to the IPCC in the climate area? And we can have an update on that, you know? Where's that gone? What is the consensus? You know, that'll be a useful thing that the next summit can do. 

Something else that we got in the last summit was all these companies talking about responsible scaling protocols. So, companies thinking, well, how do we want to regulate? How do we want to scale up these technologies? And they came out and they said, you know, at least a bit, what they were going to consider as sort of risky, and how they were going to take different mitigations when they thought that something was risky in some way. And so, I think we can do a check-in on that. We can say, okay, well, you know, all these companies said that they were going to be doing these things. Well, have they really put procedures in place for making that real? Have they actually defined the terms that were vaguely defined before? Have they, you know, defined what constitutes a line where a set of mitigations becomes needed, according to their responsible scaling protocols? 

You know, we talked about interoperability, and there really isn't interoperability across the scaling protocols. And even I think within firms, there's probably disagreement about, you know, what would constitute a severe risk and less severe risk. And so, you know, even though we have these responsible scaling protocols that are entirely voluntary, it's not really clear that they are highly actionable. But that's a challenge for companies. And I think they are working on it. And so, the AI summit is a place where we can check in on that, on that progress.

Brianna: Robert, I'm curious about your views on whether AI governance is moving fast enough. So, when we talk about AI, a lot of times we talk about long-term future risks. But we all know that many of these risks are in fact already here today. And there are certain areas, such as military applications of AI, where the problem seems increasingly urgent, whether you look to Israel's use of AI for accelerated targeting and operations in Gaza, or the U.S.’s plans for the Replicator drone program, which is just in the next year or so. Are we doing enough, and are we doing it quickly enough to try to rein in this technology? And if not, what other steps can be taken?

Robert: Yeah, well, the area of lethal autonomous weapons in particular has been fraught, of course. Depending on how you count, it's more than a decade of negotiations in the CCW, the Convention on Certain Conventional Weapons at the UN, and those negotiations haven't produced the sorts of binding international agreements that the advocates of international governance had hoped for. 

So, I think we needed to learn from that, because that's been a slow process. And, I think there are reasons for that. It's really the primary producers of autonomous weapons systems who have resisted regulation in the context of the CCW. And, I think we can expect something similar when it comes to the states that are at the frontier of technologies that can be dual use. And in fact, in general, it has been incredibly difficult to regulate, and there isn't really a single convincing example, actually, when you really drill down into the details, of a case where major powers are limiting themselves, particularly when it comes to the development of a technology for which they don't have military substitutes. 

So, you can think of cases like the Biological Weapons Convention, which, you know, is banning the development, stockpiling, use, etc., of a technology. But that's a case where, even in its conception, that is, when the Nixon administration was thinking about the Biological Weapons Convention, they thought of biological weapons as a sort of poor person's nuclear weapon. They thought of nuclear weapons as a substitute — only better, from their point of view — for biological weapons. So, they didn't like the idea of the spread of nuclear weapons. They didn't like the idea of the spread of biological weapons. And so, it made sense to have a Biological Weapons Convention. 

But that's not a case of limiting the development of a system that they didn't have a military substitute for. When you get into the details, there aren't such convincing cases of that. So, everybody knows arms control is challenging, but it's actually more challenging for dual-use technologies like advanced AI, for which there aren't military substitutes. That doesn't mean that we can't do anything. You know, the solution for nuclear weapons was non-proliferation and norms of use.  

And maybe we can do some things like that for AI. We can certainly think about non-proliferation, and I think the area of lethal autonomous weapons is, you know, a really fruitful area to think about, because some of the worst misuse cases when it comes to autonomous weapons have to do with their proliferation. If it's just, you know, the U.S., China and Russia, well, that's not a perfect world, I mean, far, far from it. But if it's lots of countries around the world that have autonomous weapons capabilities, and sort of swarm capabilities, if we're going like one or two generations down the road, you know, that's probably a more dangerous world from the point of view of security. So, I do think non-proliferation is very interesting in this case.

When it comes to norms of use, we have this additional challenge over the nuclear case, because when a nuclear weapon goes off, you usually know that it's happened. But you often don't actually know that an autonomous technology was used; it's always contested. They can say, oh, it was just drone technology, there was a human controlling it, or something like that. And if it's advanced AI, they can say, oh, no, no, we weren't doing anything, you know, that was just some system, but whatever the set of restricted systems is, it wasn't that. And so, it's hard to know if the line was crossed, and when it's hard to know if the line was crossed, then it's hard to get everybody to agree to the line, because some actors will feel, well, maybe they'll agree to the line, but those other actors, you know, they're going to cross it and say that they didn't. And so, the first set of actors will say, well, we're not going to agree to it, because we don't trust the others to agree to it. And so, there's this additional challenge in norm creation in the space. But as I say, I do think that there are things that we can and should be doing in the armaments space.

Brianna: And of course, there's been talk about creating some kind of IAEA for AI. But as you point out, Robert, there are different challenges associated with AI that don't exist in nuclear non-proliferation. We can't track enrichment stockpiles like we can for nuclear non-proliferation. There's the problem of attribution, which makes it really hard to think about when those lines are crossed. Is that really the best model for thinking about AI governance, or would you say there's a different type of model that would be more applicable?

Robert: Yeah, I think right now, even though a lot of people talk about IAEA as a model, a lot of those folks don't really have a clear understanding of what the IAEA does. So, the IAEA took on the mandate of enforcing the nuclear non-proliferation treaty. And so, it's doing that. It's not really enforcing regulation between the declared nuclear powers very much. And so, I think all these folks that say, oh, we need an IAEA in AI, I think they have in mind something which actually is governing relations between the U.S. and China and other major AI powers. So, I think, you know, sort of from the get go, and also for other reasons, in addition to that reason, the IAEA analogy isn't a particularly good one. 

And maybe the even more important thing to say is that I think it's unlikely that a frontier AI state, like, let's say, the U.S., is going to say, let's have an international organization that's interacting with OpenAI and going into its offices and examining data centers that it's using and making sure that it's doing everything according to these international regulations. For one thing, given that this is a dual-use technology, that would be a highly proliferating moment, to have an international organization that was governed internationally going in and doing something like that. And everybody has always thought that the IAEA has had quite a few spies in it. And so, I think, you know, states would reckon with a lot of information being made available through that path if there were such an international organization. So, I think it's unlikely that security communities in frontier AI states are actually going to allow something like that. 

Now, is there something else that we could do? You know, I think there are some other models. I happen to like the model where we think about doing similar things to what's done in some other industries, like the International Civil Aviation Organization and the civil aviation industry, or the International Maritime Organization, or the Financial Action Task Force, and, in fact, other industries. And what they have there is an international body that is not auditing firms in states; it's auditing the regulatory apparatus of states to be sure that those states are regulating their own firms in an appropriate way. So, I like that idea. I think there's a lot of potential there. That's the kind of thing that I think would be more palatable for states, including frontier AI states.

Brianna: There's been so much hype surrounding AI, and there's been so much discussion on AI governance, and what are the ethical principles, the laws and the policies that should regulate it. Are there any areas in this conversation that you think have been missing from the debate? Is there anything that we're not talking about that we should be talking about instead? 

Robert: I think, you know, one of the things that's really hotly debated is whether we should just be talking about “near term things” and “long term things.” And the way that people talk about that, it's like, long term is just, you know, some people talk about existential risk or catastrophic risk, and that gets identified with long term, and let's hope that that really is long term. 

But, I think that there's a lot more to think about when it comes to the implications of the technology that isn't quite here today, but could be here soon. And I really wish people, when they were talking about these things, were wrestling with some of them a little bit more. So, you know, we have a whole set of concerns about the technology that's here today that, thank goodness, a lot of people are thinking about, and that's essential. And, you know, we have potential issues when it comes to catastrophic risks, let's say, in the future, but then we have, you know, other things that are maybe coming. Maybe ways of understanding society are going to have to change when, you know, we have the ability to predict in all kinds of ways what democratic electorates are going to do, or influence democratic electorates, through all these methods. You know, this year we have more elections than ever in the history of the world, and there's the impact of these technologies on elections today, but also, you know, what if it's the next generation of the technology in two years, what effect does that have on elections? I think, you know, that's the kind of thing that I wish people were talking about a little bit more, but I think it gets lost because of the way that people break down kind of near term and long term. 

Brianna: We’ve talked a lot about risk, but we haven't talked about how we can ensure equitable access to the benefits of AI, which are likely to be concentrated in a few powerful states and companies. And that's also, I think, a huge issue going forward. Not only how do we weigh risks against benefits, but how do we ensure equity in doing so? 

Robert: Exactly. I mean, that's an essential issue that everybody is talking about, but I don't think there's that much thinking about what's the institutional solution to be sure that we have broad-based access to the technology, and also broad-based voice in the governance of the technology. And I think there's another thing which is sort of interesting, in that often there's this idea that, well, regulation is maybe a kind of bar that will be hard for countries in the global majority to meet. And, you know, global majority is an interesting term in this case, because, you know, if we're talking about frontier AI countries, maybe that's just a few countries, so the global majority is like everybody else; it really is an extreme majority. 

You know, so I think many of those countries worry that, well, if there is an international standard, will it be the kind of thing that excludes them, because it's hard for them to meet it. And I think that is a danger that's really important to think about. And, you know, if we're going to have a regulatory regime, we need to be sure that doesn't happen. And in fact, you know, the International Maritime Organization or the Civil Aviation Organization, you know, they have actually really interesting and significant programs to help the regulatory systems of countries to meet international standards. 

So, I think, you know, that would be something that's very important. But the point I also want to make is that regulation isn't just a bar to be met. It's also a way to govern these technologies. And once we have some sort of global governance framework, then we can really talk about, well, what are the institutional aspects of it that are going to ensure voice and access for the global majority. But without some sort of global governance framework, I don't see how we have broad-based voice and access.  

Brianna: I suppose one challenge is just developing the right standards. And then the challenge after that is how you actually enforce those standards and ensure compliance, particularly given the difficulties of going back and checking AI, the pace and scale at which this type of work is occurring, and the difficulties of accessing the data and having, you know, transparency with the companies that own it, or the governments that own it — a whole slew of problems beyond just what is the correct regulation. And as you said, the difficulty will be going from those high-level principles and operationalizing them into something that actually impacts the day-to-day life of everyday citizens for the good, and protects their individual rights and data. 

Robert, if governments could do just one thing on AI regulation in 2024, what, in your view, should it be?

Robert: Phew. I don't know. That's a tough one, because I think we do need a slate of things, and I haven't really thought in terms of prioritizing all of those different things. So maybe I'll just take a little bit of pressure off myself and say this is the first thing that comes to my mind.

So, the thing that comes to my mind is, I think, right now there was sort of celebration when companies were releasing these responsible scaling protocols. But, as I guess I mentioned before, there isn't really a clear set of lines for, well, when are they in one regime in terms of the required mitigations and when are they in another? So, I think really actually defining what those lines are, and then specifying what the mitigations are that are required after each of those lines — that would be something that would be really useful. I'm not saying that it is the single one thing that should be done, but it would be really useful if governments could do that. 

Paras: Robert, this has been so helpful. Thank you again for joining the show. And we'll be following all of this at Just Security. Thanks again. 

Brianna: hanks so much, Robert. Great to have you on the podcast.

Robert: Thanks guys. 

Paras: This episode was co-hosted and produced by me, Paras Shah, and Brianna Rosen, with help from Clara Apt. Our theme song is “The Parade” by Hey Pluto. 

Special thanks to Robert Trager. You can read Just Security’s coverage of AI, including our ongoing Symposium on AI Governance: Power, Justice, and the Limits of the Law, on our website. If you enjoyed this episode, please give us a five star rating on Apple Podcasts or wherever you listen.