The Just Security Podcast
Strategic Risks of AI and Recapping the 2024 REAIM Summit
From gathering and analyzing information to battlefield operations, States are integrating AI into a range of military and intelligence operations. Gaza and Ukraine are battle labs for this new technology. But many questions remain about whether, and how, such advances should be regulated.
As political and military leaders, industry, academics, and civil society confront a rapidly changing world, how should they approach the role of AI in the military? This week, more than two thousand experts from over 90 countries gathered in Seoul, South Korea, for the second global summit on Responsible AI in the Military Domain (REAIM). The Summit focused on three themes: understanding the implications of AI on international peace and security; implementing responsible application of AI in the military domain; and envisioning the future governance of AI in the military domain.
This is the Just Security Podcast. I’m your host, Paras Shah.
Just Security Senior Fellow Brianna Rosen and Co-Editor-in-Chief Tess Bridgeman were among the participants at the REAIM Summit, chairing and speaking on several breakout sessions. Today, Brianna joins the show to share her key takeaways from the Summit, including how it could inform future efforts to build consensus and strengthen AI governance in the military domain.
Show Notes:
- Brianna Rosen (@rosen_br)
- Paras Shah (@pshah518)
- Tobias Vestner and Simon Cleobury’s Just Security article “Putting the Second REAIM Summit into Context”
- Just Security’s Artificial Intelligence coverage
- Just Security’s Diplomacy coverage
- Just Security’s Military coverage
- Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)
Paras Shah: From gathering and analyzing information to battlefield operations, States are integrating AI into a range of military and intelligence operations. Gaza and Ukraine are battle labs for this new technology. But many questions remain about whether, and how, such advances should be regulated.
As political and military leaders, industry, academics, and civil society confront a rapidly changing world, how should they approach the role of AI in the military? This week, more than two thousand experts from over 90 countries gathered in Seoul, South Korea, for the second global summit on Responsible AI in the Military Domain (REAIM). The Summit focused on three themes: understanding the implications of AI on international peace and security; implementing responsible application of AI in the military domain; and envisioning the future governance of AI in the military domain.
This is the Just Security Podcast. I’m your host, Paras Shah.
Just Security Senior Fellow Brianna Rosen and Co-Editor-in-Chief Tess Bridgeman were among the participants at the REAIM Summit, chairing and speaking on several breakout sessions. Today, Brianna joins the show to share her key takeaways from the Summit, including how it could inform future efforts to build consensus and strengthen AI governance in the military domain.
Brianna, welcome to the show. Thank you so much for fighting what I know is a lot of jet lag to join us. I want to start by zooming out. So, this is the second REAIM Summit, and last year's was held in the Netherlands, and it comes amid a number of other conferences and diplomatic efforts around the potential regulation of AI, including the Summit of the Future, which will take place next week in New York. So, what did the REAIM Summit seek to accomplish, and did it achieve those goals?
Brianna Rosen: Thanks so much, Paras, and thanks for having me on the show. As you said, I’m fighting a 14-hour time zone difference and serious jet lag. But let's get into it.
So, the 2024 REAIM Summit sought to build on the call to action from the previous summit last year in The Hague, and the blueprint for action was an attempt at this Summit to translate general principles for AI governance, principles at a very broad level, into more concrete plans for action and for implementing those principles in practice.
Now, unfortunately, only about two thirds of the government representatives at REAIM endorsed the blueprint for action, meaning that roughly 30 countries did not. The United States endorsed it, along with other allies, but China did not, for example, even though it previously endorsed last year's call to action. Russia also did not endorse the blueprint. And Israel, notably one of the few states actually using AI tools on the battlefield right now in Gaza, has not endorsed any of these government initiatives, nor the U.S.-led political declaration that was announced at REAIM last year.
So, in short, progress is slow and it's disappointing, if not surprising, because the principles in the blueprint are actually quite banal. They emphasize, for example, the need for human control and accountability, as well as the applicability of all relevant international laws to AI, including international humanitarian law and international human rights law. So, it's really the lowest common denominator of agreement amongst experts on what we need to achieve at the global level for AI governance.
And ultimately, for me, the failure of states to agree on such fundamental basic principles suggests that the framing around responsible AI remains contested at the global level, with democratically aligned states adopting a more values-driven approach than authoritarian regimes to mitigating potential harms from AI, both to individuals and society as a whole. But that lack of consensus should not be interpreted as a lack of concern, and this is something that I really want to emphasize. Interestingly, in one of the breakout sessions that I participated in at the REAIM Summit, Chinese representatives privately expressed concern about Israel's use of AI in Gaza, such as its reported use of AI decision support systems in targeting that have contributed to significant civilian harm. And that, to me, underscores a shared sense of urgency that the inability of AI governance frameworks at present to keep pace with advancements in technology is already resulting in significant real-world harm.
Now, I've been a little bit pessimistic about progress being slow, but REAIM, of course, is only one of several forums where discussions on military AI governance are unfolding. As you mentioned, the U.N. Summit of the Future takes place in New York in just over a week, and that will provide another opportunity to try to build consensus and address these challenges at the global level.
Paras: Thanks so much for that overview. There's a lot to unpack there, and at the end of the day, AI is also changing how states interact with their own citizens. So, these discussions are also broader than just military uses of new technology. What is the debate on military AI still missing? Where are the gaps, and how can states better build consensus around AI regulation?
Brianna: Yeah, it's a great question. So, AI changes not just how we use force, but also how we conceive of the use of force, and that's something that's fundamentally different from other types of conventional military technologies or the drone program, for example.
So, what we see with AI-enabled technologies — and I say technologies because it's really a whole suite of AI applications, not one technology, but rather a general-purpose technology with many different applications — is that the character of warfare is not only changing with these different AI applications, but that human perceptions of war are changing too. And so that's a significant shift.
Another shift is that AI is democratizing the use of force, allowing non-state actors such as tech companies, but also terrorist groups, to challenge state sovereignty in new ways. And all of this has the potential to undermine international stability and the rules-based international order, which are exactly the kinds of implications that have been missing, until now, from the debate on the governance of military AI. So, you know, you asked what's missing in the debate, and for too long, the debate has been focused on the tactical level rather than the strategic level. It's been highly focused on how do we regulate or ban lethal autonomous weapons systems, rather than what's the impact of AI on global strategic security, for example.
And it's also been focused on the weapons aspect to the detriment of other, broader AI applications, such as the use of AI decision support systems in targeting or even in decisions to resort to the use of force. And that's gradually changing. What was actually really encouraging to me — I was a little bit pessimistic earlier about REAIM — is that there was a general emerging consensus amongst the stakeholders there that the debate, for too long, has been focused on the wrong issues, that we need to be more forensic about the questions that we're asking, that we need to be more precise in identifying knowledge gaps, and that we need to be much more concrete in translating principles into action, focusing on specific use cases for AI and marshaling empirical evidence to test hypotheses and experiment with governance structures to see what works in the real world. So, that's the encouraging thing that came out of REAIM.
And you know, I'm just going to be very blunt. There are really no quick wins in this space. There are no easy ways to build global consensus around AI regulation. It's not going to happen overnight. We talked a lot at REAIM about the need for more robust transparency and confidence building measures to bridge this growing divide between what I call techno-authoritarian and techno-democratic states. And there are lessons to be learned here from the cyber and the nuclear domains and arms control regimes that we can apply to AI.
But what I really want to emphasize is that even in the absence of global consensus, there are steps that policymakers can take today, right now, to promote responsible AI use. Some of these came up in the panel that Tess and I were on, which was co-sponsored by the Carnegie Council for Ethics and International Affairs and the Blavatnik School of Government at the University of Oxford. What we outlined in that panel, and it will come out in an article in the coming days, is a couple of really concrete measures that can be taken.
So, what does that include? It includes developing and promoting national policies for AI governance, as well as mandating regular legal reviews of AI-enabled systems in the military domain. And that's at a very basic level of what states can do. Another thing that states can do is to start to redact and declassify policies and procedures for the use of AI in national security contexts. And I say national security contexts rather than the military domain, because I think it's really important here that we know what the policies and procedures are surrounding the use of AI in decision-making on the resort to force, for example, or in the intelligence community, where AI is being fed into intelligence processes that lead to targeting operations and combat operations.
So, it's really important that we don't just focus on the military domain in that respect. And I think the Biden administration is moving towards that. Hopefully, we'll see a version of their national security memorandum in the coming weeks.
It's also really important to start creating compendiums of responsible AI use cases. We talked at REAIM about the need for concrete use cases and about looking at governance mechanisms and testing hypotheses within those use cases. And it would be great if governments started to publish that, not only for transparency with the public, but also to share best practices for responsible use of AI in concrete use cases with allies and competitors alike. And alongside that, it's fundamental, it's absolutely imperative, that we start to develop shared interpretations of how international law applies in those cases. So, there's general agreement that international law does apply to AI, that IHL and IHRL and all the applicable laws apply in specific cases.
But, as we saw in the cyber domain, it's less clear how international law should be interpreted and applied to specific applications of AI. So, that's where we need to get the lawyers in the room, amongst like-minded states and competitor states, to start developing what those shared interpretations look like.
Additionally — and this is something that we spoke about with a lot of U.S. diplomats at REAIM, and something they're already doing — I think they should continue to sign up more countries to the political declaration, because that's an important document alongside the REAIM process, through which states can commit to do more to develop and deploy AI in responsible ways. And, you know, the U.S. has leverage over allies who have not signed up to the declaration, such as Israel, who may be among the states that are actually not adhering to these principles. So, I think that's something the U.S. should push for.
Yeah. So, those are a couple of things that states can do today. There's a lot more that we can talk about and that we'll write about in the coming days, but these are a couple of steps that states can take, in the absence of legislation, in the absence of international consensus, to start promoting more transparency and more confidence-building measures around AI norms that will help socialize these principles and increase the legitimacy and effectiveness of AI governance.
Paras: And shifting gears slightly, you talked about the focus on tactical versus strategic priorities, and you were on a U.K. government-sponsored panel about those strategic security risks of military AI. In your view, what are the primary risks that we should be concerned with when it comes to the strategic security environment?
Brianna: So, we talked about a number of risks on the U.K. government-sponsored panel at REAIM, but I'll just highlight three here, and those are escalation dynamics, interoperability, and the erosion of the rules-based international order. And I want to stress that these are by no means exhaustive. There are a number of other strategic risks that need to be considered, but these topics represent key concerns in understanding how AI affects adversarial behaviors, allied responses, and the rules of the road underpinning both.
Paras: So, I want to take each of these in turn, starting with escalation. How does AI change the traditional dynamics around military escalation?
Brianna: So, if we talk briefly about escalation dynamics, AI is presenting new challenges for escalation management because it alters how states perceive risk, how they manage crisis situations, and how they communicate intentions. What it is doing is introducing more uncertainty and unpredictability into escalation management. At the same time, unlike conventional military escalation, AI is changing the pace and the scope of decision-making, speeding it up in many cases, which potentially reduces the time available for human decision-makers to assess and react to emerging threats.
Now, when we consider the impact of AI on escalation dynamics, in my view, we're actually talking about at least three distinct issues, and we don't have time to cover them all here, but I'll just list them briefly. So, we're talking about how AI changes the power dynamics that fuel escalation, how AI contributes to conflict escalation by creating more uncertainty and unpredictability, and how AI itself is a cause of escalation, such as through an AI arms race or triggering the use of nuclear weapons. And I think it's important not to conflate these issues and to avoid overarching generalizations about AI and escalation, which is highly case-dependent. That, by the way, was a theme throughout the discussions at REAIM: that we have to stop both over-hyping AI technology and over-generalizing about either the risks or the opportunities, because the answer, as the lawyers will tell you, is that it depends. It really depends on the case that you're talking about and the specific context in which it's used.
So, bearing that caveat in mind, there are a couple of features of AI that complicate escalation dynamics that I'd like to draw out. As I said, it increases uncertainty and unpredictability in the escalation ladder, and that alters signaling and perception dynamics. AI-augmented strategic warning systems, for example, may not be able to distinguish between signaling resolve and deliberate escalation, and we know from multiple studies that large language models have repeatedly exhibited unpredictable, escalatory behavior patterns in simulations, even to the point of recommending first-strike nuclear use.
And compounding this is the fact that we're about to enter an era of mutually assured transparency, where adversaries not only anticipate each other's moves, but also know that their own plans are likely understood, and vice versa. So, for example, not only will we know that Putin will invade Ukraine several months ahead of time, but he'll also know that we know that. It may be that foreknowledge enhances deterrence, but on the other hand, it could also incentivize preemptive military action, which is something that keeps me up at night. And it could potentially incentivize such action even earlier than what we've seen to date, raising uncomfortable questions about evolving legal justifications surrounding the use of force under elongated interpretations of imminence. Can you imagine, for example, a legal justification for the use of force in self-defense several months in advance of an actual attack, based on an AI prediction? That's a scenario that keeps me up at night. And to be clear, I don't think states would actually say that the justification for the use of force is based on AI. I think they would say that it's based on intelligence, even if the intelligence is simply an AI prediction.
So, as AI leads to shifts in deterrence strategies in these ways, it may also incentivize earlier or riskier interventions in conflicts in different regions with uncertain outcomes. At the same time, AI introduces acute verification problems, which further complicate attribution and policy responses. For example, states could plausibly deny responsibility for attacks by claiming that a third-party actor had manipulated AI systems and algorithms or poisoned the data through cyber-attacks. These types of difficulties with attribution, which are often fueled by AI-driven misinformation and disinformation, pose significant risks for decision-makers who are seeking to respond to attacks without provoking an escalatory spiral. It becomes very difficult to know how to respond, to whom, and in what way.
And, you know, there are steps that can be taken to mitigate these risks. We talked a little bit about them on the FCDO-MOD panel, some of which I mentioned earlier — transparency and confidence-building measures. But it's a really difficult issue, and we need to do much more work on it. Part of the answer, at least on the attribution side, is ensuring that we have robust public-private partnerships in place to secure advanced AI systems from cyber-attacks, so it becomes more difficult for states to claim that a manipulated system led to an attack. But there's no really good answer to this problem.
Paras: Another key concern is how the U.S. and its allies respond to threats collectively. How is AI impacting interoperability?
Brianna: So, alongside reducing the risk of escalation, another challenge lies in ensuring that allies are prepared to respond collectively to threats. And so, I think that a policy priority going forward should really be AI interoperability, capacity building, and knowledge transfer among key stakeholders. And I want to be clear, this can't be an afterthought. We talked about capacity building a bit, and it was in the blueprint for action, but I haven't seen any long-term strategic planning about how we can reduce the strategic, tactical, and reputational risks of working with partners on AI, while preventing misalignment and regulatory fragmentation. So again, this goes back to what I said at the beginning: we need to develop shared frameworks, particularly with allies, on things like ROE, the interpretation of international law for specific use cases, and the adoption of AI in joint military operations.
These are all critical for military and coalition effectiveness. And absent this, I'm afraid that we'll see something similar to what we saw in the Global War on Terror, where the U.S. had joint counterterrorism operations with the E.U., drone strikes and so forth, that were at times impeded by different interpretations of how international law applies to those operations. And I think we could see something very similar happening in the AI space in the coming months and years.
So, there are a lot of things that policymakers can do to address this risk. We don't have time to go through them all, but one of the key things is establishing structured and sustained knowledge exchanges to promote AI literacy, something that's already happening but needs to be expanded to more diverse actors. I also think that leading AI powers such as the U.S. and U.K. have a responsibility to bridge the emerging digital divide between states that have more resources and those that have fewer resources and less institutional capacity to integrate AI into existing military systems. So, that's a key responsibility for leading AI powers.
But ultimately, you know, the biggest thing is that we have to do something similar to what we did in cyberspace, things like the Oxford Process on international law protections in cyberspace, where we really hash out, in a concrete way, how international law should be interpreted in specific cases where AI applications are deployed in a military or intelligence context. And it's really clear that alongside that, we should be sharing best practices with allies, so with NATO, with other countries, with the Five Eyes, for legal reviews of AI-enabled technologies, including AI decision support systems. These legal reviews are classified, but they can be redacted, declassified, and shared with allies to promote the kind of transparency that we urgently need.
Paras: The heads of MI6 and the CIA recently published an op-ed warning that the international order is "under threat" in ways that have not been seen since the end of the Cold War, which was true even before adding AI into the mix. What do you see as the biggest challenges that AI will pose to the rules-based international order?
Brianna: We've talked about it a little bit, but essentially, what's happening is that the accelerating adoption of AI in military contexts is posing a number of profound challenges to a rules-based international order that's already under great strain. And that's partly because of the intersection of shifting geopolitics and technology, including AI. So, it's a very big topic. We're not going to get into it all here, but I'd just like to make two very brief interventions for our listeners to keep in mind.
So, the first is that, like the drone program, AI is likely to lower the threshold for the use of force by making it easier to resort to force. But what's different here is that, at the same time as it's lowering the threshold for the use of force, it's also likely to increase the pace and the scope of war. And so that further contributes to this blurring of the line between war and peace that we've seen for more than two decades since 9/11. You know, this has been a trend for quite some time, and I think AI is only likely to deepen it. And all of that threatens to erode the bedrock prohibition on the use of force and pose increased risks to civilians, absent more stringent legal guardrails, which just aren't in place right now.
And finally, we touched on it a little bit at the beginning, but AI democratizes power, and this has real implications for state sovereignty and international stability. AI technologies are already equipping non-state actors such as tech firms, terrorist groups, and other groups with tools to more effectively reshape and challenge the international order, and they're also equipping such groups with tools to conduct more effective cyber-attacks and more effective disinformation campaigns. All of this has potentially destabilizing effects, particularly on states that are poorly governed or ungoverned and don't have the institutional capacity in place to counter those types of threats. And that's something that I'm going to be talking more about with the U.S. and U.K. governments next month in London, as we continue to have a dialogue on that particular issue.
But through all of this, and this is the thought that I just want to leave you with, it's important to consider how AI is changing the nature of the relationship between the state and its citizens. And this is a question that we don't often talk about in the military domain, because we see it as more of a domestic, social science type of question, but it has real strategic security and military implications. So how does that relationship affect public trust, the will to fight, or control of the information environment? In other words, there are a lot of strategic implications surrounding that domestic context, that relationship between the state and the citizens, that are not just about the military but have profound implications for the military. And what I worry about is that I see a lot of great research from foundations and think tanks about the connections between AI and democracy and fundamental freedoms, but no one seems to be really drawing out the connection to the strategic level, to the implications for international stability and security, for national security and military readiness. I think that's an area where more research is needed.
But all of that points to the fact that, you know, we always say technology in itself is neutral. It's not about the technology itself, it's about the ways in which it is used. And that's doubly true for AI. So, what came out of REAIM, and what was really heartening to see, is that a lot of people recognize this: we need to do a better job of grounding discussions of AI risk within the broader social, institutional, and really uniquely human environments in which these technologies are deployed. So, that's a key takeaway that I hope everyone left with.
Paras: So, we've covered a lot of different risks, and governments have different levels of resources and capacity, some more than others. How should they think about triaging and prioritizing among these competing priorities?
Brianna: Yeah, it's a great question, because, as you said, there are varying levels of capacity. There are varying levels of political will, and we know that governance is not moving at the pace of technological advancement, at the pace we need it to be. So a real question is, given all of these challenges — and we've only outlined a few here — what can policymakers realistically do to keep individuals and society as a whole safe from these risks?
And so, I think there is a degree of prioritization that can happen. Policymakers can focus on high-impact concerns, such as AI's potential to destabilize strategic deterrence and lead to nuclear warfare. That's a place where a lot of people want to start and build consensus from there. Another approach is to tackle risks that have already manifested, such as the use of AI decision support systems in targeting or AI for offensive cyber operations. And governance frameworks could also prioritize the regulation of AI tools that are both highly destabilizing and could easily fall into the hands of terrorists.
So, those are three different categories, if you will, of prioritization that could be addressed. But that list alone highlights the imperative to address both near-term and existential threats simultaneously, and that's really the challenge coming out of these governance conversations. It's something that Kenneth Payne from King's College London suggested in one of the discussions: the debates that we're having on military AI governance and on existential risk, like the Bletchley Declaration, have been largely siloed up to this point. But, in Ken's view, those debates could very well converge soon, because we need to be addressing both near-term and existential risk in this space. So, in short, the really short answer to your question is that when it comes to AI governance, policymakers need to do everything all at once, and that's the challenge here.
Paras: Yeah, everything everywhere, all at once when it comes to AI regulation.
Brianna: That's right.
Paras: Well, this has been so helpful. Brianna, thank you again for joining the show and for sharing all of your insights from the REAIM Summit. We'll be following all these issues at Just Security. Thanks again.
Brianna: Thank you so much, Paras. And I just want to say thank you to the South Korean government for hosting REAIM. Thank you to the Carnegie Council and the Blavatnik School of Government for sponsoring the panel. Thanks to Just Security, and particularly Tess Bridgeman, for lending their expertise to that panel, as well as to Kenneth Payne, Paul Lyons, and Toni Erskine, who were fellow panelists on our breakout session. And I hope you're able to hear me from the airport in South Korea. Thank you for bearing with me through the jet lag and for hosting these types of critical conversations on how we need to progress governance on military AI to make it more responsible, more transparent, and ultimately safer for individuals around the world. Thanks so much, Paras.
Paras: Thanks again.
This episode was hosted and produced by me, Paras Shah, with help from Clara Apt.
Special thanks to Brianna Rosen.
You can read all of Just Security’s coverage of Artificial Intelligence and the REAIM Summit, including Brianna’s analysis, on our website.
If you enjoyed this episode, please give us a five-star rating on Apple Podcasts or wherever you listen.