
The Just Security Podcast
Key Takeaways from the Paris AI Action Summit
The Artificial Intelligence Action Summit recently concluded in Paris, France, drawing world leaders including U.S. Vice President JD Vance. The Summit led to a declaration on “inclusive and sustainable” artificial intelligence, which the United States and United Kingdom have refused to join, though 60 other nations, including China and India, support the declaration.
What are the key takeaways from the Summit? How might it shape other global efforts to regulate artificial intelligence?
Joining the show to discuss the Summit is Dr. Brianna Rosen, Director of Just Security’s AI and Emerging Technologies Initiative and Senior Research Associate at the University of Oxford.
Show Notes:
- Brianna Rosen (LinkedIn – X – Bluesky)
- Paras Shah (LinkedIn – X)
- Just Security’s Artificial Intelligence coverage
- Just Security’s Tech Policy under Trump 2.0 Series
- Music: “Parisian Dream” by Albert Behar from Uppbeat: https://uppbeat.io/t/albert-behar/parisian-dream (License code: RXLDKOXCM02WX2LL)
Paras Shah: The Artificial Intelligence Action Summit recently concluded in Paris, France, drawing world leaders including U.S. Vice President JD Vance. The Summit led to a declaration on “inclusive and sustainable” artificial intelligence, which the United States and U.K. have refused to join, though 60 other nations, including China and India, support the declaration.
What are the key takeaways from the Summit? How might it shape other global efforts to regulate artificial intelligence?
This is the Just Security Podcast. I’m your host, Paras Shah.
Joining the show to discuss the Summit is Dr. Brianna Rosen, Director of Just Security’s AI and Emerging Technologies Initiative and Senior Research Associate at the University of Oxford.
Brianna, thanks again for joining the show. You've developed a real talent for joining us on the road. Last time, I believe you were at the airport, and now you're on a bus. So, thanks again for helping us to unpack these developments so quickly after they happen. Could you start by telling us your key takeaways from the Summit?
Brianna Rosen: Thanks Paras, and thanks for having me back on the show. I've just wrapped up several days at the AI Action Summit in Paris, and I'm en route now from Paris back to London, so apologies for the background noise. We should call this the Euro/Oxford 2 podcast. But here we are. So, what are some key takeaways from the Summit? I would highlight three. The first is that Europe, and France in particular, are trying to chart a new pro-innovation path that encourages growth, loosens some forms of regulation, but still focuses on integrating AI into societies in ways that are inclusive, just, and sustainable. Now, as we heard at the Summit, Europe's focus is going to be on AI adoption, which they view as their niche in the global AI race, as well as fostering a more cooperative ethos that aims to bring talent together from different countries and sectors around the world, essentially acting as a bridge between the U.S. and the rest, and in some ways, trying to fill the leadership vacuum that has emerged in global AI governance with the advent of the Trump presidency.
Now, it remains to be seen whether these efforts will be successful, but it's fairly clear even now that this is out of step with the U.S. vision, as articulated and reiterated by JD Vance at the Summit, for achieving AI dominance and unleashing innovation with fairly light-touch regulation. So, it seems that, you know, the U.K. will probably follow the U.S. lead in this respect, which leads us to wonder how much influence the EU will really have and how much leverage they'll have to push the U.S. on this issue, particularly since the vast majority of AI innovation, capital, and infrastructure is happening in the U.S.
Now, JD Vance really underscored this point in his speech at the Summit, where the very first thing he said was about maintaining U.S. leadership on AI, and he made it clear that this is a priority for the new administration, rather than fostering inclusivity or cooperation. The driving ethos behind that, in his words, is that the AI future will be won not by “hand wringing” about security, but by building. So, this very much echoes the sentiment directly after President Trump was elected, when a lot of venture capitalists took to social media proclaiming that now's the time to build in AI. So, that's point number one.
The second takeaway that I would highlight is that everyone is clearly embracing the power of open-source, and perhaps this comes as no surprise, as the Summit is occurring at a moment when the world is still reeling from the rise of China's DeepSeek, a powerful open-source model that rivals leading capabilities from U.S. companies. But there's a clear emphasis on the need to build robust open-source AI tools that will also promote safety and trust. So, essentially building an open-source ecosystem that promotes safety online, that protects vulnerable communities, and that is embedded with democratic values, thereby challenging China's dominance in the open-source market when it comes to AI.
And the third takeaway that I would highlight is that AI security appears to be the new AI safety. It was very clear in conversations at and around the Summit that the term AI safety has become politicized, particularly under the current Trump administration. And there's a sense that people can't even really talk about safety, norms, responsible AI, even risks in the way that they used to, even in the way that they did just a year ago at the U.K. AI Summit. Now, AI security doesn't carry the same political baggage, insofar as security is largely a bipartisan issue, so in some ways, if we can't talk about safety, then at least we can talk about security, even though they're very different things.
And Vance, in his speech, alluded to the need to protect AI technologies and semiconductors from theft and misuse, underscoring that the new administration will also have a focus on AI security. So, in this and other ways, the U.S. government is sending a strong signal to industry, and to AI organizations more broadly, that AI security is a priority requiring urgent investment and also urgent research around benchmarks and how to meet those benchmarks, particularly due to the long lead times. So, that was one semi-positive development coming out of the Summit, in that there is a more urgent focus on AI security, although, of course, it should not come at the expense of AI safety.
Paras: If you had to describe the vibe of the Summit in one word, what would it be?
Brianna: Schizophrenic? I mean, there were very different sets of conversations taking place in and around the Summit. That's one thing that struck me quite a bit. And in some ways, these seemed like diametrically opposed conversations, because at the political level, much of the focus was on innovation and opportunities rather than risks or safety. And yet, civil society, academia, and even industry-sponsored events on the sidelines of the Summit continued to focus quite a bit on risk and how to mitigate that risk, but the way that we were talking about it was very different than in the past. So, for example, there was a stark contrast between this Paris AI Action Summit and the U.K. AI Safety Summit that was held last year. That summit focused largely on existential risk, whereas this summit focused on innovation and opportunity, with a very pro-industry vibe.
And you know, similarly, I feel that the conversation has shifted quite a bit, in the sense that even though Macron and some of the other EU officials emphasized the need for multilateral collaboration, the emphasis on inclusivity that has been a bedrock of some of these global AI governance debates didn't feel like it was there. And this was probably the saddest part of the Summit for me. I spoke with a number of colleagues, former UN officials and people working in government and civil society, and they just felt that the Global South was left behind in these discussions, and that there's no longer any appetite for building a truly inclusive global framework for AI governance. And so, I worry that that's an omen for what's to come in the future.
And some of the reason why the debate felt so schizophrenic was because, you know, it was falling along these lines — innovation versus regulation, AI hegemony versus inclusivity. And then the other fault line was the closed-source versus open-source debate, which, of course, has been made all the more urgent by the rise of DeepSeek. So, I spent all of Sunday at the AI Security Forum, where discussions were focused on securing advanced AI models from cyber-attacks and cyber theft. And then I spent the evening with open-source evangelists who are launching new open-source tooling to build trust and safety online. And that felt like a rather jarring disconnect for me, to go from one place that emphasized closed-source models so heavily to another that emphasized the open-source ecosystem just as much.
But I actually think on reflection, and this is something that came out in one of the final sessions of the Summit, that we shouldn't see this as a binary choice, open-source versus closed-source, right? The answer really, truly is that we need to build both ecosystems. So, we urgently need robust open-source tools to promote safety online, and we also need to move a lot faster in protecting advanced AI systems that are critical to national security.
Similarly, it's not innovation or regulation. We need to have both innovation and regulation. In fact, we need to unlock innovation through safe AI, through stable AI, through secure AI. And so, even though there were seemingly diametrically opposed discussions within and outside of the Summit, one thing we need to really emphasize going forward is that these are false binary choices. We need both innovation and regulation. We need safe and secure, closed-source and open-source ecosystems, and we need inclusivity. And that needs to continue to be emphasized in debates on global AI governance.
Paras: And what do you see as the biggest challenge ahead?
Brianna: So, it's clear to me that safety unlocks innovation, security unlocks innovation, ethics unlocks innovation. All of these things aren't going away, but it's easy to say that safety unlocks innovation, and it's much harder to do that in this current political climate and geopolitical environment.
Paras: Thanks for that. What are you most optimistic about coming out of the summit?
Brianna: One thing that really excited me coming out of the AI Summit was the potential to develop technical solutions to what I view as policy problems — these seemingly intractable governance issues that there's no good solution to. So, for example, the security dilemma that exists between the U.S. and China. How do we develop a really effective global AI governance framework, given the lack of trust between the two major players in the space and the security dilemma that exists there?
I was really excited by some of the research that I heard at the AI Security Forum about how technology can help solve some of these security assurance dilemmas. So, some people were building technical solutions, hardware-enabled mechanisms to promote trust even when there is no trust, and to ensure that parties wouldn't defect from arms control regimes surrounding AI, for example. So, I think that's a really smart thing to do. It's something that we're trying to do at Oxford as well. The HAI Lab there is building a philosophy-to-code pipeline, which aims to take the normative principles that have been developed surrounding AI development and use, instantiate them, put them into code, and embed them into AI products themselves.
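To make the kind of hardware-enabled assurance mechanism Rosen describes more concrete, here is a minimal sketch of one possible approach: a simulated hardware root of trust signs a compute-usage report that an outside verifier can check against an agreed cap. Everything here is a hypothetical illustration: the shared-key HMAC stands in for real attestation hardware (which would use secure enclaves and asymmetric keys), and the FLOP cap is an invented treaty parameter.

```python
import hashlib
import hmac
import json

# Hypothetical stand-ins: a real design would rely on a secure enclave and
# asymmetric attestation keys, not a shared secret baked into software.
DEVICE_KEY = b"simulated-hardware-root-of-trust"
AGREED_FLOP_CAP = 1e25  # invented treaty limit on training compute

def attest_usage(flops_used: float) -> dict:
    """Produce a signed compute-usage report (conceptually, this runs
    inside tamper-resistant hardware the operator cannot alter)."""
    report = {"flops_used": flops_used}
    payload = json.dumps(report, sort_keys=True).encode()
    report["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return report

def verify_report(report: dict) -> bool:
    """A treaty verifier checks integrity and compliance without having
    to trust the operator's self-reporting."""
    payload = json.dumps({"flops_used": report["flops_used"]}, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(report["signature"], expected) and \
        report["flops_used"] <= AGREED_FLOP_CAP

if __name__ == "__main__":
    honest = attest_usage(4e24)
    print(verify_report(honest))    # True: within the cap, signature valid

    tampered = dict(honest, flops_used=9e23)  # operator falsifies the report
    print(verify_report(tampered))  # False: the signature no longer matches
```

The point of the sketch is that the agreed limit is enforced by signature verification rather than by trusting the operator's word, which is what makes "trust even when there is no trust" plausible.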
So, I think there's a new push now to embed both philosophical and legal principles directly into technology. We've all gotten a little bit tired of talking about AI governance in these broad, sweeping terms. So, we're moving away from declarations and principles that nobody really knows how to operationalize, and I think moving towards something that could be more action-oriented, that could be more results-driven, where we figure out, for specific use cases of AI, what are the legal principles that apply? How do IHRL, international human rights law, and IHL, international humanitarian law, apply to those specific use cases? And then, how can we code those principles? How can we actually embed those principles into the technology so that it's used in ways that comply with international law, or so that it can't be misused by bad actors, whether state actors or non-state actors?
So, that's an effort that we're pursuing in collaboration with some labs, and the HAI Lab is doing it on the philosophy side as well. But I think it's really important to build these law-to-code and philosophy-to-code pipelines, and to work on developing technical solutions to persistent policy challenges. That's one of the most exciting opportunities coming out of this Summit.
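To illustrate what a “law-to-code” check might look like in practice, here is a minimal hypothetical sketch in which a normative principle (a stand-in for the IHL rule of distinction) is encoded as a precondition that an AI system's proposed action must satisfy before execution. The classes, threshold, and rule below are illustrative assumptions, not the actual Oxford pipeline.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A hypothetical record of what an AI system proposes to do."""
    target_type: str   # e.g., "military_objective" or "civilian_object"
    confidence: float  # the model's confidence in that classification

def satisfies_distinction(action: ProposedAction, threshold: float = 0.95) -> bool:
    """Encode a stand-in for the principle of distinction: only sufficiently
    confident classifications of military objectives may proceed."""
    return action.target_type == "military_objective" and action.confidence >= threshold

def execute_with_guardrail(action: ProposedAction) -> str:
    # The guardrail sits between model output and any real-world effect,
    # so the legal precondition is enforced in code, not just in policy.
    if not satisfies_distinction(action):
        return "BLOCKED: fails encoded legal precondition"
    return "PERMITTED: passes encoded legal precondition"

print(execute_with_guardrail(ProposedAction("military_objective", 0.97)))  # PERMITTED
print(execute_with_guardrail(ProposedAction("civilian_object", 0.99)))     # BLOCKED
```

The design choice worth noting is that the principle becomes a testable function rather than a paragraph in a declaration, which is what "operationalizing" a norm means in this context.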
Paras: Thanks again for your time, Brianna; we really appreciate it, especially while you're on the move and in transit. We'll be following all of these issues at Just Security. Thanks again.
Brianna: Thanks so much, Paras, always a pleasure.
Paras: This episode was hosted and produced by me, Paras Shah, with help from Clara Apt.
Special thanks to Brianna Rosen. You can read all of Just Security’s coverage of artificial intelligence and emerging technologies on our website. If you enjoyed this episode, please give us a five-star rating on Apple Podcasts or wherever you listen.