
The Just Security Podcast
Just Security is an online forum for the rigorous analysis of national security, foreign policy, and rights. We aim to promote principled solutions to problems confronting decision-makers in the United States and abroad. Our expert authors are individuals with significant government experience, academics, civil society practitioners, individuals directly affected by national security policies, and other leading voices.
Trump’s AI Strategy Takes Shape
In early April 2025, the White House Office of Management and Budget (OMB) released two major policies on Federal Agency Use of AI and Federal Procurement of AI, OMB memos M-25-21 and M-25-22, respectively. These memos were revised at the direction of President Trump’s January 2025 executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” and replaced the Biden-era guidance. Under the direction of the same executive order, the Department of Energy (DOE) also put out a request for information on AI infrastructure on DOE lands, following the announcement of the $500 billion Stargate project that aims to rapidly build new data centers and AI infrastructure throughout the United States.
As the Trump administration is poised to unveil its AI Action Plan in the near future, the broader contours of its strategy for AI adoption and acceleration already seem to be falling into place.
Is a distinct Trump strategy for AI beginning to emerge—and what will that mean for the United States and the rest of the world?
Show Notes:
- Joshua Geltzer
- Brianna Rosen
- Just Security series, Tech Policy Under Trump 2.0
- Clara Apt and Brianna Rosen's article "Shaping the AI Action Plan: Responses to the White House's Request for Information" (Mar. 18, 2025)
- Justin Hendrix's article "What Just Happened: Trump's Announcement of the Stargate AI Infrastructure Project" (Jan. 22, 2025)
- Sam Winter-Levy's article "The Future of the AI Diffusion Framework" (Jan. 21, 2025)
- Clara Apt and Brianna Rosen's article, "Unpacking the Biden Administration's Executive Order on AI Infrastructure" (Jan. 16, 2025)
- Just Security's Artificial Intelligence Archive
- Music: “Broken” by David Bullard from Uppbeat: https://uppbeat.io/t/david-bullard/broken (License code: OSC7K3LCPSGXISVI)
Ryan Goodman: Hi everyone. This is Ryan Goodman, Co-Editor-in-Chief of Just Security. On the day this podcast went live, Just Security branched out onto Substack, giving us the opportunity for even more engagement with readers and listeners. We aim to keep all of our content free and available to all. Please follow us there as well. We look forward to seeing you on Substack as well as with our regular content at justsecurity.org.
Brianna Rosen: Last week, the White House Office of Management and Budget, OMB, released two major policies on federal agency use of AI and federal procurement of AI, M-25-21 and M-25-22 respectively, replacing the Biden-era guidance. Now, these memos were issued in accordance with President Trump's January executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” and coincide with a new Department of Energy Request for Information on building AI infrastructure on Department of Energy lands. This follows the announcement of the $500 billion Stargate Project, which aims to rapidly establish data centers and AI infrastructure across the United States. So, it seems that as the Trump administration plans to unveil its broader AI action plan in the near future, the broader contours of its AI strategy are, in fact, already falling into place. Is a distinct Trump strategy for AI beginning to emerge, and what will that mean for the United States and the rest of the world?
This is the Just Security Podcast. I'm your host, Dr. Brianna Rosen, Senior Fellow and Director of the Artificial Intelligence and Emerging Technologies Initiative at Just Security. Joining the show to discuss Trump's emerging AI strategy is Joshua Geltzer. Josh served as Deputy Assistant to President Biden, Deputy White House Counsel, and Legal Advisor to the National Security Council. Before that, he was Deputy Assistant to the President and Deputy Homeland Security Advisor. He also served as Special Assistant to the President and Special Advisor on countering domestic violent extremism, overseeing the development and implementation of the U.S.’s first-ever national strategy for countering domestic terrorism. Josh is currently a partner at WilmerHale in Washington, DC, where he focuses on cutting-edge national security issues, including AI and cybersecurity. Josh, thanks so much for joining us on the show.
Joshua Geltzer: Brianna, thanks so much for the chance to have this conversation with you.
Brianna: Josh, you're someone who's worked really closely on legal and national security issues at the highest levels of the White House. So, walk us through a little bit how you see these two new OMB memos fitting into the broader trajectory of federal AI policy. Are we, in fact, seeing signs of an emerging Trump strategy on AI, and what are the contours of that strategy?
Josh: I do think we're seeing a vision for AI coming from the current administration, and some nuanced areas of continuity and discontinuity that we should dig into in the course of this discussion. Maybe I'll start by breaking down what it means to have policy on AI into three components. One is a federal government policy towards the government's own adoption and use of AI, and that's where those two OMB memos that you mentioned, which we should talk about further, come into play. Then there's U.S. government policy with respect to the U.S. private sector’s continued leadership, really global leadership, in the AI space. That relates more to the RFI put out by the Department of Energy, as well as some other emerging policy. And then there's a third piece, which is how to think about U.S. AI technology reaching the rest of the world, including countries less friendly to the United States. And there have been some developments on that front, too.
Let me start with the first. I know we want to go into all of it. When it comes to those two OMB memos, I think there's a lot of continuity there. You see an administration like the one before it that is keen to push the federal government to use this technology in ways that much of the private sector already has. And you see attention to guardrails, since going too fast can be irresponsible, even for all the benefits that this technology can produce. I do think there's some difference in what those guardrails are, and there are some rhetorical flourishes that really emphasize the desire for the government to get more fully into the game.
But fundamentally, I read those OMB memos as a continued spur to departments and agencies across the executive branch to figure out where AI can increase efficiency, effectiveness and drive policy outcomes.
Brianna: You talked about continuity and change. What are some of the specific changes that we see in where the Trump strategy for AI is going, with the memos, with the executive order, with the RFIs? What are some key areas of change from the Biden administration? Because obviously, Trump rescinded the Biden-era October 2023 executive order on AI, but they've kept some elements of Biden's AI policy in place: so far, the National Security Memorandum on AI is still in place, as is the infrastructure push, and there are a few other areas of continuity. So, walk us through a little bit more some areas of major change, where this is a big shift from Biden's strategy.
Josh: Let me point to a notable shift that emerged in that RFI, that Request for Information, that was issued by the Energy Department in early April. Where there's continuity to begin with, is in the notion that there is federal land that could be utilized by the U.S. private sector for the needs of continued AI development, for so-called data centers critical to continuing to create frontier models in the AI space. And indeed, not just land, but land that has energy connectivity, because these things take a lot of energy.
But then, to answer your question, Brianna, there's a notable difference. The Biden administration had really emphasized not just energy, but environmentally friendly energy, clean energy, in the drive to utilize these federal lands for this space. That piece drops out in what we see from the Trump Department of Energy. There is instead an explicit desire, in receiving responses from the private sector to this RFI, to see the full range of possible energy that could be utilized, not necessarily clean energy, which I think opens up a broader array of options for those who might want to use these sorts of sites. And the sites are very specific. The Energy Department mentions, and indeed provides pictures, diagrams, and mapping for, 16 particular federal sites, asking the private sector to come back and indicate where there may be interest in using those for AI, and indeed what energy sources, clean or otherwise, might be able to power the use of such land for AI needs.
Brianna: That's so helpful, Josh, thank you. And on the OMB memo side, what do you see as some of the key areas of continuity and change? Let's take, for example, the first OMB memo, M-25-21. This replaces the Biden directive on the same topic, M-24-10, and instructs federal agencies to focus on three priorities when it comes to accelerating federal use of AI: innovation, governance, and public trust. And it's really emphasizing a forward-leaning approach to AI adoption that's trying to minimize bureaucratic hurdles.
And you know, some of the guardrails that are in that memo sound similar to things that were in place under the Biden administration. There are chief AI officers. There's a management process for potentially high-risk use-cases of AI, which the Trump administration is now calling high-impact cases. What's missing from the OMB memos, from your vantage point? Are there certain areas of risk that are not addressed, that need to be addressed?
Josh: I do think your read is mine as well, which is, big picture, there is a lot of continuity there. You're right that there is language about being explicitly forward-leaning, about explicitly growing innovation, as well as language about, and I'm quoting again here, lessening the burden of bureaucratic restrictions. So, I think there is an emphasis here on the go and the do, perhaps a bit more than the guardrails, but the guardrails are there. It would be a caricature to say otherwise, and many of the guardrails are similar to ones that the previous administration had indicated were important in the AI space, such as ensuring that the outcomes produced by reliance on AI are accurate, ensuring there's pre-deployment testing, and ensuring that there's a way to see what AI is actually yielding once it's adopted by a particular department or agency for a specific use.
I think there are pieces that are emphasized less in the Trump administration's approach, consistent with the administration's overall approach, such as emphasis on things like equity and diversity. But I think if you zoom out, you see in that April 3 OMB memo fundamentally the same pairing of a spur to the executive branch and a sense that this technology is new, and that, whether it's for reasons of protecting privacy and civil liberties and civil rights, all of which are mentioned, or whether it's for reasons of simply ensuring that this technology yields outcomes that are reliable rather than mistaken or errant, there are checks and guardrails, especially in those cases that you mentioned. They are now called high-impact, but they fundamentally get at the same thing that not just the U.S. government, but other governments have been identifying for a few years now, which is trying to identify a category where reliance on AI may have more direct, more immediate, more consequential impacts on human beings at the end of the day, and then making the guardrails a bit firmer, a bit tougher, a bit more robust in those spaces.
Brianna: You know, one thing that struck me as missing from both memos, which, again, was also quite common under the Biden administration, is that both memos have national security carve-outs, right? So, neither of the OMB memos applies to the Department of Defense or the intelligence community or other areas within the national security ecosystem that are integrating AI technology and systems into their processes right now. So, is that really significant, or does that not really matter, in a sense, because, as I said earlier, the National Security Memorandum that does govern those national security use-cases is still in place for now? But, you know, it also struck me that a lot of the guidance in the OMB memos should potentially be followed by the national security community as well. So, how do you see those carve-outs working? Do you think that's a concern or not really? Do you think that the forthcoming AI Action Plan will address some of that, along with broader uses of AI throughout the federal government?
Josh: This is a really important point you raised, Brianna, and well worth dwelling on. And maybe I'll step back in the hope of setting the table a bit for listeners. The Biden administration's approach was to issue first an executive order on U.S. government AI adoption and deployment, followed by an OMB memo to implement it, and then, about a year later, a National Security Memorandum specifically focused on AI adoption, deployment, and use by the national security components of the executive branch. Now, on the first day of the Trump administration, the first EO I mentioned was revoked, and a review was set in motion of anything that flowed from it, including presumably things like the National Security Memorandum that I just mentioned.
Now, we see new OMB memos, or maybe I should say revised or amended OMB memos, that seem, as you and I have been saying, to maintain a lot of continuity in the non-national security space. I think we're in wait-and-see mode a bit in the national security space, and that means waiting and seeing the results of the review set in motion, what the fate of things like the National Security Memorandum and other AI promulgations from the prior administration will be in the review underway by the new administration, and waiting on that Action Plan you mentioned, because we don't know exactly the contours, the parameters, of the Action Plan on AI that the Trump administration is developing. Maybe it'll incorporate work in the national security space. Maybe it won't. But I think you are quite right that we have a stronger sense of direction for the U.S. government's approach to AI in the non-national security space, thanks to these two OMB memos, than we do yet for national security components of the federal government, like the Defense Department and the intelligence community.
Brianna: Right, and of course, for listeners who might be following it less closely, the reason that the national security carve-outs are so essential is that these are potentially high-risk domains where AI tools are being used in military action, intelligence collection and analysis, and so forth.
Josh, I want to turn a little bit to what you view as some of the implementation challenges with the OMB memos, because reading through it, it struck me that one of the potential challenges with the OMB guidance is that, on the procurement side, for example, agencies are told to procure AI that “works well.” But many agencies in the federal government may not have identified specific outcomes or metrics that would meet that criterion. So, I'm wondering, in your experience, how critical is it that federal agencies define these performance requirements early on, and what concrete steps do you think the administration should take to ensure that agencies are able to effectively implement the OMB guidance more broadly?
Josh: I do think you've homed in on a critical point, and I might even say it's a challenge, both for the federal government, but also for the private sector, for those offering really novel, really important, potentially really consequential AI tools. But both sides, I think, have a challenge of trying to meet the criteria set out, really, in the second of the two memos issued that same day by OMB. Because, as you indicate, OMB calls for product demonstration wherever possible and performance-based acquisition techniques. In other words, to put that in plain English, they want departments and agencies, ideally before contracting for the use of AI tools and services from the private sector, to figure out what actually works and how they know it works, and how they'll be able to keep auditing and otherwise testing as to whether it's working in the ways being utilized by the government.
We have seen AI do remarkable things in the private sector, drive efficiencies. We've also seen challenges from the so-called hallucination problems that I think folks find particularly intriguing, where an AI result is fabricated in response to a prompt, and sort of imagines something that would be responsive, or is simply errant. That's going to happen with any new technology. But I think it is a challenge, as I say, not just for the consumer of these AI products, so to speak, the federal government here, but really for the vendor, I'll call them, the private sector, to try to figure out what it means to be able to pitch valuable products in a way that meets the terms of this OMB memo. Because in some ways, and again, zooming way out from the important details here, what this memo that's focused on acquisition and efficiency says is: use this stuff more, better, faster, but also make sure you're using it right, well, effectively. And I don't think that's an unusual tension in trying to get the government or any other actor to utilize new and evolving technology, but it is still a tension.
Brianna: Yeah, I think implementation is always the biggest challenge. So, even if it's a well-intentioned policy, or the language is generally where I think it needs to be, as you've seen in your experience working at the White House, sometimes these policies get stalled or watered down in practice over time. So, just wondering, from your own experience, what do you think are some of the key steps that the administration, and you touched a little bit on the private sector, or even civil society, should take to ensure that agency needs are well defined, that technical features are linked to outcomes, and that there's appropriate oversight, just turning this vision that's articulated in these two updated policies into action? What are some key steps that the administration should take in particular?
Josh: I do think one key element is the establishment, through these OMB memos, of a senior official at each relevant department and agency, and for these memos, it's the non-national security components of the executive branch, as we said earlier, but a senior official to be the lead on these matters, of course working with colleagues throughout those agencies. But this is in some ways what we in government used to call the one neck to squeeze approach — that if something is hard and important and fast moving and meriting accountability, it's useful to know the one neck to squeeze if something is going awry or going wrong. And I think it is to the credit of OMB that they're trying to use these memos not just to set out an approach, to articulate a framework for U.S. government AI adoption going forward, but also a structure for implementing that approach.
And I think having AI leads at departments and agencies, having structures in which they coordinate within their agencies and with each other, and doing so in a way consistent with the memos' call for auditing, accountability where possible, and transparency — although AI introduces its own challenges to transparency, just based on the sheer way in which the technology functions — I think those structural elements begin to get at the implementation challenge that you rightly point to. They also, I hope, will allow the federal government and stakeholders beyond to have places to go for dialogue on what's working and what's not and where this all goes.
Brianna: One thing that struck me as being very positive, particularly in the OMB memo M-25-21, is the emphasis on transparency, which is actually more than I think we've seen in the past. They're mandating that federal agencies will develop AI strategies and then make those publicly available on their websites within 180 days, and that not only should those strategies be publicly available, but also understandable to the broader American public, so that they can understand how agencies' investment in AI benefits them.
So, I thought that emphasis on transparency was really a step in the right direction. But I wonder, and I wonder if you have any views on this, whether that applies across the board? For example, there's been some reporting that DOGE plans to use AI to increase its efficiency efforts. Do you think that it will have to follow the guidance laid out in these memos?
Josh: Like you, I found that part notable, and I think interesting and frankly important for a private sector, but also a civil society, that is understandably very interested in where the federal government's use of these technologies goes. Exactly how it will align with current reported use of AI, I think is probably in the still to be determined phase, but I do think that's valuable, as you say, not just for those who may have interests and concerns about the way these technologies can get used, but even those who are trying to understand how best to meet the federal government's interest in the next generation or some adaptation of it.
I also want to, sort of paired with that transparency point outside the executive branch, point to another feature of one of the OMB memos, for which I'm not sure transparency is the right word; interoperability, I think, is the word the memo uses, across the executive branch. Because, and I say this having served in the executive branch, the executive branch is built upon particular departments and agencies. And to those who know about contracts and acquisitions, those often get done by particular departments and agencies, or really subunits of them, and that can create inefficiencies. And I do think at the dawn, or close to the dawn, of this technology’s adoption by the federal government, it is notable to see the Office of Management and Budget explicitly call for agencies to try to develop criteria for data interoperability and standardization, so that agencies are sharing and able to share across the federal government, where possible, their use of AI, rather than have each department and agency take this guidance, on its own try to figure out what implementation means, contract with some vendor, and then not realize that another piece of the executive branch is doing almost exactly the same thing. So that, I think, is another attempt to put this all in motion in a direction that gives it some rationality and harmony, while also saying, do this with aggressiveness, with speed.
Brianna: Yeah, I think your point on interoperability is really crucial, because that's one of the things that keeps me up at night as someone who formerly worked in government for a long time: this idea that we've got this patchwork of different AI policies and regulations within and across the federal government. What does that mean when you have to integrate these systems up to the National Security Council level, from an interagency perspective? How does this all come together? And what are some of the inefficiencies, and also the mistakes, that can happen when there isn't that interoperability, because agencies are not operating off of the same sheet of music, and errors that might be introduced in one federal agency can then be compounded up through the interagency process? And what might that mean? So, I think that's really a key point that you raise.
Josh: It's a good point, and it pairs with another point in the memos, which is, I think it's called an AI-ready federal workforce, because to do what you've just described, and frankly, to implement virtually all aspects of these memos, one needs a workforce that understands this technology, has some comfort with the technology, and is able to do things like determine whether the technology is yielding results versus errors or some other outcomes that are not what one wanted from it.
And so, one other key aspect, it seems to me, is the call in one of these memos for promoting AI talent within the federal workforce. I think it's called upskilling, but helping to train existing staff on a technology that may not have existed when some started in the federal workforce, and trying to just increase the fluency across the executive branch in these technologies. That seems important to getting right all these other pieces we've been talking about.
Brianna: Yeah, of course, the emphasis on upskilling in AI is not really new, because I think there was a push towards the end of the Biden administration to do that as well. But I think what is a bit different is there's a real emphasis on regulating for specific use-cases, rather than across the board, and that might be quite helpful in terms of getting people up to speed on specific use-cases of AI, and getting the skills and talent that we need to develop policies for those specific use-cases.
Josh: That's exactly right. And this brings us back to one of the things we had rightly put in the wait-and-see bucket earlier in the conversation, which is the fate of the National Security Memorandum, which is structured in significant part around particular use-cases, around particular pilot projects. Rather than just generally saying to the Defense Department, the intelligence community, and law enforcement, use this stuff more and better, but have guardrails, it actually said, for this particular entity within the executive branch, here’s a way to use it. Let's see how it works. Figure it out in this many days, and then determine how the results look.
And whether that structure of progress through use-cases or progress through pilot projects is retained, either because the NSM as a whole is retained or because that element is incorporated into some amended or new version or not, I think falls into the wait and see category.
Brianna: Right, and of course, there's a growing repository of federal AI use-cases, which I think we'll see expand even further with this kind of test, pilot, scale approach, which I think is probably the right approach for a technology that has multipurpose applications. It's not a monolithic technology, and really there needs to be quite a lot of experimentation in controlled, sandbox-type environments so that we can actually see what the impact is for specific use-cases. Particularly for high-risk use-cases, testing has to happen in a controlled environment to work out what the policy should be. So, I think we'll probably see more of that.
And you know, part of this, we've been talking about implementation and a little bit about the need to define outcomes and goals. But I think the flip side of that is that to adopt AI responsibly, federal agencies also have to have solid testing protocols in place, right? So, organizations such as NIST, the National Institute of Standards and Technology, are crucial to ensure that AI systems are being tested to meet the desired outcomes. But under the Trump administration, we know that NIST is facing significant budget and staffing challenges; in particular, the future of the AI Safety Institute still remains uncertain in some ways. So, I wonder, how big a threat is that, in your view, to robust AI adoption? Because on the one hand, we have the Trump administration saying they want to accelerate innovation on AI and do that in a responsible way, but on the other hand, they're cutting funding and staffing for some of the key organizations that would monitor and test the outcomes for specific use-cases of AI.
Josh: I think you're pointing to another open question, and an important one: what is the fate of some of these pieces of the executive branch tasked, really, across administrations, with doing work in this space? You know, it's worth, I think, emphasizing that, really, across two administrations now, the point of establishing AI guardrails, AI standards, is not just that guardrails and standards have value in responsible use of a new technology. It's partly to drive the more experimental, more novel, hopefully useful deployment of the technology. In other words, they're not guardrails just for guardrails’ sake. They're guardrails because guardrails actually give comfort in trying something new with a technology that is showing many benefits in the private sector already.
And so, I think what the federal government is hearing, at least from pieces of the private sector, is that they certainly don't want this technology to be bound, restricted in ways that halt the progress being made technologically, but that things like standards from the federal government actually are enablers, rather than merely restrictors, because they then allow the federal government to meet the exact elements called for in these memos. In other words, when you have standards, you're then able to adopt a technology, because you can run through what OMB has set out here, indicate that the Commerce Department has set out a matrix for understanding what responsible use looks like, and that this use and this particular technology fit within it, so go forth and try. Go forth and conquer that use-case, as we were just talking about. So, I think that must be an ongoing conversation within the executive branch, but also one with the private sector as well.
Brianna: Yeah, it's this idea that came up on the sidelines of the Paris AI Action Summit as well: that it's safety that unlocks innovation, security that unlocks innovation, stability that unlocks innovation. I think that's a clear message that the private sector and civil society are sending to the Trump administration. I just wonder if that message is being fully heard, particularly since we've seen some policies that do seem counterproductive to the stated desire to fast-track AI innovation, like the DOGE-directed cuts to NIST and the Department of Energy and the NSF, some of the cuts to federal and university research grants, along with the more restrictive immigration policies, where there are some real fears within the AI expert community that there will be a brain drain, or that talent will be pushed overseas, whether to China or to other countries. Do you think that some of these policies are hurting the U.S.’s ability to “win” the AI race in the long term? And do you think that the Trump administration might shift course on some of that as a result?
Josh: I think there are some real ongoing debates about aspects of AI policy. This was, I think, what you and I both touched on towards the beginning of the conversation, saying there were areas of continuity, areas of discontinuity, and areas still up for grabs. I'll give one example. It's a little bit different from examples we've been talking about, but it kind of shows the ongoing conversations and debates: reports emerged this week of a letter sent last week by a number of Republican senators to the Commerce Secretary about the so-called AI diffusion rule. This was something promulgated towards the end of the last administration, putting the countries of the world essentially in three tiers, and, for at least tiers two and three, restricting the more sophisticated U.S. AI technologies from being fully exported to those countries. It's a lot more nuanced than that, but I think that captures it in a nutshell.
And there's, I think, a conversation underway within the administration, ahead of what would otherwise be the May 15 beginning of implementation for that rule, as to whether to stick with it, ditch it, or amend it. And you saw a number of Republican senators weigh in with a letter to the Commerce Secretary opposing it. At the same time, you have those who come at it from an angle of national security protection of advanced U.S. technologies, who, as I understand it, remain supportive. So, I don't think there is a final answer within the administration on AI diffusion, but I use that as an example that fits with some of the others you mentioned, where AI policy is not one thing. It is multifaceted. It is about the government, the private sector, foreign governments, foreign private sectors.
And some of these are ongoing conversations, and I think we will see in the AI Action Plan at least a lot of key indications as to where this administration is headed. But frankly, AI policy, with all of its dimensions, is going to be an essential component of this administration and ones to follow it given the centrality of this technology.
Brianna: Absolutely. I want to expand a bit on the energy piece of all this, because you mentioned in the beginning the Request for Information from the Department of Energy, and how in some ways, the Trump administration's approach at least thus far, extends Biden-era policies surrounding AI infrastructure, whilst also de-emphasizing certain provisions such as clean energy. And we've touched a bit on the $500 billion Stargate Project and the drive to find new energy sources to power AI innovation, such as nuclear power. So, walk us through some of the opportunities and risks concerning the push to build AI infrastructure on federal land and explore diverse energy sources. What are the key metrics we should be tracking in this space?
Josh: This is an effort to utilize federal land for the massive energy needs that will be associated with, just as you say, Brianna, AI technology development. And what this RFI — request for information — from the Department of Energy does is it asks the private sector to kind of help build out this concept. It's a step forward from the government in identifying 16 specific sites and asking some very specific questions about energy needs, about other needs if these sites were to be utilized. So, you can think of it as the next step in a conversation between government and the private sector.
The RFI itself indicates a desire to get construction going on some subset of these sites, or other sites, if those come up in the continuing process of working through this. The goal is to start construction by the end of calendar year 2025, with a goal that operations could begin by the end of 2027. So, in some ways, moving in this calendar year to even get going is quite swift for the enormity of this idea. In some ways, it is a long-term project to try to keep the infrastructure of AI frontier model development in the U.S., to keep it domestic. That's just by way of backdrop.
I do think there are benefits to this, and I guess two administrations now have seen the benefits of it, but there are risks. And I think, to the credit of the Energy Department, they're asking about both in this request for information. I do think they want to know about benefits, such as how useful is it to be located near federal facilities that might actually have relevant expertise or resources, how useful will it be to local economic development when these are built? But it also asks about challenges in terms of maintaining the data, in terms of what exactly is needed, physically, in terms of the energy pull.
And so, it's a bit like the tension we identified in the OMB memos. I think this RFI tries to do, in essence, two things at once — go quickly with this project, which now has support across two administrations, and suss out quickly what the benefits and risks are, to try to maximize the former and mitigate the latter.
Brianna: Yeah, so I guess you would see this as being more of an extension or acceleration, then, of Biden's policies in this space.
Josh: I do think it builds on that, right? This was a very late Biden administration executive order that put this in motion, and now you have the Energy Department doing the next step of getting more specific, of offering up these 16 sites as particular places for the private sector to think about, and trying to elicit from the private sector even more granular feedback on what else would be needed. How do you envision using this land? Exactly what would the energy needs be, given the enormity of the data centers being imagined?
There have been shifts. And the one I mentioned before, I think, is a notable one. I think that what you see from the Trump Energy Department de-emphasizes environmental considerations. Again, that's consistent with an overall administration approach, but fundamentally, I think it reflects two administrations saying, how do we keep the U.S. private sector at the forefront of this critical technology? And it's not the only answer, but one answer is, well, if there are massive energy and land needs, can the federal government's own land be helpful? And we'll see what the private sector says in response to this, but the Energy Department is hoping the answer comes back, yes, and here's how.
Brianna: Yeah, it'll be interesting to see what comes out of the RFI, because this has always been a key push for the administration when it comes to AI. As Vice President JD Vance said at the AI Action Summit in Paris, if you lead in AI, you have to lead in energy production. And of course, given the enormous energy consumption needs of these AI systems, nuclear power is the key source for that.
But yeah, I just wonder about keeping it all domestic within the U.S., whether that's the right strategy. And, you know, we saw just recently, I think it was just two days ago, Nvidia announced that it's going to manufacture its AI supercomputers entirely in the U.S. and plans to commit up to half a trillion dollars to U.S. AI infrastructure through these private partnerships. Is there anything lost in focusing so much on the domestic infrastructure piece of it? Does this erect any kind of barriers to global talent or technology partners around the world? Or how might this intersect with some of the Trump administration's other policies, such as the tariffs?
Josh: Yeah, I think that intersection is maybe the most interesting and most complicated part of this, because you have, within the span of a few days, on the one hand, as you say, a much-touted investment domestically of half a trillion dollars in AI infrastructure, and you have the federal government coming out and resetting a little bit the bar for which types of chips can actually be exported. There had been a bar. You have to set a bar somewhere. There had been a bar set by the previous administration. There had been some private sector development kind of just below that bar, with the hope of being able to export those chips to China in particular. And you have now the Trump administration saying, no, those still exceed the acceptable bar for export.
So, there is a tension there between enabling, empowering U.S. development of chips, of artificial intelligence, of key emerging technologies, and then, of course, a U.S. private sector that wants to market those not just to U.S. consumers, but to global consumers, because there are many U.S. consumers, but there are many more globally. And then you have a national security dimension of U.S. policy that says, for pieces of the world, no, or at least not for particularly advanced technology. That's also where the AI diffusion rule that we mentioned before comes into play, trying to restrict the diffusion of certain AI technology to at least a limited set of countries of concern and then a broader second tier, which encompasses much of the world.
And so there is, I think, that intersection, as you point out, between domestic development and then what U.S. companies are allowed to do abroad, having developed AI chips or whatever it may be. That has complexity, and I think it's also a subject of some ongoing conversations within the executive branch and between the executive branch and Congress.
This is not the only way to enable and facilitate continued U.S. private sector leadership, really global leadership, on AI. And I think you will see, consistent with what we've heard from top officials in this administration, an effort within the executive branch to figure out what those other ways are. Now, in terms of actually being at the forefront of technological innovation, that's going to be the private sector. That's why you see the federal government taking an approach to utilizing AI not by making or developing its own models, but by taking commercially available models and trying to use them responsibly. But I think the land use piece is a notable and quite particular way in which the federal government is trying to keep the U.S. private sector at the forefront of technological innovation for AI.
Brianna: With the increased role of the private sector in U.S. AI development and deployment, another outstanding question is, what oversight, if any, is needed on that private sector involvement and those public-private sector partnerships?
Josh: Yes, and, you know, all of this we could go into in such greater depth. So, one point I'll just make quickly on that is, to the extent that the current federal government is disinclined to regulate or otherwise restrict the private sector, I think there is still an open question about what states might do, and I think you've heard this at least from some state leadership, state attorneys general, as they think about what it means to treat AI at the state level. So, that's a space to be watched as well.
Brianna: Josh, we've been talking about the contours of an emerging Trump strategy on AI, and you've laid out some key areas where we're seeing that start to become much clearer. Now, looking ahead, the forthcoming AI Action Plan is expected to provide additional guidance on both government-wide uses of AI on the civilian side and national security uses of AI. So, given your experience working in the White House, and in the private sector since then, what are you hoping to see in that plan? How can it complement the OMB memos and the RFIs to date, and how can that plan ensure that the Trump administration's AI strategy truly accelerates AI adoption in a responsible manner?
Josh: These sorts of big-picture government documents on complicated issues, issues that tend to evolve in the weeks or months after you issue them — they're very hard, and so those who are working hard on this have my sympathies and admiration right now, and I think there will be a lot of interest when it emerges. I don't think, until we see it, we will know the exact scope, which is fair. Indeed, we won't know whether its emphasis is on some or all of the three buckets I outlined at the beginning of the conversation, but I think to the extent it is able to address them and begin to sew together the seams between them — meaning the U.S. government and its relationship with this technology directly, its support to the U.S. private sector, and its approach to regulating or otherwise influencing the U.S. private sector’s engagement with the rest of the world with this technology — I think that would be a notable step forward.
Each of those is complicated. Each of those will take other documents, in the course of this administration and future ones, to keep current. But given, as this conversation has revealed, the relationship between those elements, and sometimes the tradeoffs between them, if all of that gets at least a big-picture framework in an action plan, I think that will not only be useful for understanding where the U.S. government is, but also important for the U.S. private sector, as it wants to rely on a particular approach, to know how to invest in this technology.
Brianna: Thank you so much for sharing your insights on this, what has been a very wide-ranging conversation on all different aspects of the Trump administration's emerging approach to AI. You've given us so much to think about, not only at a macro level, but also on a micro level, as we unpack the recent developments with the OMB memos and the request for information from the Department of Energy. So, thank you so much for sharing your time and your insights and for joining us on the Just Security podcast.
Josh: Thank you, Brianna. I really enjoyed the conversation.
Brianna: And thank you to our listeners for tuning in. Stay with us for more conversations at the intersection of law, national security and emerging technology next time. This episode was hosted by me, Dr. Brianna Rosen, and produced by Maya Nir, with help from Pooja Shah and Clara Apt. Special thanks to Joshua Geltzer for joining us today.
Tess Bridgeman: Thanks for listening to this episode of the Just Security Podcast. I'm Tess Bridgeman, Co-Editor-in-Chief of Just Security. On the day this podcast went live, Just Security branched out onto Substack, giving us an opportunity for even more engagement with readers and listeners, in addition to our regular content at justsecurity.org. We look forward to seeing you on Substack. Thank you.