Leading AI developers such as OpenAI and Anthropic are walking a fine line in selling software to the U.S. military: making the Pentagon more efficient without letting their AI kill people.
Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking and assessing threats, Dr. Radha Plumb, the Pentagon's chief digital and artificial intelligence officer, said in a phone interview with TechCrunch.
"We're obviously adding ways to speed up the execution of the kill chain so that our commanders can respond at the right time to protect our forces," Plumb said.
The "kill chain" refers to the process by which military forces identify, track and neutralize threats, involving a complex system of sensors, platforms and weapons. Generative AI is proving useful in the planning and strategizing stages of the kill chain, Plumb said.
The relationship between the Pentagon and AI developers is relatively new. OpenAI, Anthropic, and Meta rolled back their usage policies in 2024, allowing U.S. intelligence and defense agencies to use their artificial intelligence systems. However, they still don't allow their AI to harm humans.
Asked how the Pentagon works with AI model providers, Plumb said: "We have a very clear idea of what we will and will not use their technology for."
Still, the policy shift has sparked a round of speed dating between AI companies and defense contractors.
Meta partnered with companies including Lockheed Martin and Booz Allen in November to bring its Llama AI models to defense agencies. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
As generative AI proves its usefulness at the Pentagon, it could push Silicon Valley to loosen its policies on AI use and allow for more military applications.
"Generative AI can be helpful for playing out different scenarios," Plumb said. "It allows you to take advantage of the full suite of tools available to our commanders, but also to think creatively about different response options and potential trade-offs in an environment where there is a potential threat, or a range of threats, that needs to be prosecuted."
It’s unclear whose technology the Pentagon used in this effort; using generative AI in the kill chain (even in the early planning stages) does appear to violate the usage policies of some leading model developers. For example, Anthropic's policy prohibits the use of its models to generate or modify "systems designed to cause harm or loss of human life."
In response to our questions, Anthropic pointed TechCrunch to a recent interview its CEO, Dario Amodei, gave to the Financial Times, in which he defended the company's military work:
The position that we should never use artificial intelligence in defense and intelligence settings doesn’t make sense to me. That we should go all out and use it to make whatever we want—even doomsday weapons—is obviously just as crazy. We are trying to find the middle ground and do things responsibly.
OpenAI, Meta and Cohere did not respond to TechCrunch's requests for comment.
In recent months, a debate has ignited in the defense tech world over whether AI weapons should really be allowed to make life-and-death decisions. Some argue the U.S. military already has weapons that do.
Anduril CEO Palmer Luckey recently noted on X that the U.S. military has a long history of purchasing and using autonomous weapons systems like CIWS turrets.
"The Department of Defense has been purchasing and using autonomous weapons systems for decades. Their use (and export!) is well known, tightly defined, and clearly regulated by rules that are not voluntary at all," Luckey said.
But when TechCrunch asked whether the Pentagon buys and operates fully autonomous weapons, ones with no human in the loop, Plumb rejected the idea on principle.
"No, that's the short answer," Plum said. "For reliability and ethical reasons, we will always involve humans in decisions about the use of force, and that includes our weapons systems."
The term "autonomous" is somewhat vague and has fueled debate across the tech industry about when automated systems, such as AI coding agents, self-driving cars, or self-firing weapons, become truly independent.
Plumb said the idea of autonomous systems making life-and-death decisions on their own was "too binary," and that the reality was less "sci-fi." Instead, she described the Pentagon's use of AI systems as a genuine collaboration between humans and machines, with senior leaders making active decisions throughout the process.
"People tend to think of this problem like there's a robot somewhere, and then the Gonculator (a fictional autonomous machine) spits out a piece of paper, and the human just checks a box," Plum said. “That’s not how human-machine collaboration works, and it’s not an efficient way to use these types of AI systems.”
Military partnerships haven't always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired or arrested after protesting their companies' military contracts with Israel, cloud deals codenamed "Project Nimbus."
In contrast, the response from the AI community has been rather lukewarm. Some AI researchers, such as Anthropic's Evan Hubinger, say the use of AI in the military is inevitable, and it's critical to work directly with the military to ensure they use it correctly.
"If you take seriously the catastrophic risks posed by AI, the U.S. government is an extremely important player, and trying to prevent the U.S. government from using AI is not a viable strategy," Hubinger wrote in a November post to the online forum LessWrong. "It's not enough to just focus on catastrophic risks; you also have to prevent governments from misusing your models."