
Pentagon says AI is speeding up ‘kill chain’


Leading AI developers, such as OpenAI and Anthropic, are threading a fine needle in selling software to the United States military: making the Pentagon more efficient without allowing their AI to kill people. Today these tools are not used as weapons, but AI gives the Department of Defense "significant advantages" in identifying, tracking, and evaluating threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We're obviously improving the ways we can speed up the execution of the kill chain so that commanders can respond in time to protect our troops," Plumb said.

The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, which involves a complex system of sensors, platforms, and weapons. Generative AI is proving helpful in the planning and strategizing phases of the kill chain, according to Plumb.

The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to allow US intelligence and defense agencies to use their AI systems. However, they still don't allow their AI to harm humans.

"We've been very clear about what we will and will not use the technology for," Plumb said when asked how the Pentagon works with AI model providers.

That clarity kicked off a round of speed dating for AI companies and defense contractors. Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.

As generative AI proves its usefulness at the Pentagon, it could push Silicon Valley to loosen its AI usage policies and allow more military applications.

"Playing through different scenarios is something that generative AI can help with," said Plumb. "It allows you to take advantage of the full range of tools available to our commanders, but also to think creatively about different response options and potential trade-offs in an environment where there is a potential threat, or series of threats, that need to be prosecuted."

It is not clear whose technology the Pentagon is using for this work; using generative AI in the kill chain (even in the early planning phase) seems to violate the usage policies of several leading model developers, including Anthropic, whose policy prohibits using its models to cause harm to or "loss of human life."

In response to our questions, Anthropic pointed TechCrunch to CEO Dario Amodei's recent interview with the Financial Times, in which he defended its military work:

"The position that we should never use AI in defense and intelligence settings makes no sense to me. The position that we should go gangbusters and use it to create anything we want, up to and including doomsday weapons, is clearly just as insane. We try to find a middle ground, to do things responsibly."

OpenAI, Meta, and Cohere did not respond to TechCrunch's requests for comment.

Life and death, and AI weapons

In recent months, a debate has broken out in defense tech circles over whether AI weapons should be allowed to make life-and-death decisions. Some argue the U.S. military already has weapons that do.

Anduril CEO Palmer Luckey recently noted on X that the US military has a long history of purchasing and using autonomous weapons systems such as CIWS turrets.

"The DoD has been purchasing and using autonomous weapons systems for decades now. Their use (and export!) is well understood, strictly defined, and clearly regulated by rules that are not voluntary," Luckey said.
But when TechCrunch asked whether the Pentagon buys and operates weapons that are fully autonomous, with no humans in the loop, Plumb rejected the idea on principle.

"No, is the short answer," Plumb said. "As a matter of both reliability and ethics, we will always have humans involved in the decision to use force, and that includes our weapons systems."

The word "autonomy" is somewhat ambiguous and has fueled debate across the tech industry about when automated systems, such as AI coding agents, self-driving cars, or self-firing weapons, become truly autonomous.

Plumb said the idea that automated systems independently make life-and-death decisions is "too binary," and that the reality is less "science fiction." Instead, she described the Pentagon's use of AI systems as a collaboration between humans and machines, in which senior leaders make active decisions throughout the process.

"People tend to think about it like there are robots somewhere, and then the gonculator [a fictional autonomous machine] spits out a sheet of paper, and humans just check a box," said Plumb. "That's not how human-machine teaming works, and it's not an effective way to use these kinds of AI systems."

Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies' military contracts with Israel, cloud deals that fell under the codename "Project Nimbus." By comparison, the response from the AI community has been relatively muted.

Some AI researchers, such as Anthropic's Evan Hubinger, say the use of AI in the military is inevitable, and that it is critical to work directly with the military to make sure they get it right. "Trying to just block the government from using AI is not a viable strategy," Hubinger said in a November post to the online forum LessWrong. "It's not enough to just focus on catastrophic risk; you also have to prevent any way the government could abuse your models."
