WORLD NEWS
Sam Altman announced late Friday that OpenAI has reached an agreement with the United States Department of Defense (rebranded by the Trump administration as the Department of War) to deploy its AI models on the department’s classified networks, while preserving key ethical safeguards.
Altman said in a statement on X that the department showed a “deep respect for safety” and agreed to uphold two of OpenAI’s core principles: prohibiting domestic mass surveillance and ensuring human responsibility for the use of force, including for systems that might otherwise operate as autonomous weapons. These principles have now been written into the agreement.
The deal marks a significant milestone, coming on the heels of a public dispute between the Pentagon and Anthropic, a rival AI firm that refused Pentagon demands to remove internal safeguards against mass surveillance and autonomous weapons use. That standoff escalated when US President Donald Trump ordered all federal agencies to stop using Anthropic’s AI technology, with the Pentagon labeling the company a national security supply‑chain risk.
Despite the broader tensions, Altman framed OpenAI’s move as a way to uphold ethical limits while still supporting national security efforts. He emphasized that AI safety, broad benefit, and responsible deployment remain central to OpenAI’s mission, even in complex geopolitical and military contexts. The agreement also includes technical safeguards and controlled deployment methods, such as limiting use to secure cloud infrastructure and staffing forward‑deployed engineers to oversee operations.
Altman has called on the Defense Department to offer the same ethical terms to all AI companies, hoping for broader industry alignment rather than legal confrontation. The move highlights the ongoing debate over how frontier AI technologies should interact with national defense priorities and ethical constraints.