OpenAI Handed the Pentagon a Quick Yes — Then Came the Fine Print

01

OpenAI Struck a Pentagon Deal in Hours. The Fine Print Shows What It Gave Up.

Anthropic spent months negotiating with the Pentagon over two conditions: its AI would not power mass domestic surveillance, and it would not direct lethal autonomous weapons. The Department of Defense wanted something broader: the right to use Anthropic's models for "any lawful use." On February 27, after Anthropic let a 5:01 PM deadline pass without agreeing, Defense Secretary Pete Hegseth designated the company a supply-chain risk to national security. President Trump ordered federal agencies to stop using its technology.

Hours later, Sam Altman posted on X that OpenAI had its own deal.

Altman said OpenAI shared Anthropic's red lines. The published agreement prohibits "domestic mass surveillance" and bars its models from directing autonomous weapons. Same two conditions, on the surface. Different story in the contract language.

OpenAI's deal does not explicitly prohibit the Pentagon from collecting Americans' publicly available information. The government already purchases aggregated commercial data without a warrant: cell phone location records, fitness app logs, browser histories. The contract restricts "unconstrained" collection of private data but draws no line around public data. Anthropic argued that AI applied to publicly available information at scale constitutes mass surveillance. That stance was a principal reason its Pentagon talks collapsed.

OpenAI anchored its protections to existing legal frameworks, citing compliance with Executive Order 12333 and current surveillance statutes. The published excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use," according to MIT Technology Review. Anthropic's position was that current law has not caught up with what AI makes possible. OpenAI's contract accepts current law as the boundary.

Altman acknowledged the deal was "definitely rushed" and that "the optics don't look good." He framed the decision as strategic de-escalation. "If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses," he wrote. "If not, we will continue to be characterized as rushed and uncareful." He called Anthropic's blacklisting an "extremely scary precedent."

The deal puts OpenAI on the Pentagon's classified networks. TechCrunch described the shift as a transition from "a wildly successful consumer startup into a piece of national security infrastructure." No established framework governs how an AI company should operate in that role. The contract Altman called rushed now defines the terms between the U.S. military and the most widely deployed AI system in the world.

- OpenAI's contract leaves the public-data gap Anthropic tried to close
- refusing Pentagon terms now carries existential business risk for AI startups
- no legal framework assigns accountability when AI operates inside classified defense networks

02

Supreme Court Closes AI Copyright Path as London Sees Largest-Ever Anti-AI Protest

Last week, AI's boundaries were drawn in both courtrooms and streets. The U.S. Supreme Court sealed off the copyright path for AI-generated works, while London saw the largest anti-AI protest on record.

On Monday, the U.S. Supreme Court declined to hear Stephen Thaler's appeal over whether AI-generated art can receive copyright protection. Thaler, a Missouri computer scientist, has spent seven years arguing that his AI system DABUS should be recognized as an author under U.S. law. The Copyright Office rejected his application in 2019. A federal judge upheld the rejection in 2023, ruling that "human authorship is a bedrock requirement of copyright." The appeals court affirmed in 2025. The Supreme Court's refusal to hear the case effectively closes the door on copyright protection for autonomous AI output. Thaler previously tried the same route for AI patents — also rejected by the Supreme Court. Seven years, two legal paths, both dead ends.

The same weekend, roughly 500 protesters marched through London's King's Cross tech hub, stopping outside the UK headquarters of OpenAI, DeepMind, Meta, and Google. The "March Against the Machines," organized by five groups including Pull the Plug and Pause AI, was described by organizers as the largest anti-AI protest to date. Solidarity actions took place simultaneously in Berlin and at data center sites across the UK.

The protesters' grievances were specific: AI products pushed on the public without basic safety verification, data centers damaging local communities and the environment, and young people's mental health eroded by AI-powered social media algorithms. A poll found 84% of Britons believe the government prioritizes corporate partnerships over public interest in AI regulation. The march's core demand was a binding Citizens' Assembly — ordinary people, not tech companies, deciding AI's boundaries — and a pause on frontier model development until safety is proven. "AI going wrong, job losses, economic crashes — it's people like us who get hurt," a Pull the Plug spokesperson said.

The court denied AI output legal protection. The streets demanded the brakes be pulled on AI expansion itself. Different concerns, but pointing to the same reality: the debate over AI's boundaries is spilling from industry circles into courtrooms and public spaces.

- Copyright precedent closes the legal protection path for AI-generated works
- anti-AI sentiment has escalated from industry debate to street-level movement
- AI industry faces institutional rejection and public backlash simultaneously

03

Nvidia Bets $4 Billion on Photonics as Apple Turns to Google for AI Servers

Nvidia committed $4 billion on Monday to two photonics companies: $2 billion each into Lumentum and Coherent. Both firms build optical transceivers, circuit switches, and lasers that move data at high speed across data centers. The GPU maker, whose chips dominate AI training, is betting that the bottleneck is shifting from processors to the connections between them.

Days earlier, The Information reported that Apple asked Google to set up servers for a Gemini-powered upgrade to Siri, in a deployment that would meet Apple's privacy requirements. Apple announced in January that Google's Gemini models would help power the new Siri. The latest report indicates Apple needs Google's physical infrastructure too, not just its models.

Two separate deals, different companies, different technologies. They point to the same structural shift. Model quality is converging across the industry. The scarce resource is now the physical layer: optical interconnects fast enough to link thousands of GPUs, and server fleets large enough to run inference at consumer scale.

Nvidia's move is telling. The company sells the chips every AI lab wants. It could have let customers handle data center networking. Instead, it spent $4 billion to secure the optical supply chain, a sign that chip-to-chip bandwidth is becoming the binding constraint on cluster performance. Faster GPUs deliver nothing if data can't move between them.
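The tradeoff can be made concrete with back-of-envelope arithmetic. All numbers below are illustrative assumptions (not Nvidia or vendor figures); the point is only that the time to synchronize gradients across GPUs can rival or exceed the time to compute them:

```python
# Back-of-envelope: gradient synchronization vs. compute per training step.
# Every number here is an illustrative assumption, not a vendor figure.

params = 1e9              # parameters in the shard being synchronized
bytes_per_param = 2       # fp16 gradients
flops_per_param = 6       # rough fwd+bwd FLOPs per parameter per token
tokens_per_step = 4096    # tokens processed per GPU per step

gpu_flops = 1e15          # ~1 PFLOP/s of usable compute (assumed)
link_bandwidth = 50e9     # 50 GB/s effective interconnect (assumed)

compute_time = params * flops_per_param * tokens_per_step / gpu_flops
comm_time = params * bytes_per_param / link_bandwidth

# With these assumptions, moving the gradients (~40 ms) takes longer
# than computing them (~25 ms): the link, not the GPU, sets the pace.
```

Under these assumed numbers the interconnect is already the bottleneck, and doubling GPU speed would shave the smaller of the two terms. Only a faster link changes the step time materially.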

Apple's situation reveals a different facet of the same problem. The company holds roughly $160 billion in cash. It designs its own silicon and runs one of the world's largest cloud services in iCloud. Yet for AI inference at the scale Siri requires, it turned to a direct competitor. Building AI-grade server infrastructure from scratch takes years, not quarters.

Open-source releases and shared training techniques have commoditized the model layer. Physical infrastructure has not followed. Nvidia is locking in the optical supply chain, while Apple rents compute from a rival rather than building its own. Two years ago, neither move was on the table.

- Optical interconnect capacity may constrain AI scaling more than chip supply within two years
- Apple's dependence on Google servers hands a rival leverage over its core product roadmap
- infrastructure ownership, not model quality, is becoming the durable competitive moat

04

Anthropic's Claude Hit by Widespread Service Outage

Thousands of users reported problems accessing Claude on Monday morning. Anthropic acknowledged the disruptions but has not disclosed a root cause. techcrunch.com

05

14.ai Sells AI Agents That Replace Startup Customer Support Teams

Married co-founders built 14.ai to automate full customer support workflows at startups. The company also launched a consumer brand to measure how much of the support workload AI can realistically handle. techcrunch.com

06

Lenovo Shows AI Desktop Companion Concepts at MWC

Lenovo revealed two standalone desk devices at MWC: an always-on "AI Workmate" and a robot arm with expressive eyes. Both target office workers as productivity assistants. Neither has a ship date. theverge.com

07

CUDA Agent Applies Reinforcement Learning to GPU Kernel Optimization

A new paper introduces CUDA Agent, a system that uses large-scale agentic RL to generate high-performance CUDA kernels. Current LLM-based approaches to CUDA code generation still underperform compiler tools like torch.compile. huggingface.co

08

Memento Proposes Embedding AI Coding Sessions Into Git Commits

An open-source project called Memento captures the full AI interaction transcript and attaches it to the corresponding commit. The goal: let future developers audit how and why AI-generated code was written. github.com
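The blurb doesn't say how Memento binds a transcript to a commit. One standard Git mechanism for attaching metadata to an existing commit without rewriting history is git notes; a minimal sketch under that assumption (the function name and notes ref are hypothetical, not Memento's actual API):

```python
import subprocess

def attach_transcript(commit_sha: str, transcript: str) -> None:
    """Attach an AI session transcript to a commit as a git note.

    Illustrative only: Memento's real storage mechanism isn't described
    in the blurb. Git notes live under refs/notes/* and reference a
    commit by SHA, so the commit history itself is untouched.
    """
    subprocess.run(
        ["git", "notes", "--ref", "ai-sessions", "add", "-f",
         "-m", transcript, commit_sha],
        check=True,
    )
```

A reviewer could then read the transcript later with `git notes --ref ai-sessions show <sha>`, which is the auditability property the project describes.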

09

CiteAudit Benchmark Targets Hallucinated Scientific References

Researchers released CiteAudit, a benchmark for verifying whether citations in LLM-generated text point to real publications. Fabricated references have already appeared in submissions and accepted papers at major ML conferences. huggingface.co

10

New Training Method Extends Video Generation From Seconds to Minutes

A paper proposes decoupling local visual fidelity from long-term coherence using a Decoupled Diffusion Transformer. Separate training heads handle short-clip quality and long-sequence consistency, sidestepping the scarcity of high-quality long-form video data. huggingface.co

11

dLLM Provides Unified Open-Source Framework for Diffusion Language Models

Researchers released dLLM, a standardized framework for building diffusion-based language models. The project consolidates components scattered across ad-hoc research codebases into one reproducible library. huggingface.co

12

LK Losses Directly Optimize Acceptance Rates in Speculative Decoding

A new training objective called LK Losses optimizes the token acceptance rate in speculative decoding instead of using KL divergence as a proxy. Standard KL training leaves performance on the table when draft models have limited capacity. huggingface.co
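For context, the acceptance rate here is the standard speculative-decoding quantity: a token x drafted from the draft distribution q is accepted with probability min(1, p(x)/q(x)) under the target distribution p, so the expected acceptance rate is the overlap Σₓ min(p(x), q(x)). A minimal sketch of that rule (the LK Losses objective itself is not detailed in the blurb):

```python
def acceptance_rate(p_target, q_draft):
    """Expected acceptance rate of standard speculative decoding.

    A token x sampled from the draft distribution q is accepted with
    probability min(1, p(x)/q(x)), so the expectation over x is
    sum_x q(x) * min(1, p(x)/q(x)) = sum_x min(p(x), q(x)).
    """
    return sum(min(p, q) for p, q in zip(p_target, q_draft))

# A draft that matches the target exactly accepts every token.
perfect = acceptance_rate([0.5, 0.3, 0.2], [0.5, 0.3, 0.2])      # 1.0
# Distribution mismatch lowers the rate, wasting drafted tokens.
mismatched = acceptance_rate([0.7, 0.2, 0.1], [0.2, 0.2, 0.6])   # 0.5
```

Minimizing KL(q‖p) only indirectly raises this overlap, which is why an objective that targets Σ min(p, q) itself can allocate a capacity-limited draft model's probability mass more profitably.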