Laid-Off Screenwriters Now Grade the AI That Replaced Them as Google's Bug-Hunter Finds Its First Zero-Day

01

The screenwriters Hollywood stopped hiring are now grading the AI that replaced them

A Wired essay this week opens with a working screenwriter's tally: 20 AI training contracts in eight months, across five platforms. The writer, who used to make television, calls the contracts "soul-crushing" and frames the work as "the new waiting tables" not just for screenwriters but for "job seekers all over."

The arrangement is secret by design. The piece is titled "Everyone Who Used to Make TV Is Now Secretly Training AI," and the writer says the descent is general across their Hollywood circle. The labor pipeline that scores model output for the major labs runs on per-task pay and anonymous credits. Former producers grade prose for the systems that studios will eventually buy as a cheaper substitute for producers.

That essay landed the same week a commencement speaker at the University of Central Florida told graduating humanities students that AI was the "next Industrial Revolution." The students booed her. According to 404 Media's account of the ceremony, one graduating senior yelled "AI SUCKS!" loud enough to carry across the auditorium. She kept reading from her notes. The booing did not stop.

Days earlier, 404 Media ran a separate piece titled "Your AI Use Is Breaking My Brain." The complaint was not that model-generated prose is incompetent. It was that the prose is now unavoidable. It sounds the same wherever it appears, and constant exposure is wearing down the author's tolerance for ordinary human writing. The author writes for a living and reads for a living.

Three vantage points in roughly eight days. Producers grading what they used to write. Graduates refusing to applaud the framing of inevitability. Readers describing model output as a cognitive load they no longer want to carry.

The Wired writer does not estimate how long the supply of qualified graders lasts. A twentieth contract in eight months suggests the pool has not thinned. The UCF audience suggests the next cohort is arriving with a different posture toward the same pitch.

- Top screenwriting talent now servicing AI labs as per-task contractors instead of studios
- Humanities graduates publicly rejecting the "next Industrial Revolution" framing on stage
- AI prose homogenization being named as a cognitive cost by professional readers

02

Google caught the first AI-developed zero-day. OpenAI launched Daybreak to hunt vulnerabilities the same week.

Google's Threat Intelligence Group says it has spotted and stopped the first zero-day exploit developed with AI. According to GTIG, the vulnerability targeted two-factor authentication on an unnamed service. "Prominent cyber crime threat actors" had staged it for a "mass exploitation event" before the bug was patched.

Days later, OpenAI announced Daybreak, an initiative built around its Codex Security agent. The pitch is the defensive mirror of GTIG's report: Daybreak ingests an organization's source code, builds a threat model, and ranks attack paths, then validates suspected vulnerabilities and automates detection of the high-priority ones.

Until now, AI-assisted vulnerability research mostly produced demos and proof-of-concept writeups at security conferences. GTIG's report moves the category into operational use. The attackers did not just generate exploit code in a lab. They reached the point of mass deployment, with a specific target, a working 2FA bypass, and a plan to scale it across one service's user base.

The Daybreak announcement lands on the opposite premise. OpenAI is selling the idea that the same capability — code analysis, attack-path reasoning, exploit synthesis — can be turned outward as scanning over a codebase. The Codex Security agent it relies on shipped in March. Daybreak is the productized wrapper.

Both releases share an unstated assumption. A human reviewer is no longer the bottleneck on either side of the vulnerability lifecycle. GTIG's adversary scaled exploit development past what a small crew could write by hand. OpenAI is pitching scaled detection no internal security team could match in headcount.

Security teams that previously discounted AI-generated exploits as unreliable now have a documented case of one reaching the staging phase of a mass campaign. The buy decision for AI-powered scanners no longer competes only with traditional SAST tools but with the cost of being the next target. GTIG did not name the service whose 2FA was targeted.

- Threat models can no longer assume AI-written exploits are non-functional
- Defensive AI agents are now a product line, not research demos
- Security tooling spend now competes with breach risk, not just SAST budgets

03

30 million gallons went unbilled while $275 million flowed into orbital data center rockets

Two numbers landed this week that frame how far AI compute has pushed past conventional infrastructure. A data center pulled roughly 30 million gallons of water from a local supply without being detected or billed for months, according to a report from Ars Technica. Cowboy Space closed a $275 million round to build rockets, citing a shortage of global launch capacity for orbital data centers, TechCrunch reported.

The water case points to a regulatory failure rather than a metering one. Ars Technica reported that the facility ran for months before officials caught the missing volume. Initial nonpayment was not flagged by routine utility checks. AI-driven cooling loads have grown faster than the municipal frameworks built to track them, leaving counties without a clear baseline for what normal looks like.

Cowboy Space frames the supply problem from the opposite end. The company says current launch cadence cannot deliver the mass needed to put compute clusters in orbit at a price operators will pay, according to TechCrunch. Its pitch is that rockets, not GPUs, are the binding constraint for space-based data centers. Investors funded that premise to the tune of $275 million.

Read together, the two stories describe one trend. Compute demand has outstripped the systems that meter, allocate, and deliver physical inputs. On the ground, that shows up as silent withdrawals from aquifers and grids. In orbit, it shows up as a startup raising nine figures to manufacture launch vehicles because the existing market cannot serve the workload at any price.

The compute supply chain has been cracking in different places for a year. Chip suppliers have rationed allocations to lock in capacity. Launch providers and parts vendors have moved upstream into their own supply. Hyperscalers are siting facilities where water rights are loose. Operators are routing around constraints faster than regulators or vendors can respond.

The next signal worth watching: whether the utility involved seeks retroactive payment, and whether Cowboy Space books a launch contract before its first vehicle flies.

- County water regulators have no AI-specific monitoring baseline
- Launch capacity is now priced as a compute input, not aerospace
- Expect retroactive utility billing disputes at large data center sites
04

OpenAI spun off DeployCo to handle enterprise deployment

OpenAI announced DeployCo, a separate unit that takes business customers from API access to production systems. The arm targets workflow design, governance, and integration work that consultants currently bill for. openai.com

05

Mira Murati's Thinking Machines reveals "interaction models"

Thinking Machines said its first research direction is interaction models — systems that continuously take in audio and video to collaborate with users in real time. The startup had stayed quiet since its 2025 launch. theverge.com

06

Google shipped AI-powered Finance to Europe

Google launched its rebuilt Finance product across European markets with full local-language support. The tool answers natural-language questions about stocks, exchange rates, and company data. blog.google

07

Digg relaunched as an AI news aggregator

The revived Digg told beta testers the new site will track influential voices in each topic and surface stories worth attention. The team is positioning it against algorithmic feeds. techcrunch.com

08

Wired argues Nvidia's CUDA stack is the real lock-in

Wired's piece details how CUDA's developer mindshare creates a deeper barrier than Nvidia's chip lead. AMD and Intel keep pushing alternative programming layers without dislodging the software dependency. wired.com

09

OpenAI says users over 35 drove Q1 ChatGPT growth

OpenAI reported its over-35 cohort grew fastest in Q1 2026, with gender usage now more balanced than in any prior quarter. The data ships in a new quarterly signals release. openai.com

10

A Nobel economist listed three AI trends to watch

MIT Tech Review's interview with Daron Acemoglu covers three areas the 2024 Nobel laureate is tracking. His pre-Nobel paper questioned AI productivity claims and drew pushback from Silicon Valley. technologyreview.com

11

MIT Tech Review tracks AI's bottom-up spread inside finance teams

The piece describes finance staff deploying AI tools before CFOs put governance in place, inverting the usual rollout pattern. Compliance teams are catching up to existing usage. technologyreview.com

12

OpenAI launched a Campus Network for student clubs

OpenAI opened a signup form for a global student club program offering AI tool access, event support, and community resources. The form goes to club leaders across universities. openai.com

13

HyperEyes proposes parallel multimodal search agents

Researchers introduced HyperEyes, an agent that dispatches multiple grounded visual queries per round instead of issuing them one at a time. The design targets retrieval-heavy tasks where sub-queries are independent. huggingface.co
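The core idea — firing independent sub-queries concurrently so a round's latency is bounded by the slowest query rather than the sum — can be sketched generically. This is not code from the HyperEyes paper; the function names and queries below are hypothetical stand-ins for whatever visual backend the agent calls.

```python
import asyncio

async def run_visual_query(query: str) -> str:
    # Hypothetical stand-in for one grounded visual query
    # against an image or retrieval backend.
    await asyncio.sleep(0.01)  # simulate backend I/O latency
    return f"result for {query!r}"

async def dispatch_round(queries: list[str]) -> list[str]:
    # Independent sub-queries are launched concurrently, not sequentially,
    # so the round finishes when the slowest single query finishes.
    return await asyncio.gather(*(run_visual_query(q) for q in queries))

results = asyncio.run(
    dispatch_round(["logo on the sign", "color of the car"])
)
```

Results come back in the same order as the input queries, which keeps downstream aggregation simple even though execution order is nondeterministic.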