Elon Musk’s so-called Department of Government Efficiency (DOGE) operates on a core underlying assumption: The United States should be run like a startup. So far, that has mostly meant chaotic firings and an eagerness to steamroll regulations. But no pitch deck in 2025 is complete without an overdose of artificial intelligence, and DOGE is no different.
AI itself doesn’t reflexively deserve pitchforks. It has genuine uses and can create genuine efficiencies. It isn’t inherently untoward to introduce AI into a workflow, especially if you’re aware of and able to manage around its limitations. It’s not clear, though, that DOGE has embraced any of that nuance. If you have a hammer, everything looks like a nail; if you have the most access to the most sensitive data in the country, everything looks like an input.
Wherever DOGE has gone, AI has been in tow. Given the opacity of the organization, a lot remains unknown about how exactly it’s being used and where. But two revelations this week show just how extensive, and potentially misguided, DOGE’s AI aspirations are.
At the Department of Housing and Urban Development, a college undergrad has been tasked with using AI to find where HUD regulations may go beyond the strictest interpretation of the underlying laws. (Agencies have traditionally had broad interpretive authority when legislation is vague, although the Supreme Court recently shifted that power to the judicial branch.) This is a task that actually makes some sense for AI, which can synthesize information from large documents far faster than a human could. There’s some risk of hallucination, more specifically of the model spitting out citations that do not in fact exist, but a human should have to approve these recommendations regardless. This is, on one level, what generative AI is actually pretty good at right now: doing tedious work in a systematic way.
There’s something pernicious, though, in asking an AI model to help dismantle the administrative state. (Beyond the fact of it; your mileage will vary there depending on whether you think low-income housing is a societal good or you’re more of a Not in Any Backyard type.) AI doesn’t actually “know” anything about regulations or whether or not they comport with the strictest possible reading of statutes, something that even highly experienced lawyers will disagree on. It needs to be fed a prompt detailing what to look for, which means you can not only work the refs but write the rulebook for them. It is also exceptionally eager to please, to the point that it will confidently make things up rather than decline to answer.
If nothing else, it’s the shortest path to a maximalist gutting of a major agency’s authority, with the chance of scattered bullshit thrown in for good measure.
At least it’s an understandable use case. The same can’t be said for another AI effort associated with DOGE. As WIRED reported Friday, an early DOGE recruiter is once again looking for engineers, this time to “design benchmarks and deploy AI agents across live workflows in federal agencies.” His intention is to eliminate tens of thousands of government positions, replacing them with agentic AI and “freeing up” workers for ostensibly “higher impact” tasks.
