According to Fast Company, Sam Corcos, the Chief Information Officer at the Treasury Department, has approved spending at least $1.5 million to acquire up to 3,000 licenses for OpenAI’s ChatGPT. Federal spending records show the agency has obligated funds up to that $1.5 million figure and has already paid out more than $500,000. A user agreement reveals employees are allowed to use ChatGPT for “authorized” mission purposes, which can include working with “controlled unclassified information.” The rules strictly forbid using the AI with personally identifiable information, market-sensitive economic data, or federal tax data. Employees must have a human review any AI output and cannot hide the AI’s role in their work. Violating these terms could result in termination.
The Guardrails Are Everything
Here’s the thing: the dollar amount and license count are eye-catching, but the real story is buried in that user agreement. The Treasury isn’t just throwing ChatGPT at every problem. They’re building a very specific, fenced-in playpen for it. Banning its use on tax data or personally identifiable info? That’s a massive carve-out that shows they understand the risks. The mandate for human review and transparency about AI’s role is basic AI hygiene, but making it a fireable offense gives those rules real teeth.
It’s a fascinating glimpse into how a massive, sensitive bureaucracy tries to adopt cutting-edge tech. The Treasury is under pressure to modernize and “increase efficiency,” but it can’t afford a leak or a hallucination-induced policy disaster. So it’s moving forward, but with more red tape than a… well, a government agency. The question is, will these guardrails be so restrictive that they choke off any real utility?
A Template for the Whole Government?
This feels like a pilot program for the entire federal government. If Treasury can make this work—using AI for drafting, analysis, or research on semi-sensitive materials without a major incident—you can bet every other department will follow suit. Sam Corcos, with his startup and DOGE (Department of Government Efficiency) background, is exactly the kind of person you’d want running this experiment. He gets both the tech potential and the bureaucratic nightmares.
But let’s be skeptical for a second. We’re talking about a tool known for “confidently” making things up. Can you truly rely on it for any official analysis, even with a human in the loop? The spending records show the money is flowing, but the real cost might be in the time it takes to fact-check and verify every single output. The efficiency gains might be smaller than everyone hopes. Still, this is a huge, concrete step. The federal government is officially, and expensively, in the generative AI business. Buckle up.
