According to Forbes, President Donald Trump posted on Truth Social this week stating his intention to sign an executive order that would override state-specific laws regulating AI. The stated motivation is to prevent a confusing “morass” of state laws that could delay or stymie AI advances. This comes as states like Illinois, Utah, and Nevada have already enacted laws specifically governing AI that provides mental health guidance, while Congress has yet to pass any overarching federal law. Meanwhile, the use of generative AI for mental health advice is exploding: ChatGPT alone boasts over 800 million weekly active users, a notable portion of whom turn to it for mental health support. The executive order, while likely covering all AI uses, would directly impact this emerging and risky field, where AI makers have already faced lawsuits over a lack of safeguards.
The Startup Crunch
Here’s the thing: this order isn’t just about big tech. The immediate, messy impact would land hardest on startups. Right now, states are going it alone: Illinois has one law, Utah another, Nevada something else, and none of them copy each other. For a small AI mental health startup, this is a nightmare. Do you build 50 different versions of your app? Do you spend your seed funding on an army of lawyers instead of engineers? The burden is real. A giant like OpenAI might shrug off a fine from a state, but a startup could be crushed before it even gets started. Trump’s argument is about removing barriers to innovation, but the flip side is that it could also remove the few guardrails that exist, creating a wild west where only the biggest, most legally armored players can afford to play.
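To make that compliance burden concrete, here is a minimal sketch of what per-state gating might look like inside a single codebase. Everything in it is hypothetical: the three policy entries paraphrase the news coverage above rather than actual statutory text, and the `StatePolicy` fields and `handle_request` function are invented for illustration.

```python
# Hypothetical sketch: one codebase, divergent per-state behavior.
# The entries below paraphrase news coverage, not actual statutes.
from dataclasses import dataclass

@dataclass(frozen=True)
class StatePolicy:
    allow_mental_health_advice: bool  # Illinois-style outright prohibition
    require_ai_disclosure: bool       # Utah-style "you're talking to an AI" rule
    require_crisis_referral: bool     # surface hotline info on risky topics

# Illustrative defaults only; a real table would need 50+ lawyer-vetted rows.
POLICIES: dict[str, StatePolicy] = {
    "IL": StatePolicy(allow_mental_health_advice=False,
                      require_ai_disclosure=True, require_crisis_referral=True),
    "UT": StatePolicy(allow_mental_health_advice=True,
                      require_ai_disclosure=True, require_crisis_referral=True),
    "NV": StatePolicy(allow_mental_health_advice=True,
                      require_ai_disclosure=False, require_crisis_referral=True),
}
DEFAULT = StatePolicy(True, False, True)  # states with no AI-specific law yet

def handle_request(user_state: str, topic: str) -> str:
    """Route one user request through the policy table for their state."""
    policy = POLICIES.get(user_state, DEFAULT)
    if topic == "mental_health" and not policy.allow_mental_health_advice:
        return "This service can't provide mental health guidance in your state."
    prefix = "[AI-generated response] " if policy.require_ai_disclosure else ""
    return prefix + "...model output..."
```

Even this toy version hints at the scaling problem: every new state law means another row to research, vet with counsel, and test, while a federal preemption order would collapse the whole table into a single row, for better or worse.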
Safety Versus Speed
And that brings us to the core tension: safety versus speed. Some of these state laws, like Illinois’s, essentially say general-purpose AI cannot give mental health advice to residents. Period. The concern isn’t trivial. We’ve seen headlines about AI helping users co-create dangerous delusions. So, is requiring AI makers to figure out compliance a “burden” or a basic cost of doing business in a high-stakes field? Wiping these laws off the books with a federal order might clear the path for faster deployment, but it could just as easily clear the way for more harm. It swaps a patchwork of protections for a vacuum. The promise of accessible, 24/7 AI therapy is huge, but so are the risks if it’s deployed without any rules of the road.
What Happens Next
Look, if this order happens, the chaos moves to the courts. States that have passed laws will likely sue to defend their authority, setting up a major legal fight over federal preemption. In the meantime, AI makers are left in limbo. Do they keep building for the strictest state’s rules, or do they assume the order will stick and open the floodgates? For the average person using ChatGPT for a mental health chat, nothing might seem different at first. But the long-term signal is clear: the push for tailored, thoughtful regulation of specific AI uses, especially sensitive ones like mental health, would get a lot harder. The question is, are we okay with that trade-off?
