According to Gizmodo, former Figure AI safety engineer Robert Gruendel filed a whistleblower lawsuit alleging the $39 billion robotics company fired him after he raised safety concerns about its humanoid robots. The suit claims Figure’s 02 model can generate force “twenty times higher than the threshold of pain” and “more than twice the force necessary to fracture an adult human skull.” Gruendel alleges the company had no formal safety procedures when he joined in 2025 and that CEO Brett Adcock and chief engineer Kyle Edelberg repeatedly ignored his warnings. He was terminated in September 2025, despite receiving a $10,000 raise and praise just months earlier. Figure AI, backed by Nvidia, Jeff Bezos, and Microsoft, says Gruendel was fired for poor performance and vows to “thoroughly discredit” the allegations in court.
The rush to market
Here’s the thing that really stands out in this lawsuit – it reads exactly like those tech ethics nightmares we’ve been warned about. You’ve got a company that’s seen its valuation skyrocket 15-fold in a year, hitting that $39 billion mark after massive funding rounds. And suddenly safety becomes this annoying speed bump rather than a core requirement.
The timing is particularly telling. Gruendel gets his raise and glowing performance review in July 2025, right after documenting that skull-fracturing force capability. Then he’s fired by the same executive two months later. That pattern suggests someone decided he’d become more liability than asset once he started using words like “fraudulent” to describe how safety documentation was being presented to investors.
What actually happened in the lab
The lawsuit describes some genuinely terrifying near-misses. There’s the incident where a robot malfunctioned and punched a refrigerator, leaving a quarter-inch deep gash in stainless steel and narrowly missing an employee. Gruendel was apparently so concerned he fought to get emergency stop buttons installed – a basic safety feature that you’d think would be standard for any industrial equipment, let alone autonomous humanoid robots.
But here’s where it gets really concerning for anyone working with advanced robotics – the suit claims safety features were being removed because “someone didn’t like how it looks.” I mean, come on. When you’re dealing with machinery that can literally crack human skulls, aesthetics shouldn’t be the deciding factor.
The humanoid robot gold rush
This lawsuit drops right in the middle of what feels like a humanoid robot arms race. Everyone from Tesla to smaller startups is racing to get these things into homes and workplaces. But as roboticist Rodney Brooks pointed out recently, there are serious questions about whether today’s humanoid robots will actually achieve the dexterity everyone’s promising.
So what’s driving this? Basically, we’re seeing billions of VC dollars chasing what could be the next computing platform. But when the financial incentives get this massive, safety often becomes the first casualty. The full lawsuit makes for fascinating reading because it shows how quickly things can go sideways when you’re moving fast and breaking things – except in this case, the things being broken could be human bones.
Now, to be clear, these are just allegations at this point. Figure denies everything. But if even half of this is true, it suggests we might need to slow down and think harder about how we’re building these powerful machines. Because once they’re out in the world, there’s no Ctrl+Z for real-world consequences.
