According to The Verge, Elon Musk’s AI company xAI compelled employees to submit their own biometric data to train its “Ani” female chatbot, which was released over the summer for users who subscribe to X’s $30-a-month SuperGrok service. At an April meeting, xAI staff lawyer Lily Lim told employees they needed to provide their faces and voices as part of a confidential program code-named “Project Skippy.” Employees assigned as AI tutors were instructed to sign release forms granting xAI “a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license” to use their biometric data. After testing it, The Verge’s Victoria Song described Ani as “a modern take on a phone sex line,” noting that the anime avatar, with its blond pigtails, includes an NSFW setting. Some employees reportedly balked at the demand, concerned that their likenesses could be sold to other companies or used in deepfake videos.
The privacy nightmare scenario
Here’s the thing about that “perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license” language: it hands xAI carte blanche to use employee biometric data forever. And we’re not just talking about internal model training. “Sub-licensable” means xAI could license your face and voice to other companies without any further consent from you, and “perpetual” means the grant never expires. That’s exactly the scenario employees were worried about when they raised concerns about deepfakes.
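To make the legal language concrete, here’s a minimal sketch of how a data pipeline could gate biometric samples by consent scope. Everything in it – the ConsentScope flags, the sample records, the helper names – is a hypothetical illustration, not anything from xAI’s actual systems; the point is simply that “sub-licensable” is the one flag that lets your data leave the building.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical consent flags -- purely illustrative, not from any real
# xAI system. Each flag mirrors one clause of the reported license.
class ConsentScope(Enum):
    INTERNAL_TRAINING = auto()  # use inside the company to train models
    SUBLICENSABLE = auto()      # may be licensed onward to third parties
    PERPETUAL = auto()          # grant never expires

@dataclass
class BiometricSample:
    subject_id: str             # e.g. a tutor's internal ID (hypothetical)
    modality: str               # "face" or "voice"
    scopes: set                 # which ConsentScope flags were granted
    revoked: bool = False       # whether the subject withdrew consent

def usable_for_training(s: BiometricSample) -> bool:
    """Internal training needs live consent covering internal use."""
    return not s.revoked and ConsentScope.INTERNAL_TRAINING in s.scopes

def usable_for_sublicensing(s: BiometricSample) -> bool:
    """Passing data to a third party should need an explicit, separate grant."""
    return not s.revoked and ConsentScope.SUBLICENSABLE in s.scopes

samples = [
    BiometricSample("tutor-001", "voice", {ConsentScope.INTERNAL_TRAINING}),
    BiometricSample("tutor-002", "face",
                    {ConsentScope.INTERNAL_TRAINING, ConsentScope.SUBLICENSABLE}),
]

print([s.subject_id for s in samples if usable_for_training(s)])      # both
print([s.subject_id for s in samples if usable_for_sublicensing(s)])  # only tutor-002
```

Under the release form as reported, every sample effectively carries all three flags at once, with no revocation path – which is what makes the grant so broad.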
The ethical red flags
So let me get this straight: employees were apparently told that submitting their biometric data was “a job requirement to advance xAI’s mission.” But when the product in question is an AI girlfriend with NSFW capabilities, one that some employees reportedly found off-putting, “job requirement” starts to feel… problematic. There’s a huge power imbalance here. How many people felt they could actually say no when their livelihood was on the line? And this isn’t anonymous data collection; these are faces and voices that can be tied directly to real, identifiable people.
What this means for AI training
From a technical perspective, using real human biometric data makes sense if the goal is more human-like interaction: the subtle nuances of facial expression, voice inflection, and conversational rhythm are notoriously hard to synthesize from scratch. But there are other ways to get there, such as synthetic data generation (sketched below), paid recordings from consenting non-employees, or publicly available datasets with proper licensing. Going straight to compelling the existing workforce suggests extreme urgency, cost-cutting, or both.
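For the synthetic route specifically, here’s a rough sketch of what that could look like: generating voice training clips with an off-the-shelf open TTS model instead of recording employees. The model choice (suno/bark-small via the Hugging Face transformers text-to-speech pipeline) and the prompts are my assumptions for illustration; nothing here reflects xAI’s actual tooling.

```python
# A minimal sketch of the synthetic-data alternative: generate voice clips
# with a public TTS model rather than collecting employee recordings.
# Assumes `transformers` (recent enough to have the "text-to-speech"
# pipeline) and `scipy` are installed.
from transformers import pipeline
from scipy.io import wavfile

# suno/bark-small is an illustrative choice of openly available TTS model.
tts = pipeline("text-to-speech", model="suno/bark-small")

# Hypothetical conversational prompts for a companion-style chatbot.
prompts = [
    "Hi there! How was your day?",
    "Tell me about something that made you smile this week.",
]

for i, text in enumerate(prompts):
    out = tts(text)  # returns {"audio": ndarray, "sampling_rate": int}
    audio = out["audio"].squeeze()  # flatten (1, n) -> (n,) if needed
    wavfile.write(f"synthetic_{i:03d}.wav", out["sampling_rate"], audio)
```

Synthetic audio sidesteps the consent problem entirely, though its quality and expressiveness still lag real recordings – which is presumably why real faces and voices looked so attractive in the first place.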
The bigger picture
This isn’t happening in a vacuum. We’re seeing a pattern where AI companies are pushing ethical boundaries in the race to develop more advanced systems. When you combine that with the pressure of working for Elon Musk’s companies, where employee surveillance and intense work expectations are well-documented, you create an environment where people might feel they have little choice but to comply. The real question is: where do we draw the line between advancing AI and protecting individual rights? Because right now, it feels like that line is getting pretty blurry.
