The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"A Woman Made Her AI Voice Clone Say 'Arse.' Then She Got Banned"
From the MIT Technology Review (Jessica Hamzelou):
Both Joyce Esser, who lives in the UK, and Jules Rodriguez, who lives in Miami, Florida, have forms of motor neuron disease—a class of progressive disorders that result in the gradual loss of the ability to move and control muscles….
AI is bringing back those lost voices. Both Jules and Joyce have fed an AI tool built by ElevenLabs recordings of their old voices to re-create them. Today, they can "speak" in their old voices by typing sentences into devices, selecting letters by hand or eye gaze…. It's been a remarkable and extremely emotional experience for them—both thought they'd lost their voices for good….
[But at one point,] Joyce typed a message for her voice clone to read out: "Come on, Hunnie, get your arse in gear!!" She then added: "I'd better get my knickers on too!!!"
"The next day I got a warning from ElevenLabs that I was using inappropriate language and not to do it again!!!" Joyce told me via email (we communicated with a combination of email, speech, text-to-voice tools, and a writing board)…. "… [B]ecause the next day a human banned me!!!!" …
Joyce contacted ElevenLabs, who apologized and reinstated her account. But it's still not clear why she was banned in the first place….
Thanks to Jordan Brown for the pointer.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
That's what happens when someone else owns the software.
That was my immediate reaction, too. Ownership matters. Only being allowed to rent stuff is a recipe for ... well, lots of bad stuff.
I used elevenlabs for fiction, with copious cussing, and I have never run into this problem.
I wonder whether this is a UK user-specific issue.
Might be. It IS the UK, after all, they're not big on freedom of speech.
Probably this. Without a First Amendment, what was once merely heavy-handed pressure on corporations and people to speak only in ways the powerful want quickly becomes outright prohibition.
This should be a warning, given the terrible status of freedom around half the world, and throughout history, but, get this, it is touted as a wonderful feature!
Democracy can safely wield the powers of tyrants! Any demagogue skilled in blowing the winds of passion can institute bans by getting elected. That'll keep freedom safe!
"No, don't exaggerate! Good hearted people will only elect...
Only elect...
Nevermind."
I've tried using AI for writing fiction; the tools I've used have some definite limits when it comes to expressing sexually themed material. I haven't tested them as much with merely vulgar language.
ElevenLabs is not writing software. You paste in written text, and it reads it aloud in a voice. It's sophisticated text-to-speech.
Are we sure the quoted input is what got her banned?
Mildly amusing, but maybe more important cases going on for speech expert legal profs to be focusing on?
https://news.bloomberglaw.com/us-law-week/republican-moves-to-impeach-judge-who-ruled-against-trump-order
Upwards of 99% of readers are more likely to be affected by this kind of safetyism-oriented censorship than by expecting judges to limit their orders to what the Constitution allows.
Joyce lives in the UK. But if she lived in the US, would the ADA give her a cause of action?
ADA is just DEI for cripples and dwarfs
Didn't answer the question. Fight fire with fire.
Just another example showing that AI is an oxymoron. It doesn't do anything better than humans can; it just does it faster. That includes making errors that humans have to correct; in this case the error was harmful and embarrassing. Perhaps this one wouldn't have flagged the word "ass," of which "arse" is simply the British colloquial form. Perhaps this one is also programmed to flag the word "bloody" when used as an expletive, which in Britain is all the time.
Humans make mistakes also.