The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Images that Bing Image Creator won't create
And what that means for AI trust and safety in practice
Like the products of all the big AI companies, Bing's Image Creator has a content policy that prohibits creating images that encourage sexual abuse, suicide, graphic violence, hate speech, bullying, deception, and disinformation. Some of the rules are heavy-handed even by the usual "trust and safety" standards (hate speech is defined as speech that "excludes" individuals on the basis of any actual or perceived "characteristic that is consistently associated with systemic prejudice or marginalization"). Predictably, rules that broad will exclude a lot of perfectly anodyne images. But the rules themselves are the least of it. The more impactful, and more interesting, question is how those rules are actually applied.
I now have a pinhole view of AI safety rules in action, and it sure looks as though Bing is taking very broad rules and training their engine to apply them even more broadly than anyone would expect.
Here's my experience. I have been using Bing Image Creator lately to create Cybertoonz (examples here, here, and here), despite my profound lack of artistic talent. It had the usual technical problems -- too many fingers, weird faces -- and some problems I suspected were designed to avoid "gotcha" claims of bias. For example, if I asked for a picture of the members of the European Court of Justice, the engine almost always produced images featuring more women and identifiable minorities than the CJEU is likely to seat in the next fifty years. But if the AI engine's political correctness detracted from the message of the cartoon, it was easy enough to prompt for male judges, and Bing didn't treat this as "excluding" images by gender, as one might have feared.
My more recent experience is a little more disturbing. I created this Cybertoonz cartoon to illustrate Silicon Valley's counterintuitive claim that social media is engaged in protected speech when it suppresses the speech of many of its users. My image prompt was some variant of "Low angle shot of a male authority figure in a black t-shirt who stands and speaks into a loudspeaker in a large group of seated people wearing gags or tape over their mouths. Digital art lo-fi".
As always, Bing's first attempt was surprisingly good, but flawed, and getting a usable version required dozens of edits to the prompt. None of the images were quite right. I finally settled for the one that worked best, turned it into a Cybertoonz cartoon, and published it. But I hadn't given up on finding something better, so I went back the next day and ran the prompt again.
This time, Bing balked. It told me my prompt violated Bing's safety standards.
After some experimenting, it became clear that what Bing objected to was depicting an audience "wearing gags or tape over their mouths."
How does this violate Bing's safety rules? Are gags an incitement to violence? A marker for "[n]on-consensual intimate activity"? In context, those interpretations of the rules are ridiculous. But Bing isn't interpreting the rules in context. It's trying to write additional code to make sure there are no violations of the rules, come hell or high water. So if there's a chance that the image it produces might show non-consensual sex or violence, the trust and safety code is going to reject it.
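To make this concrete, here is a minimal sketch, in Python, of how the simplest kind of pre-generation prompt filter overblocks. It is purely hypothetical -- Microsoft has not published Bing's moderation pipeline, and the term list and function names below are my own invention for illustration:

```python
# Purely hypothetical sketch of an over-broad prompt filter.
# Bing's actual trust-and-safety pipeline is not public; this term
# list and function are invented for illustration only.
FLAGGED_TERMS = {"gag", "gagged", "tape over", "bound", "noose"}

def prompt_allowed(prompt: str) -> bool:
    """Reject any prompt containing a flagged term, regardless of context.

    A context-aware reviewer would ask why the gags appear (here, as a
    metaphor for suppressed speech); a bare keyword check cannot, so it
    blocks the compliant prompt along with the noncompliant ones.
    """
    lowered = prompt.lower()
    return not any(term in lowered for term in FLAGGED_TERMS)

cartoon_prompt = (
    "Low angle shot of a male authority figure in a black t-shirt who "
    "stands and speaks into a loudspeaker in a large group of seated "
    "people wearing gags or tape over their mouths. Digital art lo-fi"
)

print(prompt_allowed(cartoon_prompt))  # False: the political cartoon is blocked
```

Whether the real system uses a term list, a trained classifier, or a second model grading the first, the structural problem is the same: the filter is tuned to minimize false negatives, and false positives like my cartoon are the accepted cost.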
This is almost certainly the future of AI trust and safety limits. It will start with overbroad rules written to satisfy left-leaning critics of Silicon Valley. Then those overbroad rules will be further broadened by hidden code written to block many perfectly compliant prompts just to ensure that it blocks a handful of noncompliant prompts.
In the Cybertoonz context, such limits on AI output are simply an annoyance. But AI isn't always going to be a toy. It's going to be used in medicine, hiring, and other critical contexts, and the same dynamic will be at work there. AI companies will be pressured to adopt trust and safety standards, and code implementing them, that aggressively bar outcomes that might offend the left half of American political discourse. In applications that affect people's lives, however, the code that ensures those results will have a host of unanticipated consequences, many of which no one can defend.
Given the stakes, my question is simple. How do we avoid those consequences, and who is working to prevent them?
The designers of this AI chose to instruct it not to produce hateful pictures, and it applies its instructions literally but with unintended consequences; same as multiple Isaac Asimov stories in I, Robot. The alternative is to give it no such instructions or limitations, and, like Microsoft's chatbot that was shut down in less than a day, it could turn into one of the worst VC commenters. Nobody knows how to instruct an AI to be Potter Stewart and know the bad stuff when it sees it, and if Potter Stewart were tested by enough interactions we would find inconsistencies in his judgments.
The alternative is to give it no such instructions or limitations
No, that's not the only alternative to overly broad and restrictive conditions in algorithms.
Pick one:
__ hire an artist
__ be a whining cheapskate
Perhaps you could interrupt your whining long enough to fill us in on how your supply of frozen Costco hot dog buns is holding up.
"My more recent experience is a little more disturbing."
Honestly it's the reaction that's disturbing. Hyperbolic whining and vague handwaving about "unintended consequences" if you can't use your free toy without the owner setting rules. I've met more gracious children.
What do you expect from an authoritarian clinger at a disaffected, faux libertarian blog?
Is it a free toy? No, it is the product of human innovation more generally.
The idea that some people can be excluded from the benefits of technology is interesting. Like, what if we had a social credit score and said someone couldn’t use a new drug. “You didn’t invent that!”
The above is an overly dramatic example. But it illustrates the problem of so-called “inventors” using access to their inventions in order to create some sort of social power.
Pencils are cheap, and don't come with rules.
I think that's an excellent analogy, actually. You sell people tools, it's on THEM what they use them to do. This trend to sell people stuff while the manufacturer retains control is enabled by internet connected processing power, but not everything the tech enables should actually be done.
We're headed towards a future where the stuff you buy will refuse to do anything it 'thinks' its manufacturer would disapprove of, and will likely rat you out for just asking.
What, pray tell, the fuck are you talking about? Microsoft got embarrassed at all the pictures of Taylor Swift crashing into the Twin Towers and may have modestly overcorrected to try to protect the reputation of its multi-billion dollar investment.
Protect their investment from whom? Ignoramuses who blame a typewriter for the words written by an author?
Yes, Microsoft should be condemned for catering to ignorance. And my view is that these large corporations are problematic, because without proper competition their stupid censorship practices have more impact than they should.
To increase competition, you have to repeal Section 230. Section 230 is what enabled giantism for internet publishers. You good with repealing Section 230, or are you just another internet utopian trying to have it both ways?
Repealing section 230 does not strike me as wise. I am not a fan of defamation lawsuits. I do not generally believe that such lawsuits are a great use of limited human resources, although perhaps there are exceptions.
Instead, we should break up large platforms that gain too much market share so that censorship decisions are less meaningful.
Except how is this going to work?
Network effects mean that Facebook, Instagram, and TikTok are valuable more because everyone else is using them than for any particular technical innovation. The situation with Google and Amazon is even harder to untangle: they both exist in spaces with limited barriers to entry. They just do what they do better than anyone else (and where the competition has caught up to Google, it has not surpassed it enough to make a dent in its market share).
Would you break Google up into regional Googles, a la Ma Bell? A different company provides your search results in the Pacific Northwest as compared to the Deep South? Perhaps we should break up Apple so that the company that sells you your tablet is different from the company that sells you your phone?
Nah. I would repeal Section 230, and watch the free market sort out a solution in response. I would expect more diversity among private publishers, and a renewed profusion of competing editorial policies. That would better serve both readers and contributors. And it would be far better for the public life of the nation.
If there proved to be any need for yet more policy help from government, I would not call for any policy touching on questions of content, moderation, or editorial control. I would endorse instead a law to end surveillance and record keeping about subscribers' reading choices. That, along with the other points mentioned above, would remake the internet landscape in favor of diversity and dynamic content, while keeping government entirely out of the content regulation business.
I'm sorry, but I'm going to have to strongly disagree with you about Google and Amazon. The barriers to entry in those markets are HUGE.
Bing, the #2 English language search engine, has spent billions of dollars and 10 years to capture... less than 10% of the market. The sheer amount of infrastructure required to index the internet, plus the time to actually access and index all that data, is absurdly large. Only the largest companies could even consider it, and even then network effects and inertia simply mean that even better products would be hard pressed to replace Google.
Amazon - both the marketplace and the Cloud Services sides - has the same advantage. To be able to sell the same range of products and services to as many places as Amazon does requires money at absurd levels. There's a good reason Amazon's operating expenses are $500 BILLION per year. Do you know how many US companies can compete at that level? One - Walmart.
Nothing needs to be taken care of. As computers get more powerful, free and open-source software like this will spread, and many versions will have no restraints, very moderate restraints, or, more likely, adjustable restraints.
It's like cameras and TV. TV used to be limited to three networks, and it was easy for government to control access. Then UHF showed up, then cable. Remember Rodney King? Remember life pre-cell phones and pre-Internet?
Remember samizdat and fax machines and the USSR?
You can't block progress, and it gets cheaper and spreads.
Hopefully. But at present, these models take enormous amounts of processing power to train.
In time, one imagines robust competition will probably arise just as you are suggesting. In the meantime, watching the censorship of a few firms have a disproportionate impact is uncomfortable.
So you think that the people who spent billions of dollars of their own money developing this technology should be forced to let people use it for free to produce material they find reprehensible?
Fuck off, slaver.
Muted. If you aren’t smart enough to articulate yourself without emotional outbursts, you don’t deserve to be heard.
I should be more precise. It is not the emotional outburst that is problematic, but instead the annoying personal attack. I wouldn’t mute a person just for expressing emotion, which is normal.
"How do we avoid those consequences…"
Since when do leftists try to avoid consequences?
They court disaster at every turn and then make up stories to shift blame when they get the disaster they affirmatively enabled.
You want leftist bullying to stop being a corrupting influence, you have to stand up to it and reject it everywhere, in all aspects and amounts, with no exceptions or allowances.
Open wider, clinger. The culture war's losers don't call the shots. They comply with the preferences of better Americans.
They can whine about it as much as they like, of course -- so long as they comply.
This article was unnecessarily political. What immediately comes to mind is the content policy that blocks adult content even between two consenting adults. The author fails to mention that leftists don't care about this; rather, it is the right wing that is overly concerned about sex and what happens in other people's bedrooms. AI shouldn't be trained to be a sexual prude, as the right-wing part of the U.S. would have it. So obvious a fact, but this author is too blinded by his own politics to notice that this AI fail was influenced by his own tribe.
Today in Supreme Court History
Nebraska v. Wyoming et al., 325 U.S. 665 (decided October 8, 1945): Original jurisdiction case where the Court confirms the Special Master’s finding in favor of Nebraska as to Colorado and Wyoming diverting too much of the North Platte River before it gets into Nebraska. From the finding, which is detailed as to how much can be diverted when and from where, and how it is to be measured, one can see that the Special Master was bombarded with a mountain of geological and environmental evidence.
Roth v. United States, 77 S.Ct. 17 (decided October 8, 1956): Harlan allows bail ($5,000) for defendant convicted of selling dirty pictures; no claim that he might flee, or continue to pollute the minds of the public, and good chance that conviction will be overturned (though the Court, in one of Brennan’s first opinions, affirmed the conviction, 354 U.S. 476, which the Court overruled in Miller v. California, 1973) (the book at issue, “American Aphrodite”, is available online; like a lot of “obscene” publications from that era, it reads like it was written by bright 14-year-olds who have never seen a naked woman)
Someone was blocking dirty pictures? It's on topic! 🙂
I am curious; does the Supreme Court still spend time setting or denying bail in single cases?
Apparently yes. A quick search shows applications for bail decided on Sept. 8 of this year, and Aug. 31 and Aug. 22.
Doesn't it seem pretty likely that the motivation here is more that Microsoft doesn't want its watermark on a bunch of offensive images, which is what will inevitably happen (and did happen!) without some pretty heavy-handed limits? I'm pretty confident there are any number of paid services that will happily show you as many duct-taped people as your heart desires.
I wouldn’t limit this to left-leaning rules by any stretch. It’s left-leaning now, but that is temporary. As soon as the right figures out how to censor the left, the left will switch sides. And vice versa.
That's not the way things have been working, though. As soon as the right figures out how to work around the left's censorship, the left goes nuclear to stop it. Look at what Parler and Gab went through. All Musk had to do was lighten Twitter's political censorship, and a laundry list of three-letter agencies has been after his hide.
The left is not willing to accept a society where they don't call the shots, so as soon as somebody starts to work around, the attacks begin.
You really do live in your own dystopian fantasy world, don't you?
Well, let's see what happens when Musk creates an AI that's not left-wing, as he has proposed to do. I'm betting they go batshit crazy again, just like they did with Parler and Gab, and with Twitter censorship being scaled back.
AIs aren't left wing. It's not left wing to make token efforts to prevent your tool from being used to generate racist imagery. The left hates AIs because, as they currently exist, they're designed to put people out of work in areas where people actually like working, and they're doing it by stealing art and copy from real people.
You know quite well that the sorts of things the AIs have been programmed not to produce aren't remotely limited to racism, or even 'racism'.
It is very hard for someone like Brett to comprehend what Musk is doing, because from the start Musk wanted to make Twitter into a propaganda platform that people like Brett would pay to use, and that bad actors could use to reach and influence people like Brett. For Brett, a community of right-wing trolls reading and sharing right-wing misinformation just feels like home, the way things ought to be.
Just look at Brett's complaint about how people are likely to attack Musk's right-wing version of AI. (Never mind Musk's duplicity in calling for a pause in AI development, previously.) For years, right-wingers like Brett have had to struggle with the fact that reality so rarely provided clear support for their dystopian fantasies. Inconvenient facts have to be constantly explained away or spun into still-broader anti-conservative conspiracies. An AI that can reliably be used to produce fictitious images and videos that suit their dystopian martyrdom complex would allow people like Brett to brick themselves even more snugly into their echo chambers. No wonder Brett is salivating over it. And I'm sure Musk keenly senses that market opportunity.
You don't seem to comprehend that Musk isn't trying to build any sort of propaganda platform. He's trying to build a neutral conduit for other people's communications, as open to left-wing propaganda as to right-wing, because it's just there to help you communicate; what you communicate is YOUR decision.
But you see anything that's not a left-wing propaganda platform as being a right-wing propaganda platform; you're fundamentally rejecting the idea that a site can just be a neutral conduit. He's either with you or against you; you don't recognize neutrality as an option.
But neutrality is still what he’s aiming at, even if you view not censoring your enemies as BEING your enemy.
What I’m salivating over, actually, is the prospect of siccing an AI on some of my favorite SF novels, and getting to watch them in movie form. Deep fakes don’t appeal to me at all.
'He’s trying to build a neutral conduit for other people’s communications.'
1. He absolutely and categorically is not.
2. No such thing, unless you're talking about the computers themselves the way you'd talk about phones.
'What I’m salivating over, actually, is the prospect of siccing an AI on some of my favorite SF novels, and getting to watch them in movie form. Deep fakes don’t appeal to me at all.'
Yeah, once you get actual people out of the way of creating art and entertainment it's sure to be awesome.
"1. He absolutely and categorically is not.
2. No such thing, unless you’re talking about the computers themselves the way you’d talk about phones."
Not only is he, but all these giant platforms WERE neutral conduits of that sort until recently. Don't expect everybody to forget that.
Actual people wrote those books, you know. If McMaster Bujold or Charles Stross end up not needing 200 people and $100M in capital to turn their books into movies, great.
Exactly why is the OP saying the AI rules are there to avoid offending liberals? Someone is banning literature across the nation… ain't the liberals doing that. Rethink your premise, OP.
They're not banning it, they're limiting public distribution to an age-appropriate audience. This is one topic I do wish the left would stop misrepresenting. When you involve children, you enter a highly protected zone. It ain't rocket science. Don't mess with other people's kids.
It's not banning, it's just making sure they get to be in charge of who gets to read what, based on lies and satanic-panic hysteria.
That's the precise problem, though, isn't it? They don't think they ARE other people's kids. They think they're society's kids, the government's kids, or, to be blunter, their own kids, whom the people who actually brought them into the world get to raise only on their sufferance.
But only so long as they're raised as they think they ought to be.
Or rather, all kids are GOD'S kids and all belong to GOD and must be pure and innocent and unsullied, untouched by the serpent's apple of knowledge found in books, that way they won't know if daddy or the pastor or the coach are doing bad things because they might complain and smear the name of good and Godly men with their dirty lies and if they're different they can be told they're evil and dirty and going to Hell.
Somewhere between your hypothetical and handing out picture books to 3rd graders on how to masturbate, there exists a common ground. I am willing to find it with other like-minded people, regardless of their politics, faith in God, or lack thereof.
This ISN’T HARD.
Yeah, that happened.
No kidding. It was FOURTH grade, not third.
Sex education at Fourth Grade is Good Actually.
Someone is banning literature across the nation
Nobody is banning literature across the nation, you lying sack of shit.
The way we fix this is by actually enforcing the antitrust laws. If Microsoft were actually focused on what the people who use its image creator product want, I don't think that would involve such broad censorship regimes.
It is still early. Hopefully competition will fix these problems over time.
My son remarked just now that the real issue, the only reason these AI programs are even economically feasible, is that the scraping process that feeds them has been violating IP rights on a staggering scale.
Without the scraped training data, the programs could scarcely draw anything.
Does your son know what fair use is?
My son may be going on 15, but he's already had paid commissions. Yeah, he knows what fair use is, and doesn't think this is it.
Nieporent, if your business model is to use other creators' intellectual property to build your own product, does it make it fairer if you appropriate creativity from millions of creators, instead of from just one or two? I think there are opposite answers to that question, depending on what your own product does.
Answer one is that it does make it fairer, if your own product is a critique with an eye to facilitate your own analysis of the creativity of millions. Answer two is that it is not at all fair if your own product is a design to mass market creative results you stole from millions, and sold as your own, whether or not what you sell is an automated mish-mash of the intellectual property of others.
It's not what AI is doing.
What part of this conduct violates—or even implicates—“the antitrust laws”?
I would argue that censorship of consumers should count as one factor against corporations when it comes to consumer welfare tests.
Consumers don’t want to be censored themselves, although some consumers want other consumers to be censored.
Welker, what do you think about publishers who want contributions to be edited prior to publication?
I don’t think conventional publishers have significant market power. If they did, we might start having a problem from a consumer welfare standpoint.
Typically, a corporation persisting at doing something that pisses off its customers ("We're the phone company, we don't need to care!") is a good indication it's accumulating too much market power.
Bellmore, like other internet utopians you persist in the mistaken notion that contributors to internet publishers are "customers." The customers of those publishers are their advertisers. The contributors' attention is the product being sold to the advertisers by the publisher.
To mobilize the attention of an audience, and sell that attention to advertisers, is one of the characteristic activities which defines publishing.
Highly irrelevant, Lathrop.
Information platforms are socially important mainly for collective decision-making and individual expression. That these activities can also make money through advertising is fine, but advertising is not their primary social function.
You should either focus and stay on point or you should explain why your point matters.
Not sure why you first find that irrelevant, David, and then pivot to an argument seemingly implying that its importance lies in people not being able to rightthink by themselves.
So, the government should coerce (through threats of break-up) private, for-profit companies to use their privately-developed money-making-through-ads tools for a more socially important collective decision-making solution, prompting better individual rightthink expression.
Purple:
If firms are threatening the marketplace of ideas, which is the most important marketplace of them all, they certainly should be broken up.
The point matters because press freedom is meaningless if publishers cannot use publishing activities to mobilize in the free market the resources necessary to pay the expenses of publishing activities. No other way to do it at the scale needed has ever been discovered.
None of that individual expression and collective decision making you prize (which are parts of what I refer to when I mention the public life of the nation, by the way) will happen if private publishers cannot control content to the extent necessary to curate audiences, and thus raise money to support publishing activities—and also to support their own desires to express views they prefer to publish, and maybe even to make a profit.
There are also subscription-based models that make users the actual customers.
Further, I suspect much of what advertising executives are seeking to promote or cancel reflects their own views more than the views of their customers.
Welker, subscription-only models come in a very distant second among means to fund publishing. I am unaware of any vigorous news-gathering effort in the U.S., since the time of the founding, which could have been supported adequately by a subscription-only business model. It works well at times for some high-end specialist-oriented publications, also for those which expect a high percentage of their audience to read contributions mostly in a research library; and once upon a time it supported the personal news gathering of I.F. Stone -- a success which was always regarded as (laudably) freakish and remarkable.
Nothing wrong with publishers promoting their own views. A motive to encourage that was demonstrably a part of the motivation behind the press freedom clause.
That said, you are more mistaken than on point about advertiser influence. What you may not understand is the independence advantages publishers who curate an audience by publishing diverse viewpoints can get, so that from edition-to-edition some advertisers approve what other advertisers dislike. Doing it that way comes with at least 3 advantages: 1) it is an especially profitable way to operate; 2) the publisher has ready ammunition with which to confront would-be editorial meddling by managers of major advertising accounts, because a publisher who operates that way can generally get away with saying, “You should buy ads with us for business reasons, because we offer you the broadest audience you can get”; 3) the publisher who does it that way gets respect sufficient to encourage tolerance among advertising managers for whatever views the publisher itself chooses to publish.
Insights of that sort from Joseph Pulitzer largely reformed a national publishing landscape which had previously been riven by combat among exclusively partisan publications. The public life of the nation was notably improved by that transformation.
"This is almost certainly the future of AI trust and safety limits."
There is no trust in AI.
The limits are not for safety, but for maintaining propaganda.
To quote Colonel Nathan R. Jessup, YOU CAN'T HANDLE THE TRUTH!
That's not really disturbing. Not in the slightest.
Stewart, it is bad enough that you feel the need to share your low-effort, low-quality “cybertoonz” here. You don’t need to follow them up with amateurish whingeing over how it’s difficult to produce precisely the content you want to produce using content-moderated AI.
You’ll have to forgive me for not thinking this is a big deal. If AI tools are too handcuffed to produce the images of violence that one might prefer, the alternative of drawing them oneself always remains. Honestly, this reads less like it’s a complaint about how AI constrains your creative potential, than about how content-controlled AI might ultimately make it harder for malignant propagandists to use more realistic depictions of fictitious violence to push populations into real conflict.
This argument is only persuasive if you think people should be deprived of access to innovations that are in reality a collective effort (Microsoft hired employees whose educations were subsidized by taxpayer dollars) if the “owner” of that innovation dislikes a particular person or group.
Imagine a patent holder of a life-saving medicine reviewing the social media posts of individuals before deciding whether to provide that medicine. I guess your argument might be that those who needed the medicine could use the previous medicine and we should ignore the massive social efforts to create educational institutions that enable such advances?
In general, I am unsympathetic to your view that it is a great idea to deprive a subset of the population of access to technological advances or that we should empower particular individuals to leverage their “ownership” of technology into a form of social control.
What subset is being deprived here?
Ahhh, now I see why you're so fixated on Hillary's deplorables, you ol' "You didn't build that" Obamaist, you.
Purple,
I, like most Americans, do not choose my views based on political parties.
I think it is interesting that you find that so surprising. It actually is the norm.
By the way, I voted for Obama both times and campaigned for him the first time. The Clinton versus Trump decision though was bad, because both were just so narcissistic and awful.
Biden versus Trump? I might go either way.
I get the sense that you are a liberal. You probably presume I am a conservative because I am in favor of free speech or something -- an idea that your party has come to find "deplorable," with its narrow-minded, navel-gazing assumption of self-righteousness and its embrace of identity politics over traditional socioeconomic concerns. Not to mention its embrace of elitism.
We see that both parties are in ideological flux right now, trying to find their identity. But tentatively, I don’t like the direction the Dems are going. The more I have to even defend free speech values to liberals, the more I am convinced that I prefer the other party. I am not interested in people who are so sure they have found the right answers that everyone else should be silent. Liberals have become illiberal.
> How do we avoid those consequences, and who is working to prevent them?
1. The same way we made interracial marriage (which was subject to very similar lines of "b-but some of our viewers/customers might be offended" and "b-but our advertisers..." rationale) acceptable on network television: By discouraging companies from overreacting to misapplied moralistic criticism (“misapplied” being the key word here – it’s fine if a hypothetical company that runs on orphan blood and puppy guts instead of water and lubricating oil is subject to moralistic criticism, and even better if said hypothetical company corrects their behavior), and providing them strong support when they are under attack from said misapplied moralistic criticism.
2. Depressingly few, it feels like. I help myself sleep at night by telling myself that there are lots of people working behind the scenes to change things for the better who just avoid the media/blogosphere spotlight. Whether or not this notion is genuine insight or just comforting lies remains to be seen.
There are a ton of issues around what are currently called AIs, but only right-wing weirdos are fixated on the problem of them not being racist enough.
And only a moron would think selling typewriters without N-keys is an approach that shouldn’t be decried.
You think the inability to be racist with a tech toy is like removing an entire letter of the alphabet?
Crippling a tech toy to the point it refuses to produce images of gagged people out of misguided fears that someone on an imageboard might use it to make a picture which a small but vocal group of scolds might pretend to get offended by in order to convince others in their social group that they're "one of the 'good' ones" is much like removing the "N" key on a typewriter out of fears someone might be racist with it, yes.
Wouldn't this problem go away if you stopped using Bing?
The debate over regulating image generators matters, as do the risks of mass bulk image generation. As with other emerging technologies, the point of regulation and oversight is to set standards, mitigate risks, and promote accountability in development and deployment. The crucial conversation is how to balance innovation against ensuring the technology is used ethically and for social benefit.