The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Mastodon's Content-Moderation Growing Pains
[I asked Prof. Alan Rozenshtein (University of Minnesota) to write a post about Mastodon and one particular recent controversy related to it, and he very kindly agreed. -EV]
Ever since Elon Musk purchased Twitter, Mastodon, a decentralized microblogging platform, has seen millions of new users. I've written elsewhere about the architecture that makes Mastodon unique—specifically, each Mastodon server (known as an "instance") can choose its own content moderation standards, blocking whatever content, users, or even other instances it wants. This leads to what I've called "content moderation subsidiarity" and allows users to tailor their experience while still generally being able to follow and be followed by users on other instances.
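For readers curious about the mechanics, here is a rough illustration of that instance-level control. It is only a sketch, not anything from the events discussed below: it assumes a recent Mastodon release that exposes an admin "domain blocks" endpoint, and the instance name, token, and domains are placeholders.

```python
# Sketch of instance-level moderation: an admin limits or defederates from a
# remote server. Assumes a recent Mastodon release with the admin domain-blocks
# endpoint; the instance URL, token, and domains are placeholders.
import requests

INSTANCE = "https://example.social"   # hypothetical instance you administer
ADMIN_TOKEN = "REPLACE_ME"            # admin OAuth token (placeholder)

def limit_remote_instance(domain: str, severity: str = "silence") -> dict:
    """Ask this server to 'silence' (quarantine) or 'suspend' (defederate from) a remote instance."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
        data={"domain": domain, "severity": severity},
    )
    resp.raise_for_status()
    return resp.json()

# e.g., limit_remote_instance("spam.example", severity="suspend")
```

The key point is that the decision binds only the server that makes it; users on other instances are unaffected.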
Mastodon thus represents a novel solution to the "content moderator's trilemma": the challenge of (1) running a social media platform with a giant and diverse user base, (2) using one set of content moderation standards, and (3) managing user dissatisfaction with the content moderation they experience. Centralized platforms like Twitter respond to the trilemma by accepting that user dissatisfaction is inevitable; decentralized platforms like Mastodon try to satisfy users by giving up on centralized moderation standards.
But it will take time—months, maybe years—for the millions of new Mastodon users to find, and in some cases create, the instances that best suit their needs. In the meantime, we should expect a difficult period of growing pains.
An illustrative example is the ongoing controversy at journa.host, an instance set up for journalists. Mike Pesca, host of The Gist (and formerly at Slate) and a member of the journa.host instance, posted a link to a recent New York Times story about the potential negative effects of treating transgender children and teenagers with puberty blockers, describing the story as "careful, thorough reporting." Parker Molloy, another journa.host user and herself a transgender woman, responded by strongly criticizing the piece and calling Pesca a "bigot" and an "anti-trans ghoul".
Pesca defended his original post and criticized Molloy's actions, writing, "you as a member of the activist community attack the article [because] of bad bedfellows, insult me and mischaracterize the story as Republicans Versus Doctors. That's plainly inaccurate." Journa.host suspended Pesca for "demeaning the professionalism of a journalist, based on their identity" (which is why the links above are to Pesca's Twitter feed, where he posted screenshots of the exchange).
Journa.host also suspended Molloy for "attacking the integrity of another [user], who is a moderator on the server," apparently referring to Molloy's calling Evan Urquhart, himself a transgender journalist who has criticized the New York Times article, a "bootlicker." It's unclear whether Molloy's insulting of Pesca also factored into her suspension.
Either way, she has since apologized to both Pesca and to journa.host's moderators. Molloy has moved to a new instance, while Pesca remains suspended. (Although Pesca has described being "suspended from Mastodon," he has only been suspended from journa.host. Conflating the Mastodon network with individual Mastodon instances is a common, and understandable, mistake among new Mastodon users.)
Journa.host has been criticized from all sides. Some (myself included) believe that journa.host should not have suspended Pesca for his comparatively restrained response to Molloy's attack. Others criticized journa.host for suspending Molloy and for not doing more to fight transphobia. Indeed, it has (unfairly, it seems to me) developed a reputation among some instances as generally transphobic, to the extent that several large instances have either limited or outright blocked their users' access to journa.host (a move Molloy has urged against).
The journa.host controversy illustrates how Mastodon's rapid growth is forcing it to confront one of its founding contradictions. On the one hand, simply by dint of Mastodon's decentralized nature and the reality that no content or user can be banned from the network entirely, it is categorically more protective of speech than is any other platform. In this sense, Mastodon, not Twitter, is (as Twitter once called itself) the "free speech wing of the free speech party."
On the other hand, much of the dominant culture of Mastodon—at least before the recent influx of Twitter users—has been supportive of fairly heavy moderation, especially of users and content viewed as far-right. As Mastodon's creator Eugen Rochko noted, "One of the things that gave impetus to the creation of Mastodon was a lack of moderation on Twitter against hate groups. The 'no nazis' rule of the original mastodon.social server … continues to serve as a major attraction of the project." The norm of aggressive moderation also reflects the fact that Mastodon, unlike most other social media platforms, originated not in the United States but in Europe (specifically Germany, where the Mastodon nonprofit is based), where legal and cultural norms are more accepting of speech restrictions.
Another example of Mastodon trying to accomplish through norms what it has foreclosed as a matter of architecture is the "Mastodon server covenant," which was introduced in 2019. The covenant is a set of requirements that instances have to meet to be listed on the Mastodon organization's website.
These requirements include "active moderation against racism, sexism, homophobia and transphobia" such that users "have the confidence that they are joining a safe space, free from white supremacy, anti-semitism and transphobia of other platforms." But the covenant does not—indeed cannot—require any particular Mastodon instance to abide by these requirements. A non-compliant instance may not be listed on the Mastodon organization's website, but it remains a full-fledged member of the Mastodon network.
As Mastodon grows—and especially as it incorporates users who may be unsatisfied with its existing norms—this contradiction will remain a source of tension. In some cases, the network has managed to quarantine instances that deviate from the dominant norms, as happened when the alt-right platform Gab migrated to Mastodon. But enough new users might overwhelm Mastodon's dominant culture, especially if those users leave or refuse to join instances that they view as overly censorious.
The question, which we'll simply have to wait to see answered, is whether the system achieves equilibrium around a large core of instances that are willing to communicate despite calls for defederation, or whether the network instead balkanizes into an ever-shifting circular firing squad of mutual blocking that is difficult to navigate and unrewarding to be a part of. If the latter, Mastodon may turn off too many users to ever truly be a viable Twitter competitor.
Either way, the fights will be messy and public. But I suspect that Churchill's observation about democracy is true also of content moderation on a global internet: the decentralized option may well be the worst—except for all the rest.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
"The 'no nazis' rule of the original mastodon.social server … continues to serve as a major attraction of the project."
Which is great until you start defining "nazis" as anyone who disagrees with you. That's basically the fundamental problem here, with it variously being "Nazi", "transphobic", you name it. In the end "it" is just disagreement.
So Journa gets labeled "transphobic" for not being sufficiently aggressive in banning people who dissent from the pro-trans orthodoxy.
The thing is, the more personally tuned moderation becomes, the less exposure people have to dissent, and the more shocking such exposure becomes. So, it's something of a positive feedback loop, isn't it?
In the end, every time you decide something is beyond the pale, the pale moves closer.
IIUC the person was banned for refusing to denounce a NYT article about puberty blockers as bigoted.
So for being a Nazi?
This is a good analysis
‘You can define ANYONE WHO DISAGREES WITH YOU as a Nazi’ might be a problem, but so are actual Nazis.
For example, defining transphobia as dissent from the pro-trans orthodoxy doesn't make it any less transphobic.
Being a Nazi and/or transphobic might be dissent, but it’s hard to argue that people haven't been exposed to enough of both to take the attitude of ‘fuck off’ to them. There’s no virtue in entertaining either.
Nazis have been beyond the pale for nearly a century now. Transphobia isn't beyond the pale at all, it's quite popular in some circles. Being trans, on the other hand, has been beyond the pale for as long as trans people have existed and is still beyond the pale in those same circles. So as being transphobic is gradually pushed beyond the pale, being trans is pushed within the pale, same as with gay people and homophobia. The pale moves both ways.
"For example, defining transphobia as dissent from the pro-trans orthodoxy doesn’t make it any less transphobic. "
Yeah, but if that's the definition, "transphobia" ceases to be objectionable to the 95% of the population who aren't members of that very esoteric orthodoxy.
Now you're just redefining minorities. Of course majorities don't always care how minorities are treated unless the minorities fight for their rights and respect. Part of the majority can, of course, be riled up by populists every now and then to support treating minorities very badly, at which point the minorities are demonised for daring to defend themselves.
"…unless the minorities fight for their rights and respect…"
People who don’t offer respect shouldn't expect it to be afforded to them.
Activists tend to be the nastiest, worst-behaved people outside of a prison cell. They’re not owed any better behavior in return.
If that were true then the Mattachine Society would have been successful and Stonewall would never have happened.
I’m not sure why you are talking about something from a half century ago. Behavior is not the same today as it was in ancient history.
It's almost like those who fail to study history are doomed to repeat it or something.
Which is a quippy way of saying I gave an example that spans from the 1950s to the 2010s. But the referenced pattern is seen throughout all human history up to today.
Respectability politics doesn't work, and its promoters (such as you) are not to be trusted.
"Respectability politics doesn’t work…"
That’s a new way to say anything goes and the ends justify the means.
We already have that - remember the bomb threats to the children's hospital?
(A) That's a whole different sentence
(B) The failure of respectability politics isn't new. But again, if you studied history rather than dismissed it, you would know that.
Maybe someone had an objection to mutilating children for profit.
Or maybe someone similar to Nige wanted a talking point.
The “failure of respectability politics”, if such a thing even exists and can fairly be judged to have failed, doesn’t justify bad behavior. Nor does it obligate anyone to respond to bad behavior kindly.
We don’t have a society where one group can act horribly and the rest are obligated to respect the horrible people. If you think you can have that, you should expect an escalating cycle of worse and worse behavior from more and more people.
When you're accusing people of mutilating children for profit, you've certainly left respectability, and honesty, far behind.
Tell me you're unwilling to read a book without telling me you're unwilling to read a book.
My statement is not as controversial as you think it is.
If your argument is "read [some book]" then you’re saying you can’t support what you say.
People who don't have respect afforded to them don't have anything to lose, so they might as well.
If some BS about respect is their worst problem, then yeah, they have lots more to lose.
Well they have one whole political party in the US who have decided to make it their business to make life hell for them, so, yeah, lots to lose.
Versus the other political party that does everything they can to make most Americans' lives worse.
I dunno. They pulled out of Afghanistan, forgave student debt, reduced prescription drug prices and are trying to get access to health care for people, instead of the opposite.
Un-clutch the pearls my dude, and be safe in the knowledge that this fear is unfounded: there is sufficient transphobia and homophobia in the world that no queer person --even one that curates their online experience to avoid such things-- will be insufficiently exposed to such evil.
In short, I'm starting to think that moderation isn't the solution, but instead the problem! It's causing the capacity to cope with disagreement to atrophy.
Thanks to its light moderation policies, Reason is full of folks that are *amazing* at coping with disagreement…
Well, yeah. They are. Obnoxious, sure, but coping.
Well, not really. It can be downright toxic and hostile in here. Which for people who know what they're getting into, it's their choice. But Reason is a dark little corner of the right-wing internet that most will never visit and sufficient numbers of advertisers will avoid.
There is a difference between polite disagreement and trolling for "libtard" tears. People seek out the former and choose to avoid the latter. There's no need to "cope"; one doesn't spend their personal time in search of something they have to cope with.
Sarcastr0: What moderation policy would you recommend for comments on this blog? I know there are a few commenters who are outright racist or anti-Semitic; I don't block them because I generally don't like to restrict them based on their viewpoints, but I can see how on other platforms they might be blocked, and you might not miss them. I also know that sometimes commenters engage in vulgar insults of each other or of third parties; I do sometimes delete their comments for that, or (if the situation is repeated) block them outright.
But my sense is that the commenters who most annoy people (and understandably so) don't violate those rules. Rather, they are schmucks or fools or cranks. Would it make sense for a forum operator -- or a social media platform operator -- to try to block such posters (as opposed to making it easier for readers to block the posters they dislike)?
Prof. Volokh, you (or Reason?) got it right; you and the other bloggers don't normally block commenters or delete comments (I've been deleted twice by Prof. Bernstein*), and instead leave it to the individual commenters to mute others as they see fit.
BTW, it's hilarious that a blog about/for journalists (which I'm sure is all for journalistic freedom, 1A protection, etc.), blocked a user for "bigot" and "anti-trans ghoul" comments.
* I (heavily) sarcastically said Prof. Bernstein relishes his "victimhood," but nothing anti-semitic or hateful.
Reason has more than a fair share of trolls, but does not suffer from the juvenile crap that plagues general interest platforms.
Generic race-hate posts became the "frist post" of choice for internet vandals in the 90s as online platforms proliferated. Most dealt with these through keyword moderation which proved to have modest effect.
Slashdot provided a roadmap with their crowdsourced moderation/metamoderation model, but it doesn't seem to have caught on. Unfortunately, it relies on an overarching culture of accepting all legitimate posts, including those you disagree with.... so probably not a cure for today's hyper-political moderation demands.
I was recently introduced to a software package called "Polis", which it appears Musk is looking at using at Twitter. It does something really interesting: instead of ranking comments by popularity, it does a deep analysis of clustering of the users, and then ranks things according to their degree of cross-cluster approval.
That is to say, being in the majority doesn't benefit you if you have no penetration into minority groups. What advances your comment is that people agree with it who tend to disagree with each other.
I wonder if something like this couldn't be leveraged to train automated moderation?
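Roughly, that means: cluster the users by their voting patterns, then score each comment by how well it does in every cluster rather than overall. Here is a toy sketch of that ranking step, just to make the idea concrete; it is not Polis's actual algorithm, and the vote matrix and cluster count are invented.

```python
# Toy sketch of cross-cluster approval ranking (in the spirit of Polis, not its
# actual algorithm). Rows = users, columns = comments, 1 = agree, 0 = disagree.
import numpy as np
from sklearn.cluster import KMeans

votes = np.array([
    [1, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 1],
])  # 4 users x 4 comments (invented data)

# Cluster users by voting pattern (2 clusters, chosen arbitrarily here).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# Score each comment by the product of per-cluster approval rates, so a comment
# ranks highly only if every cluster tends to agree with it.
scores = [
    np.prod([votes[labels == k, c].mean() for k in np.unique(labels)])
    for c in range(votes.shape[1])
]
ranking = np.argsort(scores)[::-1]
print("comments ranked by cross-cluster approval:", ranking.tolist(), scores)
```

A comment that only one cluster loves scores near zero, while a comment every cluster accepts floats to the top.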
Would it make sense for a forum operator — or a social media platform operator — to try to block such posters (as opposed to making it easier for readers to block the posters they dislike)?
Professor Volokh, would you please wake up and understand that the desire to block outlandish commentary is not limited to, or even principally motivated by, fear of hurt feelings. It has a more important source, one that publishers are wise to heed. There are opinions which opinion consumers correctly judge to be personally dangerous if published, or personally damaging if published, or damaging to the public life of the nation if published.
It does such opinion consumers no good to block comments, and thus keep themselves in the dark about harms going on around them—including harms which may in fact target them personally. What such users want instead is a publisher who minimizes harms by blocking (by which I mean editing prior to publishing) outlandishly harmful content.
You apparently are of the utopian view that cost-free, world-wide publishing without prior editing is a realistic possibility for internet publishers. It is not realistic for most of them. Most of them need to stay profitable as business activities, or their publishing will not continue. That should not be a trivial concern, even for someone who puts an unusually high priority on expressive freedom. Expressive freedom which is beholden to others, or even to the generosity of its own publisher's pocketbook, is always on shakier ground than it would be if it earned whatever it takes to keep the enterprise going.
Unprofitable publishing will never become a reliable provider for free expression. Completely unedited publishing tends quickly toward unprofitability, because advertising revenue diminishes in proportion to public agreement on how broad and damaging the unedited harm becomes.
All that of course leaves unanswered what I am sure you consider to be a most-pressing question: How is a publisher supposed to determine which opinions to block (edit or discard prior to publication), either in the interest of personal well-being among its consumers, or in terms of a thriving and constructive public life?
The very best answer is to stop trying to solve that problem individually. No publisher can hope to answer that question on its own. Instead, pick an opinion niche, narrow or wide according to taste, and edit contributions prior to publishing. Do so with an eye to best serving your chosen niche. And then, crucially, try to make that niche profitable, so that your efforts can continue, or even potentially be handed off to someone else to manage if you lose interest, or pass from the scene.
Then rely on opinion disagreements among competing publishers to supply the diversity of opinion which best serves the public interest. Each publisher serves a niche it defines according to its own lights. All the publishers together cover almost all the niches. Any useful point on the opinion spectrum which goes unserved attracts a new competitor into the publishing marketplace to fill the niche.
The question of what opinions to serve gets answered collectively, by a myriad of self-interested competitors. Opinion consumers can choose publishers which suit them. Would-be opinion contributors will find ready publishing opportunities for all but the most outlandish commentary. But private editing prior to publishing will greatly diminish potential personal harm, or harm to public life.
Best of all, a publishing system so structured can continue and thrive on the basis of independent earnings, and is thus formidably positioned to scoff at would-be government censors. It can also prove ruggedly impervious to private blandishments, to the extent those seek to push expressive content to destructive extremes.
We know a system of that sort can work, because we have seen it work, for many decades prior to invention of the internet. What deranged that previous system was not the internet itself. From the outset the internet promised economic efficiencies to greatly advantage the previous system, and to enable unprecedented proliferation of it.
What created the damage—and raised all the questions which continuously vex policy makers—has been internet giantism. And that was created when a legislative blunder (Section 230) reinforced a natural tendency inherent in online activity—the tendency toward so-called network effects. Those posit practical advantages which increase disproportionately with scale. Legislation to take advantage of that tendency by encouraging it was never needed. Quite the contrary: legislation to keep it in check is what experience has taught is the actual requirement.
An obstacle complicating correctives is found in a widely-held public attitude, the one I term internet utopianism. Its distinguishing feature is a view of expressive activity narrowed to exclude informed consideration of the publishing activity necessary to accomplish widespread expressive freedom. The field mark of internet utopianism is a view of the issue narrowly focused on optimizing the liberty of opinion consumers and opinion creators—with no thought given to practical means of how to accomplish that and keep it going.
That thinking delivers, one after another, proposals for regulatory schemes which would either destroy the practical means to support publishing activity, or undermine political acceptance and support for expressive freedom. In short, internet utopians want solutions structured to foreclose practical accomplishment of the goals they imagine they can enjoy, but which in fact are destined to remain beyond reach. There will probably never be cost-free, world-wide, unedited, expressive freedom for everyone, because no practical means will be discovered to accomplish it and keep it going.
Instead, a publishing regime structured by public policy to optimize diversity and profusion among a myriad of competing private publishers could be made practical. It could be made far more accessible for everyone—and especially for would-be contributors—than the ink-on-paper and broadcast publishing which preceded it. And that was notably successful already. In the US it was accounted an ornament of civilization, even while constrained by limitations which internet technology could greatly reduce.
Professor Volokh, I urge you to cease with internet utopianism, and to concentrate instead on practical improvements.
Or you could try the Reddit method.
A user creates a sub-Reddit and then moderates it.
Commenters can then join (or not) each sub-Reddit.
So it's not Reddit overall doing the moderating and commenters can pick and choose which sub-Reddit they enjoy.
" What such users want instead is a publisher who minimizes harms by blocking (by which I mean editing prior to publishing) outlandishly harmful content. "
Except you're ignoring that we're all here, and not on some other site that pre-screens comments.
tkamenick — Not ignoring it. Just noticing that many of you demand government intervention to prevent private editing. Private editing is usually indispensable to keep a private publisher in business.
Why you are here remains mysterious, at least to me. I do not understand the financial basis which keeps the VC in existence. If it is a profitable business, it manages that with little visible indication how it is done. If it pays its bills by some means other than profitable business, it is unwise to suppose the VC supplies a general model for internet publishing.
Charity, subsidy, or support from the private pocket of the publisher are not sustainable business models which generalize and scale. As a publishing enterprise, the VC is tiny, so that may not matter.
If the VC were to scale up 100-fold, it would still be tiny compared to the internet publishing giants about which so many VC commenters are so unhappy. Can you imagine any practical means by which the VC could scale up 100-fold, except on the basis of profitable business activity?
If the VC did become 100 times larger, and profitable, do you suppose it could continue that way without pre-screening comments, and refusing publication to some of them? If you do think that, I doubt your judgment as a prospective publisher.
By the way, do you have any basis to suppose the VC does not already deny publication to some would-be commenters? If it does not, can you explain how moderate the tone of the commentary remains, compared to other sites which do not moderate?
Perhaps EV could oblige our discussion, and explain how the business experience of the VC has progressed over the years. That would be genuinely useful information, but also likely a valuable secret to be kept private.
When are we going to see a direct exchange between Stephen Lathrop and Jonathan Affleck? I think such a dialogue might collapse under its own weight into a rhetorical black hole.
I never got the hang of Twitter-length commentary.
I was more making a point about Brett condemning moderation as causing fragility.
But long as you ask, I do find myself thinking about this as I visit various online communities.
I think your focus on offensive viewpoint is wrong. Back when the Conspiracy comentariat was a bit more vibrant, it wasn’t lacking diverse viewpoints brushing up against one another.
There are relatively easy ways to keep the tone such that those with not much but hateful nonsense don't find this place an undefended nest ripe for befoulment.
While it is somewhat less than a science, it's easy and viewpoint neutral to spot posters interested only in empty anger, not engagement. In practice, you used to reach out to people to ask them to moderate their tone. I would guess the response would be telling.
I also think a quick consult with some other Conspirators would help validate your own take that someone is basically being abusive of others and nothing more.
"it’s easy and viewpoint neutral to spot posters interested only in empty anger, not engagement."
It's certainly possible, I'm not disputing that. But that's not what happened in the case at hand. This wasn't viewpoint neutral moderation, it was viewpoint enforcement moderation. Transparently so.
Moderation tends to evolve in the direction of viewpoint enforcement, because the people who want viewpoint enforcement are attracted to moderator positions. Moderation is one of those jobs where the very fact you want the job is evidence you're unfit for it.
No argument that the case at hand was bad behavior all around by a bunch of self-righteous jerks. I like Mike Pesca, and his treatment was just childish.
But you made a much broader statement.
Moderation may tend to drift from tone/substantiveness to viewpoint, but that's why you do a sanity check.
Plus, of course, the cure may be worse than the disease.
Finally, you seem to have abandoned your thesis about moderation making commenters fragile. I don't think that is borne out at all, except maybe folks who spend all of their time on one forum, and don't have any outside interests.
Haven't abandoned it, haven't entirely adopted it. I don't think it's the whole story, even if it is true.
Moderation PLUS self-sorting, by the way.
It’s not moderation.
It’s non-ideological ties no longer being needed in life.
I have like half a dozen friends who voted for Trump. We just don’t talk current events. More and more people can’t have that kind of flexible identity.
I don't think it's that easy. You're assuming that everyone can agree on the line between tone and viewpoint. But they don't, especially around hate speech.
Both sides consider certain of the other side's views to be hateful. As here, the left thinks questioning gender medicine is hateful, as well as, for example, immigration policies like building a wall. The right thinks questioning Zionism is hateful, as well as, for example, CRT.
And, in all those cases, the underlying motivations are often hateful. Not always, but often.
How do you solve for that?
Because the Volokh Conspiracy seems to monetize "eyes on a page", this website provides "work for transport" message common carriage. If I am granted cert, there is a good chance that denial of common carriage will become a serious problem for any website that is set up for monetization.
The public is guaranteed right of non-discriminatory common carriage by the Ninth Amendment.
A common carrier has no First Amendment right to deny common carriage.
A common carrier is a person who holds himself out as willing to serve any shipper who offers him a reasonable fee to transport the kinds of goods he professes to carry to a place he professes to serve, provided they were not unfit and his conveyance was not already full. See Lovett v. Hobbs, 2 Show. K.B. 127, 89 Eng. Rep. 836 (K.B. 1680); Jackson v. Rogers, 2 Show. K.B. 327, 89 Eng. Rep. 968 (K.B. 1683).
By definition, a common carrier has to serve all comers. If he wrongfully refuses to accept a consignment, he is suable in tort. See Jackson v. Rogers, 2 Show. K.B. 327, 89 Eng. Rep. 968 (K.B. 1683).
Is the Conspiracy actually engaged in any monetization? Maybe the occasional promotion of a publication yielding royalties, but that's about it.
Reason does some monetization, to be sure. But you're not monetizing your trip on the bus, even if it does have an advertisement printed on the side of it.
First -- I state that I am not a lawyer.
I used to provide technology expertise to the pre-breakup AT&T legal department, whose members understood all the issues related to message common carriage, telecommunications common carriage, and the technology used for these forms of common carriage -- usually after I explained the technology.
I am petitioning SCOTUS with respect to Section 230
1. because I know that these lawyers -- if still active today -- could have brought sanity to Internet law,
2. because my lawyer, who died just before I filed my original complaint, left me a strategy document, and
3. because I am royally pissed that online Zionists hounded my wife and me off of social medium platforms for violating community standards when we tweeted that we loved each other.
I am Jewish and practically Zionist royalty while she is a Palestinian, whose family was subjected to genocide in Ein Karem in 1948. This genocide has yet to end, and practically every social medium platform removes a user and his content when he tries to discuss this ongoing genocide.
Wrt work for transport message common carriage, I see two ads right now: Schwab and Xumo/Xfinity. As I understand, a judge reviews the evidence and then applies common carriage doctrine to determine as matter of law whether a carrier is a common carrier, a private carrier, or a contract carrier.
The judge will ask how an alleged common carrier holds out carriage to the public, what the fee is, and what the standard terms are.
Does 'eyes on the page' translate into a valuable item that makes an advertiser want to place an ad and pay money to Volokh Conspiracy?
Does Volokh Conspiracy trade in information that the website collects when someone posts a comment?
Does Volokh Conspiracy utilize user comments to attract more eyes to the page?
Message common carriage is not the only issue. I am also making arguments to demonstrate
1. that a 2022 social medium platform does not come under Section 230 and
2. that a 2022 social medium platform routinely commits violations under civil rights law, public accommodation law, and public forum doctrine.
I also point out that the government (the US federal government and state governments) is inextricably intertwined with many a social medium platform and is violating state action doctrine.
At the following URL, you can read the draft petition, which is currently being typeset.
https://tinyurl.com/DSCOTUS
I have a thumbs-up from a top constitutional scholar. In addition, I have shown the draft to patent lawyers, who understand the technology issues. They have all agreed with me.
I have various degrees of acquaintance with Justices, who studied at Harvard or Yale. Four Justices may support a grant of cert.
As for turning a website into a cesspool, the threat is ridiculous. A social medium platform can offer multiple standard tiers of service while a user can employ filters. As far as I can tell, moderation is only a mechanism of discrimination and distributor libel, which is not a First Amendment right.
Below is a direct link to my draft petition. I transformed the link to a tinyurl. The long URL seems to cause the comment to be filtered out that contains it.
https://tinyurl.com/DBOOKLET
Keep in mind that Reason is the owner of the site here, the Conspiracy merely has an arrangement with them to have their blog hosted at Reason. As I said, so far as I know all the monetization, plugs of publications excepted, is carried out by Reason.
If they get a cut of the ad revenue, I stand corrected. But I've never seen anything to indicate this is the case; their primary compensation is just free hosting.
If a carrier receives something of value in exchange for hosting, it may become a common carrier. You might consider the 1869 Massachusetts common carriage statute. It is defined -- one might say -- recursively.
MGL c 159 s 1. "Every common carrier of merchandise or other property shall receive, transport and forward all property offered for such purposes by other such carriers as promptly, faithfully and impartially, at as low rates of charge, and in a manner and on terms and conditions as favorable to the carrier offering such property, as he on the same day and at the same place receives, forwards and transports, in the ordinary course of business, property of a like description offered by persons other than such carriers. Such carrier shall not discriminate against any particular person or subject him to any undue or unreasonable prejudice or disadvantage. The supreme judicial or superior court shall have jurisdiction in equity to enforce this section."
In Massachusetts the telephone local loop comes under the above statute. A subscriber's common carrier is the local company which provides local service -- even if the call is placed to a California telephone subscriber. The local voice carrier depends on the interstate carrier (usually AT&T), which depends on the end loop (California) local carrier to complete the carriage (call). Billing is a process of settlement regulated by the FCC and by local state regulatory authorities.
If you want the comment section to be a cesspit that anyone left of DeSantis feels unwelcome in, then your policies are great.
If you want the comment section to be a place where anyone --regardless of viewpoint-- feels they can discuss the topic so long as they remain polite, then you have entirely failed.
If you want the comment section to be a place where conservatives feel they can discuss the topic, then you still need to delete the personal attacks and slime because it kills any real discussion and turns it into a slugfest.
So to answer your question, you first need to decide what kind of comment section and community you're building here, and choose policies that promote that kind of thing.
But I have no reason to believe you aren't a smart guy, so I suspect you knew that already, and your moderation policies are in line with the kind of comment section you desire.
Escher, the fundamental problem here is that a great many left-wingers are incapable of feeling minimally comfortable, to say nothing of "welcomed", at any site that doesn't systematically exclude viewpoints they disagree with. And worse, tend to confuse politeness with agreement, so that they consider merely disagreeing with leftist viewpoints to be rude.
Conservatives, OTOH, live in a society where most media platforms are controlled by the left, and unavoidably are used to exposure to viewpoints they don't share. This leaves us a bit thicker skinned.
Reason, as a matter of ideology AND limited resources, polices neither ideology, nor any but the most offensive instances of rudeness.
I could wish that people were a bit more polite here, but given the choice between politeness coupled with enforced ideological conformance, or rude freedom to dissent, I'll go with the latter every time.
'Conservatives, OTOH, live in a society where most media platforms are controlled by the left'
You live in paranoid bubbles filled with anger, resentment and spite; not the same thing. Aren't there at least three conservative social media platforms out there right now? What's your problem with them? Is it the all the data leaks and thievery and the Nazis?
You know what? You keep on thinking this darling. It's hilarious. Also tragic, but you gotta laugh.
The Volokh comment section is a rare example of a place where people left and right actually co-exist and debate each other. The level of debate isn't always of the highest standard, but SOMETIMES it comes close. I suspect that's why a lot of the regulars actually like it. Was this achieved through moderation? Perhaps? Or through pure accident? Also perhaps? Only you know what, if any, policies you implement. If people started posting pornography and extracts from the Protocols Of The Elders Of Zion and Mein Kampf, would you step in as moderator to preserve that and not let Nazis and anti-semites and porn-bots destroy it? Up to you.
Moderation is definitely the problem.
Mastodon and the Internet are not as new as many seem to think.
An instance may escape common carriage issues as long as it does not monetize eyes on a page or trade in user info, but the issues, which relate to public accommodation, to civil rights, to state action doctrine, and to public forum doctrine all remain.
See my draft petition to SCOTUS for a writ of certiorari. Start at the bottom of p. 40 et seq.
https://www.facebook.com/Joachim.CS.Martillo/posts/pfbid02sYgv3YY9qNkXeYhQxGt8AYtXX1JLWjiHGvJ7834sqppHnXRvVdxjHojjZYUesk9Rl
A grant of cert is unlikely, but four Justices seem to want to hear a case like mine.
No it isn't.
Decentralizing opinion publishing is the only safe harbor ever found for freedom of expression. But it is not by itself enough. Also necessary is a practical means for opinion publishers to succeed as self-sustaining businesses. Only that independence of means enables freedom from undue influence on published content.
The OP is curiously free of any discussion about the practical part. How will Mastodon's architecture affect business prospects to support publishing by means of advertising sales? I don't know enough to have even an inkling of the answer to that question.
Does Mastodon come with hooks to get money from users? Of course the phobophilic Mastodons could be cut off from money, but payment processors are less vulnerable to pressure campaigns than forum moderators.
Mastodon discourages ads but federated servers can host them if they like. Just as Mastodon is decentralized, so is its revenue stream. The main company is non-profit. However, individual servers can charge memberships, request Patreon donations, etc. All of these servers will be vulnerable to upstream providers (ISPs, DDOS protection, etc) withdrawing support should they get too out of hand.
Mastodon's documentation says you can also block entire instances if their content is unwelcome. https://docs.joinmastodon.org/user/moderating/
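In case it is useful, here is a minimal sketch of what that user-level block looks like through the Mastodon client API described in those docs; the instance URL, token, and blocked domain are placeholders.

```python
# Minimal sketch of a user-level domain block via the Mastodon client API.
# The instance URL, token, and domain below are placeholders.
import requests

MY_INSTANCE = "https://example.social"   # the instance your account lives on
TOKEN = "REPLACE_ME"                     # OAuth token with write:blocks scope

def block_domain(domain: str) -> None:
    """Hide all accounts and posts from another instance, for this account only."""
    resp = requests.post(
        f"{MY_INSTANCE}/api/v1/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"domain": domain},
    )
    resp.raise_for_status()

# e.g., block_domain("unwelcome.example")  # affects only your own feed
```

Unlike a server-level suspension, this affects nothing but the blocking user's own timeline.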
Note that non-profit does not mean expense-free.
If one (incorrectly) believes that a (1996) ICS of Section 230 includes a (2022) social medium platform, a social medium platform is not a publisher.
In my analysis, a 2022 social medium platform is a message common carrier. A message common carrier like the USPS, a telegraph service, a telex service, Gmail service, or FedEx is also not a publisher and must be regulated by means of common carriage law. Such regulation is compatible with a sane reading of Section 230 even if a 2022 social medium platform is not a 1996 ICS.
The petition can be accessed from the following URL.
https://tinyurl.com/DSCOTUS
Starting on p. 32, I provide the grammatical and syntactic analysis to explain what a Section 230 ICS is.
A Section 230 ICS is not a 2022 social medium platform.
Somebody should keep and update a graph of which servers have blocked which other servers. We could have an amusing animation in a year or two, like a game of life or a bacteria culture.
You may need a left and a right account, one for each of two big disconnected clouds. Harmless stuff, if any, can be posted to both.
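A rough sketch of how such a map could be assembled, for what it's worth: it assumes the servers of interest publicly disclose their block lists (an optional feature of recent Mastodon releases), and the server list here is invented.

```python
# Rough sketch of a defederation map. Assumes each server in SERVERS publicly
# discloses its domain blocks (an optional, recent Mastodon feature); the
# server names are invented.
import requests
import networkx as nx

SERVERS = ["alpha.example", "bravo.example", "charlie.example"]  # placeholders

graph = nx.DiGraph()
for server in SERVERS:
    graph.add_node(server)
    try:
        resp = requests.get(f"https://{server}/api/v1/instance/domain_blocks", timeout=10)
        resp.raise_for_status()
        for entry in resp.json():  # one entry per blocked domain
            graph.add_edge(server, entry["domain"], severity=entry.get("severity", "unknown"))
    except requests.RequestException:
        pass  # server is down, or doesn't disclose its blocks

# Snapshot the graph; re-run periodically and diff snapshots for the "animation."
nx.write_gexf(graph, "defederation.gexf")
print(f"{graph.number_of_nodes()} servers, {graph.number_of_edges()} block edges")
```

Diffing successive snapshots would give exactly the game-of-life-style animation described above.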
I'm not a Mastodon user and probably never will be. Having said that, based on my reading (because internet tech is my job), it looks like Mastodon servers can block other servers from their users. Mastodon users can individually block servers and other users. You won't need two accounts; all you will need is a server that isn't overtly partisan and no friends who insist on joining White Supremacist sites (no nazis rule) that might get blocked. (I suppose there's an edge case where an actual far-left site could create issues but that's a far lower risk based on the volume of pre-existing right-wing sites versus left-wing today. If so, the same rule should apply.)
If I only want to consume, I could find or create a quiet instance that promises to censor everything because it never posts.
Let's say I also want to quote the occasional Babylon Bee story and share vegan recipes, while following diverse thought leaders. Babylon Bee's joke number two gets me banned for transphobia on sites that think that's a real word. But there are no vegans in Foxville, I have to go to the feminist yoga instance to find them and they can't take a joke. I need two identities.
I mean yes, if you piss someone off and they block you, you’ll need to create a new account if you want to keep reading what they say.
That you’re circumventing their attempt to set boundaries makes you an asshole.
Which is to say, you see other people being able to effectively avoid you as a bug, but it is a feature. And if you really want an internet that is free of actual censorship, then you should not be whining about the need to circumvent such features, but embracing them as the correct way to curate one’s online experience.
This is going to resemble Plato's cave with each user having a different perspective depending on which moderation filters happen to stand between them and the original post.
Everything will be subject to a minimum of the origin server rules, and the user server rules. Plus whatever unintended consequences of the distribution model.
So, sorta like watching FOX news? If FOX doesn't report on it, it never happened.
I'd adjust your last sentence to read:
"Everything will be subject to a minimum of the origin server rules, the destination instance server rules, and the user server rules. Plus whatever unintended consequences of the distribution model."
The one thing I haven't been able to determine yet is whether Mastodon uses any post promotion tools that recommend unsubscribed users to new audiences. This seems like it's been the biggest issue with social media sites like Facebook, Twitter, et al. Once you stumble into a nationalist corner of a social media site, the software inundates you with similar, polarizing content until you're chanting "you will not replace us" and carrying a tiki torch.
But MSNBC (and unfortunately the NYT, WaPo etc) are exactly the same thing on the other extreme. They are all, including Fox, no longer purveyors of information. Some of them never were. They are narrative factories.
All of it serves all of us very poorly.
Are you sure, though? What was FOX's response to the Jan 6th hearings? They barely even mentioned them. Hours of surprising, news-worthy information and FOX pretended like it wasn't happening. FOX news viewers were left in the dark if they weren't out looking for alternative sources, which FOX has spent a great deal of time building distrust for as part of its successful business model.
Can you provide a similar example for CNN/NBC/CBS/ABC or NYT or WaPo? I'm not claiming a bit of bias here, but a near blackout on information about a specific topic. Or even the opposite, presenting misinformation (the "Big Lie") to viewers without any critical analysis. Dominion Voting is suing FOX over that. Any equivalent in the moderate to left media?
Bothsidesism is nihilistic and, where it's obviously lopsided, also misinformation in its own right. I don't think false equivalence serves us well either.
Just to pick one without thinking about it too much how about the guy who drove into the Christmas parade last year and killed several people. It was mentioned that it happened, then they got bored because it didn’t fit the narrative and wandered off. Imagine the difference had it been a white guy driving into, say, an MLK day parade.
Same with the NYC subway shooter. Nobody even knows the name of either of these assholes, but everyone knows Dylann Roof.
I’m not shilling for Fox. I’ve really never watched it because at the beginning it looked like bullshit to me
"Can you provide a similar example for CNN/NBC/CBS/ABC or NYT or WaPo?"
Hunter Biden's laptop, which they didn't cover at all, and when they did, they falsely claimed that it was Russian disinformation.
"Or even the opposite, presenting misinformation (the “Big Lie”) to viewers without any critical analysis."
Here's a Fox News article I dug up from around the time of the election.
Seems fairly balanced. Where are you getting your information about what Fox News reports? Perhaps you are being misled.
Either they didn't cover it at all or they covered it as misinformation. Pick one.
The NYT covered it in 2021 in conjunction with new information on Trump's own Ukrainian scandal. Given the evidence at the time, their coverage seems reasonable. As the story developed and more evidence came to light, they modified their reporting to reflect that. Some of the information relied on by journalists at Breitbart, who were early reporters of this, came from sources like Giuliani--not the best resource for factual information and a resource tied directly to Trump's attempts to get the Ukrainian government to lie about Biden prior to the election. The Wall Street Journal, NYT, and WaPo all reported on it in some fashion, however.
If you think their coverage was reasonable, you are hopelessly blinded by politics. There is no possible way to spin that one.
The same goes for Waukesha. We have a racially motivated mass killing of primarily old women and children during a Christmas parade. All of those outlets passionately covered it as a right-wing response to the Kenosha trial.
Then, 24 hours later they discovered the true motivations and race of the perpetrator. They dropped the story instantly.
24 hours of a false narrative that was made up out of whole cloth, followed by an instant memory hole.
Pretending that this is anything other than intentional political propaganda is silly.
I think when people say "Fox News" they're generally referring to the cable network, not the print outfit which isn't nearly as bad (although, I've noticed it's getting worse).
As I've noted, based on a media review by the Shorenstein Center back in 2016, Fox gets labeled "right-wing" on account of just not being reliably left-wing. They look right-wing compared to most media outlets, but that's only because almost all media outlets in this country are well to the left of the general population.
Also because of their political commentary shows.... they have mostly right wing evening hosts who do right wing political commentary. It would be understandable to conflate the two if you are a layman and not a media critic.
But even there they have a mix of commentators, it's just that they don't exclude right-wing voices. And they're not a mirror image of the "Republicans who can be relied on to attack other Republicans" most media outlets rely on for a patina of balance.
lol
I've heard James Carville gets on Fox news on a fairly regular basis.
Usually the only conservatives on left-wing stations are Lincoln Project types.
To a flounder the entire world is what's above them, and nothing is below them.
Let a thousand censors bloom.
How is that any different than Facebook today? You can block people. That's censoring them.
It's very illustrative - and disheartening - to see how many people who consider themselves journalists are so adamantly opposed to the 1A. You know, the one thing that enables their livelihoods.
The First Amendment is about how the government treats speech, not how private publishers decide what to publish.
Being able to filter what they publish to focus on a specific group or topic is what enables their livelihoods.
In the upper right corner of every comment is the "Mute User" option. That's "censorship" but not in any way that violates anyone's free speech. People can say whatever they want but no one else is forced to listen.
Ok, fine. Adamantly opposed to free speech. These people sure seem to want all viewpoints contrary to their views to be shut down.
There have been at least three major stories (I guess you could debate if one of them quite rates as major) that were shut down/shouted down by the misinformation police in the media and SM companies, and that turned out to be actually correct. That's horribly unhealthy.
You're right, of course, but as long as Democrats keep winning elections, these people don't care.
And those stories are...?
1. Biden laptop. This is the one I’m not sure quite qualifies as “major”. But still the Misinformation Police shut that shit down to the point that they deplatformed a major newspaper. And they were the ones spreading misinformation. Russian source my ass.
2. The fact that the Covid vaccine doesn't stop transmission. People who said as much were ridiculed and again deplatformed. The fact is that those people were correct.
3. The lab leak theory. As an engineer this one offended me the most. A viral pandemic starts with ground zero within a mile of a laboratory that does research with the type of virus that made everyone sick. Wondering if the lab was actually the source of the outbreak is common sense. But not to the Misinformation Police. It's RACIST to say that because China or something. But now the community of experts is thinking that there might be something to that. Well fucking duh.
Turns out the Misinformation Police were actually the ones dishing out the lies and bullshit. No consequences. And now someone is doing something to pull back their power a bit, and they howl like shortchanged merchants about it.
Yeah, these stories. Our media is appalling.
You are, of course, wrong on all three counts.
Look, your not wanting them reported on doesn't make them not stories. The NY Post really DID get deplatformed for a while for daring to report on this story. Polls after the 2020 election make it pretty clear that if it had gotten a normal amount of coverage, Trump would likely be President today.
You're still mad that for once most media didn't fall for a Republican ratfucking. 'It's not fair! That ratfucking might have actually worked!'
Actually, I'm mad that the right wing has known for a decade or two that the left was taking over the media, and hasn't lifted a finger to stop them. They just complain about outlets run by their enemies not giving them a fair shake.
Sure, they're not, but how stupid is it to rely on your enemies being fair to you? Either do what's needed, or stop pretending to be a serious movement!
The right wing has officially taken over the media to a much greater extent than the left. News Corp? Sinclair? There’s nothing like that on the left.
You guys are confused about what your own point even is. Getting "deplatformed" is evidence that the stories were published but were pushed back on by the public. So… it's bad for the public to push back on the media? What? You sure as shit attempt to "deplatform" the NYT every chance you get… usually by alleging bias! This time your grievance is nonsensical in addition to being a fantasy.
“The normal amount of coverage.” What are you even talking about? Even if that were a thing… the laptop got a ton of coverage, before the election. More grievance fantasy.
The left haven't taken over the media - it's just even a debased media has standards that many weird right-wing beliefs and assertions don't quite come up to, and even then they're AMAZINGLY deferential to the right. You showed us exactly what you think media should be when you elected a reality show host.
Wrong about what? Everything I wrote there is absolute fact.
I gave you specifics, you're waving your arms. Specific things that are incorrect or STFU.
No you didn't. You've presented three stories with a right-wing political slant. Media reporting does not have to conform to your preferred political spin. In fact it's generally better when it doesn't.
Uh, are you daft? You need to learn how to incorporate by reference. Ok, specifically, you’re wrong about these stories being “shut down:”
1. Hunter’s laptop
2. Covid transmission
3. The lab leak theory.
There, now my level of specificity matches yours.
Ok. The NY Post being locked out of their Twitter account isn’t shut down.
People urging not overpromising as to vaccines being blocked from Twitter and being ridiculed, and people being called racist for saying "hey, you know it could be the lab," aren't being shamed into silence.
Politics has broken your brain.
No, it isn't.
I saw plenty of people cautioning about overpromising on the vaccines on twitter.
'Hey it could be the lab' and 'hey it was the lab, it was bioengineered, it was deliberate and it was Fauci' were not the same thing. Lots of people who made the latter claims and got rightly roasted for them conflate them with the former in hindsight.
I knew it! You think being "ridiculed” counts as being “shut down” and therefore is anti-free-speech.
This has always been the core of the right’s grievance. It’s not about access — they have plenty of access — it’s about acceptance. They want to be believed, and even more importantly, liked and respected. They have a voice, they’re just super sad that it’s no longer a respected voice in America.
Pathetic. Ridicule is also free speech. Try having less ridiculous opinions, if you want respect.
"Mute user" is not censorship; it in no way prevents others from seeing your post - it's still published on Reason.
When you block someone, you prevent yourself from seeing that person's posts. When some sort of moderator blocks someone, he or she prevents lots of other people from seeing that person's posts.
Exactly.
When a children's book publisher rejects an adult manuscript, they are preventing children from reading that author's ideas. So what? Mastodon's instances appear to be, in some cases, devoted to individual communities or topics. When publishers/moderators ban trolls, they are preventing lots of people from being trolled.
So if I understand the Reason Commentariat take on this correctly...
A private actor deciding not to host someone's words because they don't like those words is censorship.
A public actor deciding not to host someone's words because a "concerned parent" can't be arsed to actually check what their kid is checking out of a public library, so the library has to pull the book from everyone is absolutely not censorship at all, why would you ever think that, you must be a libbie-commie-socialist.
That about right?
Yes, it’s all censorship. Even when the private entity does it. The distinction you’re trying but failing to make is that the private entity can’t violate the 1A.
As to the second point, does our historical reluctance to show movies starring Johnny Wadd Holmes to second graders (or let anyone under 18 watch them) constitute censorship?
Incorrect: I'm insulting Reason.com, the Volokh Conspiracy, Volokh himself, and the majority of the Reason commentariat by calling y'all a bunch of dirty hypocrites.
"A private actor deciding not to host someone’s words because they don’t like those words is censorship."
Exactly.
In my opinion it should become actionable censorship when the platform approaches the government in power, so Twitter and Facebook but not Reason or Mastodon. Interstate commerce powers, activated. Jack-booted thugs kicking down the doors of Silicon Valley temples and defiling their gods. Moderators staring in horror like Alex during his aversion therapy. Billions of people at risk of having to change settings. Feeds, rearranged. All these algorithms will be lost in time, like tears in the rain.
You think Twitter and Facebook are approaching the government in power? Seriously?
Well, of course you can see the difference between a platform doing the censorship (whether Facebook or each individual Mastodon instance) and a user putting screens on their own feed.
In the example above Pesca was suspended by the platform, which is where the controversy arose.
If it had merely been other users blocking Pesca on their own feeds, there would have been nothing to write about, but he was suspended by Journa.host, which imposed the platform's sensibilities (or really those of its most sensitive users) on everyone using the platform.
You don't seem to understand Mastodon. Pesca wasn't suspended from Mastodon. Pesca is free to continue to post to Mastodon, and anyone who wants to is free to listen.
Think of the different instances as promising something like, "we mute people so you don't have to." Their terms say what sort of people they'll mute on your behalf. The people on journa.host presumably want people like Pesca and Molloy to be muted.
So no, I can't see a significant difference between a bunch of users all muting Pesca, and Pesca being muted by an instance whose users all said that they'd like him to be muted.
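To make the mechanics concrete, here's a toy sketch in Python (made-up names, not the real Mastodon or ActivityPub API) of the two layers of "muting" at play: the mute a user applies for themselves, and the suspension an instance's moderators apply for everyone on that instance.

    # Toy model only; hypothetical names, not the actual Mastodon API.
    class Instance:
        def __init__(self, name):
            self.name = name
            self.suspended = set()   # accounts the moderators have suspended

        def suspend(self, account):
            self.suspended.add(account)

    class User:
        def __init__(self, handle, instance):
            self.handle = handle
            self.instance = instance
            self.muted = set()       # accounts this user has muted personally

        def sees_posts_from(self, author):
            # Hidden if either the user muted the author or the user's
            # own instance suspended the author.
            return author not in self.muted and author not in self.instance.suspended

    journa = Instance("journa.host")
    reader = User("@reader@journa.host", journa)
    journa.suspend("@pesca")                    # instance-level decision by moderators
    print(reader.sees_posts_from("@pesca"))     # False for every journa.host user
    print(reader.sees_posts_from("@molloy"))    # True until muted or suspended

Either way the effect for a journa.host reader is the same: the posts disappear from their view, which is the sense in which the instance is just muting on its users' behalf.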
The users by and large didn't say they wanted or didn't want him muted. A vocal minority did and that's all it takes.
I vaguely remember a science fiction short story, possibly by Greg Egan, where a religious right type was trying to rally enough votes to get somebody offensive banned from real life. This was an option in a possibly dystopian future United States. It didn't take a unilateral moderator action to enact the death penalty, or a finding that he violated the country's terms of service. It took a big number of votes. (If anybody remembers the story, please name it.)
The users by and large didn’t say they wanted or didn’t want him muted.
They said they want him muted by signing up to an instance that promises to mute people like Pesca. If anyone on journa.host is upset about Pesca being muted, they can realize their mistake and un-mute him by switching to a less-censorial instance.
Mastodon is not at all like your short story. The thing that makes your short story scary is that there's only one "real life," so being banned from it would suck a lot. You could maybe analogize that to being banned from the Internet.
Being banned from one instance of one service is nothing like that. That's not being banned from "real life," it would be like being banned by one Catholic church. Not even the Catholic church as a whole, which is a thing that happens. Just one local church being like, maybe you should find a more suitable church.
Well, what do these words mean, then, if he and Molloy weren't suspended:
"Journa.host suspended Pesca for "demeaning the professionalism of a journalist, based on their identity"
"Journa.host also suspended Molloy for "attacking the integrity of another [user], who is a moderator on the server,"
And of course I didn't say Mastodon suspended them, I said the platform did, which of course is the Mastodon instance "Journa.host".
And of course I didn’t say Mastodon suspended them, I said the platform did, which of course is the Mastodon instance “Journa.host”.
This is your problem. There’s no reasonable sense in which journa.host is “the platform.” The platform is Mastodon, or perhaps ActivityPub if you’re technically minded.
Being suspended from journa.host is like having your letter to the editor rejected by WaPo. There are plenty of other editorial pages. And worst case, you can self-publish by setting up your own Mastodon instance.
I'm mad that Darth Elon is pulling back censorship/deplatforming on Twitter so I am going to join a platform which inherently makes censorship/deplatforming more difficult!
Works for me.
This journalist is very wicked: When attacked, he defends himself.
As Brett pointed out above, to a lot of these people, the cardinal sin is disagreeing with them.
So you can't point out a user is spreading misinformation? Sounds like a terrible rule.
[Look - I moderated myself. I was going to use a bad word beginning with "h" and rhyming with "fizz."]
Mike Pesca was let go from Slate because he used the n-word at work on multiple occasions. His takes on transgenderism and race appear to come from a position of privilege, without the tempering one might apply after realizing how un-privileged trans and non-white people are. As one person put it (paraphrased), it was "having her humanity publicly debated" by her coworker.
Just some context for why he might have been uninvited from yet another collection of professional journalists for making a controversial point about a deeply emotional topic without also considering its impact on real people.
Will the underlying New York Times article be banned from social media, or restricted?
Or will they only ban those who praise the Times' "careful, thorough reporting"?
I read the Times article. I didn't find anything of issue with it. I don't believe the article was necessarily what started the confrontation as much as Pesca's take on it and the way he chose to present it. I don't see any evidence that he was kicked off his Mastodon instance due to what other people at the Times wrote. He may think that (he appears to in my reading), but he was exiled due to his own words. Since I don't have a Twitter account anymore, I read what I could of the exchange and relied on others' reports of it. I wouldn't base my opinion of Mastodon, or of the folks on this instance in general, on this particular exchange. It looks like two people with twitchy Twitter fingers and agendas who found each other ready for a fight.
“Mike Pesca was let go from Slate because of using the n-word at work on multiple occasions.”
Ah, apparently this was in the context of discussing whether it’s ever appropriate to say the word, and using the word itself during the discussion.
If the woke version of Around the World in 80 days is any indication, it’s not even appropriate for characters who *belong to the Ku Klux Klan* to use the word. They may be racist killers, but they are very careful in their use of language.
(If you're going to modify Jules Verne's plot that much, at least have *real* Klan, not characters who kill people but draw the line at racial slurs and even wearing hoods and cloaks.)
It's like having the Nazgul ride around in civilian clothes and speaking in normal voices.
Black (now-former) coworkers of his found the discussion by all white people on the company Slack channel offensive. A discussion like that in a context that doesn't consider how a deeply hurtful slur, even when used without malice, can offend people is bound to be problematic. Pesca had used the word at least twice before at work, according to news articles on the topic, prior to this discussion. And while the discussion was ostensibly about another person having done this, the co-workers understood the conversation as actually being about Pesca's own past use.
I can see how an employer would want to ban the word, but there’s one caveat – under a literal reading of the employment-discrimination law (a literalness which was proclaimed in Bostock), if they ban it for one race, they would have to ban it for all races.
(In real life, I as a person of paleness avoid the word, and refuse even to listen to it (except in gangsta rap of course)).
As a white, gay man, I've used the word "fag" a few times around straight friends. I thought it would come out in a jocular fashion, but instead it seemed jarring, and my friends didn't know how to interpret it. I've come to the conclusion that some slurs are far more difficult to reclaim than others, like "queer."
So it might be with the n-word, which I am uncomfortable hearing, as it often means I'm about to experience something unpleasant. I cannot imagine anyone of any race using that word in a professional setting except in the edgiest of edge cases. I don't listen to gangsta rap; I'm a bit more "old school" than that.
"I cannot imagine anyone of any race using that word in a professional setting except in the edgiest of edge cases."
I hate to say this to someone on the "other side" on the Internet, but you have a point. It doesn't seem professional for an employee of any race. Anyway, nondiscrimination laws would seem to require a single racial standard in the workplace (as opposed to social occasions where people in the targeted group try to "appropriate" a slur).
"I cannot imagine anyone of any race using that word in a professional setting except in the edgiest of edge cases."
People discuss the use of the word "nigger" in professional settings all the time, and there is plenty of debate over whether it is necessary or desirable in some cases to say the word.
All the time? Seriously, how often does it actually need to be discussed? Are there other racial, ethnic or sexual slurs that receive so much debate?
There are three workplaces where I can imagine this being a regular debate.
Reporters, who have to decide whether to censor what people actually say.
Teachers and the common debate about use of the word in context of older works.
and rap artists who get off on vulgarity.
I can't think of any other professional setting where this is even a debate.
How about law, where what people actually said is relevant?
Given the long history of judges punishing vulgarity in court, I think this one is settled. Not necessary.
This gets back to my point up at the top: If you push something beyond the pale, the pale moves. The more you block content you find offensive, the more content you will find offensive, because your perception of offensiveness rescales.
It's a positive feedback loop.
Yeah, the n word gets put beyond the pale, actual black people get to be within the pale (lol).
1) Trans people and non-white people might be unprivileged in many places, but the journalism industry isn't one of those places.
2) Disagreeing with someone's views is not "debating their humanity." That's just rhetorical gibberish used to shut down disagreement.
3) His "controversial point" was praising a news story from the NYT. Think how far far far far far left you have to be to think that the NYT is "transphobic."
1) LGBT and non-white people live in the broader world where politicians routinely debate and sometimes pass laws targeting minorities in order to score political points with their CIS white voting base. Bigoted stereotypes used to attack minorities or deprive them of their rights or humanity don't change just because they are uttered in the context of the journalism industry.
2) Promoting information that questions the benefits of trans healthcare for young trans persons, at a time when states are trying to ban this healthcare while also going after trans children and their parents in general, makes it a tad more serious than just "disagreeing with someone's views." One can have a disagreement about whether In-n-Out is worth the hype or about whether sodomy should be outlawed. One is a burger and the other is oppression. It depends on the nature of the disagreement.
3) Context and delivery matters. This has nothing to do with the NYT's political bias. I also think Molloy's kneejerk response was equally reprehensible.
Define “young” in this context.
Minors. It was the topic of the NYT article. Puberty blockers would largely be limited to minors for whom puberty hasn't fully arrived.
My point. This is a class of people who we have never allowed to make any decisions for themselves because they’re too immature emotionally and intellectually to decide important things. Now we’re going to allow them to make THIS ONE decision alone? And asking that question makes you transphobic. C’mon.
The NYT article in question was not about them making the decision alone. The article was whether everyone — including doctors — has been too hasty to jump on the puberty blocker bandwagon without consideration of — or understanding of — the long term effects. (The c.w. in the transgender care industrial complex in the U.S. has been that they're entirely reversible, and therefore it's no big deal to use them.)
Which is still a fair question and isn’t anything phobic. Are the opinions of someone that young really that well founded?
I think it's a fair question and not transphobic too. But it's different than the question you raised.
I understand.
‘The c.w. in the transgender care industrial complex in the U.S. has been that they’re entirely reversible, and therefore it’s no big deal to use them.’
They’ve been used for longer and with more frequency for cis kids, which doesn’t suggest they’re wrong.
Yes, which means that they've accumulated enough experience to know that they're not actually harmless. For instance, they're known to cause serious problems with bone density, that show up as early osteoporosis, if used to stretch out puberty in children who are short, so they get some extra growth in.
But the normal 'cis' use is in treating precocious puberty, and yes, it does seem to work for that. That's a fairly limited use, to allow puberty to happen on a normal schedule.
OTOH, it's starting to look like if you delay puberty too long, you've missed your window, and it's never going to happen normally.
I happen to think that if we did understand the biology of this well enough to actually delay puberty without causing problems, that would be great. But we don't, and having information on this sort of thing embargoed as "transphobic" would be absurd if it weren't so dangerous.
The risk to bone density is not outside the norms for risks associated with many other medicines; suddenly acting as if it's a profound and deadly danger when used on young trans people is transphobic. Every relevant medical expert I've read on the subject says they're reversible. Every random transphobic yahoo on the internet acts as if they turn people into Quasimodo.
Brett, people aren't sending bomb threats to children's hospitals because they're worried about bone density.
"The risk to bone density is not outside the norms for risks associated with many other medicines"
Which get taken for valid medical reasons, normally.
Here it's being used as a 'treatment' for gender dysphoria, which is positively ironic, given that they're delaying the very thing, puberty, that normally resolves such cases.
But setting that aside, and assuming for the moment that this is a reasonable treatment, you're not asking that the risks be balanced with the benefit. You're asking that knowledge of the risks be suppressed.
Treating gender dysphoria is valid; your prejudices have nothing to do with actual medicine. Knowledge of the risk is only being suppressed if doctors aren't informing patients before they undergo treatment. We've already seen that your idea of "suppression" is people calling out the utter hateful dishonesty with which you want the information presented.
'who we have never allowed to make any decisions for themselves'
This is just completely and utterly untrue.
'Now we’re going to'
It's actually been happening for a while, the scaremongering just reached hysterical pitch now. Yes, 'asking the question' makes you transphobic because you couldn't be arsed to give a shit about what actual trans people think and need.
Your point #2 seems to at least indirectly criticize the Times for publishing what you call “information” which allegedly harms trans people. By saying “information” instead of “disinformation,” are you indicating that the Times’ information is true?
And if information is in the Times, then I think that generally constitutes promotion (unless they bury it on the bottom of page B12, or the Internet equivalent of p. B12). So the Times would also be guilty of promoting anti-trans information.
Is this "news not fit to print"?
Yeah, that's not what I'm saying nor what I've done. Read the original exchange between these two persons. Neither of them look terribly good by the end of it.
OK, that’s why I tried to express myself tentatively instead of saying “what you’re *really* saying is…” which would have been a total dick move on my part.
The *Times* may have gotten away with this article because it has a good deal of leeway to publish stuff that would have been considered "phobic" and "disinformation" if published by, say, the NY Post. But based on past incidents, I think the Times staff are monitoring their employer to make sure it doesn't stray too far from wokeness.
“Promoting information that questions the benefits of trans-healthcare for young trans persons at a time when states are trying to ban this healthcare…”
And when people say that Gay and Trans rights advocates want to censor truthful information that runs counter to the “narrative”, this is an example. People like Shawn_Dude don’t want people to have the facts, and don’t care how many children get mutilated by this bullshit pseudo-religious gender ideology.
Your idea of truthful information is LibsOfTikTok targeting persons or institutions for harassment and bomb threats.
Lol. You comment is untruthful information.
You would say that.
Surely the question should be whether the NYT is reporting accurately, not whether some people might find accurate information helpful or harmful to their political agenda.
If the treatment in question isn't beneficial — or at least if it's unclear whether it is — then one should question its benefits.
" Promoting information that questions the benefits of trans-healthcare for young trans persons at a time when states are trying to ban this healthcare while also going after trans children and their parents in general makes it a tad more serious that just “disagreeing with someone’s views.”"
Read that again. You're telling us it's impermissible to question the benefits of these treatments? Really? Suppose they really have horrible medical side effects? You're saying finding that out is forbidden?
Listen to yourself! Is this even remotely reasonable?
"Bigoted stereotypes used to attack minorities or deprive them of their rights or humanity "
Again, nobody is depriving anybody of their humanity.
You know "deprive them of their rights or humanity" is just a slogan, right?
It roughly means speakers should be censored or arrested.
The actual literal meaning of the words "deprive them of their rights or humanity" is not intended. The implied threat/justification to use government or non-government violence or coercion in response is the intended meaning.
"It roughly means speakers should be censored or arrested."
No, it roughly means speakers' claims are not being uncritically accepted.
Some middle rank swimmer desperate for wins declares himself to be a woman, so that he can enter women's swimming competitions and dominate them. You deny that he's a woman, you're told that you're denying his "humanity".
No, you're just denying that he's a woman. He's a human male idiot.
I think "deprive them of their rights or humanity" is a way of saying violence is thereby justified against the speaker.
It’s the same plan we saw on college campuses with people claiming words made them "unsafe" (therefore violence is justified against the speaker). They followed through on that and assaulted speakers. Expect the same for "deprive them of their rights or humanity".
Any such language should be heard as a call to arms.
'is a way of saying violence is thereby justified against the speaker.'
No. The two of you are feeding each other's hate.
Just the example given about how moderation and mediation works with Mastodon seems the antithesis of free debate.
In a world where you can't feed hormones to beef or chicken, it shouldn't be controversial to debate the use of hormones and hormone blockers on children. Yet somehow some people think that even questioning the use on children of drugs considered too problematic to put in our food supply should be completely shut down because ... well, I don't really understand why it can't be questioned, other than the fact that children aren't currently in our food supply (although I'll note that that question has long been considered as a "modest proposal").
It's really more a question of "choose your associates carefully." This person chose an activist hub and got exactly what they advocate for.
I love how casually you equate the far right with hate groups and Nazis. Nice!
Because you’re a terrible person.
You meant, of course, to say that Alan is a terrible person for casually equating the far right with hate groups and Nazis.
No, I meant you. Because you celebrate falsehood.
Oh?
In general, user-directed preferences are the way to go. Decentralized instances are an interesting move in that direction.
I often marvel at the censorious impulse.
Some guy sitting on a stool in a hole in the wall, down at the end of the bar ranting about this or that, is no different than some person ranting in their little corner of the internet on their little page or account. You don't have to listen, nobody has to follow. Nobody is paying any attention unless they want to for some reason.
Yet some people are consumed with rage at the thought that somebody, somewhere is expressing ideas they don't like.
And those are the people who will crawl across broken glass to get moderator positions...
The thinking that every moderator is a censor is the same as the leftist thinking that every cop is a thug and bully.
They're mostly poorly-paid kids in foreign countries who need PTSD counselling to cope with their work but don't get it.
The "balkanization" the author speaks of here has already happened at least once. Gab (in one of its multiple past incarnations) was a Mastodon instance. It isn't any longer. The problem for Gab was that Mastodon is open-source, and so many people wanted to force Gab to shut down that they kept changing the Mastodon API in ways that forced all installed instances of Mastodon to rebuild their code. Frequently.
It wouldn't surprise me if that situation happens again should any Mastodon instance carry enough dissident material that the Deep State decides they want it shut down.
Not even the deep state, just well positioned control freaks.
Did this target Gab somehow, or were the mastodonic powers-that-be so determined to injure Gab that they equally and indiscriminately injured themselves?
Cool, more pearl clutching.
Look, this is simple. Mastodon is like a neighborhood, and each house in the neighborhood has its own tenants.
A house might kick out a tenant if they think that tenant is an asshole. That former tenant can then go find a new home, or, if they've sufficiently pissed off everyone in the neighborhood, make their own.
To extend the metaphor, each house also publishes a newsletter of things their tenants have said/done. Each other house can decide for themselves whether they're going to accept newsletters from specific houses. If House A thinks House B is a bag of limp donkey dicks? They can say "nah, we're good" and keep on trucking.
Similarly, if House C thinks House D is a bag of smelly farts, they can choose to not send their newsletter to that house.
If a tenant thinks their house is blocking too many newsletters, or not sending their newsletter to the right people? They can find a different house that better matches their preferences or --failing that-- make their own.
This isn't censorship of online experiences, it is curation of them. Find the curators that match your preferences and you're good.
And if the idea that not all social groups are going to accept your homophobic, racist, Nazi-defending ass really chafes your nethers? Then go to Gab already, you'll be welcome there.
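If the metaphor helps, here's the same idea as a toy sketch in Python (hypothetical names, not the real federation protocol) showing how one "house" can stop accepting another house's "newsletter" without anyone being thrown off the network.

    # Toy model of defederation; made-up names, not actual ActivityPub.
    class House:                      # i.e., a Mastodon instance
        def __init__(self, name):
            self.name = name
            self.refused = set()      # houses whose newsletters we won't accept

        def defederate(self, other):
            self.refused.add(other.name)

        def accepts_newsletter_from(self, other):
            return other.name not in self.refused

    house_a = House("a.example")
    house_b = House("b.example")
    house_a.defederate(house_b)                        # A's choice, binding only on A's tenants
    print(house_a.accepts_newsletter_from(house_b))    # False
    print(house_b.accepts_newsletter_from(house_a))    # True; the block isn't mutual

Nobody's newsletter stops existing; House A's tenants just stop getting it delivered.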
So much pearl clutching.
"Mastodon ... originated not in the United States but in Europe ... which has more permissive legal and cultural norms around speech restrictions."
That's pretty Orwellian.
You misinterpreted.
That is saying that Europe is more permissive of speech restrictions, not that it is more permissive of speech.