The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Thoughts on Internet Content Moderation from Spending Thousands of Hours Moderating Volokh Conspiracy Threads
Of possible relevance to the Fifth Circuit's recent decision in Netchoice v. Paxton.
Reading the Fifth Circuit's decision in Netchoice v. Paxton brings me back to the old days of the Volokh Conspiracy. A little bit of context: Back when we were at volokh.com, we introduced open comment threads. For a few years, I spent over an hour a day, every day, moderating Volokh Conspiracy comment threads. I stopped after we moved to The Washington Post in 2014, where comment moderation was up to them. I'm very glad I don't do comment moderation anymore. But my comment moderation experience at volokh.com left a lasting impression.
I think three of those impressions might be relevant to thinking about Netchoice.
First: It is a strange rule of human nature that most people who are moderated in an online forum feel, with great certainty, that they are being censored for their beliefs. Few people think they just went too far, or that they broke the rules. Moderation is usually seen as the fruit of bias. So liberal commenters were positive I deleted their comments or even banned them because this is a conservative blog and we were afraid that liberal truths would pierce through the darkness and show the false claims of conservatives. And conservative commenters were completely confident that I deleted their comments or even banned them because we are liberals trying to prevent conservative truths from exposing liberal lies. It just happened all the time. Moderation led to claims of censorship like day following night.
Second: Content moderation always reflects a message of the moderator. My goal in moderating Volokh Conspiracy comments was just to keep discussions civil. My thinking was that if you can keep comments civil, you will not only encourage better comments but also entice better commenters. And I think experience proved that correct. For a few years there, moderated Volokh comment threads were pretty insightful places to go to look for perspectives on our posts. But moderation always implies some sort of message. It implies some value or judgment that the site has (or maybe just the primary moderator has) that they want to advance. For example, when I was moderating out uncivil comments and commenters at volokh.com, I didn't care if an opinion was liberal or conservative. But my moderation still expressed a value: A belief in a marketplace of ideas, where we wanted the ideas to be expressed in a way that might persuade. That was the value we (or I) had. It's a process value, but still a value. Moderating was always an effort to further that underlying value we had.
Third, perfect comment moderation is impossible, but you can't let the perfect be the enemy of the good. I wrote above that many moderated commenters believed that they were being censored for their beliefs. A corollary is that many commenters had examples of comments from the other side that had remained up, apparently unmoderated, that to them proved the bias. If you deleted a comment as uncivil, it was common to hear howls of outrage that months ago jukeboxgrad had a substantially similar comment somewhere that is still up, so that under the principles of due process and the Magna Carta it would be despicable to moderate this comment now. The problem was scale. We might have 20 posts a day in those days, as there were a lot of short posts. An average post might get (say) 100 comments, with some getting many more. That was around 2,000 comments to wade through every day. You'd need full time moderators to try to moderate them all, with some sort of legal-like process for adjudicating individual comment moderation decisions. Moderated commenters often seemed to want that—and in some cases, to demand it. But it was just impossible given our day jobs. Moderation was needed to make comment threads worth reading, but the sheer scale of comments made imperfect moderating the best you could do.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
Moderation to keep discussion civil is generally considered good; few of us would have any objection. Secondly, this is your blog, so you can set the editorial rules. You are running the blog privately and are hosted via an internet provider. (Hint: in the early days we connected to the internet via a telephone line.)
Though that is very different from what is (or appears to be) happening on various internet providers.
Your terminology and statements strongly suggest that you do not understand how someone accesses the internet, how websites are hosted, and what the differences between the two are.
I know, right?
"Though that is very different from what is (or appears to be) happening on various internet providers."
Social media platforms are not internet service providers. They do not enable or facilitate, in any way, a user's general access to the internet. They only have any control over that user's access to their own platform. There is no difference in function between facebook, Twitter, YouTube, or Instagram and the Reason website or this blog. The only difference is the scale of the user base.
Now, if you are claiming that the internet service providers (AT&T, Cox, Verizon, Spectrum, etc.) are somehow really behind the policies of those social media platforms, then that is another story that would need its own evidence.
Neither Reason nor this blog gives users a platform to make arbitrary speech. They provide a comments section on posts or articles that Reason or the Conspirators create.
That's a difference that a lot of people think is important.
And, quite obviously, neither do any other social media websites that moderate content. Just because you can post your thoughts on a blank slate rather than on a blank slate underneath an article doesn't really change the analysis.
Or are you suggesting the Thursday Open Thread could be regulated by state law to require moderation in a manner directed by the state?
I'm assuming not, which obliterates what "a lot of people" think.
"They do not enable or facilitate, in any way, a user’s general access to the internet."
That's an interesting claim. Would you say that Google Chrome, for example "facilitates" a user's general access to the internet?
I think I would. A web browser does facilitate access.
Would you say an "App" such as the Facebook App "Facilitates" a user's general access to the internet? Especially if it doesn't go through a web browser?
Again, I think the answer there is yes.
Well you’d be wrong.
A web browser does exactly what the simple English words suggest.
Without an internet service provider, it does nothing.
Without a web browser, an internet service provider is still providing access to the internet.
These are simple terms and basic internet concepts which have not changed in more than two decades. There’s no reason to be getting these concepts wrong at this point.
The issue is...one of your definitions.
The definition of "facilitate" is "to make easier : help bring about" (Merriam-Webster).
The definition of "access" is "freedom or ability to obtain or make use of something" (Merriam-Webster again).
So does an internet browser make it easier to make use of the internet?
Undoubtedly yes.
Oh, I was not aware that we were pretending that web browsers exist in a vacuum where they don’t require connections through an ISP to do anything.
Chrome is not an “internet browser.” It facilitates access to websites which are a feature of the internet, but it does not facilitate access to the internet.
By all means, call up your ISP and cancel service. See how well your browser works then.
Hell, maybe we’d have a brief period of reasonable and honest arguments around here.
Chrome or a Facebook app facilitates a user's existing connection, in that it makes the connection more useful. They both require an existing connection, though. Technically, a dedicated person could get the same functionality just using telnet, although it would be extremely unpleasant.
Neither facilitates general access. Facebook only supports its own servers, and Chrome only the HTTP(S) and FTP protocols. If you want to do something else, you have to use something else.
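For what it's worth, here is roughly what "just using telnet" amounts to. This is only a sketch (Python, standard library only; example.com is chosen purely as an illustrative host): it speaks bare HTTP over a raw socket, which works only because an ISP is already providing the connection underneath. The browser adds rendering, link-following, and TLS on top of that connection; it doesn't supply the connection itself.

import socket

host = "example.com"  # illustrative host only
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# Open a TCP connection over whatever internet access the ISP already provides,
# send a minimal HTTP request by hand, and read back the raw response.
with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace")[:500])  # status line, headers, start of the HTML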
Since the definition of access is "to make use of something" and since a web browser facilitates making use of something (i.e., the internet), then...
So...my mouse and my office chair are ISPs because they make my use of the internet easier?
And the local electric utility.
Two points:
1) As Professor Kerr notes in his first point, everyone always thinks that the moderation is political rather than based on civility, regardless of why the moderator is taking the action. In the case of the Texas law, this would mean that even if you were just moderating for civility you'd constantly be getting sued, which would make any attempt at moderating for civility completely impractical.
2) You say moderating VC is cool because it's "your blog", but why isn't Facebook "Meta's website" or YouTube "Google's video platform?" They are private entities as well, which is the general objection to trying to force them to carry content that they don't want to.
Sometimes when I see appeals to custom I think of the International Code of Zoological Nomenclature, which attempts to explain how to determine the correct name for a species or other group of animals. The Code explicitly disavows reliance on custom and precedent.
If you want to credibly pass judgment on moderation you have to leave your own posts out of it. Look only at others' posts and look for patterns rather than instances. For example, how many sites let you call Russians "orcs" while nuking posts with equally "dehumanizing" insults of other groups?
I also spent time moderating physicsforums.com. It is one of the few public forums centered on science that is "heavily" moderated. It is precisely because of the moderation that most users find the forum worth their time. They don't want to waste their time on what they consider garbage.
Obviously, there are huge differences between science and politics. But there are also similarities when it comes to forum moderation.
One thing we find useful is a mission statement for the forum. [Our mission is to provide a place for people (whether students, professional scientists, or others interested in science) to learn and discuss science as it is currently generally understood and practiced by the professional scientific community. ].
Citing the mission statement silences many protests.
I like the idea of a mission statement as prerequisite for moderation.
This is my personal favorite moderation policy, seen on a music review site: "Be a dick and we'll ban you." Only works for a small site.
" It is one of the few public forums centered on science that is “heavily” moderated. It is precisely because of the moderation that most users find the forum worth of their time. They don’t want to waste their time on what they consider garbage."
And that works great, so long as the moderators stick to just keeping things on topic. Physics, for a physics site. Knitting, for a knitting site. The problems set in when the moderators themselves decide to be political, on a site that was originally non-political. You can share patterns for gay pride flags, but not for MAGA caps, say. That seems to have been the tipping point for Ravelry; so many left-wingers joined to knit pussy hats that there were enough of them to purge the Trump supporters.
Indeed. But it's when the higher ups seek to suppress and limit conversation that it's problematic.
Here's a good example. An ordinary person on Facebook, using it to contact like-minded people about a recent whistleblower on government activity. Suddenly... the account is suspended.
https://nypost.com/2022/09/25/facebook-silencing-activity-related-to-fbi-whistleblower-steve-friend/
Why? What's the effect? Essentially it's like cutting off a dissident's telephone line or e-mail service, seeking to cut lines of communication. And it's worrying.
At this point I've moved beyond being worried, and am starting to be scared spitless. Censorship and social-credit-score-like functions are accumulating at a rapidly accelerating pace, and platforms that refuse to cooperate are subject to coordinated tech and financial deplatforming. There are legal activities that are barred from a lot of the IT/financial infrastructure, and the recent decision by ALL credit card processors in unison to adopt special labeling for gun-related transactions indicates that we can only expect that to go from "a lot" to "entirely" soon.
I think we hit a knee in the curve back in 2016, and can anticipate police state levels of censorship and repression soon.
You sound a lot like Artie Ray Lee Wayne Jim-Bob Kirkland sometimes . . . But Prof. Volokh seems to like you better, so censorship probably won’t be a problem for you at the Volokh Conspiracy.
That article is fairly duplicitous. Not very trustworthy if you ask me.
"Moderation was needed to make comment threads worth reading"
A shot across the bow!
Oops
This all seems obviously true. But then why did EV himself not learn these same lessons? Did he not help with the moderating?
Oh, I'm a big believer in the value of moderating discussion threads, which people tend to experience as a coherent product -- we often read them beginning to end, for instance. This is what I call the "conversation management function," and I think people who run blogs or Facebook pages should be free to exercise it. Indeed, there may even be value in the platforms (such as Facebook itself) exercising it, at least to block spam and the like (I so note at pp. 410 & 452-54 of my Treating Social Media Platforms as Common Carriers? article).
But the matter may well be different (as I suggest in my article, though only tentatively) for items that readers consume separately from other items, and that they directly seek out. Your experience of Volokh comment threads, and the value you get from them, flows from all the comments you see on those threads -- a few bad comments can derail the conversation, and make the threads less enjoyable. But your experience of Volokh.Blogspot.com (back when we were on Blogspot) didn't flow from all the blogs that were hosted on Blogspot. If there was an ILikeToKillPuppies.Blogspot.com, you wouldn't even have noticed it if you were just visiting Volokh.Blogspot.com (unless Blogspot chose to recommend it to you, and I think it should have been entitled not to recommend it).
Likewise with, say, Facebook pages or Twitter accounts. If Facebook or Twitter has to host all such pages and accounts on a viewpoint-neutral basis, that would simply enable people who want to see those items to view them. Because no one just reads all of Twitter beginning to end the way people might read a comment thread beginning to end, such moderation of pages/accounts is much less valuable (and, I argue, less likely to be constitutionally protected) than moderation of comments.
Again, I do think that platforms have the right to decide what to recommend (e.g., "Pages You Might Like ..."), see pp. 409-10.
Interesting. A counterargument as to Twitter, I think, is that the retweet, quote-tweet, and reply functions on Twitter make reading them much more like a single product. Reading Twitter is like reading a comment thread; people are brought in and out of the conversation, often whether they want to be or not, making different voices part of the conversation.
Right. Also #hashtags, which could be seen as posting to a Twitter-hosted page (hosting) or as contributing to a coherent comment thread (conversation management). I think it’ll be difficult in practice to distinguish hosting from conversation management.
What if I want my posts on my page (which according to EV may be must-carry) to have their comments go unmoderated? Seems odd that the platform could ban identical content based on the identity of the poster i.e. whether or not they “own” the page being posted to. If a platform could condition carriage on allowing comments to be moderated, why couldn’t they condition carriage on allowing posts to be moderated as well? I’m having a hard time following the constitutional distinction here that could support (and presumably help to define) those lines.
Here's an idea. What if the distinction isn't between hosting, conversing, and recommending, but between speech intended for the public vs. speech intended for a specific individual or audience?
I think you could make a case that the distribution of speech intended for the public can never be an appropriate target of common-carrier rules. We usually think of common carriers from the sender / speaker's perspective, but the listener is just as important. Part of the point is to ensure that all listeners / recipients can be reached on non-discriminatory terms.
The sort of speech we're talking about here doesn't have a well-defined recipient. Even though I'm commenting on Orin's post, it isn't intended solely or even primarily for him, or else I would've just emailed.
As a listener, I might be interested in anything specifically directed to me. But I'm obviously not interested in everything directed at the public. To consume such speech requires curation, i.e. search / recommendations, to make it navigable at all. This is also true from the sender's perspective. If I want my post to get read by some specific people, maybe I could assume that they would seek it out explicitly, because they know me or whatever. But if I want my post to be consumed by the public at large, I must depend on a distributor to make that happen via curation (or by paying for distribution, but let's leave that aside). Just putting a post on the Internet with no way to discover it other than typing in the URI would be totally ineffective as a way of reaching the public. A lawn sign would be better. Curation goes hand-in-hand with speech directed at the public.
I'm not an expert in all of them, but I think this approach aligns well with the cases. Take Rumsfeld. In common-carrier terms, that was about ensuring military recruiters access to the specific audience of college kids, not about the recruiters' speech to the public at large -- so mandates are ok. But the newspaper and parade cases involved speech directed at the public at large -- so no mandates. The mall case is an outlier, but maybe that's explained by the quirk that the mall owner wasn't making a free speech argument. (I get that the reasoning of these cases wasn't explicitly public vs private, but maybe there are rational connections between that and the outcomes nonetheless.)
I think this would be an easier and maybe better-calibrated line than hosting vs. conversation management. For example, it means that Facebook might not be allowed to moderate posts that are visible only to my friends, for common-carrier reasons, but could moderate the same post if made public. There's some intuitive logic to that outcome.
Retweet has been blamed for poisoning the environment of Twitter. It also boosts engagement metrics so it looks like a success from inside the company.
I do not participate in Twitter and I do not have any firsthand experience with the evils of retweet. When I follow a link I am often driven away by the screen full of obnoxiously flashing images and the breaking of messages into 280 character chunks.
"You have to check out Tolstoy's new thread!"
"Thanks for putting the word 'thread' in there so I know to avoid it."
Likewise with, say, Facebook pages or Twitter accounts. If Facebook or Twitter has to host all such pages and accounts on a viewpoint-neutral basis, that would simply enable people who want to see those items to view them.
Much more than, "simply enable." It would also force unwilling platform proprietors to publish them. Professor Volokh, I can hardly suppose you are capable of overlooking that. So why leave it out? Do you think compelled speech for publishers does not matter? Do you suppose a content-neutral requirement to host Alex Jones will do no damage to the business which is forced to host it? When crazed political partisans decide they need to gin up online threats against election workers, and dox the workers to get that going, all okay with you?
If they were businesses rather than champions of progressive values the big tech companies would find a way to let Alex Jones in as long as he generated advertising revenue. "Exile" him to a separate top level domain with a different skin. (We're back in the subdomain dot blogspot model now, not the facebook slash user model.) The arrangement would be neither secret nor screaming at you from your browser's address/search bar.
The word "advertising" reminds me.
In the traditional common carrier model customers pay for transport of tangible goods or information. In the social media model customers buy access to users. A "must carry" policy for advertisements would fit better into the tradition of common carrier regulation than a "must carry" policy for users or comments.
In a more regulated social media environment, is an advertiser boycott legitimate grounds to censor a non-paying user? How does one judge the sincerity of a company's attempt to find alternative advertisers?
Do we kill off the "you are the product" model by regulation?
I'm not sure how you do that consistent with the 1st amendment. EU style privacy laws? Banning paid placement? Every regulatory approach I can think of has 1st amendment problems.
The best I can come up with would be to amend Section 230 to limit the safe harbor for moderation to legally mandated moderation, and require, not simply encourage, cooperation with 3rd party moderating services.
230 had a good model there, if the 3rd party filter provision had been taken seriously, and "or otherwise objectionable" hadn't been interpreted to mean "anything the platform feels like".
Once again: that's repealing Section 230, not amending it.
"If they were businesses rather than champions of progressive values the big tech companies would find a way to let Alex Jones in as long as he generated advertising revenue."
Ah, but here you identify the crux of the problem: the tech companies are businesses, and they want to make money, and the primary reason they engage in content moderation is because there's lots of content that advertisers have no interest in running their ads alongside.
Note that a common complaint on YouTube is not that content is being taken down, but that it's being demonetized. In this case, YouTube is willing to host the content despite finding that it runs afoul of at least some of their policies, but the speaker demands not only that YouTube host the content but that they help build a business model around it as well. I think it would be hard to find any sort of analogy outside of tech that would support this sort of demand.
Yes, the communications common carrier model can work when an end user pays the bills and the platform has no business need to look past their ability to pay. It doesn’t fit when the money comes from advertising and the platform’s business depends on assembling an audience those sponsors will be willing to pay to reach.
That model has more in common with broadcasting, with social media platforms playing the part of broadcast networks. Each “network” has millions of channels instead of just a few, but they still have a legitimate interest in establishing and maintaining a network brand that attracts viewers.
Of course, there’s no rule that says a platform must be ad supported. It can rely on subscriptions and swag sales instead, like a common carrier, and not care about advertisers. MeWe took that route, but since it is privately held we won’t know how financially secure it is unless it goes under.
“Exile” would satisfy no one. People would just say “Alex Jones has been unfairly exiled due to bias.”
Trump has been exiled from Twitter. He’s still hosted. It doesn’t really matter if it’s Twitter hosting him on exiled-from-twitter.com or not-Twitter on truth-social.com, either way he’s not part of the Twitter conversation, which is what really matters.
Making hosting alone into a common-carrier function might be goodness, but it does nothing to address Texas’s concerns.
What is frustrating about the distinction that you keep drawing, here, is that it ignores that the "product" that any Facebook or Twitter user actually sees is precisely that - a "managed conversation," full of "recommendations" implicit in how things are presented.
You can "follow" a page, but what Facebook presents to you may or may not be that page's most recent updates, it may or may not include content from other pages you haven't followed, it may or may not mix in ads. It can shake up the content it presents you, present it in different ways, find new ways to engage you in the content. It's just not a simple aggregation of user "subscription" choices, and has not been anything like that for years.
At the same time, your theoretical framing of comment threads reveals just how much work you're doing in the conceptualization of the issue, and not in the analysis. You describe a VC comment thread as though it's a VC product - a "conversation" that can be "derailed" by a few bad comments. But at the same time it is just as clear that a VC comment thread - and the content that is allowed to remain there - in no sense reflects the views of VC or Reason. Right? It's right there in the "Editor's Note" - commenters "own" their own comments. And now we have the ability to "mute" users whose content we no longer wish to see. Strangely, on your analysis, it seems that giving users more control over what they see weakens any argument that you have a First Amendment right to manage the comment threads in whatever manner you see fit.
It seems to me that - for your analytical framework to be viable - you need to be able to look at something like a VC comment thread through the same multi-function lens that you apply to Facebook, et al. There's a "conversation management" function - but a "hosting" function, as well. (I don't see any "recommendation" happening here, but conceivably one could be added in the future, if comments were made "upvote"able, for instance.)
It would seem to me that you can't avoid an argument that your ability to delete comments you find to be off-topic, antithetical to your own views, or insulting can be regulated under the First Amendment by just waving at comment threads as a whole as a kind of "unified product" produced by "managing" a "conversation." That would be begging the question. The fact of the matter is that you have made something of a public forum here, where people are invited to leave comments responding to posts or one another, that other commenters might wish to read. You're hosting those comments. The First Amendment might protect your right to "manage" the conversation, but it seems to me - under your "common carrier" argument - that you'd have to acknowledge that, at least insofar as the "hosting" VC is doing of those comments, you don't have a First Amendment right to discriminate against commenters based on their viewpoints.
But what you're describing about FB isn't the site as it was when it grew. You're describing what the site became once it was large enough to get away with annoying the users.
The users mostly signed up for a site that would let them see what they'd chosen to see. Not a site that would constantly reshuffle their timelines so they had trouble finding things, hide posts by people they were following, and shove content their way that they hadn't chosen to follow.
What you're describing is how FB altered the user experience to improve monetization, not the user experience that caused them to grow so big in the first place.
It's like suggesting that people watch TV to see the commercials...
While that's all true, Brett, why should people's rosy goggles about how FB used to be control how it actually is - specifically how its owners have chosen to make it?
I'll agree there are reasons FB became what it is today, but it's still important to recognize that they are self-regarding reasons on the part of FB's management, NOT user driven. Like I said, that's like pretending that people watch TV to see the commercials, or that Bell Systems became popular for having terrible user service. Accepting false premises results in false conclusions, after all.
What we're experiencing today with FB is classic monopolist behavior. The sort of behavior they only get away with once they're secure, and don't have to care what the users think.
I don't think that's true (except with ads of course, but that's not what this is about).
I don't want to have to manually click into each of my friends' or groups' pages to see what they've posted. I want it all on one screen. And since it won't all fit, I want the most interesting items sorted to the top. With just that, you've got a curated experience.
Whereas I don't want that curated experience, because I find FB's curation SUCKS. That's why I've mostly moved over to MeWe, which refrains from doing that sort of thing.
Brett, that's perfectly reasonable — but it reflects your personal preference, not what 'users' want.
All you're doing here is the classic early adopter complaint: "This business/product/service has changed from what it was when I got into it. Now they're trying to appeal to different customers than those of us who helped it get started." So? That the trendy restaurant changed its original menu that you loved when it became more popular may be a reason for you to go elsewhere, but it's not a legitimate grievance you have.
Maybe Facebook isn't for you anymore. Fine. That doesn't mean it's a conspiracy or Facebook is doing something wrong. It does not mean it's "monopolist behavior" (it's not a monopoly) with the company screwing customers because it doesn't have to care. It's providing a better experience for them — revealed preference — if not for you.
Of course, government regulation never actually solves monopolies. It enhances and enforces monopolies, by increasing barriers to entry.
No; he's describing what the site became once it was large enough that it had to do this to not annoy the users.
"If there was an ILikeToKillPuppies.Blogspot"
There is a role playing game titled "kill puppies for satan" where a player's character sheet should begin with the declaration "I kill puppies for Satan".
All good points.
Moderation must be a thankless job.
I’ve noticed a (slight) improvement here recently but it’s nowhere near as good as in the volokh.com days. In those days we had thoughtful conversations and now I know that it was due to Prof. Kerr that there were few (if any) ad hominem or crude partisan potshots.
I very much miss the Volokh.com days. The transition to WAPO introduced a new audience that is...less interested...in nuanced discussion. Moderation currently would seem an overwhelming task.
True but the real deterioration occurred with the switch to Reason. Though as I said it’s a little better recently.
“Though as I said it’s a little better recently.”
As I age my senses degrade, I will defer to your superior perception. How do you define "recently?"
Determining whether the greater decline should be attributed to WAPO vs. Reason is like arguing whether prime rib is better medium-well vs. well-done.
To clarify, I have attributed the recent improvement to
- less Trump
- the return of missing conspirators who have been absent a while.
The commentariat has as far as I can tell not improved.
I think the availability of muting may have caused one or two really bad offenders to self-exile, though.
I do expect things to get ugly again when the 2024 Presidential campaigns resume. I think you're right that Trump becoming less topical has lowered the heat... But now that that dynamic has started, I don't think Trump is necessary to keep it going. Just a presidential candidate Democrats don't like, which is to say, a Republican one.
IMO the deterioration occurred incrementally in steps: Blogspot to Disqus (Disqus' atomized threads made moderation more laborious and time consuming); Disqus to WaPo (the desertion by many core, long-time commenters, and the WaPo moderators' unfamiliarity with VC's culture and commentariat); and WaPo to Reason (the gates of Hell opened, drowning the remnants of old VC culture in a cesspool of unmoderated vulgarity and extremism).
I think the move to WaPo was devastating to the commentariat for two reasons: one, the commenting system there was (and still is) absolutely hostile to any kind of prolonged exchange of ideas, and two, because it was all behind a paywall.
It was incredibly frustrating to me when Eugene tried to mitigate the damage by announcing that the blog would still be accessible for free by anyone with a .edu or .gov email address - myself not owning either. It was just such a contemptuous and preposterous way of defining who his desired audience was.
I am not the most civil interlocutor here, in the Reason iteration, but I would gladly moderate my tone if only we could get the level of discourse from the old site back. Sometimes the only manageable response to some of these commenters is vituperation.
"It was just such a contemptuous and preposterous way of defining who his desired audience was."
Now, I wasn't enthusiastic about that either, being in the same boat, but he IS entitled to his own preferences about what sort of audience he has. Even if enforcing them would reduce his audience to a relative handful of people.
" but he IS entitled to his own preferences about what sort of audience he has"
My recollection is that EV wanted to allow all readers/commenters, but .gov/.edu was the best deal for nonsubscribers that he could squeeze out of WaPo.
That, and WaPo reneging on their promise to not censor the Conspiracy was, I believe, why they migrated to Reason. And so far as I know, Reason has kept their end of that deal.
I never bothered to get one, but considering how freely universities hand out .edu emails to alumni, I would think most people would not find it difficult to get one. Maybe I live in a sheltered and privileged world.
" by announcing that the blog would still be accessible for free by anyone with a .edu or .gov email address . . . a contemptuous and preposterous way of defining who his desired audience was "
On the other hand, this blog's target audience consists of people who regard "professor" and "mainstream" as slurs. I strongly doubt this blog's downscale fans are concentrated at .edu and .gov addresses. It's more likely .aol, .myspace, .stormfront, .ruralconnect, and .proudboys.
I am surprised that you say that. WaPo started the steep downhill. Its conflation of news and opinion in what should have been straight news pieces amounted to effective trolling for less-than-civil behavior in its comments section, which seemed to grow worse with time. For myself, that behavior led me to cancel my WaPo subscription. The past year here has seen worsening behavior, with frequent, at best schoolyard-style, name-calling and ad hominems (but still better than comments in other parts of Reason). The matter is not where the slope steepened but that we were already going downhill pretty fast.
As in, "No more Behar"?
Thank you Orin for your service. Even if you were compensated I can’t imagine it was anywhere near the value provided to others.
Re: “… most people who are moderated in an online forum feel, with great certainty, that they are being censored for their beliefs.”-
Do you really mean ‘most’, and if so do you mean that more than half the moderations you did resulted in you seeing evidence that the moderatee felt as you describe? Or is your denominator instead just those cases when your moderation resulted in some moderatee reaction?
"Thank you Orin for your service. Even if you were compensated I can’t imagine it was anywhere near the value provided to others."
This x 1,000.
I'm very glad I don't do comment moderation anymore.
Amen!
I'm very glad I don't do comment moderation anymore.
Amen!
(I hated doing it)
Anybody who wants to weigh in on today's content moderation "controversy" should spend at least a day making content moderation decisions themselves.
Orin rarely posts front pagers here, but they're usually among the most interesting. Reminds me that I owe him a beer.
No.
Ok, but:
A. YouTube is more-or-less a monopoly. Facebook and Twitter exercise monopoly-like power also, even though they aren’t quite as dominant as YouTube.
B. The Texas law is a law against discrimination, not a law against speech. All discrimination communicates a view. Let’s have a single rule that such communication is always protected or never protected, not one rule for Facebook and another rule for other discriminatory communications actions.
A. YouTube is more-or-less a monopoly. Facebook and Twitter exercise monopoly-like power also, even though they aren’t quite as dominant as YouTube.
You are kind of contradicting yourself here. Facebook is more or less a monopoly, except for these other highly popular social media platforms that it competes with. Each of those is only anything like a monopoly within its own niche of the internet.
The other issue with the monopoly argument is that we've seen so many tech businesses in the internet age rise and fall. Any regulation or declaration that a particular company is a monopoly might not survive the years it would take to make a legal framework around it. The anti-trust case against Microsoft is an example. At the time, it seemed like Microsoft would be able to dominate every aspect of the home computer software market using its size. But the browser wars that got that anti-trust investigation going ended up with users having significant choices without any regulation. (I still use Firefox, mostly, which didn't even exist yet in those days, I think.)
Monopolies by definition are only monopolies within a certain domain. Otherwise they'd be a totalitarian government.
And eventual turnover among monopolies does not mean abuses by current monopolies should be tolerated.
Subway has a monopoly over Subway goers.
That's a pretty dishonest reply, frankly. Subway isn't a monopoly in the relevant domain, which includes such giants as Firehouse, or Jersey Mikes.
FB IS highly dominant in the domain it occupies, unless you deliberately define that domain absurdly expansively.
Even accepting that, why should we care about a monopoly in such a frivolous area of life? So easy to just walk away if you don't like it (and it's a great way to improve your life).
Because it's NOT a frivolous area of life. A large amount of our communications and commerce are now routed through FB. And Twitter. And located through Google.
They're replacing newspapers and the public square.
You want to see newspapers and the public square come back strong? You need public policy to encourage diversity and profusion among private publishers. Section 230 is the opposite of that. The first step is unconditional repeal of Section 230.
It's a shame you can't understand that. The reason you can't is because you are an internet utopian, who supposes it would be possible to win public support for an online publishing regime regulated to provide cost-free, world-wide publishing to everyone, without prior editing. That will never happen, because the regime you want would dismantle the means to build and manage it, and it would wipe out public support for speech freedom and press freedom along the way.
That contradiction—where indispensable practical requirements forbid idealistic imaginative ambitions—is what makes the notion of a successful and happy-for-all Section 230 publishing regime a utopian pipe dream. Sorry, you can't have it.
" who supposes it would be possible to win public support for an online publishing regime regulated to provide cost-free, world-wide publishing to everyone, without prior editing."
Nah, I actually think it would be best if we could move away from the current system where services are nominally free, but actually paid for by monetizing the users screen time and private info, to a more subscription based system. The actual problem with the internet is that not enough of it is operating like a normal, healthy market.
Basically all the current dysfunction online is due to the users being distinct from the people footing the bill, so that the sites operate in the interest of the latter, not the former. Any time you separate the payer from the user things go to hell.
I actually think it would be best if we could move away from the current system where services are nominally free, but actually paid for by monetizing the users screen time and private info, to a more subscription based system.
How well do you think an ad-free subscription based social media company would do?
I personally think the current arrangement wins out easily, indeed, unless you can point to some distortion it looks a lot like the market's choice.
Thinking it would be best to move away from this model doesn't imply that doing so would be easy, Bernard.
Nah, I actually think it would be best if we could move away from the current system where services are nominally free, but actually paid for by monetizing the users screen time and private info, to a more subscription based system.
We have an example of this already. Radio and TV were once entirely free for listeners and viewers, and were paid for through advertisements. Now, broadcast networks and radio practically don't exist, since even they mostly depend on subscription services to carry their content to people.
Basically, some services people will pay for via subscription, while others they will not and would rather pay for via advertising. Personally, I don't think that most social media platforms could exist on a subscription only model. People like using them, but probably not enough to pay for them directly. At least, not enough people to match the widespread popularity that they have, which is a large part of why people want to use them in the first place.
How much revenue does FB make per user, anyway? Typically about $30 a year, I guess?
So, would I have been willing to pay $2.50 a month for Facebook minus all the crap they pull now to monetize me, minus the privacy violations and manipulation? Yeah, I think I would have.
So you want to do violent surgery to the market, with no evident market failure, because you want to radically alter a business model you have deduced is the issue??
Congrats on being a lot more big government than trust-busting.
So, would I have been willing to pay $2.50 a month for Facebook minus all the crap they pull now to monetize me, minus the privacy violations and manipulation? Yeah, I think I would have.
Would you? I don't care enough about Facebook now to use it for free. I'm certainly not going to pay $2.50 a month for it.
Sigh. Once more: being a former publisher of a glorified newsletter in Nowheresville, Idaho does not give you any idea of how the Internet works.
I'm all for beefing up antitrust, but under current law Facebook, Twitter, et al. are both not dominant (network effects only) and not over a domain, unless you narrow the definition of domain to customers of the product.
The domain Facebook is in is selling advertising space. It's not a monopoly in that space.
Monopolies are defined by their *business*, not random other facets of what they do.
Isn't a better analogy the Fairness Doctrine, back when there were essentially only 3 TV networks? Sure, there were plenty of alternative sources of information (radio, newspapers, etc.), but as a matter of actual fact, the Big 3 had (or at least were perceived to have had) an outsize impact on information dissemination.
Then, when cable TV became popular and available, the facts on the ground changed as channels were no longer limited by technology to a few frequencies, and there was no longer a need for the Fairness Doctrine.
So with FB or TWTR, there is a similar perception of outsized influence. That does not mean it will always be the case, but until facts on the ground change, I can see the argument.
Monopolies by definition are only monopolies within a certain domain. Otherwise they’d be a totalitarian government.
What is facebook's domain then? I see it as a social media platform. It allows people to post and share information about themselves and/or their interests. Is Twitter in a different domain than that? YouTube is a place for sharing video content, but if that is its domain, then what about TikTok? Vimeo? Or any other method of embedding video into a webpage?
The point here is that they are only monopolies if you define their business model so narrowly as to exclude anything else that does something similar. The other issue is that monopolies aren't inherently bad for consumers. It is only if the monopoly can abuse its position to limit consumer choice in order to increase its own profits that it becomes a problem for the government to solve.
Even if we take it as given that facebook, Twitter, etc. are censoring certain points of view on their platforms, that does not mean that other similar businesses are being prevented from offering platforms for those points of view.
Why should anyone take 10 seconds to consider arguments that pretend these companies aren’t monopolistic? Everyone knows they are.
When did you guys decide "but you're only 99% correct about this thing!" was worth posting?
You don’t know the law, and are working hard to keep it that way, so you can be angry at the things you want to be angry at regardless of whether you should or not.
Bull headed ignorance is your jam more and more.
Companies by definition can't be monopolistic. See, the word "companies" is plural. The prefix "mono" is singular.
Moderation also becomes exponentially more difficult with popularity. What worked in the halcyon days of volokh.com (when I was a mere law student, then judicial clerk) is hard to manage on reason.com! Where all the [insert your partisan opponents here] come out with partisan certainty and monomaniacal vengeance.
TBH I like the old version. I also liked Jonathan Turley's "Res Ipsa Loquitur" blog, and now that's a cesspool of crazy too. Times, they move on.
Zarniwoop — Consider whether one constructive function of moderation might be to suppress a too-general popularity. I suggest that is exactly what made the pre-WAPO VC the unusual refuge it became, and to a lesser extent still remains. Thanks to Orin Kerr for that.
Gotta (respectfully) disagree here. What made the pre-WAPO VC the place it was, was that it was little known outside of fairly rarified circles. The issues discussed mostly weren’t partisan, and the commentariat was, to (not) coin a phrase, smarter than the average bear.
After Stephen says "suppress a too-general popularity", your "little known outside of fairly rarified circles" sounds like agreement not disagreement.
“… most people who are moderated in an online forum feel, with great certainty, that they are being censored for their beliefs."
How can it be otherwise, when a commenter gets moderated (would, "edited," be a better word?) after the commenter has already posted? First the comment goes public, then someone—who knew it was Orin Kerr?—takes it down. Of course that feels like content-based censorship.
If would-be commentary were edited prior to publication, the complaints would be about, "gatekeeping," not about censorship. Censorship's commonplace association has always been with official repression of certain points of view. Gatekeeping—in the form of private editing prior to publication—is generally recognized as a more-generally-motivated activity, and not necessarily as a content-based activity, although that can also be part of the mix.
The great (utopian) appeal of the internet was (and mostly still is) a supposition that it ought to furnish cost-free world-wide publishing to everyone, without any constraints, as a matter of right. I call that utopian because the methods necessary to put that in place would destroy both the practical ability to accomplish it, and the political ability to support it.
If most of the populace becomes convinced that speech freedom and press freedom mean impunity for the likes of Alex Jones, they will conclude speech freedom and press freedom are not worth it. Even before that stage was reached, a flood tide of unmoderated swill washing through all platforms without exception would fatally undermine the business activities and organizational inputs necessary to keep the platforms going.
Internet commenters tend to view the infrastructure which supports their activity as a god-given natural resource. That misjudgment of reality bolsters the aforementioned utopianism.
Now they let everyone be their own moderator.
Come to think of it, everyone was already their own moderator, they could avoid forums where the comments are not to their taste.
But now each participant can have personally curated content where some people are muted.
I guess the next step would be to mute responses to muted people, even if the responses come from otherwise-unmuted people. If you see what I mean.
Margrave — Somehow, your comment overlooked that a forum's proprietor must be numbered among the, "participants."
You remind me of the gravedigger in Hamlet with your literalism.
Yep. Muting responses to muted users seems logical to me. If I don't want to see the Rev or Behar, for example, I generally don't want to see threads spawned by people responding to their (from my perspective) trolling.
Careful what you ask for; I already see long stretches of grey scale. Muting responses to muted commenters might eliminate whole threads...
I'm OK with that outcome. Or perhaps it should be a configurable choice? But if I perceive someone as a troll, do I want to see responses to said trolling?
A "hide thread" button would help.
I have gone back and forth on that. My current thought is that if I have you muted, I am 99% certain that reading your nonsense will be a waste of my time. That includes even the opportunity to use something stupid you say as a foil, to make a point of my own.
But it does seem efficient to let others double check that guess. If I see an interesting or worthwhile thread developing from a muted comment, that gives me a chance to check whether something remarkable has happened. Also, if I want to respond to stuff in the replies, I don't want to take the chance to miss that the comment that got the thread going colored the context in some way I should understand.
Indeed, and I wish that, instead of requiring you to globally mute/unmute a commenter, Reason let you just temporarily display a given muted comment.
It would be useful to put threads in context, or for the rare occasions where somebody like the Rev actually says something substantive instead of raving.
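For the curious, the "mute replies to muted users" idea discussed above is easy to state in code. The following is a rough sketch only (Python, with invented names; it is not Reason's actual implementation): a comment is hidden if its author is muted, or if it is a reply to a comment that is itself hidden, so the hiding cascades down a thread.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    id: int
    author: str
    parent_id: Optional[int]  # None for a top-level comment
    text: str

def visible_comments(comments: list[Comment], muted_authors: set[str]) -> list[Comment]:
    hidden_ids: set[int] = set()
    shown: list[Comment] = []
    # Assumes comments arrive with parents before their replies, which is how
    # threaded displays are typically rendered anyway.
    for c in comments:
        if c.author in muted_authors or c.parent_id in hidden_ids:
            hidden_ids.add(c.id)  # hide this comment and, transitively, its replies
        else:
            shown.append(c)
    return shown

# Example: muting "troll" also hides the polite reply to the troll's comment.
thread = [
    Comment(1, "alice", None, "Interesting post."),
    Comment(2, "troll", None, "Everyone here is an idiot."),
    Comment(3, "bob", 2, "That's uncalled for."),
]
print([c.id for c in visible_comments(thread, {"troll"})])  # -> [1]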
I'd note that Section 230 as written anticipates the existence of 3rd party filters that the users could pick, something FB at least actively works against. (They're continually rejiggering their back end to break such filters.) Having user-selected 3rd party filters instead of platform-based moderation would still tend to create bubbles, but would at least avoid the censorship aspect; it's not that bad if you can pick your own censor, instead of having one foisted on you.
Ideally most moderation would occur in this 3rd party way, which would be more congenial to the users, and platform moderation would be limited to legally driven issues and matters where there was overwhelming consensus. As the authors of Section 230 expected it to be!
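A toy sketch of that user-selected-filter model (again Python, with invented names; no real product's API is being described): the platform carries everything, and each reader composes whichever filters they have chosen, so the "censor" is picked by the user rather than imposed by the platform.

from typing import Callable

Post = dict  # e.g. {"author": "...", "text": "..."}
Filter = Callable[[Post], bool]  # returns True if the post should be shown

def no_spam(post: Post) -> bool:
    return "buy now!!!" not in post["text"].lower()

def hide_shouting(post: Post) -> bool:
    text = post["text"]
    return not (len(text) > 20 and text.isupper())

def apply_user_filters(posts: list[Post], chosen: list[Filter]) -> list[Post]:
    # The platform shows everything; filtering happens per user, per their chosen filters.
    return [p for p in posts if all(f(p) for f in chosen)]

# One reader might choose [no_spam], another [no_spam, hide_shouting], a third nothing at all.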
I wish that, instead of requiring you to globally mute/unmute a commenter, Reason let you just temporarily display a given muted comment.
I agree with this, but there is something simpler that would be helpful.
When I unmute someone to read a comment because the response interest me I lose my place. That is, the system takes me off into some other part of the comments on that OP, and I have to scroll my way back to where I was.
I would like to stay at the comment I wanted to read.
(Do others have this problem? I use Firefox on a Mac.)
Bernard,
I also have that problem using Firefox on a Mac.
"For a few years, I spent over an hour a day, every day, moderating Volokh Conspiracy comment threads."
Moderation, heh.
What Orin was really doing was thinking 'I wonder if I got that right, I better check Kazinski's take.'
I had Eugene email me a while back. Before I had Kirkland muted he once annoyed me so badly that I leaped over the line and posted 3 or 4 sentences of just abuse towards him. I apologized because I did feel remorse and muted that moron soon after. Kirkland, not Eugene.
I wish the committed ideologues who comment were better at estimating qualities in others they dislike. For instance, Kirkland's repetitive bad comments make him a target for many. I think his style shows he is lazy and brash, but also maybe an accurate judge of the futility of trying to converse respectfully with many of those he excoriates. Someone with a different temperament might choose just to ignore those. Not Kirkland, and that provokes some of the less thoughtful to respond with ill-considered hostility, when better-estimated hostility might do the responders more credit.
Less frequently, Kirkland makes comments which demonstrate insight, discerning judgment, and expressive style. Some comments also suggest professional experience doing responsible jobs, which other occasionally-manifest Kirkland qualities suggest could actually have happened. That is the guy behind all his comments. When you call him a moron, it makes me question your judgment, not his.
My problem with the Rev. is that he's just such a bore. When I'm being skewered and made fun of, I at least like it to be done with a bit of creativity and flair. The Rev. is as predictable and repetitive as clockwork.
If Kirkland could lay off the raving about our "betters", and the oral rape allusions, the residue would actually be not that bad. The problem being that the residue would only be about 1% of his current comments.
lol. I have had him muted forever. Now you have me wondering if there is something juicy to read.
" Someone with a different temperament might choose just to ignore those. Not Kirkland, "
No free swings. I enjoy calling a bigot a bigot. Appeasing or disregarding the bigots is often counterproductive. I have concluded it is better to confront them. It's a marketplace of ideas. May the better ideas continue to prevail as America continues to progress.
Kirkland is pure troll. (Okay, like Ivory soap, 99 and 44/100ths percent pure.) Less insightful or interesting than Behar. He just posts the same things over and over and over and over and over again. Like all trolls, when he isn't getting enough attention he posts his shtick in random threads and says, "Why aren't people talking about this?"
Strangely enough, VC has already hit upon the ultimate answer to moderation issues: let the READER DECIDE. This blog and slashdot.org provide the best self-moderation controls that I know of. VC allows one to mute individual authors based on personal preference, while slashdot.org allows one to mute individual comments based on group moderation. Of the two methods, the VC wins hands down, in my opinion.
Here is why the VC model is so much better:
Well, I see there's still room for improvement. Upon editing the parent comment, the software cut off everything in the bullet list I had used in my comment. (Note to software devs: the HTML cleansing you're doing in a comment post is not the same as what you're doing in an edited comment post. A legal HTML tag in the one is stripped in the other.)
The reasons why the VC model is so much better are:
I largely agree, especially with point 1: The single biggest improvement to the current setup would be to allow individual muted comments to be viewed without having to globally unmute the commenter.
And they need to improve their handling of HTML. Revert the edited text to the way it was submitted, with all HTML intact. But I suspect they're not storing that, just the parsed version.
yep, 100% this, much better written than my post.
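A minimal sketch of the fix both of those comments point at, assuming nothing about Reason's actual stack: store the text exactly as submitted, and run every render (whether the comment was just posted or just edited) through one shared allowlist sanitizer, so the posting path and the editing path cannot disagree about which tags survive. The bleach library appears here only as one common example of such a sanitizer; the function and variable names are hypothetical.

import bleach  # one widely used allowlist-based HTML sanitizer; any equivalent would do

ALLOWED_TAGS = {"b", "i", "em", "strong", "a", "blockquote", "ul", "ol", "li"}
ALLOWED_ATTRS = {"a": ["href"]}

_raw_comments: dict[int, str] = {}  # comment id -> text exactly as the user submitted it

def render_comment(raw_text: str) -> str:
    # The only cleaning code path, used for new posts and edits alike.
    return bleach.clean(raw_text, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRS)

def post_comment(comment_id: int, raw_text: str) -> str:
    _raw_comments[comment_id] = raw_text  # keep the raw version, not the parsed one
    return render_comment(raw_text)

def edit_comment(comment_id: int, new_raw_text: str) -> str:
    _raw_comments[comment_id] = new_raw_text  # an edit starts from the raw text again
    return render_comment(new_raw_text)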
" Strangely enough, VC has already hit upon the ultimate answer to moderation issues: let the READER DECIDE. "
Wrong. The Volokh Conspiracy does not let the reader decide with respect to comments from Artie Ray Lee Wayne Jim-Bob Kirkland. In that context, the Volokh Conspiracy Board Of Censors is the decider.
See what I mean about trolling? This never happened, and would be irrelevant to the discussion if it did.
Utterly wrong.
A community curated for engagement and politeness is a pleasure to be part of.
I have fun here to be sure, but turning everything into an open bazaar is not actually freedom.
Sure, but your problem is that you tend to confuse politeness with agreeing with you; you routinely denounce people as "assholes" for calmly, politely refraining from validating the lies you're committed to.
I know some can’t believe it, but some communities include both disagreement and politeness.
Because their moderators are able to set aside their partisanship. Like Prof Kerr described.
Heck, you are polite. And awful.
DMN and I disagree on a lot. And Noscitur. We manage to keep it civil.
Beyond that, there are some subjects a well-curated community may ban. Freedom does not require that every online political forum countenance Holocaust denial or the 5,000th claim that Bernie was robbed in 2016; it’s nicer not to have certain circular debates.
Do I want careful curation everywhere? No. But I’d like it in some places. You and Texas want it uncurated everywhere.
How anti-freedom of you, and what a lack of choice you would countenance in the name of your liberty.
"Heck, you are polite. And awful."
There's an important point there which I think too often gets lost in all the maximalist tribal firebreathing. Content and form are independent vectors and they both matter. Each impacts the quality of discourse. There are commenters whose opinions I find persuasive, but the value of their comments is largely offset by their rudeness and animosity. Brett, conversely, is a font of paranoid, conspiratorial nuttery, some of which I find genuinely loathsome, but he's also unfailingly polite, and that's not nothing. If one wants to engage ideas such as Brett's, his good manners make that possible without having to endure the unpleasantness one is guaranteed in any exchange with, say, a WuzYoungOnceToo. That's commendable IMO, so every few years I try to acknowledge it. This is that time.
I agree with this. Brett is a loon — even when I agree with his position, I usually find him loony — but he's 100% not the problem here. I have criticized him many times over the years — sometimes almost certainly too harshly — and I don't think he has ever returned fire. He doesn't get personal; he doesn't spam; he doesn't troll.
No, Brett. He doesn't.
Yes, Leo, he does.
Sarcastro has repeatedly insulted and attacked me simply for pointing out that his facts are wrong. We're not talking political conclusions, either - I'm referring to things like "how many blacks are in the military" or "what government employee benefits are." Straight-up facts that anyone can look up, totally non-controversial... unless they ruin a Sarcastro talking point, and then suddenly I'm a lying, bigoted, racist wanna-be murderer.
He's still more polite than DN or LTG, though.
Idk. First of all, I think #3 contradicts #2. Because it's impossible to read all comments, there is an inherent bias. Maybe the moderation that day merely reflects that the moderator was tired and up all night with the kids. A tired/busy moderator is liable to scan for whatever is on their mind at the moment.
Second, I think it's impossible to be objective because “uncivil discourse” is in the eye of the beholder. I see comments being moderated for all sorts of questionable reasons - because of improper use of pronouns (now, we mustn't assume gender, dwb68!) or because it's inappropriate to speculate about an ex’s mental health. I do not think calling your ex crazy on reddit in a story about a nasty breakup qualifies as moderation-worthy – if people want to see some *real* nasty shit, read some divorce proceedings.*
Yes, it's true, people are nasty to each other – in comments and in real life.
I am 100% in favor of “quality” comments and also of safe driving. But like all the dumb nanny features in my Honda reminding me to wear my seatbelt or disconnecting my phone**, I think comment moderation starts with good intentions but ends up as paternalism.
The average adult can recognize an ad hominem attack. People who are uncivil are also unpersuasive. I know from reading the comments on reason.com who is likely to deliver an insult, and I ignore them. Sticks and stones. I would rather see readers have their own tools to moderate what they read (like the mute-user feature).
In theory, if the moderation policy is clear, objective, narrow, and uniformly enforced, then I have no objection. The problem is, like socialism***, the theory never matches reality, and people forever blame the execution. Perhaps if the theory has failed so many times, it's a bad theory. The moderation and demonetization policies on most platforms are now written so broadly that they can be used to justify removing any post or video - or not removing it. You would have to audit thousands of lines of code, along with the training data, just to know whether Google is de-indexing pages or lowering their search ranking based on Environmental, Social, and Governance (ESG) criteria.
And even when the moderation policy is “clear, objective, narrow, and uniformly enforced,” people find a way around it by using code words. It becomes cat and mouse – platforms broaden the rules, people find new ways around them, then bam! the rules are so vague and broad that “we know it when we see it.”
Bottom line for me: If you write a dumb, low-IQ, uncivil, insulting, ad hominem attack on my online avatar, I don’t care. I won’t be persuaded. I am going to skip you and move on. I don’t need a moderator to identify these posts for me, because I made it through middle school.
* not my story, for the record. I've been married for decades.
** Even when my wife is driving and I am navigating. There is no feature to “accept” the risk because the passenger is using the console.
*** See what I did there, someone will be offended by comparing content moderation to socialism. Uncivil discourse, or just a persuasive technique to make the point that incivility is in the eye of the beholder? I write, you decide.
And also: love the new edit feature.
Completely spitballing, but I've wondered about the effect a large time delay would have on the quality of commenting. Say it takes three hours for your post to show up. Not because somebody is approving it (or not), but just an automatic delay. We all know a lot of the trolling is specifically meant to provoke a reaction. But knowing you won't possibly see any reaction for at least six hours might have a significant dampening effect. It encourages everyone to put more thought into their posts and possibly to refrain from knee-jerk snarky reaction posts (which I'm sure we've all either written or felt a strong compulsion to write).
Would be an interesting experiment.
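Mechanically, the experiment would be simple; here is a rough sketch, under the assumption that each comment just carries a computed visible_at time. Nothing is reviewed by hand: a comment surfaces automatically once its delay has elapsed, which is what distinguishes this from pre-approval.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    RELEASE_DELAY = timedelta(hours=3)  # the three-hour figure floated above

    @dataclass
    class QueuedComment:
        author: str
        body: str
        submitted_at: datetime

        @property
        def visible_at(self):
            return self.submitted_at + RELEASE_DELAY

    def visible_comments(thread, now=None):
        # No approval step: comments appear automatically once the delay passes,
        # so the earliest anyone could see a reaction to their own post is two delays out.
        now = now or datetime.now(timezone.utc)
        return [c for c in thread if c.visible_at <= now]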
My guess is that your proposal would destroy discussion — anyone who has participated in a forum where each comment has to be pre-approved (and where that process takes time) knows how it affects the flow — without hindering trolling.
Content moderation at its finest: https://nypost.com/2022/09/25/facebook-silencing-activity-related-to-fbi-whistleblower-steve-friend/
(sarcasm).
Tbf, I have no idea if the underlying accusations are true. Nevertheless, the history of COVID is that a lot of the things the experts deemed untrue and in need of "fact checking" turned out to be true.
I miss Professor Kerr's content. (I can't follow the Twitter feed because Twitter blocks access after I have viewed a few tweets, apparently because I am not a Twitter customer. I do not want Twitter to bother my contacts, so I don't join.)
I joined Twitter long ago using an e-mail account set up under the same pseudonym I use here. As far as I know, Twitter hasn't bothered any of my contacts because none of my contacts know me as Leo Marvin. Unless your contacts know you as Rev. Arthur L. Kirkland, I don't see why your experience wouldn't be likewise. Open a Rev. Arthur L. Kirkland email account and link it to your Twitter membership created under that name. And if your contacts do know you as Rev. Arthur L. Kirkland, just invent a new pseudonym for Twitter and do everything under that name.
I suppose one good thing about muting someone is that if they attack you, you won’t see it, which leaves the possibility of not “taking official notice” of the attack. There’s an assumption that an unrebutted accusation is true – and also, meeting an all-out attack with a polite denial seems too milquetoast for me, and may be seen as such by others ("why doesn't he defend himself"?).
But if you can’t see the attack, there’s no room for “he failed to deny” etc. because you saw nothing to deny.
I'd feel insulted if I didn't agree 100%. This comment section is a cesspool.
The only reason I read it is because it's useful to know what conservative ideologues are thinking.
Thanks for this, and for your comment moderation in those early days. The comment section was a delight.