Australia's highest court has upheld a controversial and potentially destructive ruling that media outlets are legally liable for defamatory statements posted by online commenters on Facebook, a decision that could result in massive amounts of online censorship out of fear of lawsuits.
The case stems from a 2016 television program on Australia's ABC TV (no relation to America's ABC network) about the mistreatment of youths in Australia's jail system. The coverage included footage of Dylan Voller in a restraining chair. When media outlets covered the program and posted links to their coverage on Facebook, users posted comments about Voller, prompting him to sue the outlets. The comments were defamatory, Voller claimed, and he argued that the media outlets themselves were responsible for publishing them.
The media outlets countered that, no, they were not the publishers of third-party comments on Facebook and were not responsible for what those commenters said. The outlets have been asking the courts to toss out the lawsuits, and they've been losing.
Reason first reported on this case in 2020 as it worked its way up through the courts, and Wednesday's decision from the High Court of Australia is the final stop: The country's top justices determined that media outlets in the country are, indeed, publishers of the comments that users post on Facebook under stories that they link.
The logic here is absolutely terrible and destructive. Facebook controls the tools for managing comments on media pages. The media outlets themselves do not, and they can't "turn off" commenting on their Facebook pages. They can delete comments after the fact, use filtering tools that target keywords (to stop people from posting profane or obscene comments), and block individual users from the page.
Using these tools to try to prevent defamatory comments would require constant monitoring of an outlet's Facebook page and moderators agile enough to remove potentially defamatory content the moment it appears, before anybody else could see it. Nevertheless, the justices concluded that this amounts to enough control over the comments for media outlets to be considered publishers. Two of the justices were blunt that simply participating on Facebook made Fairfax Media Publications a publisher of the comments:
In sum, each appellant intentionally took a platform provided by another entity, Facebook, created and administered a public Facebook page, and posted content on that page. The creation of the public Facebook page, and the posting of content on that page, encouraged and facilitated publication of comments from third parties. The appellants were thereby publishers of the third-party comments.
Except, of course, that the extent to which media outlets may monitor or control third-party comments is completely in the hands of Facebook. Facebook decides how much a media outlet can block comments or users or whether people can comment at all.
Not all of the judges agreed. Justice James Edelman dissented from the other judges and noted the potentially dire consequences of this decision:
Merely allowing third-party access to one's Facebook page is, of itself, insufficient to justify a factual conclusion that the Facebook page owner participated in the publication of all the third-party comments posted thereafter. Were it not so, all Facebook page owners, whether public or private, would be publishers of third-party comments posted on their Facebook pages, even those which were unwanted, unsolicited and entirely unpredicted. Indeed, it might extend to cases where a Facebook page is hacked and then has posted on it entirely unwelcome, uninvited and vile defamatory comments, whether by the hacker or in response to a post made by the hacker. It might also render Facebook itself, at common law, the publisher of all posts made on Facebook.
It is easy to assume, as the other justices apparently have, that such a decision couldn't possibly cause a disastrous amount of online censorship because media outlets should know when a controversial story might draw defamatory comments. The judges actually say as much in the ruling: They seem to think this is only an issue with certain types of stories and that the appearance of defamatory comments can be predicted in advance.
This is complete rubbish, and anybody with any experience on social media already knows it. Trolls, scammers, and spammers range far and wide (that's the point of them), and it's incredibly naive to think that a story with no controversial elements can't end up with third parties posting defamatory nonsense under it.
Edelman has the right of it here, and it's why Section 230 of the U.S. Communications Decency Act, which generally protects websites and social media platforms (and you) from liability for comments published by others, is so important. It's not just about protecting media outlets from being held liable for comments from trolls. It's about allowing social media participation to happen at all. Some large media outlets or companies might be able to afford around-the-clock moderation to try to catch problems. But even then, let's be clear: They're going to avoid as much risk as possible and delete any comment with a whiff of controversy. Why would they let it stand if it could get them sued?
But smaller companies and outlets—and there's no reason to think this ruling applies only to media outlets—will either have to hope Facebook gives them better tools to control who posts on their page or just not have social media presences at all.
It's a terrible ruling. The only positive here is that the courts have not yet ruled on whether the contested comments are actually defamatory. Subsequent rulings may clear Fairfax. But if courts decide that these random comments were defamatory, the downstream consequences will be disastrous for free speech. Expect to see commenting start to disappear across media platforms.