Australian Court Rules Media Outlets Are Responsible for Facebook Users' Comments

Online censorship is coming, and it’s going to be bad news for everybody.

What happens when media outlets are held legally liable for defamatory comments visitors post to their Facebook pages? Australians (and the rest of us) may soon find out. The Court of Appeal for New South Wales has just dismissed an attempt by several Australian media outlets to overturn a 2019 ruling that they could be held liable for Facebook comments about Dylan Voller.

In 2016, a television program aired footage of Voller shackled in a restraining chair. The coverage led to a national outrage about the mistreatment of youth in Australian detention systems. Voller then sued some of the media outlets that covered him—not because the news reports defamed him, but because other people writing on the outlets' Facebook pages were accusing him of having committed various crimes.

Australian courts have not yet ruled on whether these comments defamed Voller. This fight is about whether media outlets could even be sued for Facebook comments.

The outlets argue that they are not the "publishers" of what people say on Facebook. Indeed, Facebook does not let them preemptively block individual readers from commenting on their pages; they can delete, hide, or report individual comments, but only after those comments have been posted. Facebook comments are not like letters to the editor, which an outlet can choose whether or not to run.

But the fact that these outlets have the ability to delete comments after the fact was enough for Justice John Basten to declare them publishers: "They facilitated the posting of comments on articles published in their newspapers and had sufficient control over the platform to be able to delete postings when they became aware that they were defamatory." By that logic, the media outlets "facilitated" the posting of comments simply by sharing the articles on Facebook.

Another judge suggested that a potential solution would be to use filtering tools that recognize certain trigger words and hide those comments automatically. This "solution" would require the media to censor comments based on the mere presence of certain words, not on whether a comment is actually defamatory.

If this ruling stands, it's going to force Australian media outlets to monitor all comments and beef up their social media teams at a time when they're having to lay off staff and even shut down newspapers.

"It's a big challenge to the business model of publishers, because it means there is a greater risk any time you create content which is in any way controversial," the Australian defamation lawyer Michael Douglas told The Wall Street Journal. "There is a risk that users will write something objectionable, which will open up the entity behind the account to being sued for defamation." The Guardian reports that the media outlets are considering asking the High Court of Australia to take up the case.

Even if media outlets can bolster their social media monitoring, they'll have to make snap judgments about what's defamatory and respond immediately. When outlets publish news stories that raise libel or defamation concerns, those pieces typically pass through several editors (and sometimes lawyers) before publication. Sometimes the outlets get sued anyway, and sometimes their internal analysis turns out to be wrong. Since comments can't be prereviewed, the most logical result of this ruling is that outlets will delete any comment that says anything critical about anybody in a story they've shared on Facebook, regardless of whether it's actually defamatory. Who can afford that level of risk?

This is precisely why the attacks on Section 230 of America's Communications Decency Act are so misguided. Holding social media platforms and/or media outlets legally liable for what commenters post will not lead to some sort of politically neutral or "unbiased" moderation that protects unpopular opinions from deletion. It will lead to much more censorship. It will probably kill off some comment forums altogether.

Even if these reckless attacks on Section 230 fail, what's happening in Australia can still affect Americans. Countries across the world have been aggressively trying to force websites, search engines, and social media platforms to censor content in ways that cross virtual borders and affect what people in other nations can see. Last year the European Court of Justice ruled that Facebook can be ordered to remove, worldwide, comments defaming an Austrian Green Party politician. And Canada's Supreme Court has ruled that when Google is ordered to remove a link from its search results (in that case because the linked page infringed intellectual property rights), Canada has the authority to make the order global.

The European Court of Justice has applied Austria's defamation laws to speech that would be protected by the First Amendment here in the United States. Many Western countries have strong free speech protections, but they often don't go as far as America's. The threat to the First Amendment here isn't coming from private platforms censoring speech out of political bias, but from governments attempting to impose their censorship rules worldwide.

This ruling in Australia is bad for the media, but it's even worse for members of the public. They're the ones who'll get censored.