MaryGrace Lerin

Australian High Court Ruling Sees Media Outlets Held Liable for their Facebook Posts

A new legal ruling could have significant ramifications for how news content is published online, potentially reducing sensationalism in Facebook posts designed to generate maximum response.


The Australian High Court upheld a ruling last week that, in some cases, could hold Australian media outlets liable for user remarks made on their corresponding Facebook Pages.

The decision has spurred a new round of concerns about potentially restricting journalistic freedom of expression and hampering reporting capacity. However, the case is more complicated than the headline suggests. The High Court ruling does expand the scope for media outlets to be held legally accountable for comments made on their social media pages, but its full nuance is aimed at ensuring incendiary posts are not shared with the deliberate intent of baiting comments and shares.


The case originated from a 2016 investigation which found that inmates at a Darwin youth detention center had been heavily mistreated, even tortured, during their incarceration. In subsequent media coverage of the issue, some outlets sought to provide more context on the victims of that mistreatment, with several publications digging into the victims' criminal records as another angle on the story.


Dylan Voller, a former inmate, claimed that the subsequent media portrayals of him were both inaccurate and defamatory, prompting him to seek legal redress over the published claims. Voller had become the subject of several articles, including one in The New York Times, while The Australian ran a story headlined “Dylan Voller's list of jailhouse incidents tops 200,” highlighting the numerous incidents Voller was alleged to have been involved in that led to his confinement.


The Facebook comments case, specifically, arose when these articles were shared to the Facebook Pages of the outlets in question. The essence of Voller's argument is that the framing of these articles within the Facebook posts elicited negative comments from platform users, which Voller's legal team claims was intended to provoke more comments and interaction on those posts, and thus generate more reach within Facebook's algorithm.


As a result, the issue comes down to a crucial distinction. In simple terms, it's not that news outlets can now be taken to court for people's comments on their Facebook posts; it's about how the content of those posts is constructed, and whether a clear link can be drawn between the post itself, the offensive comments it attracted, and the resulting community perception, which may cause harm to a person (it's unclear whether the same rules would apply to an entity).


In fact, Voller's legal team argued in the original case notes that the publications in question:


“Should have known that there was a ‘significant risk of defamatory observations’ after posting, partly due to the nature of the articles”


The intricacies here go well beyond the topline finding that publishers can now be held liable for comments made on their Facebook Pages. The real implication is that those posting content to Facebook on behalf of a media publisher need to be more meticulous in their choice of words, because if potentially offensive comments can be traced back to the original post, and the publisher is found to have prompted that reaction, legal action can be pursued.


In other words, publishers are free to re-share whatever they want as long as they stick to the facts and don't share purposefully incendiary social media posts in the aftermath of an incident.

As an example, here's another article from The Australian about the Dylan Voller case, which, as you might expect, has drawn a slew of critical and negative comments.


But the post isn't defamatory; it simply states the facts, in this case quoting an MP, and there's no clear indication that the publisher was trying to bait Facebook users into commenting on the shared article.


Which brings us to the real point at hand: the ruling makes it more important for publishers to consider how their Facebook posts are framed, and whether that framing is designed to attract comments. If a publisher is found to be stirring up negative comments, they can be held accountable, but there must be conclusive evidence of both damage to the individual and intent within the social media post itself, not the linked article, for legal action to result.

Which could actually be a better outcome. Online algorithms have substantially changed media incentives over the last decade, because there's an obvious benefit for publishers in sharing anger-inducing, emotionally loaded headlines that trigger comments and shares, which then maximize reach.


That extends to misinterpretations, half-truths, and outright lies designed to elicit a response from users, and if publishers can be held accountable for this, it seems like a better approach than the proposed reforms to Section 230 in the US, which could severely restrict press freedoms.


This ruling relates specifically to Facebook posts, and whether the phrasing of those posts is designed to stimulate an emotional response in an attempt to attract engagement. As in all defamation cases, proving a definitive link between a Facebook update and any personal damages will be incredibly hard. However, the finding may spur media outlets' Facebook Page managers to be more factual in their updates, rather than relying on comment-baiting to increase algorithmic reach.


As a result, while the ruling increases media outlets' liability, it may also be a step toward more factual reporting, holding publishers accountable for inciting online mob attacks based on how they angle a story.

And it's clear that this is happening: the most reliable way to get Facebook comments and shares is to elicit an emotional response, which prompts people to comment, share, and so on.


If a Facebook post is found to be deliberately triggering such behavior, and that behavior results in reputational harm, holding the publisher accountable seems like a positive step, though it inevitably comes with increased risk for social media managers.

