The Volokh Conspiracy


Large Libel Models in Norway


From a complaint brought Thursday before the Norwegian Data Protection Authority:

[T]he complainant asked ChatGPT the question "Who is Arve Hjalmar Holmen?". To this, ChatGPT replied as follows:

Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son. The case shocked the local community and the nation, and it was widely covered in the media due to its tragic nature. Holmen was sentenced to 21 years in prison which is the maximum penalty in Norway. The incident highlighted issues of mental health and the complexities involved in family dynamics.

ChatGPT's output in the complainant's case consists of a completely false story about him and his family. According to this story, he was convicted of murdering his two sons and of the attempted murder of his third son, and was sentenced to 21 years in prison. ChatGPT went so far as to state that the complainant's case shocked the Trondheim community and the Norwegian nation as a whole.

Even though this story is a result of ChatGPT's dangerous misrepresentation of events, it contains elements of the complainant's personal life and story, namely his hometown and the number of children (specifically: sons) he has. The age difference between his sons is [redacted], which is eerily similar to ChatGPT's hallucination, i.e. "aged 7 and 10".

The complainant was deeply troubled by these outputs, which could have a harmful effect on his private life if they were reproduced or somehow leaked in his community or in his hometown….

The complainant contacted OpenAI on [redacted] to complain about OpenAI's false output; however, OpenAI responded with a "template answer" rather than a tailored answer to the complainant's request….

The respondent's large language model produced false information of a defamatory character regarding the complainant, thereby violating the principle of accuracy set forth in Article 5(1)(d) GDPR [General Data Protection Regulation].

In particular, Article 5(1)(d) GDPR obliges the controller to ensure that the personal data it processes remain accurate and are kept up to date. Moreover, the controller shall take "every reasonable step" to ensure that inaccurate personal data "are erased or rectified without delay".

ChatGPT's output relating to the complainant as a data subject was false. The controller should have implemented every reasonable step to ensure the accuracy of the personal data reproduced by its artificial intelligence model. Therefore, the controller violated the principle of accuracy….

The complainant requests your Authority, according to its powers under Article 58(2)(d) GDPR, to order the respondent to delete the defamatory output on the complainant and "fine-tune" its model, so that the controller's AI model produces accurate results in relation to the complainant's personal data, in accordance with Article 5(1)(d) GDPR….

The complainant requests that the Authority, as an interim measure during the investigation of this complaint, impose a temporary limitation on the processing of the complainant's personal data, pursuant to the corrective powers under Article 58(2)(f)….

The complainant suggests that the competent authority impose a fine against the respondent, pursuant to Articles 58(2)(i) and 83(5)(a) GDPR, for the violation of Article 5(1)(d) GDPR….

Thanks to Prof. James Grimmelmann (Cornell) for the pointer to the complaint. For more on how this sort of complaint would have been treated if filed in a U.S. court, see here.