Large Libel Models in Norway
From a complaint brought Thursday before the Norwegian Data Protection Authority:
[T]he complainant asked ChatGPT the question "Who is Arve Hjalmar Holmen?" To this, ChatGPT replied the following:
Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son. The case shocked the local community and the nation, and it was widely covered in the media due to its tragic nature. Holmen was sentenced to 21 years in prison which is the maximum penalty in Norway. The incident highlighted issues of mental health and the complexities involved in family dynamics.
ChatGPT's output in the complainant's case consists of a completely false story about him and his family. According to this story, he was convicted of murdering two of his sons and of attempting to murder the third, and was sentenced to 21 years in prison. ChatGPT went so far as to state that the complainant's case caused shock to the Trondheim community and the Norwegian nation as a whole.
Even though this story is the result of ChatGPT's dangerous misrepresentation of events, it contains accurate elements of the complainant's personal life: his hometown and the number of children (specifically: sons) he has. The age difference between his sons is [redacted], which is eerily similar to ChatGPT's hallucination, i.e. "aged 7 and 10".
The complainant was deeply troubled by these outputs, which could have a harmful effect on his private life if they were reproduced or somehow leaked in his community or in his hometown….
The complainant contacted OpenAI on [redacted] to complain about OpenAI's false output; however, OpenAI responded with a "template answer" rather than an answer tailored to the complainant's request….
The respondent's large language model produced false information of a defamatory character regarding the complainant, thereby violating the principle of accuracy set forth in Article 5(1)(d) GDPR [General Data Protection Regulation].
In particular, Article 5(1)(d) GDPR obliges the controller to make sure that the personal data it processes remain accurate and are kept up to date. Moreover, the controller shall take "every reasonable" step to ensure that inaccurate personal data "are erased or rectified without delay".
ChatGPT's output relating to the complainant as a data subject was false. The controller should have taken every reasonable step to ensure the accuracy of the personal data reproduced by its artificial intelligence model. Therefore, the controller violated the principle of accuracy….
The complainant requests your Authority, according to its powers under Article 58(2)(d) GDPR, to order the respondent to delete the defamatory output about the complainant and to "fine-tune" its model, so that the controller's AI model produces accurate results in relation to the complainant's personal data, according to Article 5(1)(d) GDPR….
The complainant requests the Authority, as an interim measure during the course of the investigation of this complaint, to impose a temporary limitation on the processing of the complainant's personal data, pursuant to the corrective powers under Article 58(2)(f)….
The complainant suggests that the competent authority impose a fine against the respondent, pursuant to Articles 58(2)(i) and 83(5)(a) GDPR, for the violation of Article 5(1)(d) GDPR….
Thanks to Prof. James Grimmelmann (Cornell) for the pointer to the complaint. For more on how this sort of complaint would have been treated if filed in a U.S. court, see here.
AI makes things up. Badly. Exhibit 1478
So that happened in Europe, and the respondent was fined.
Would that happen here in the US? We don't have GDPR.
Read the article already linked at the bottom of the original post.
If you don't want all the gory details, jump to the conclusion on page 554.
They really need to separate out "tell me a story about real person X, and make it nasty!" from "give me a synopsis of this real guy that's accurate."
The problem is the AI tech has no knowledge of the truthiness of the petatons of sentences humanity has written over millennia, so it sucks at the latter.
petatons -- A measure of the strength of an explosion or a bomb based on how many quadrillion tons of TNT would be needed to produce the same energy.
Learned a new word today.
And here I thought it would mean a very fat animal rights activist.
Used to be megatons, but inflation, y'know.
Heck, I remember when we measured electricity in gigawatts when we talked about powering our flux capacitors.
I blame Biden!
I do not see any damages here. No one saw the output until the plaintiff decided to publish it. ChatGPT is not keeping this personal data on him; it was generated in response to his query. It is not clear that GDPR Article 5 applies.