ChatGPT Allegedly Claims Innocent Man Killed His Children

Privacy Concerns Raised Against OpenAI: A Case of False Allegations

Background of the Incident

A Norwegian citizen, Arve Hjalmar Holmen, has lodged a privacy complaint against OpenAI, claiming that the AI-driven tool ChatGPT wrongly identified him as a convicted murderer. According to Holmen, when he used ChatGPT to learn about himself, he was shocked to see an assertion that he had killed two of his children and attempted to kill a third. Additionally, the AI presented fictitious details, stating that he was serving a 21-year sentence in a Norwegian prison. The output combined accurate elements like his hometown and the number of his children with these alarming fabrications.

Legal Action by Noyb

The Austrian advocacy group Noyb (None of Your Business) has taken up Holmen’s case and filed a complaint with Norway’s Datatilsynet, which is the country’s data protection authority. The organization accused OpenAI of breaching the General Data Protection Regulation (GDPR) set forth by the European Union. They are calling for legal consequences, suggesting that OpenAI should face fines and be required to rectify or eliminate the defamatory content produced by its AI tool.

GDPR Compliance Issues

Joakim Söderberg, a data protection lawyer affiliated with Noyb, stressed the importance of accuracy in personal data. He pointed out that the GDPR requires that personal information must be true and precise, and if inaccuracies arise, users should have the right to amend that information. Söderberg criticized OpenAI’s approach, arguing that simply offering a vague disclaimer warning users that the chatbot might make mistakes does not absolve the company from responsibility for spreading falsehoods.

Details of the Complaint

The exact date of Holmen’s original query to ChatGPT has not been made public; it is redacted in the official complaint documents. Noyb has indicated, however, that the query was made before ChatGPT was updated to incorporate real-time web searches into its responses. Searches for Holmen’s name now return only information about Noyb’s complaint rather than the original fabricated claims.

Previous Complaints by Noyb

This is not the first complaint that Noyb has filed against OpenAI over inaccuracies in ChatGPT’s outputs. In April 2024, the group filed a complaint in another case, involving a public figure whose date of birth the AI tool reported incorrectly. At that time, Noyb objected to OpenAI’s position that flawed information could only be blocked in response to specific queries and could not be amended or erased. They argued that this stance violated the GDPR, which requires that inaccurate personal data be corrected or removed promptly.

Implications for AI and Data Privacy

The developments in Holmen’s case may have wider implications for how artificial intelligence tools handle personal data. With increasing reliance on AI systems in various sectors, the accuracy of the information they produce is becoming a major concern. This incident highlights the necessity for AI companies to ensure that their systems operate within the legal frameworks designed to protect individuals’ rights and to provide users with reliable and truthful information.

As the discussion around AI ethics and data privacy evolves, the outcomes of such legal challenges may influence how developers adapt their technologies to meet regulatory requirements, ensuring that future iterations of AI programs do not propagate harmful inaccuracies.
