Privacy Complaint Filed Against ChatGPT for Defamatory Misstatements

Privacy Complaint Against OpenAI’s Chatbot ChatGPT
OpenAI, the company behind the popular AI chatbot ChatGPT, is facing a serious privacy complaint in Europe that highlights the risks posed by the bot’s tendency to fabricate information. The complaint was brought by the privacy rights group Noyb on behalf of an individual in Norway who discovered that ChatGPT had falsely accused him of severe crimes, including murder.
Nature of the Complaint
The individual, Arve Hjalmar Holmen, was shocked to find that ChatGPT produced a fabricated narrative suggesting he had been convicted of murdering his children. This incident is only one example of similar privacy-related concerns regarding the AI’s generation of incorrect personal information. Previous complaints have involved inaccuracies like wrong birth dates and other biographical details.
Legal Framework
Under the European Union’s General Data Protection Regulation (GDPR), individuals have specific rights regarding their personal data, including the right to have inaccurate information corrected. OpenAI has offered limited remedies, typically blocking responses that contain identified errors. This falls short of what the GDPR demands, however, which requires data controllers to ensure the accuracy of the personal data they process.
Key GDPR Points
- Right to Rectification: Individuals have the right to have inaccurate data corrected.
- Data Accuracy: Organizations must ensure that any personal data they process is accurate.
Joakim Söderberg, a data protection lawyer at Noyb, emphasized this in a statement, arguing that simply informing users that the chatbot can make mistakes is insufficient. "You cannot just spread false information and later add a disclaimer," he remarked.
Potential Impact of Non-Compliance
Confirmed breaches of the GDPR can carry serious consequences, including fines of up to 4% of a company’s global annual revenue. Regulators can also compel changes to AI products. Noyb’s complaint aims to draw attention to the risks posed by AI systems that generate false information, a phenomenon commonly known as "hallucination."
Responses from Regulatory Bodies
Europe’s privacy watchdogs are currently taking a cautious approach toward generative AI as they work out how to apply the GDPR in this new context. Earlier this year, for example, Ireland’s Data Protection Commission advised against hastily banning generative AI tools while regulators explore how the technology fits within the law.
A complaint against ChatGPT filed with Poland’s data protection authority also remains unresolved. This cautious stance suggests regulators want more thorough evaluation before making definitive moves.
Specifics of the Current Complaint
Noyb’s complaint illustrates a troubling example of how ChatGPT generated a harmful and fictional account of Holmen’s life. The response included elements that were true, such as his parental status and place of residence, making the overall falsehood even more troubling.
Noyb attempted to determine why the chatbot produced such a specific yet false narrative, but found no clear explanation. The organization suggests that the training data may have included numerous stories about severe crimes, which could have influenced the bot’s output.
OpenAI’s Response
An OpenAI spokesperson stated that the company continues to research ways to improve the accuracy of its models. They acknowledged the complaint and noted that the version of ChatGPT in question has since been updated to improve accuracy by incorporating online search capabilities.
Broader Concerns About AI-generated Content
This complaint against OpenAI highlights a significant issue that could affect not only the individual involved but also others facing similarly damaging outputs from AI systems. Noyb referenced multiple cases, including other individuals who have received incorrect and damaging information from ChatGPT, which illustrates that these issues are unfortunately not isolated.
Kleanthi Sardeli, another Noyb lawyer, argued that disclaimers do not exempt AI companies from complying with the GDPR. She highlighted the reputational damage that uncontrolled fabrications can cause and stressed that AI firms must recognize their responsibilities under the law.
Noyb has filed its complaint with the Norwegian data protection authority, arguing that the agency is competent to investigate because the complaint targets OpenAI’s U.S. entity rather than only its European operations. That approach may face obstacles, however, as previous complaints have been redirected to the Irish Data Protection Commission.
Future Outlook
As privacy issues surrounding AI tools like ChatGPT come under scrutiny, the path forward will likely involve deeper investigations and stronger regulation aimed at ensuring accountability and accuracy in AI-generated content. The rapid evolution of AI technology requires companies to adapt not only their practices but also their commitment to data protection law.