Father insists on removal of ChatGPT's false claim that he murdered his children.

Concerns Over AI-Generated Misinformation and Compliance with GDPR

Misinformation generated by AI chatbots such as ChatGPT has raised significant concerns. ChatGPT reportedly no longer repeats the disturbing false claims about individuals such as Holmen, a change that followed an update allowing the chatbot to pull information from the Internet when asked who people are. Critics argue, however, that this does not eliminate the problem: OpenAI has previously maintained that it can block certain information from appearing in outputs but cannot remove it entirely from its internal datasets. According to Noyb, an NGO focused on data protection, unless Holmen can formally rectify the false claims, OpenAI may be in violation of the General Data Protection Regulation (GDPR).

The Impact of the GDPR on AI-Generated Content

Noyb emphasizes that the GDPR applies not only to data shared with the public but also to internal data held by AI systems. “Even if the false information is not publicly shared, the GDPR still holds,” Noyb states. The potential danger lies in how such misleading information can harm a person’s reputation, regardless of whether it has been made widely available or not.

Real-Life Cases of Misinformation from Chatbots

Many individuals have raised concerns about the psychological and reputational damage AI chatbots can cause. An Australian mayor, for instance, threatened legal action against OpenAI after ChatGPT falsely claimed he had served prison time. A well-known law professor was erroneously linked to a fabricated sexual-harassment incident, and a radio host sought legal recourse over false embezzlement allegations that ChatGPT attributed to him. These instances illustrate that the consequences of AI-generated misinformation can be serious and far-reaching.

Challenges in Managing False Data

Experts suggest that while OpenAI has made efforts to filter harmful outputs from the chatbot, this does not necessarily entail the removal of false information from its training datasets. Kleanthi Sardeli, a data protection lawyer at Noyb, argues that merely filtering outputs or adding disclaimers does not adequately protect affected individuals. “Stating that you do not comply with the law does not change the law,” she explains. Sardeli calls on AI companies to acknowledge that the GDPR applies to them and stresses the importance of eliminating false information to prevent reputational harm.

Manipulating Outputs vs. Data Deletion

OpenAI has stated it filters outputs to reduce the likelihood of generating harmful or misleading information. However, this raises questions about the effectiveness of such measures. Critics believe that if untrue claims are still present in the underlying data, then merely attempting to prevent their dissemination is insufficient. AI systems should aim to be accurate and truthful in their outputs, and without appropriate action, they risk causing real-life emotional and psychological distress to affected individuals.

Addressing Issues of AI-Generated Misinformation

  • OpenAI should actively work towards verifying the information in its training data.
  • Legal accountability must be established for companies producing AI systems.
  • Victims of misinformation should have clear avenues for redress and correction.
  • Transparency in AI algorithms can lead to more trust and accountability in their outputs.

As the use of AI continues to evolve, the need for stringent regulations and responsible practices becomes increasingly clear. The impact of AI-generated misinformation not only affects individuals but also raises questions about the social responsibilities of AI developers and companies.
