
ChatGPT Faces Privacy Complaint Over False Murder Claim


OpenAI is once again in the spotlight for privacy concerns in Europe, as its AI chatbot, ChatGPT, faces a new complaint over its tendency to generate false and damaging information. This time, the case might be too serious for regulators to overlook.

The privacy rights group Noyb has taken up the case of a Norwegian man, Arve Hjalmar Holmen, who was horrified to find ChatGPT falsely claiming that he had been convicted of murdering two of his children and attempting to kill a third. This isn’t the first time OpenAI’s chatbot has been accused of fabricating incorrect personal information, but past complaints have usually involved smaller errors, such as incorrect birth dates or misleading biographical details.

One of the major issues is that OpenAI doesn’t provide a way for people to correct false information ChatGPT generates about them. The company has typically responded by blocking responses to certain prompts rather than fixing errors. However, under the European Union’s General Data Protection Regulation (GDPR), individuals have the right to have inaccurate personal data corrected—a right that Noyb argues OpenAI is failing to uphold.

GDPR requires companies handling personal data to ensure its accuracy. If they fail to do so, they can face penalties of up to 4% of their global annual revenue.

“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at Noyb. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and then cover yourself with a disclaimer.”

Regulators have already acted against OpenAI in the past. In early 2023, Italy’s data protection watchdog temporarily blocked ChatGPT in the country, which forced OpenAI to make transparency changes. More recently, the Italian watchdog fined OpenAI €15 million for processing people’s data without a legal basis. However, across Europe, regulators have been cautious in handling AI-related privacy complaints, likely because they are still determining how GDPR applies to AI-generated content.

How Serious Is This Issue?

Noyb hopes that this latest complaint will push regulators to take stronger action against ChatGPT’s “hallucinations”—AI-generated falsehoods that appear factual but have no basis in reality.

In Holmen’s case, ChatGPT falsely claimed he was convicted of child murder and sentenced to 21 years in prison. While the AI got some basic facts correct—such as the number and genders of his children, and his hometown—the fabricated murder story is deeply alarming.

Noyb investigated whether the chatbot might have confused Holmen with someone else, but they found no evidence to support such a mix-up. Large language models like ChatGPT generate responses based on patterns in their training data, so it’s possible the AI had been influenced by stories about child murders. But regardless of why it happened, the fact remains that these types of falsehoods can have devastating real-world consequences.

Does OpenAI Have a Solution?

Following a model update, Noyb reports that ChatGPT no longer generates dangerous falsehoods about Holmen. The chatbot now searches the internet for information before responding to personal inquiries, which may have reduced its tendency to hallucinate. However, both Holmen and Noyb remain concerned that the misinformation could still exist within the AI’s underlying model, even if it’s no longer visible to users.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” said Kleanthi Sardeli, another data protection lawyer at Noyb. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”

OpenAI has yet to respond to the complaint.

What’s Next?

Noyb has filed the complaint with Norway’s data protection authority, arguing that OpenAI’s U.S. entity, not its Irish office, should be held responsible. However, OpenAI previously restructured its operations so that its Irish division would be the official provider of ChatGPT in Europe. That means this case could end up in the hands of Ireland’s Data Protection Commission (DPC), which is already handling another Noyb-backed complaint filed in Austria in April 2024.

Unfortunately, progress on that complaint has been slow. The DPC confirmed that it has been formally reviewing the case since September 2024, but it remains unresolved with no clear timeline for a decision.

As AI technology continues to evolve, the legal landscape surrounding it remains murky. But one thing is clear: if regulators fail to act decisively, more individuals could find themselves victims of ChatGPT’s dangerous hallucinations.
