Imagine putting your name into ChatGPT to see what it knows about you, only for it to confidently — yet wrongly — claim that you had been jailed for 21 years for murdering members of your family.
Well, that’s exactly what happened to Norwegian Arve Hjalmar Holmen last year after he looked himself up on ChatGPT, OpenAI’s widely used AI-powered chatbot.
Not surprisingly, Holmen has now filed a complaint with the Norwegian Data Protection Authority, demanding that OpenAI be fined for its distressing claim, the BBC reported this week.
In response to Holmen’s query about himself, ChatGPT said he had “gained attention due to a tragic event.”
It went on: “He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son.”
The chatbot said the case “shocked the local community and the nation, and it was widely covered in the media due to its tragic nature.”
But nothing of the sort happened.
Understandably upset by the incident, Holmen told the BBC: “Some think that there is no smoke without fire — the fact that someone could read this output and believe it is true is what scares me the most.”
Digital rights group Noyb has filed the complaint on Holmen’s behalf, stating that ChatGPT’s response is defamatory and contravenes European data protection rules regarding accuracy of personal data. In its complaint, Noyb said that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”
ChatGPT uses a disclaimer saying that the chatbot “can make mistakes,” and so users should “check important info.” But Noyb lawyer Joakim Söderberg said: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
While it’s not uncommon for AI chatbots to spit out erroneous information — such mistakes are known as “hallucinations” — the egregiousness of this particular error is shocking.
Another hallucination that hit the headlines last year involved Google’s Gemini AI tool, which suggested using glue to stick cheese to pizza. It also claimed that geologists had recommended that humans eat one rock per day.
The BBC points out that OpenAI has updated ChatGPT’s model since Holmen’s search last August, which means it now trawls through recent news articles when generating its responses. But that doesn’t mean ChatGPT now produces error-free answers.
The story highlights the need to check responses generated by AI chatbots, and not to trust their answers blindly. It also raises questions about the safety of text-based generative-AI tools, which have operated with little regulatory oversight since OpenAI opened up the sector with the launch of ChatGPT in late 2022.
Digital Trends has contacted OpenAI for a response to Holmen’s unfortunate experience and we will update this story when we hear back.