In the summer of 2023, it happened to Girl with a Pearl Earring. At least, on Google. If someone searched for the painting on Google, the featured image they saw had been generated by AI. That mistake, notorious as it was, appears to have been fixed.
However, there’s at least one other mistake still out there that’s more worrying. It was spotted by a moderator of the r/mycology subreddit, who realized that when they searched for the mushroom species Coprinus comatus, also known as the “shaggy mane” mushroom, the first result wasn’t a real photo of the mushroom, but one generated by AI. And the mushroom the AI produced bore little resemblance to Coprinus comatus.
I recently ran the same search from Europe and found that, sure enough, the featured image in Google’s search results was still AI-generated. The subreddit moderator, who goes by MycoMutant, noticed that the image hadn’t even been generated by Google. The search engine had simply pulled an image from Freepik, where it’s clearly labeled as AI-generated.
At the time of publication, the AI-generated image was no longer featured in Google’s search results in Europe.
Back in September, MycoMutant told 404 Media that the same thing had been happening with other mushroom species for a while. The r/mycology community helps users identify mushrooms correctly so they can tell which species are safe to eat.
The problem, MycoMutant said, is that when Google uses incorrect images in search results, the “reputation” of those images increases. Other bots then “trust” Google and end up spreading the images alongside information that could be dangerous.
“More than once I have [seen] people try to use bots to scrape data to compile a database for mushroom species and the results have been horrifically inaccurate and potentially filled with dangerously wrong information,” the moderator told the outlet.
Other experts added that AI-generated images, even when they resemble a given species, can have terrible consequences. Elan Trybuch, the secretary of the New York Mycological Society, argued that Google shouldn’t simply label these images but should remove them entirely.
This is another example of how mistakes in AI-generated text and images (and, in the future, video) can have harmful consequences for users. The speed at which the Internet is filling up with AI-generated content makes the problem especially hard to solve.
There’s hope, though. When it comes to text-based chatbots, things seem to be improving: recent versions make fewer mistakes, and there are now models capable of “reasoning,” such as OpenAI’s o1 and Google’s recent Gemini 2.0 Flash, which check their answers before presenting them to the user.
Images | Hans Veth | Xataka