Google’s AI Suggested Gluing Cheese to Your Pizza. Its Source Is a Reddit Comment From 11 Years Ago

  • The incident demonstrates how easily generative AI models can be misled by their training data.

  • Training chatbots with bad data can result in nonsensical statements, as in this case.

  • Once again, the situation reveals that users shouldn’t rely entirely on generative AI.

Google’s AI search feature suggests using glue for pizza cheese

Should pizza have pineapple? The biggest debate in the pizza world has fallen to second place after Google’s latest screw-up. Google’s AI models have gotten the company into trouble before, and this latest mistake proves once again that you shouldn’t rely on them too much.

Using glue to get cheese to stick to your pizza. On Thursday, a social media user shared the shocking results of a recent Google search. “Cheese doesn’t stick to pizza,” he complained. Google’s search engine now features AI Overviews, AI-generated summaries that appear above the regular results. In response to the user’s query, the feature suggested: “You can also add 1/8 cup of non-toxic glue to the sauce to give it more stickiness.”

The answer was, obviously, not sound advice.

The answer was a joke on Reddit. The surprising suggestion wasn’t invented by Google’s AI model. The chatbot copied it from a tongue-in-cheek comment that a Reddit user named Fucksmith made 11 years ago. Why Google’s AI Overview treated the comment as legitimate advice, no one knows.

Google uses Reddit data to train its AI. Under an agreement the two companies reached a few weeks ago, Google’s AI models can access all the comments on the social media platform. Many are helpful, but others are jokes or ironic remarks.

Stochastic parrots. AI doesn’t understand what it reads or writes. A 2021 study compared these AI models to “stochastic parrots.” The term, coined by linguist Emily M. Bender and computer scientist Timnit Gebru, refers to AI that mimics language based on the statistical patterns in its training data without grasping the meaning. The model may generate convincing text, but it has no understanding of what it’s saying.

It’s dangerously wrong. Probability is king among generative AI models. If Google’s model considers gluing cheese to pizza a relevant answer because of how it was trained, it’ll recommend it without knowing whether it’s actually a good method. This particular error is easy to spot, but the same failure can occur in far more sensitive situations.
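To make the idea concrete, here’s a minimal, purely illustrative Python sketch of a “stochastic parrot.” It isn’t Google’s actual system, and the tiny corpus is hypothetical, but it shows the underlying failure mode: a model that picks words based only on how often they appeared together in its training data will happily repeat a joke as if it were advice.

```python
# Toy "stochastic parrot": a bigram model that chooses the next word purely by
# how often it followed the previous word in its training data. It has no
# notion of truth, only frequency. (Illustrative sketch only; real AI Overviews
# use far larger neural models, but the failure mode is analogous.)
import random
from collections import Counter, defaultdict

# Hypothetical training snippets: mostly sensible cooking advice, plus one joke.
corpus = [
    "add more cheese to the sauce",
    "add fresh basil to the sauce",
    "add non-toxic glue to the sauce",  # the Reddit-style joke
]

# Count which word follows which.
bigrams: dict[str, Counter] = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def continue_text(start: str, length: int = 6) -> str:
    """Extend `start` by sampling each next word in proportion to its frequency."""
    words = start.split()
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:
            break
        choices, counts = zip(*options.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

# Nothing in the sampling step checks whether the output is edible advice:
# "non-toxic glue" can come out simply because the pattern exists in the data.
print(continue_text("add"))
```

The point of the sketch is that the sampler only ever asks “how likely is this word to come next?”, never “is this true?”, which is exactly why a prank comment in the training data can resurface as a recommendation.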

Another Google AI failure. Google’s AI screw-ups are sadly becoming famous. It started with Bard’s mistake about the James Webb Space Telescope (JWST). Google’s experimental conversational AI service said the JWST took the first image of an exoplanet. However, NASA corrected the claim: the European Southern Observatory’s Very Large Telescope took the first such photo.

And three months ago, Google’s image generator was criticized for being overly inclusive. Now, the company is making another harmless but glaring mistake with its sticky pizza recipe.

Just in case, don’t trust AI. Although Google made this particular mistake, all chatbots make mistakes. Even in areas like math, where users tend to trust computers, ChatGPT makes glaring errors, and it’s no better or worse than other chatbots, whether in math or in the many areas where they invent data. The conclusion is clear: Don’t trust everything generative AI models tell you. Always verify their answers.

By the way, long live pineapple pizza.

