
When queried about cheese sliding off pizza, Google's AI Overview feature made a notably unsafe suggestion: add an eighth of a cup of non-toxic glue to the sauce to help the cheese stick. The recommendation traced back to satirical internet content, highlighting the AI's difficulty in distinguishing reliable information from jokes and trolling posts.

Katie Notopoulos, a tech correspondent at Business Insider, inadvertently amplified the glue-on-pizza story. After Google's AI Overview feature suggested using glue to keep cheese from sliding off a pizza, a recommendation drawn from a joke by a Reddit user, Notopoulos actually made a pizza with glue and documented the experience, drawing further attention to the error. While she did not intend to spread misinformation, her experiment underscored the risks of relying on AI-generated search results without context or verification.

Colin McMillen, a former Google engineer, played a key role in surfacing the AI's erroneous response. He spotted the glue suggestion and posted it on Bluesky, a social network, helping expose how Google's AI was returning incorrect information for unusual queries. The incident underscores the challenges and limitations of AI systems in providing accurate, contextually appropriate answers.