Over Memorial Day weekend, Google scrambled to address problematic responses from its AI Overview feature in Search, including bizarre advice and incorrect facts. These AI missteps, along with issues from other Google AI products like the Gemini image generator, are tarnishing the company's reputation as a reliable information source.
Experts argue that Google's rush to release AI products to stay competitive with firms like Microsoft and OpenAI is leading to these errors. The repeated need to correct such errors may erode public trust in Google, despite its longstanding status as a go-to resource for factual information online.
According to Chinmay Hegde, a professor of computer science and engineering at NYU's Tandon School of Engineering, the erroneous suggestions from Google's AI Overview feature could erode people's trust in the company. Because Google is regarded as a premier source of information on the internet, such incidents damage its credibility, and the steady release of products with noticeable flaws that then require explanation could wear on users and undermine trust in Google's products overall.
Over the Memorial Day weekend, Google's AI Overview feature generated several erroneous answers in its search responses. These included advising users that they could use non-toxic glue to keep cheese from sliding off their pizza and suggesting that eating one rock a day is feasible. The AI also falsely claimed that Barack Obama was the first Muslim president. Google removed the problematic responses and said the incidents would be used to improve its systems.