

Google's new AI Overviews feature in Google Search has faced criticism for providing incorrect and nonsensical answers, such as labeling Barack Obama a Muslim president and suggesting that it's safe to leave dogs in hot cars. Though only recently launched, the feature has already shown errors on basic queries and produced controversial responses, raising concerns about the accuracy and reliability of AI-generated content in search results. This follows similar problems with Google's Gemini image-generation tool, which also produced historically inaccurate and questionable outputs.

Google's AI Overviews feature has been reported to provide a range of inaccurate responses, including labeling Barack Obama a Muslim president and advising that it's safe to leave a dog in a hot car.

Google has taken a cautious approach to generative AI development, and has criticized others for pushing too hard, too fast, with their generative AI tools. Some have viewed this as anti-competitive bias, with Google simply protecting its turf as more people turn to tools like ChatGPT for search queries. Google's view is that generative AI needs to be deployed slowly in order to mitigate misuse, which has already led to various issues in a regulatory sense. But today, it has processes in place to fully protect against this kind of misuse. In this sense, Google's AI generations could be safer and more reliable, though OpenAI's tools have clearly taken top billing as the AI tools of choice, at least within this initial push.
Google is evolving its own offerings, and they could soon become even more valuable, depending on how its systems continue to develop. There's now a wide range of options for generative AI search and creation, but eventually the market will consolidate, and the best-performing tools will rise to claim the larger share. Google is hoping that its more cautious, measured approach will win out in the long run, while it also has more resources and capacity than most of its rivals.