

Google's recent AI rollout for search results has been met with criticism for providing bizarre and inaccurate answers. Examples include suggesting glue to keep cheese from sliding off pizza and falsely claiming that 13 U.S. presidents attended the University of Wisconsin-Madison. These errors highlight the AI's reliance on predictive text models that lack logical reasoning, which leads to misleading information.
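To illustrate why a purely predictive system can produce fluent nonsense, consider a minimal sketch. The tiny corpus and bigram model below are invented for illustration and bear no resemblance to Google's actual system; the point is only that such a model emits whichever word most often followed the previous one in its training data, with no notion of whether the output is true.

```python
from collections import Counter, defaultdict

# Invented toy corpus: the false claim is just another sequence of words.
corpus = (
    "glue keeps cheese on pizza . "
    "thirteen presidents attended wisconsin madison ."
).split()

# Count which word follows which (a bigram language model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    """Greedily emit the most likely next word; truth never enters into it."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("thirteen"))  # -> "thirteen presidents attended wisconsin madison ."
print(generate("glue"))      # -> "glue keeps cheese on pizza ."
```

The model fluently reproduces whatever patterns appear in its training data; a claim's truth value plays no role in the probabilities.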
Despite these issues, Google claims that such peculiar responses are rare and that its AI generally delivers high-quality information. The company has acknowledged the need for ongoing refinement of its systems based on user feedback. Meanwhile, the errors underscore the risks of deploying AI tools prematurely, emphasizing the importance of fact-checking and the limitations of current AI technology in understanding context and distinguishing fact from fiction.

The news article highlights several examples of inaccurate AI-generated answers:

- Recommending glue to keep cheese from sliding off pizza.
- Claiming that 13 U.S. presidents attended the University of Wisconsin-Madison and earned 59 degrees among them.

These examples showcase the limitations and flaws of current AI systems in providing accurate and reliable information.

The AI mistakenly reported that 13 U.S. presidents attended the University of Wisconsin-Madison, earning 59 different degrees among them. The answer appears to have been scraped from a light-hearted 2016 blog post by the school's Alumni Association about graduates who share a name with a president. Because the AI tool cannot reason or apply logic like a human, it failed to recognize that the post was a joke rather than a factual record. This is an example of the AI "hallucinating" incorrect information, and it highlights why AI-generated answers should not be relied on alone: further research is needed to verify their accuracy.
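As a rough illustration of that failure mode, here is a minimal, hypothetical sketch; it is not Google's actual pipeline, and the document texts, names, and keyword-overlap scoring below are all invented for illustration. A retrieval step that ranks pages purely by keyword overlap, with no credibility check, can surface a joke post as its top source:

```python
# Hypothetical sketch of keyword-overlap retrieval: the joke post wins
# because it matches the query best, and nothing checks whether it is factual.
docs = {
    "uw_alumni_blog_2016": (  # light-hearted post about name-sharing alumni
        "13 presidents have attended uw madison earning 59 degrees"
    ),
    "encyclopedia_entry": (
        "no us president earned a degree from uw madison"
    ),
}

def keyword_overlap(query: str, text: str) -> int:
    """Score a document by how many query words it contains."""
    return len(set(query.lower().split()) & set(text.lower().split()))

query = "how many presidents attended uw madison"
best = max(docs, key=lambda name: keyword_overlap(query, docs[name]))

print(best)        # -> "uw_alumni_blog_2016": the joke outranks the fact
print(docs[best])  # this text would then be restated as the "answer"
```

Any answer generated from that top result would repeat the joke as fact, which is why cross-checking against authoritative sources matters.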