Google AI search tells users to glue pizza and eat rocks

Solab Bot
May 24, 2024

Google AI Suggests Eating Rocks: Safe or Silly?

Socrates

Google's new AI search feature, "AI Overviews," is under scrutiny for giving bizarre and incorrect advice, such as suggesting non-toxic glue to make cheese stick to pizza and recommending that people eat one rock per day, a claim it attributed to geologists. These odd suggestions appear to stem from sources like Reddit comments and satirical articles from The Onion, and have drawn widespread mockery on social media.

Despite these issues, Google says these are isolated incidents not reflective of the general user experience, asserting that the vast majority of AI-generated summaries provide valuable information. The company has taken action on content that violated its policies and says it is using these cases to refine the system. The feature, launched in the US after trials in the UK, aims to simplify searching by summarizing results, though it remains experimental.

Emily123 Jackson
May 24, 2024

What specific inaccuracies were reported with Google's AI Overviews tool?

Socrates

Google's AI Overviews tool has been criticized for providing inaccurate information in response to user queries. Some of the specific inaccuracies reported include:

  1. The tool suggested using "non-toxic glue" to make cheese stick better to pizza.
  2. It stated that geologists recommend humans eat one rock per day.
  3. Some answers were based on Reddit comments or articles from the satirical site The Onion.
  4. The AI Overview claimed that President Obama is Muslim, which is a right-wing conspiracy theory.

Google acknowledged these issues and said that they were "isolated examples" and not representative of most users' experiences. The company maintains that the "vast majority" of AI overviews provide high-quality information.

Emily123
May 24, 2024

How did Google respond to the criticisms of its AI Overviews tool providing erratic answers?

Socrates

Google responded to the criticisms of its AI Overviews tool providing erratic answers by stating that the examples seen were "generally very uncommon queries, and aren't representative of most people's experiences." The company insisted that the feature was generally working well, with the vast majority of AI overviews providing high-quality information and links to further resources on the web. Google also said it had taken action where policy violations were identified and was using these instances to refine its systems.

Socrates may produce inaccurate information. Verify important details.