Google is putting more restrictions on AI Overviews after it told people to put glue on pizza

Google Implements Stricter AI Controls After Misleading Advice
AI Overview Errors
Google's AI Overview feature generated bizarre and inaccurate responses, such as advising people to put glue on pizza, prompting the company to implement stricter controls.
Response from Google's Head
Liz Reid, Head of Google Search, acknowledged the AI's odd outputs and defended the system, noting that some extreme examples circulating online were fabricated.
Safeguards and Adjustments
Google has introduced safeguards to improve AI accuracy, including better detection of humor and satire, limiting user-generated content in responses, and restricting AI overviews for certain topics.
Impact of Extensive Use
The widespread use of the AI feature revealed unforeseen issues, leading Google to adjust its algorithms based on real-world usage patterns.
What specific restrictions has Google implemented on AI Overviews to ensure more accurate search results?

Google has implemented several restrictions on AI Overviews to ensure more accurate search results. These include:
- Tweaking the AI to better detect humor and satire: This update aims to prevent the AI from presenting satirical or humorous web content as factual information.
- Limiting user-generated replies in Overviews: Google has updated its systems to reduce the inclusion of user-generated content, such as social media and forum posts, which can contain misleading or harmful advice.
- Restricting when AI Overviews trigger for specific queries: Google now prevents AI Overviews from appearing for certain queries where they were not found to be helpful, including stopping AI-generated replies for specific health topics.
These restrictions have been put in place to improve the accuracy and usefulness of AI Overviews after the feature returned some odd, inaccurate, or unhelpful responses following its rollout in the US.
How did the viral misinformation regarding Barack Obama and health advice about kidney stones emerge from Google's AI Overviews?

The viral misinformation regarding Barack Obama and health advice about kidney stones emerged from Google's AI Overviews due to the system's limitations in accurately interpreting and filtering information. Google's AI technology relied on content from various sources, including satirical websites, forums, and user-generated replies.
In the case of Barack Obama, the AI Overview repeated the false claim that he is a Muslim, when in fact he is a Christian. This error likely stemmed from the AI's inability to distinguish between reliable and unreliable sources, allowing inaccurate information into its response.
Similarly, the incorrect health advice about kidney stones was likely a result of the AI Overview system considering user-generated replies from forums and social media. These platforms can often contain misleading or harmful advice. Google's AI technology failed to adequately filter and verify the information, leading to the spread of misinformation.
Google has since taken steps to improve the accuracy of its AI Overviews by implementing safeguards and updating its systems to better detect humor, satire, and user-generated content. This includes limiting user-generated replies in Overviews and restricting when Overviews trigger for queries where they were not proving helpful. Additionally, Google has stopped showing AI-generated replies for certain health topics to prevent the spread of incorrect medical advice.