Google doubles down on AI Overviews after telling users to eat rocks and glue pizza


Google is doubling down on its AI-powered search results, saying that the recent spate of strange and unhelpful responses was limited to a handful of niche queries.

AI Overviews were announced at Google I/O earlier this month. The feature puts a text response, powered by a customized Gemini model, on top of the results for more complex queries: questions that would normally require visits to several websites to answer.

Shortly after launch, problems began to pop up: one overview suggested using non-toxic glue to thicken pizza sauce, another said eating rocks is good for you, and a third claimed smoking during pregnancy is healthy (it is not). If you don't want AI Overviews, there's a guide to blocking them from your results.
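One widely shared way to skip AI Overviews is Google's "Web" results filter, reachable by adding the `udm=14` parameter to a search URL. As a minimal sketch, a web-only search link can be built like this:

```python
# Sketch: build a Google search URL that uses the "Web" filter,
# which shows traditional link results without the AI Overview panel.
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Return a Google search URL filtered to web-only results."""
    params = urlencode({"q": query, "udm": "14"})
    return f"https://www.google.com/search?{params}"

print(web_only_search_url("how to make pizza sauce thicker"))
```

Browser extensions and custom search-engine entries that rewrite queries this way are essentially wrappers around the same parameter.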

In a statement, Liz Reid, Google's head of search, said many of the quirkier claims circulating on social media could not be reproduced, including the eating-rocks example, when the actual queries were run.

Reid says the biggest problem with many of the results came down to how the AI model interpreted irony and humor. "We saw AI Overviews that featured sarcastic or troll-y content from discussion forums," she wrote.

"Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza." This could also spell trouble for OpenAI's deal with Reddit.

Add to that the sarcastic and troll-y content piling up in further forum discussions, plus people running these queries themselves as a joke.

There is also an element of the web stress-testing the feature in ways Google didn't expect. Reid says: "There's nothing quite like having millions of people using the feature with many novel searches. We've also seen nonsensical new searches, seemingly aimed at producing erroneous results."

Generative AI has no way of knowing what's true, only what's popular. As a result, it often surfaces answers from untrustworthy sources or parody accounts rather than real facts. In some cases, AI is prone to "hallucinations," simply making up incorrect information to cover gaps in its knowledge.

Google has always been a platform where users can find all kinds of information, including disinformation. But without the ability to weigh information against the reputation of its source, there is no way to know whether the answer you get is accurate.

Reid says they have already made some improvements to AI Overviews. "By looking at examples of its responses over the past couple of weeks, we were able to determine patterns where it didn't get things right, and we made more than a dozen technical improvements to our systems," she said.

These include better detection mechanisms for nonsensical queries, which now block an AI Overview from being shown, and limiting the use of satirical or humorous content as source material for such queries.
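Google hasn't described these detection mechanisms in detail. Purely as an illustration of the idea (the heuristics, thresholds, and domain list below are invented for this sketch, not Google's), a gate that declines to show an overview for suspect queries might look like:

```python
# Illustrative toy gate: refuse to show an AI summary for queries that
# look nonsensical, or whose sources are known satire sites.
# All heuristics and the blocklist here are hypothetical.
SATIRE_DOMAINS = {"theonion.com", "clickhole.com"}  # hypothetical blocklist

def should_show_overview(query: str, source_domains: list[str]) -> bool:
    words = query.split()
    # Heuristic 1: very short or highly repetitive queries look nonsensical.
    if len(words) < 2 or len(set(words)) / len(words) < 0.5:
        return False
    # Heuristic 2: skip the overview if any source is a known satire site.
    if any(domain in SATIRE_DOMAINS for domain in source_domains):
        return False
    return True

print(should_show_overview("how many rocks should i eat", ["theonion.com"]))  # → False
```

A production system would of course use learned classifiers rather than word counts, but the shape is the same: score the query and its sources, and fall back to ordinary results when confidence is low.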

They also reduced the amount of user-generated content in responses, focusing instead on high-quality sources.

Finally, Google says it is implementing stronger guardrails, especially for news and health content. "We aim to not show AI Overviews for hard news topics, where freshness and factuality are important. In the case of health, we launched additional triggering refinements to enhance our quality protections," Reid said.
