In May 2024, Google released a new feature that uses generative AI to provide a simple summary of the results to search queries. The goal is to reduce the number of clicks required to get the answers you need and to contribute to a better user experience.
However, reports of wildly inaccurate and often harmful answers to search queries began circulating not long after, causing outrage on social media platforms such as LinkedIn, X, and Facebook.
For example, when asked how many Muslim presidents the United States has had, Google's AI Overview claimed that Barack Obama was the only Muslim president of the United States. But this is not the only example of something going horribly wrong. When asked how to keep cheese from slipping off pizza, Google's AI suggested that you "can add 1/2 cup of non-toxic glue to the sauce to give it more stickiness."
The AI summary also falsely claimed that UC Berkeley researchers recommend eating at least one small rock a day because rocks are an important source of minerals.
Google's AI also claimed that you can infuse spaghetti with gasoline for added flavor, and that adding more oil to a cooking-oil fire can help put it out.
In another example, the search giant's AI suggested that a parachute is no better than a backpack at preventing death when jumping from an aircraft.
Not surprisingly, experts began weighing in soon after, worried about the potential spread of disinformation among unsuspecting users.
"We tend to think of information as a set of objective facts that exist in the world," wrote Dr. Emily M. Bender, a professor of linguistics at the University of Washington. "But in reality, information and the information ecosystem are intrinsically relational. When Google, Microsoft, and OpenAI try to insert so-called 'AI' systems (driven by LLMs) between information seekers and information providers, they undermine the ability to build and maintain those relationships."
Bender says that where information comes from is as important as the information itself. If there is no way to trace the information, and all you see is an AI-generated response to a question, you have no way to tell whether the "medical facts" you just learned came from a reliable source like the Mayo Clinic or from Dr. Oz.
Generative AI has no way of knowing what is true; it only knows what is popular. As a result, these systems often surface responses from untrustworthy sources or parody accounts rather than real facts. In some cases, the AI is prone to "hallucinations," simply making up incorrect information to cover gaps in its knowledge.
Google has always been a platform where users can find all kinds of information and disinformation. But without the ability to connect information to the reputation of its source, there is no way to know whether the answer you get is accurate.
The company said in a statement: "The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web.
"Many of the examples we've seen were uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce."
If you are concerned about the possibility of misleading results, you can follow Google's guide to hiding AI Overviews from your search results.