Google's search chief says AI products won't always be error-free, and that the company needs to take risks when rolling them out.
- At a recent all-hands meeting, Google's vice president of search, Liz Reid, acknowledged that the company won't always catch every AI error before launch.
- Reid encouraged employees to keep releasing AI products, emphasizing that errors can be fixed as users and staff discover them.
- Google recently faced criticism over nonsensical responses from its AI Overview tool.
Last week, Google's new head of search told employees at an all-hands meeting that mistakes will happen as artificial intelligence becomes more integrated into internet search, but that the company should continue to release products and rely on employees and users to identify and resolve the issues.
Liz Reid, who was promoted to the role of vice president of search in March, stated at the companywide meeting that it is crucial not to withhold features due to potential problems, but rather to address them as they arise, according to audio obtained by CNBC.
Reid said the company should not treat such mistakes as a reason to avoid taking risks, but should take those risks with caution and urgency, carefully considering the potential consequences and responding quickly when new problems arise. The company won't always find everything before launch, she said, but it should still address issues as they surface.
Google is facing intense competition from OpenAI in generative AI. The market for chatbots and related AI tools has surged since ChatGPT's introduction in late 2022, offering consumers a new way to access information online beyond traditional search.
Google's eagerness to push out new products and features has led to a series of embarrassing incidents. The company recently rolled out AI Overview, which CEO Sundar Pichai described as the biggest change to search in 25 years, to a select group of users, and it intends to make the feature available globally.
Despite more than a year of development, users soon discovered that AI Overview's results were often incorrect or irrelevant, and there was no option to opt out. Widely shared examples included the false claim that Barack Obama was America's first Muslim president, a suggestion to put glue on pizza, and a recommendation to eat at least one rock daily.
Reid, a 21-year company veteran, published a blog post on May 30 criticizing the "troll-y" content some users posted, while acknowledging that the company had made more than a dozen technical improvements, including limits on user-generated content and health advice.
At the all-hands meeting, Reid recounted the glue-on-pizza and rock-eating stories after being introduced by Prabhakar Raghavan, head of Google's knowledge and information organization.
Google said in an email that the vast majority of results are accurate and that it found a policy violation on fewer than one in every 7 million unique queries on which AI Overviews appeared.
The spokesperson stated that they are still working on improving the usefulness of AI Overviews by making technical updates to enhance response quality.
The AI Overview miscues fit a pattern. Caught off guard by ChatGPT's viral success, Google executives had initially moved conservatively with their own chatbot, Bard, because of accuracy issues. The company then proceeded with a launch, widely perceived as rushed and poorly planned to coincide with a Microsoft announcement, despite criticism from shareholders and employees.
A year later, Google paused its AI-powered Gemini image generation tool after historical inaccuracies and questionable responses circulated widely on social media. Pichai sent a companywide email calling the mistakes "unacceptable," saying they "showed bias."
Red teaming
Reid's posture suggests Google has grown more willing to accept mistakes.
With billions of queries coming to the search engine daily, some irregularities and mistakes are inevitable.
According to Reid, some of the problematic AI Overview queries were intentionally adversarial, and many of the worst examples being circulated were fake.
Reid added that people have also created templates for generating fake AI Overviews to drive social engagement, a further complication.
The company performs extensive testing and red teaming, in which internal teams probe a product for flaws the way an adversary would, to identify vulnerabilities in the technology before outsiders can exploit them.
"Regardless of the amount of red teaming we conduct, we will still have to do more," Reid stated.
After the products went live, Reid said, teams were able to identify issues such as "data voids," topics with little reliable information online, as well as improve satire detection and spelling correction.
Discussing the company's challenges, Reid emphasized the importance of understanding not just the quality of a site or page, but each individual passage on it.
Reid thanked employees from different teams for their contributions to the corrections, highlighted the significance of employee feedback, and directed staff to an internal link for reporting bugs.
"Please file any problems, whether they are small or big," she said.