Why Google SGE Is Stuck In Google Labs And What’s Next


Google’s Search Generative Experience (SGE) was set to expire as a Google Labs experiment at the end of 2023, but its run was quietly extended, making it clear that SGE is not coming to search in the near future. Surprisingly, letting Microsoft take the lead may have been the best, if perhaps unintended, approach for Google.

Google’s AI Strategy For Search

Google’s decision to keep SGE as a Google Labs project fits its long-standing preference for integrating AI in the background.

The presence of AI isn’t always apparent, but it has been part of Google Search for longer than most people realize.

The very first use of AI in search was as part of Google’s ranking algorithm, a system known as RankBrain. RankBrain helped the ranking algorithms understand how words in search queries relate to concepts in the real world.

According to Google:

“When we launched RankBrain in 2015, it was the first deep learning system deployed in Search. At the time, it was groundbreaking… RankBrain (as its name suggests) is used to help rank — or decide the best order for — top search results.”

The next implementation was Neural Matching which helped Google’s algorithms understand broader concepts in search queries and webpages.

One of the best-known AI systems Google has rolled out is the Multitask Unified Model, also known as Google MUM. MUM is a multimodal AI system that understands both images and text and can place them within the context of a sentence or a search query.

SpamBrain, Google’s spam-fighting AI, is likely one of the most important implementations of AI in Google’s search algorithm because it helps weed out low-quality sites.

These are all examples of Google’s approach to using AI in the background to solve different problems within search as a part of the larger Core Algorithm.

It’s likely that Google would have continued using AI in the background until the transformer-based large language models (LLMs) were able to step into the foreground.

But Microsoft’s integration of ChatGPT into Bing forced Google to take steps to bring AI into the foreground with its Search Generative Experience (SGE).

Why Keep SGE In Google Labs?

Considering that Microsoft has integrated ChatGPT into Bing, it might seem curious that Google hasn’t taken a similar step and is instead keeping SGE in Google Labs. There are good reasons for Google’s approach.

One of Google’s guiding principles for AI is to deploy it only once the technology is proven successful and can be implemented in a way that is trusted to be responsible. Generative AI meets neither condition today.

There are at least three big problems that must be solved before AI can successfully be integrated in the foreground of search:

  1. LLMs cannot be used as an information retrieval system because they must be completely retrained in order to add new data.
  2. Transformer architecture is inefficient and costly.
  3. Generative AI tends to produce false facts, a phenomenon known as hallucination.

Why AI Cannot Be Used As A Search Engine

One of the most important problems to solve before AI can be used as the backend and the frontend of a search engine is that LLMs are unable to function as a search index where new data is continuously added.

In simple terms, in a regular search engine, adding new webpages is a process where the search engine computes the semantic meaning of the words and phrases within the text (a process called “embedding”), which makes them searchable and ready to be integrated into the index.

Afterward, the search engine has to update the entire index in order to understand (so to speak) where the new webpages fit into the overall search index.

The addition of new webpages can change how the search engine understands and relates all the other webpages it knows about, so it goes through all the webpages in its index and updates their relations to each other if necessary. This is a simplification for the sake of communicating the general sense of what it means to add new webpages to a search index.
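As a rough illustration of that incremental process, here’s a minimal Python sketch of an embedding-based index. The class, the helper names, and the cosine-similarity search are illustrative assumptions, not Google’s actual implementation; the point is that absorbing a new page is an index operation, not a model retraining.

```python
import numpy as np

class EmbeddingIndex:
    """Toy embedding-based search index (illustrative, not Google's system)."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim))  # one row per indexed page
        self.urls: list[str] = []

    def add_page(self, url: str, embedding: np.ndarray) -> None:
        # Incremental update: compute the new page's embedding once and
        # append it. Relations to other pages can be refreshed from the
        # stored vectors; the underlying model is never retrained.
        self.vectors = np.vstack([self.vectors, embedding])
        self.urls.append(url)

    def search(self, query_embedding: np.ndarray, k: int = 5) -> list[str]:
        # Rank indexed pages by cosine similarity to the query embedding.
        norms = np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(query_embedding)
        scores = (self.vectors @ query_embedding) / np.maximum(norms, 1e-9)
        return [self.urls[i] for i in np.argsort(-scores)[:k]]

# Usage: adding a page is cheap, and search works immediately afterward.
index = EmbeddingIndex(dim=3)
index.add_page("example.com/a", np.array([0.9, 0.1, 0.0]))
index.add_page("example.com/b", np.array([0.1, 0.9, 0.0]))
print(index.search(np.array([1.0, 0.0, 0.0]), k=1))  # ['example.com/a']
```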

In contrast to current search technology, LLMs cannot add new webpages to an index because the act of adding new data requires a complete retraining of the entire LLM.

Google is researching how to solve this problem in order to create a transformer-based LLM search engine, but the problem is not solved, not even close.

To understand why this happens, it’s useful to look at a recent Google research paper co-authored by Marc Najork and Donald Metzler (among several others). I mention their names because those two researchers are consistently associated with some of the most consequential research coming out of Google, so a paper bearing either name is likely to be important.

In the following explanation, the search index is referred to as memory because a search index is a memory of what has been indexed.

The research paper is titled: “DSI++: Updating Transformer Memory with New Documents” (PDF)

Using an LLM as a search engine relies on a technology called a Differentiable Search Index (DSI). The paper refers to the current search index technology as a dual-encoder.

The research paper explains:

“…index construction using a DSI involves training a Transformer model. Therefore, the model must be re-trained from scratch every time the underlying corpus is updated, thus incurring prohibitively high computational costs compared to dual-encoders.”
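To make that contrast concrete, here’s a hedged Python sketch (toy stand-in functions, not the paper’s code) of the two update paths. In a dual-encoder setup, adding a document is one embedding pass plus an append; in a DSI, the query-to-document-ID mapping lives inside the model’s weights, so any corpus update forces retraining.

```python
def add_page_dual_encoder(index, url, text, embed_fn):
    # Dual-encoder path: embed the new page once and append it to the index.
    index.add_page(url, embed_fn(text))

def add_page_dsi(train_fn, corpus, new_doc):
    # DSI path: the model itself maps query text to document IDs, so its
    # weights effectively *are* the index. Adding a document means training
    # again over the whole corpus -- the "prohibitively high computational
    # costs" the paper describes.
    corpus.append(new_doc)
    return train_fn(corpus)  # hypothetical full retraining step
```

Fine-tuning only on the new documents would avoid the full retraining, but that is precisely where the “forgetting” the paper studies shows up.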

The paper explores ways to solve this forgetting problem, but the authors conclude that they only made progress toward better understanding what needs to be solved in future research.

They conclude:

“In this study, we explore the phenomenon of forgetting in relation to the addition of new and distinct documents into the indexer. It is important to note that when a new document refutes or modifies a previously indexed document, the model’s behavior becomes unpredictable, requiring further analysis.

Additionally, we examine the effectiveness of our proposed method on a larger dataset, such as the full MS MARCO dataset. However, it is worth noting that with this larger dataset, the method exhibits significant forgetting. As a result, additional research is necessary to enhance the model’s performance, particularly when dealing with datasets of larger scales.”

LLMs Can’t Fact Check Themselves

Google and many others are also researching ways to have AI fact-check itself in order to keep it from giving false information (referred to as hallucinations). But so far, that research is not making significant headway.
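For a sense of the shape of that research, here’s a deliberately naive Python sketch of grounding a generated claim against retrieved sources before showing it. Everything in it is a simplifying assumption; real systems use trained verification models, not keyword overlap.

```python
def is_grounded(claim: str, sources: list[str]) -> bool:
    # Naive check: the claim counts as grounded only if all of its key terms
    # appear together in at least one retrieved source passage. Actual
    # fact-checking research uses trained verification models; this
    # keyword-overlap test only illustrates the general idea.
    terms = {word.lower().strip(".,") for word in claim.split() if len(word) > 3}
    return any(
        terms <= {word.lower().strip(".,") for word in source.split()}
        for source in sources
    )
```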

Bing’s Experience Of AI In The Foreground

Bing took a different route by incorporating AI directly into its search interface in a hybrid approach that joined a traditional search engine with an AI frontend. This new kind of search engine revamped the search experience and differentiated Bing in the competition for search engine users.

Bing’s AI integration initially created significant buzz, drawing users intrigued by the novelty of an AI-driven search interface. This resulted in an increase in Bing’s user engagement.

But after nearly a year of buzz, Bing’s market share saw only a marginal increase. Recent reports, including one from the Boston Globe, indicate less than 1% growth in market share since the introduction of Bing Chat.

Google’s Strategy Is Validated In Hindsight

Bing’s experience suggests that AI in the foreground of a search engine may not be as effective as hoped. The modest increase in market share raises questions about the long-term viability of a chat-based search engine and validates Google’s cautious approach of using AI in the background.

Google’s focus on AI in the background of search is vindicated in light of Bing’s failure to lure users away from Google.

The strategy of keeping AI in the background, where it currently works best, has allowed Google to retain users while AI search technology matures in Google Labs, where it belongs.

Bing’s approach of putting AI in the foreground now serves as something of a cautionary tale about the pitfalls of rushing out a technology before its benefits are fully understood.

Ironically, Microsoft is finding better ways to integrate AI as a background technology, in the form of useful features added to its cloud-based office products.

Future Of AI In Search

The current state of AI technology suggests that it’s more effective as a tool that supports the functions of a search engine than as the entire back end and front end of one, or even as a hybrid approach that users have so far declined to adopt.

Google’s strategy of releasing new technologies only when they have been fully tested explains why Search Generative Experience belongs in Google Labs.

Certainly, AI will take a bolder role in search, but that day is not today. Expect to see Google adding more AI-based features to more of its products, and it would not be surprising to see Microsoft continue along that path as well.

See also: Google SGE And Generative AI In Search: What To Expect In 2024



