Google Search Now Reads at a Higher Level

Google’s search engine is moving up a reading grade.

Google says it has enhanced its search-ranking system with software called BERT, or Bidirectional Encoder Representations from Transformers to its friends. The software was developed in the company’s artificial-intelligence labs and announced last fall, when it broke records on the reading-comprehension questions that researchers use to test AI software.

Pandu Nayak, Google’s vice president of search, said at a briefing Thursday that the Muppet-monikered software has made Google’s search algorithm much better at handling long queries, and queries in which the relationships between words are crucial. You’re now less likely to get frustrating responses to queries that depend on prepositions like “for” and “to,” or negations such as “not” or “no.”

“This is the single biggest positive change we’ve had in the last five years,” Nayak said, at least according to Google’s internal measures of how ranking changes help people find what they want. Google declined to share details of those measures. The company says it has been testing the upgrade and is now rolling it out widely.

One illustration of BERT’s power offered up by Google is how it helped its search engine interpret the query “Parking on hill with no curb.” The previous version of the search algorithm responded as if the query referred to a hill that did have a curb. The BERT-powered version instead highlights a page advising drivers to point their wheels toward the side of the road.
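A toy sketch makes that failure mode concrete (invented passages and a deliberately crude scorer in plain Python, not Google’s actual ranking code): a score based only on keyword overlap barely registers the word “no,” so the page about hills that do have curbs comes out ahead.

```python
# Toy sketch (invented passages, plain Python) of why a keyword-overlap
# ranker misreads negation. This is not Google's actual ranking code.

def overlap_score(query: str, passage: str) -> float:
    """Fraction of query words that also appear in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q)

query = "parking on hill with no curb"
with_curb = "when parking on a hill with a curb turn your wheels into the curb"
no_curb = "with no curb available point your wheels toward the side of the road"

# The wrong page wins: "no" is just one word among six, so the passage
# about hills that DO have curbs matches more of the query.
print(overlap_score(query, with_curb))  # ~0.83
print(overlap_score(query, no_curb))    # 0.50
```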


Another example was the query “2019 brazil traveler to usa need a visa.” To a human, that’s a clear attempt to discover the requirements for Brazilians heading to the US, but pre-BERT Google misread the crucial “to” and returned an article about US citizens traveling to Brazil as the top result. With BERT, the search engine correctly serves up a page about visa requirements for Brazilian citizens heading north.

Google says it receives billions of searches per day and that the BERT upgrade will affect rankings on one out of every 10. But Nayak says most users probably won’t notice. That doesn’t mean the change doesn’t matter to users, or Google. Anyone who has tried to switch search engines knows that the way Google’s ranking burrows into your expectations of the internet can be extremely powerful.

People outside the US who turn to Google for help will see some of the most significant changes. Nayak said the BERT upgrade has made Google’s systems much better at identifying so-called featured snippets, the boxed answers that appear above the regular results, particularly in languages other than English.

Google’s upgrade is a notable example of recent progress in software that attempts to understand language. That progress comes from machine-learning techniques that decode the subtleties of language by attending to the context around a particular word.
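For a hands-on sense of what “attending to context” means, here is a minimal sketch using the publicly released bert-base-uncased model through the open-source Hugging Face transformers library (a stand-in for illustration, not Google’s production search stack): the same word receives a different vector depending on the words around it.

```python
# Sketch: BERT assigns the same word different vectors in different
# contexts. Uses the public bert-base-uncased checkpoint via the
# transformers library, not Google's production system.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding BERT assigns to `word` in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

river = word_vector("we sat on the bank of the river", "bank")
money = word_vector("she deposited cash at the bank", "bank")
loan = word_vector("the bank approved the loan", "bank")

cos = torch.nn.functional.cosine_similarity
print(cos(money, loan, dim=0))   # typically higher: same sense of "bank"
print(cos(money, river, dim=0))  # typically lower: different sense
```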

Machine learning has proved to be a powerful way to teach software to sort or interpret data such as images or text. But each program typically has to be “trained” using example data. That’s often been tricky to come by for text documents. Projects would depend on paying people to label specific examples, such as good and bad restaurant reviews.
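For a concrete picture of that label-hungry recipe, here is a minimal supervised-learning sketch using scikit-learn, with made-up reviews and labels (not any system named in this article): nothing can be learned until a person has labeled every training example.

```python
# Minimal sketch of label-dependent supervised learning with
# scikit-learn; the reviews and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Every example must be labeled by a person before training can begin.
reviews = [
    "the food was delicious and the service friendly",
    "amazing pasta, we will definitely come back",
    "cold food and a rude waiter",
    "overpriced and bland, avoid this place",
]
labels = ["good", "good", "bad", "bad"]

classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(reviews, labels)

print(classifier.predict(["friendly staff and delicious pasta"]))
```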


In the spring and summer of 2018, OpenAI and the Allen Institute for AI showed a simpler and more powerful method. They taught machine learning programs the differences between words—even homonyms like May the month, may the verb, and May the name—by looking at other words in the text, even if they’re in a different sentence. Models trained that way on very large collections of text picked up a kind of general sense for language and could then be specialized to particular tasks using relatively small collections of labeled data.
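The trick is easy to demonstrate with the public BERT release (again via the transformers library, as a stand-in for the specific OpenAI and Allen AI systems): hide a word and have the model predict it from the surrounding words, a task that requires no human labels at all.

```python
# Sketch of the masked-word pretraining idea, using the public
# bert-base-uncased checkpoint via the transformers library as a
# stand-in for the systems named above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The model guesses the hidden word from everything around it;
# no human ever had to label this sentence.
for guess in fill("She parked the car on a steep [MASK] with no curb.")[:3]:
    print(guess["token_str"], round(guess["score"], 3))
```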

Allen AI christened its system ELMo, for Embeddings from Language Models. Google’s researchers continued the Sesame Street theme in October 2018, when they announced BERT, their own still-more-powerful take on the new way for machine learning to learn language. Like the systems from OpenAI and Allen AI, Google’s software set new records on AI language tests, such as answering questions about a passage of text.
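Those question-answering tests look roughly like this in practice (a sketch using distilbert-base-cased-distilled-squad, a small BERT-descended model fine-tuned for the SQuAD benchmark and distributed through the transformers library; the passage is invented): the model extracts the span of text that answers the question.

```python
# Sketch of extractive question answering with a BERT-family model
# (distilbert fine-tuned on SQuAD, distributed with transformers);
# the passage below is invented for illustration.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

passage = ("BERT was announced by Google researchers in October 2018 and "
           "set new records on reading-comprehension benchmarks.")
result = qa(question="When was BERT announced?", context=passage)
print(result["answer"], result["score"])  # extracted span plus a confidence
```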

“People are very excited, because progress is so quick,” says Jeff Wu, a research engineer who has worked on OpenAI’s language projects. One side effect: Researchers have had to invent new and more difficult tests for software on tasks such as basic reading comprehension.

That doesn’t mean BERT is ready to critique your college essay. “Language is incredibly subtle and nuanced,” Nayak said. Each time Google improves the search box’s facility with language, he says, people submit more complex and challenging queries, effectively raising the bar for Google’s reading robots.

