Google’s Search Relations team answered several questions about webpage indexing on the latest episode of the ‘Search Off The Record’ podcast.
The topics discussed were how to block Googlebot from crawling specific sections of a page and how to prevent Googlebot from accessing a site altogether.
Google’s John Mueller and Gary Illyes answered the questions examined in this article.
Blocking Googlebot From Specific Web Page Sections
When asked how to stop Googlebot from crawling specific web page sections, such as “also bought” areas on product pages, Mueller said it’s impossible.
“The short version is that you can’t block crawling of a specific section on an HTML page,” Mueller said.
He went on to offer two potential strategies for dealing with the issue, stressing that neither is an ideal solution.
Mueller suggested using the data-nosnippet HTML attribute to prevent text from appearing in a search snippet.
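For illustration, here is how that attribute might be applied to a small piece of page markup (the surrounding HTML is hypothetical):

    <p>Customers who viewed this item
      <span data-nosnippet>also bought the matching travel case</span>.
    </p>

Text wrapped in data-nosnippet can still be crawled and indexed; it simply won’t be shown in the search snippet.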
Alternatively, you could load that content via an iframe or JavaScript file whose source is blocked by robots.txt, although he cautioned against this approach.
“Using a robotted iframe or JavaScript file can cause problems in crawling and indexing that are hard to diagnose and resolve,” Mueller stated.
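For context, a sketch of what that setup might look like, using hypothetical file paths, with the widget’s JavaScript disallowed in robots.txt so Googlebot can’t fetch it:

    # robots.txt
    User-agent: Googlebot
    Disallow: /assets/also-bought.js

    <!-- in the product page -->
    <div id="also-bought"></div>
    <script src="/assets/also-bought.js"></script>

Because Googlebot can’t load the script, it never sees the injected content, but as Mueller notes, the rendering and indexing problems that blocked resources cause can be hard to track down.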
He reassured listeners that if the content in question is reused across multiple pages, it’s not a problem that needs fixing.
“There’s no need to block Googlebot from seeing that kind of duplication,” he added.
Blocking Googlebot From Accessing A Website
In response to a question about preventing Googlebot from accessing any part of a site, Illyes provided an easy-to-follow solution.
“The simplest way is robots.txt: if you add a disallow: / for the Googlebot user agent, Googlebot will leave your site alone for as long as you keep that rule there,” Illyes explained.
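In practice, that rule in robots.txt looks like this:

    User-agent: Googlebot
    Disallow: /

Other crawlers are unaffected unless you add rules for their user agents as well.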
For those seeking a more robust solution, Illyes offered another method:
“If you want to block even network access, you’d need to create firewall rules that load our IP ranges into a deny rule,” he said.
See Google’s official documentation for a list of Googlebot’s IP addresses.
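As a rough sketch, a single firewall rule on a Linux server using iptables might look like this (the CIDR range shown is only an example; pull the full, current list from Google’s published documentation):

    # example range only; replace with Google's current published Googlebot IP list
    iptables -A INPUT -s 66.249.64.0/19 -j DROP

A rule like this blocks traffic at the network level, so Googlebot never reaches the web server at all.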
In Summary
Though it’s impossible to block Googlebot from crawling specific sections of an HTML page, methods such as the data-nosnippet attribute offer some control over what appears in search results.
To block Googlebot from your site entirely, a simple disallow rule in your robots.txt file will do the trick, though stricter measures, such as firewall rules that deny Google’s IP ranges, are also available.
Featured image generated by the author using Midjourney.
Source: Google Search Off The Record