Why Google Indexes Blocked Web Pages
Google’s John Mueller answered a question about why Google indexes pages that are disallowed from crawling by robots.txt, and why it’s safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs …
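For context, here is a minimal robots.txt sketch of the kind of rule involved; the wildcard pattern is an assumed example, not one quoted from the article. The key point it illustrates is that Disallow blocks crawling, not indexing, which is why blocked URLs can still appear in Search Console reports.

```
# Hypothetical robots.txt rule blocking crawls of URLs that carry
# a query string (Google supports the * wildcard in Disallow paths).
User-agent: *
Disallow: /*?

# Note: this prevents Googlebot from fetching the page content.
# Google may still index the bare URL if other pages link to it,
# since robots.txt controls crawling, not indexing.
```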