Audit your robots.txt file

Contributors

@brandon-leuangpaseuth @andreea-macoveiciuc-content-expert


Business Benefits

Ensure that no mistake in your robots.txt file is blocking search engines from areas you want them to crawl.


Access Google’s robots.txt Tester with the Google account you use for Google Search Console and check for warnings and errors.

Google highlights any warnings and errors in the File Editor and displays their total count below it.

Use Bing’s Robots.txt tester or third-party tools like TechnicalSEO’s robots.txt Validator and Testing Tool to check for errors with other search engine crawlers.

Google’s robots.txt Tester only tests against Google crawlers such as Googlebot, so use Bing’s tester or a third-party tool like the one on TechnicalSEO.com for any other search engines you are concerned about.
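
If you want to spot-check a specific URL outside these tools, Python’s standard-library robots.txt parser can run the same allow/block test for any crawler name. This is a minimal sketch; the example.com URLs and the list of user agents are placeholders for your own site and the crawlers you care about.

```python
# Minimal sketch: check one URL against a live robots.txt for several crawlers.
# The URLs below are placeholders -- swap in your own domain and page.
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://www.example.com/robots.txt"     # assumption: your robots.txt
PAGE_URL = "https://www.example.com/blog/some-post/"  # assumption: a page you want crawled

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses the live robots.txt

# User agents for the crawlers you care about (Googlebot, Bingbot, everything else).
for user_agent in ("Googlebot", "Bingbot", "*"):
    allowed = parser.can_fetch(user_agent, PAGE_URL)
    print(f"{user_agent}: {'allowed' if allowed else 'BLOCKED'} for {PAGE_URL}")
```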

Check the Coverage report in Google Search Console for any errors or warnings.

Be sure to tick the ‘Excluded URLs’ checkbox and review the error types to see whether any page URLs have been blocked by your robots.txt file.

Remove any pages that shouldn’t be crawled from your sitemap.

  • See the Additional Resources for more information on how to do this.
  • Remove noindex pages from your sitemap
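
One way to find sitemap entries that robots.txt blocks is to script the comparison. This is a minimal sketch assuming a simple, single-file XML sitemap; the example.com addresses are placeholders for your own sitemap and robots.txt URLs.

```python
# Minimal sketch: list sitemap URLs that the live robots.txt blocks for Googlebot,
# so you know which entries to review or remove from the sitemap.
import xml.etree.ElementTree as ET
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # assumption: your sitemap
ROBOTS_URL = "https://www.example.com/robots.txt"    # assumption: your robots.txt

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()

# Pull every <loc> entry out of the sitemap (namespace-agnostic match).
with urlopen(SITEMAP_URL) as response:
    tree = ET.parse(response)
locs = [el.text.strip() for el in tree.iter() if el.tag.endswith("loc") and el.text]

blocked = [url for url in locs if not parser.can_fetch("Googlebot", url)]
print(f"{len(blocked)} of {len(locs)} sitemap URLs are blocked by robots.txt:")
for url in blocked:
    print("  ", url)
```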

Fix the issues with your robots.txt file based on the recommendations below:

  • If any of the robots.txt testers flag errors, reach out to your web developer to fix them.
  • If there are warnings, ask your web developer for advice on how to handle them.
  • If the error is that a page was submitted for indexing but is blocked by the robots.txt file, check whether the page should be indexed.
  • If it should, remove the restriction by deleting the line in your robots.txt file that looks like this: Disallow: /the-page-url/. Discuss this with your web developer before you do it.
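
To point your developer at the exact line, a short script can print every simple Disallow rule in the live robots.txt that matches the page’s path. This is a rough sketch with placeholder example.com URLs; it ignores wildcards and user-agent grouping, so treat its output as a starting point for the conversation.

```python
# Minimal sketch: print robots.txt lines whose Disallow value matches the path of a
# page you want indexed. Placeholder URLs; only handles plain prefix rules.
from urllib.parse import urlparse
from urllib.request import urlopen

ROBOTS_URL = "https://www.example.com/robots.txt"   # assumption: your robots.txt
PAGE_URL = "https://www.example.com/the-page-url/"  # assumption: the page to keep indexed

path = urlparse(PAGE_URL).path
with urlopen(ROBOTS_URL) as response:
    lines = response.read().decode("utf-8", errors="replace").splitlines()

for number, line in enumerate(lines, start=1):
    directive, _, value = line.partition(":")
    rule = value.strip()
    if directive.strip().lower() == "disallow" and rule and path.startswith(rule):
        print(f"line {number}: {line.strip()}")
```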

Last edited by @hesh_fekry 2023-11-14T16:20:35Z