Google will no longer support noindex directive in robots.txt

Mishel Shaji

As announced on the official Google Webmaster Central blog, Google stopped supporting all unpublished rules in robots.txt on September 1, 2019. This means that directives such as noindex are no longer valid in robots.txt.

From the announcement: "In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we're retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options."

Implementing noindex

Google lists the following alternatives. You can choose whichever method suits your website.

  • Noindex in robots meta tags: Supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed. For example: <meta name="robots" content="noindex"> (a server-side sketch follows this list).
  • 404 and 410 HTTP status codes: Both status codes mean that the page does not exist, so such URLs are dropped from Google's index once they are crawled and processed (also shown in the sketch below).
  • Password protection: Unless markup is used to indicate subscription or paywalled content, hiding a page behind a login will generally remove it from Google's index.
  • Disallow in robots.txt: Search engines can only index pages that they know about, so blocking the page from being crawled usually means its content won't be indexed. Although a search engine may still index a URL based on links from other pages without seeing the content itself, Google aims to make such pages less visible in the future. A short robots.txt example is given at the end of this post.
  • Search Console Remove URL tool: The tool is a quick and easy method to remove a URL temporarily from Google's search results.
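
If your pages are served by an application server, the first two options can be implemented directly in the response. Below is a minimal sketch using Express with TypeScript; the framework choice and the route paths are assumptions for illustration, not something prescribed by Google's announcement.

  // Sends the noindex directive as an X-Robots-Tag response header,
  // and answers 410 Gone for a page that has been removed.
  import express from "express";

  const app = express();

  // Same effect as <meta name="robots" content="noindex">,
  // but delivered in the HTTP response headers.
  app.get("/internal-report", (_req, res) => {
    res.set("X-Robots-Tag", "noindex");
    res.send("<h1>Internal report</h1>");
  });

  // 410 tells crawlers the page is gone permanently, so the URL is
  // dropped from the index once it is crawled and processed.
  app.get("/retired-page", (_req, res) => {
    res.status(410).send("This page has been removed.");
  });

  app.listen(3000);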

Note: With the Search Console Remove URL tool, a page is removed from search results only temporarily, for about 90 days.
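
For the Disallow option, the rule lives in robots.txt rather than on the page itself. A minimal example that blocks crawling of one directory (the /private/ path is purely illustrative):

  User-agent: *
  Disallow: /private/

Keep in mind that Disallow only blocks crawling; as noted above, a URL linked from other pages can still be indexed without its content, so the noindex meta tag or header remains the more reliable choice when a page must stay out of the index.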