In this article, we will learn what a robots.txt file is and why it is required for a website's SEO.
What is a robots.txt file?
A robots.txt file tells search engines what your website’s rules of engagement are.
A big part of doing SEO is about sending the right signals to search engines, and the robots.txt file is one of the ways to communicate your crawling preferences to them.
Search engines regularly check a website’s robots.txt file to see if there are any instructions for crawling the website. We call these instructions directives.
If there’s no robots.txt file present or if there are no applicable directives, search engines will crawl the entire website.
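The lookup that crawlers perform can be sketched with Python's standard-library robots.txt parser. This is a minimal illustration, not how any particular search engine is implemented, and the rules and paths below are made-up examples:

```python
from urllib import robotparser

# Hypothetical robots.txt contents, supplied as lines instead of
# fetching them over the network.
rules = [
    "User-agent: *",
    "Disallow: /wp-admin/",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# A path matched by a Disallow directive is off-limits to the crawler.
print(parser.can_fetch("*", "/wp-admin/"))  # False

# A path with no matching directive may be crawled.
print(parser.can_fetch("*", "/blog/some-post"))  # True
```

If the file is missing or contains no applicable directives, `can_fetch` returns True for every path, which mirrors the behavior described above: the entire website gets crawled.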
The robots.txt file plays a big role in SEO.
You can use it to prevent search engines from crawling specific parts of your website and to give search engines helpful tips on how they can best crawl your website.
The robots.txt file should reside in the root of your website (e.g. http://www.example.com/robots.txt).
What does a robots.txt file look like?
User-agent: *
Disallow: /wp-admin/

User-agent: indicates which search engines the directives that follow are meant for.
*: indicates that the directives are meant for all search engines.
Disallow: a directive indicating what content is not accessible to the user-agent.
/wp-admin/: the path that is inaccessible to the user-agent.
Note: each directive should be on a separate line; otherwise, search engines may get confused when parsing the robots.txt file.
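To illustrate the one-directive-per-line rule, a slightly fuller robots.txt might look like the following. The paths and the extra user-agent group are hypothetical examples, not recommendations for any real site:

User-agent: *
Disallow: /wp-admin/
Disallow: /cart/

User-agent: Googlebot
Disallow: /private/

Each User-agent line starts a new group of directives, and every Disallow rule within a group sits on its own line.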