Sunday, 2 August 2015

       What is robots.txt?
A robots.txt file is a plain-text file containing a set of instructions for search engine crawlers, telling them how your site may be crawled on the web. The file can either be left empty or filled with directives. An empty file serves no real purpose, so you should give it some content. What that content looks like is explained below.

User-agent: *
Disallow:

User-agent: *
Allow: /

The two blocks above have exactly the same effect, that is:

1) User-agent: * => This line addresses every search engine crawler in the world, because * is a wildcard that matches any user agent. If you want to target one specific crawler instead, you name it here.

2) Disallow: or Allow: / => Both mean the same thing here. Disallow is used to block crawlers from parts of your site, but when its value is left blank it blocks nothing, so everything may be crawled. Allow: / does the same thing explicitly, permitting the whole site. By combining User-agent lines with Allow and Disallow rules, you can selectively choose which crawlers are allowed or disallowed on your website.
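As an example of that selective use, a robots.txt like the one below would keep one named crawler (Googlebot is used here just for illustration, and the directory name is made up) out of a private folder while leaving the rest of the site open to everyone:

```
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:
```

Each User-agent block applies only to the crawler it names, and the * block is the fallback for everyone else.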

What never to write in a robots.txt file:

User-agent: *
Disallow: /

If you write that code in your robots.txt, your pages will never be indexed by a search engine: it tells every crawler to stay away from the entire site. Don't be confused by the "/" symbol; on its own it stands for the site root, so Disallow: / blocks everything, while an empty Disallow: blocks nothing.
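If you only want to keep crawlers out of part of your site rather than all of it, disallow a specific path instead of the root. For instance (the directory name here is just an illustration):

```
User-agent: *
Disallow: /private/
```

This blocks every crawler from anything under /private/ but leaves the rest of the site crawlable.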

You can check your file in Google Webmaster Tools. First you have to connect your website to Google Webmaster Tools by verifying that you own the domain, for example by placing its verification file on your site. Once you can access your Webmaster Tools account, you can submit your pages for indexing manually and test whether your robots.txt is working properly or not.
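You can also sanity-check your rules locally before uploading the file. Here is a minimal sketch using Python's standard-library robots.txt parser; the rules and URLs below are made up for illustration:

```python
# Check robots.txt rules locally with Python's standard library.
from urllib.robotparser import RobotFileParser

# Example rules: block every crawler from /private/ only.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Paths under /private/ are blocked for every crawler...
print(rp.can_fetch("*", "https://example.com/private/page.html"))
# ...while everything else stays crawlable.
print(rp.can_fetch("*", "https://example.com/index.html"))
```

Swapping in the dangerous `Disallow: /` rule from above would make `can_fetch` return False for every URL on the site.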

If you have any further questions or queries, you can ask them directly in the box below.
Thank you

