Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol.
It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:
User-agent: *
Disallow: /
The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
There are two important considerations when using /robots.txt:
* robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention to it.
* the /robots.txt file is a publicly available file. Anyone can see what sections of your server you don't want robots to use.
So don't try to use /robots.txt to hide information.
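Because the file is just an ordinary public web resource, anyone can download it and read the paths you have listed. A minimal sketch, again using Python's standard library:

from urllib.request import urlopen

# Anyone can retrieve a site's robots.txt and see exactly which
# paths it asks robots to avoid.
with urlopen("http://www.example.com/robots.txt") as response:
    print(response.read().decode("utf-8"))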
Remember to use all lower case for the filename: "robots.txt", not "Robots.TXT".
Read the full article at http://www.robotstxt.org/robotstxt.html