To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file. Conversely and importantly, webmasters can also notify the search engines about the existence and importance of pages with a sitemap.xml file. (Both files are placed in the root directory of the domain.)
Fortunately for the webmaster, the major search engines provide various tools to help manage both Sitemap and Robot files.
To gain an understanding of both ‘protocols’, I’ll discuss them briefly below.
Sitemaps (Inclusion Protocol)
The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs in the site. This allows search engines to crawl the site more intelligently. Sitemaps are a URL inclusion protocol and complement robots.txt, a URL exclusion protocol.
The webmaster can generate a Sitemap containing all accessible URLs on the site and submit it to search engines. Since Google, MSN, Yahoo! and Ask all support the same protocol, a single Sitemap is enough to keep the biggest search engines informed of updated pages.
Sitemaps supplement and do not replace the existing crawl-based mechanisms that search engines already use to discover URLs. By submitting Sitemaps to a search engine, a webmaster is only helping that engine’s crawlers to do a better job of crawling their site(s). Using this protocol does not guarantee that web pages will be included in search indexes, nor does it influence the way that pages are ranked in search results.
The following is a cut-down version of the sitemap.xml for this website. WordPress, via a plugin, automatically updates this file each time a new post or page is written.
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
  <url>
    <loc>http://www.simonwhatley.co.uk/</loc>
    <lastmod>2008-10-08T14:50:16+00:00</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.simonwhatley.co.uk/big-city-little-people</loc>
    <lastmod>2008-10-08T14:50:16+00:00</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.1</priority>
  </url>
</urlset>
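Because the file is plain XML, it can also be generated with a short script rather than a plugin. The following is a minimal sketch using Python's standard library; the URLs and dates are placeholders, not real entries:

```python
from xml.etree import ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(entries):
    """Build a sitemap XML string from (loc, lastmod, changefreq, priority) tuples."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, lastmod, changefreq, priority in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
        ET.SubElement(url, "changefreq").text = changefreq
        ET.SubElement(url, "priority").text = priority
    return ET.tostring(urlset, encoding="unicode")

# Placeholder entry for illustration only
xml = build_sitemap([
    ("http://www.example.com/", "2008-10-08T14:50:16+00:00", "daily", "1.0"),
])
print(xml)
```

A real generator would pull the URL list and last-modified dates from the site's database, which is essentially what the WordPress plugin does on each publish.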
More information about sitemaps can be found on the Sitemaps.org website.
Robots (Exclusion Protocol)
The robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable. Robots are often used by search engines to categorise and archive web sites. The standard complements Sitemaps, a robot inclusion standard for websites.
A robots.txt file on a website will function as a request that specified robots ignore specified files or directories in their search. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorisation of the site as a whole.
The protocol, however, is purely advisory. It relies on the cooperation of the web robot, so that marking an area of a site out of bounds with robots.txt does not guarantee privacy. Some web site administrators have tried to use the robots file to make private parts of a website invisible to the rest of the world, but the file is necessarily publicly available and its content is easily checked by anyone with a web browser.
For example, the following tells all crawlers not to enter four directories of a website:
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/
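A compliant crawler checks these rules before requesting a page. Python's standard library includes a parser for the format, so the effect of the rules above can be verified directly (a sketch; the page paths are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# The same rules as above; a crawler would normally fetch them from /robots.txt
rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Anything under a disallowed directory is off-limits to all user agents
print(parser.can_fetch("*", "/private/secret.html"))  # False
print(parser.can_fetch("*", "/about.html"))           # True
```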
Exclusion can also be achieved on a page-level basis using a meta tag placed in the HTML head of a web page. The robots meta tag controls whether search engine spiders are allowed to index a page, and whether they should follow links from it.
A common example could be as follows:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en-GB" xml:lang="en">
<head profile="http://gmpg.org/xfn/11">
  <title>Simon Whatley</title>
  <meta name="robots" content="index,follow" />
</head>
<body>
</body>
</html>
A word of caution, though: meta tags are not the most reliable way to keep content out of search engines, since, like robots.txt, they are only honoured by cooperating crawlers.
More information about Robots.txt files can be found on the Robotstxt.org website.
The top three search providers each have their own webmaster tools interface. Google's offering is the most advanced, but it is good practice to register with and submit information to all three.
Links to their services are provided below:
Ask doesn’t have an interface. However, you can still ping their Submission Service by appending your sitemap URL to
http://submissions.ask.com/ping?sitemap=