

Google, Yahoo and Microsoft Webmaster Tools

by Simon. Average Reading Time: about 4 minutes.

The first step to increasing your site’s visibility on the top search engines such as Google, Yahoo! and MSN is to help their respective robots crawl and index your site.

To keep undesirable content out of the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file. Conversely, and just as importantly, webmasters can also notify the search engines of the existence and relative importance of pages with a sitemap.xml file. (Both files are placed in the root directory of the domain.)
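
For this site, for example, that means the two files would sit at http://www.simonwhatley.co.uk/robots.txt and http://www.simonwhatley.co.uk/sitemap.xml respectively.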

Fortunately for the webmaster, the major search engines provide various tools to help manage both Sitemap and Robot files.

To gain an understanding of both ‘protocols’, I’ll discuss them briefly below.

Sitemaps (Inclusion Protocol)

The Sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. A Sitemap is an XML file that lists the URLs for a site. It allows webmasters to include additional information about each URL: when it was last updated, how often it changes, and how important it is in relation to other URLs in the site. This allows search engines to crawl the site more intelligently. Sitemaps are a URL inclusion protocol and complement robots.txt, a URL exclusion protocol.

The webmaster can generate a Sitemap containing all accessible URLs on the site and submit it to the search engines. Since Google, MSN, Yahoo! and Ask now all support the same protocol, a single Sitemap keeps the biggest search engines informed of your latest pages.
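
Most of the major engines will also pick up a Sitemap automatically if its location is advertised in robots.txt. A single extra line is enough; for example, using this site’s Sitemap:

Sitemap: http://www.simonwhatley.co.uk/sitemap.xml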

Sitemaps supplement and do not replace the existing crawl-based mechanisms that search engines already use to discover URLs. By submitting Sitemaps to a search engine, a webmaster is only helping that engine’s crawlers to do a better job of crawling their site(s). Using this protocol does not guarantee that web pages will be included in search indexes, nor does it influence the way that pages are ranked in search results.

The following is a cut-down version of the sitemap.xml for this website. WordPress, via a plugin, automatically updates this file each time a new post or page is written.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
	<url>
		<loc>http://www.simonwhatley.co.uk/</loc>
		<lastmod>2008-10-08T14:50:16+00:00</lastmod>
		<changefreq>daily</changefreq>
		<priority>1.0</priority>
	</url>
	<url>
		<loc>http://www.simonwhatley.co.uk/big-city-little-people</loc>
		<lastmod>2008-10-08T14:50:16+00:00</lastmod>
		<changefreq>monthly</changefreq>
		<priority>0.1</priority>
	</url>
</urlset>

More information about sitemaps can be found on the Sitemaps.org website.

Robots (Exclusion Protocol)

The robots exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website that is otherwise publicly viewable. Robots are often used by search engines to categorise and archive websites. The standard complements Sitemaps, a robot inclusion standard for websites.

A robots.txt file on a website will function as a request that specified robots ignore specified files or directories in their search. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorisation of the site as a whole.

The protocol, however, is purely advisory. It relies on the cooperation of the web robot, so marking an area of a site out of bounds with robots.txt does not guarantee privacy. Some website administrators have tried to use the robots file to make private parts of a website invisible to the rest of the world, but the file is necessarily publicly available and its contents are easily checked by anyone with a web browser.

For example, the following tells all crawlers not to enter four directories of a website:

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /tmp/
Disallow: /private/
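
Rules can also be aimed at a particular, named crawler. As a sketch (with Googlebot used purely as an illustration), the following keeps one robot out of a single directory while leaving the rest of the site open to every other robot:

User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: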

Exclusion can also be achieved on a page-by-page basis using a meta tag placed in the HTML head of a web page. The robots meta tag controls whether search engine spiders are allowed to index a page, or not, and whether they should follow links from that page, or not.

A common example could be as follows:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
	"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" dir="ltr" lang="en-GB" xml:lang="en">
	<head profile="http://gmpg.org/xfn/11">
		<title>Simon Whatley</title>
		<meta name="robots" content="index,follow" />
	</head>
	<body>
	</body>
</html>
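
To exclude a page instead, the values are simply reversed. As a minimal example, the following asks spiders not to index the page and not to follow any of its links:

<meta name="robots" content="noindex,nofollow" />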

A word of caution, though: meta tags are not the most reliable way to prevent search engines from indexing the content of your website, because a spider still has to fetch the page before it can read the tag.

More information about robots.txt files can be found on the Robotstxt.org website.

Webmaster Tools

The top three search providers each have their own webmaster tools interface. The Google offering is the most advanced, but it’s good practice to register with, and submit information to, all three.

Their respective services are Google Webmaster Tools, Yahoo! Site Explorer and Live Search Webmaster Tools.

Ask doesn’t offer an equivalent interface. However, you can still ping its Submission Service using the URL http://submissions.ask.com/ping?sitemap= in conjunction with your Sitemap URL.
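
For this site, for example, the complete ping URL would look something like the following (the Sitemap URL should be URL-encoded if it contains any special characters):

http://submissions.ask.com/ping?sitemap=http://www.simonwhatley.co.uk/sitemap.xml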



Other articles I recommend

Optimise Your URLs for Web Crawlers and Indexing

Many questions about website architecture, crawling and indexing, and even ranking issues can be boiled down to one central issue: How easy is it for search engines to crawl your site?

Enabling Search Engine Safe URLs with Apache and htaccess

An increasingly popular technique among websites, and blogs in particular, is making URLs search engine friendly, or safe, on the premise that doing so will help search engine optimisation. Removing the obscure query string element of a URL and replacing it with keyword-rich alternatives not only makes the URL more readable for a human being, but also for the venerable robots that allow our page content to be found in the first place.

Canonical URLs – What Are They All About?

Carpe diem on any duplicate content worries: Google, Yahoo and Microsoft now support a format that allows you to publicly specify your preferred version of a URL. If your site has identical or vastly similar content that’s accessible through multiple URLs, this format provides you with more control over the URL returned in search results. It also helps to make sure that properties such as link popularity are consolidated to your preferred version.