Search Engine Ranking


Thursday, 9 November 2006

The number of pages Googlebot crawls

Posted on 16:19 by Unknown
The Googlebot activity reports in webmaster tools show you the number of pages of your site Googlebot has crawled over the last 90 days. We've seen some of you asking why this number might be higher than the total number of pages on your sites.


Googlebot crawls pages of your site based on a number of things, including:
  • pages it already knows about
  • links from other web pages (within your site and on other sites)
  • pages listed in your Sitemap file
More specifically, Googlebot doesn't access pages; it accesses URLs. And the same page can often be accessed via several URLs. Consider the home page of a site that can be accessed from the following four URLs:
  • http://www.example.com/
  • http://www.example.com/index.html
  • http://example.com
  • http://example.com/index.html
Although all four URLs lead to the same page, any of them may be used in links to that page. When Googlebot follows these links, a count of four is added to the activity report.
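
To make the counting concrete, here is a minimal sketch (in Python, and not how Googlebot actually works) that collapses URL variants the way you might when estimating distinct pages: it lowercases the host, assumes the "www" host and the directory URL without index.html are the preferred forms, and drops any fragment. Run against the four home-page URLs above, it reports four crawled URLs but only one page.

from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Collapse common URL variants to one form (illustrative only)."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    netloc = netloc.lower()
    if not netloc.startswith("www."):
        netloc = "www." + netloc            # assume the www host is preferred
    if path.endswith("/index.html"):
        path = path[:-len("index.html")]    # assume the directory URL is preferred
    if path == "":
        path = "/"
    return urlunsplit((scheme, netloc, path, query, ""))  # drop any #fragment

urls = [
    "http://www.example.com/",
    "http://www.example.com/index.html",
    "http://example.com",
    "http://example.com/index.html",
]

print(len(urls), "URLs crawled")                  # 4 URLs in the activity report
print(len({normalize(u) for u in urls}), "page")  # 1 distinct page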

Many other scenarios can lead to multiple URLs for the same page. For instance, a page may have several named anchors, such as:
  • http://www.example.com/mypage.html#heading1
  • http://www.example.com/mypage.html#heading2
  • http://www.example.com/mypage.html#heading3
And dynamically generated pages can often be reached by multiple URLs, such as:
  • http://www.example.com/furniture?type=chair&brand=123
  • http://www.example.com/hotbuys?type=chair&brand=123
As you can see, when you consider that each page on your site might have multiple URLs that lead to it, the number of URLs that Googlebot crawls can be considerably higher than the total number of pages on your site.

Of course, you (and we) only want one version of the URL to be returned in the search results. Not to worry -- this is exactly what happens. Our algorithms select a version to include, and you can provide input on this selection process.

Redirect to the preferred version of the URL
You can do this using a 301 (permanent) redirect. In the first example, which shows four URLs that point to a site's home page, you may want to redirect index.html to www.example.com/. And you may want to redirect example.com to www.example.com so that any URLs that begin with one version are redirected to the other. Note that you can do this latter redirect with the Preferred Domain feature in webmaster tools. (If you also use a 301 redirect, make sure that this redirect matches what you set for the preferred domain.)
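
As a quick sanity check that the redirects are in place, a small script like the one below (a sketch using Python's standard library and the example.com placeholder hosts from above) fetches each non-preferred URL without following redirects and prints the status code and Location header; for a correctly configured permanent redirect you'd expect 301 and the preferred URL.

import http.client

# Non-preferred URLs that should 301 to http://www.example.com/
# (example.com is the placeholder host from the examples above)
checks = [
    ("example.com", "/"),
    ("example.com", "/index.html"),
    ("www.example.com", "/index.html"),
]

for host, path in checks:
    conn = http.client.HTTPConnection(host, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()       # http.client does not follow redirects
    print(host + path, resp.status, resp.getheader("Location"))
    conn.close()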

Block the non-preferred versions of a URL with a robots.txt file
For dynamically generated pages, you may want to block the non-preferred versions using pattern matching in your robots.txt file. (Note that not all search engines support pattern matching, so check the guidelines for each search engine bot you're interested in.) For instance, in the third example, which shows two URLs that point to a page about the chairs available from brand 123, the "hotbuys" section rotates periodically while the content is always available from a primary and permanent location. In that case, you may want to index the first version and block the "hotbuys" version. To do this, add the following to your robots.txt file:

User-agent: Googlebot
Disallow: /hotbuys?*

To ensure that this directive will actually block and allow what you intend, use the robots.txt analysis tool in webmaster tools. Just add the directive to the robots.txt section on that page, list the URLs you want to check in the "Test URLs" section, and click the Check button. For this example, the tool would report the "hotbuys" URL as blocked by the Disallow line and the "furniture" URL as allowed.
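
If you'd like to sanity-check a pattern before pasting it into the tool, the sketch below approximates this kind of wildcard matching (where * matches any run of characters) with a regular expression. It's only a rough stand-in for the analysis tool, run against the two example URLs above.

import re

def blocked_by(pattern, path):
    """Rough approximation of a Disallow pattern with * wildcards (not the official matcher)."""
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex, path) is not None

disallow = "/hotbuys?*"
for path in ["/hotbuys?type=chair&brand=123", "/furniture?type=chair&brand=123"]:
    print(path, "blocked" if blocked_by(disallow, path) else "allowed")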

Don't worry about links to anchors: while Googlebot will crawl each link, our algorithms will index the URL without the anchor.

And if you don't provide input such as that described above, our algorithms do a really good job of picking a version to show in the search results.
Posted in crawling and indexing, webmaster tools
