Search Engine Ranking


Monday, 18 December 2006

Deftly dealing with duplicate content

Posted on 14:28 by Unknown
At the recent Search Engine Strategies conference in freezing Chicago, many of us Googlers were asked questions about duplicate content. We recognize that there are many nuances and a bit of confusion on the topic, so we'd like to help set the record straight.

What is duplicate content?
Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar. Most of the time when we see this, it's unintentional or at least not malicious in origin: forums that generate both regular and stripped-down mobile-targeted pages, store items shown (and -- worse yet -- linked) via multiple distinct URLs, and so on. In some cases, content is duplicated across domains in an attempt to manipulate search engine rankings or garner more traffic via popular or long-tail queries.

What isn't duplicate content?
Though we do offer a handy translation utility, our algorithms won't view the same article written in English and Spanish as duplicate content. Similarly, you shouldn't worry about occasional snippets (quotes and otherwise) being flagged as duplicate content.

Why does Google care about duplicate content?
Our users typically want to see a diverse cross-section of unique content when they do searches. In contrast, they're understandably annoyed when they see substantially the same content within a set of search results. Also, webmasters become sad when we show a complex URL (example.com/contentredir?value=shorty-george&lang=en) instead of the pretty URL they prefer (example.com/en/shorty-george.htm).

What does Google do about it?
During our crawling and when serving search results, we try hard to index and show pages with distinct information. This filtering means, for instance, that if your site has articles in "regular" and "printer" versions and neither set is blocked in robots.txt or via a noindex meta tag, we'll choose one version to list. In the rare cases in which we perceive that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. However, we prefer to focus on filtering rather than ranking adjustments ... so in the vast majority of cases, the worst thing that'll befall webmasters is to see the "less desired" version of a page shown in our index.

How can webmasters proactively address duplicate content issues?
  • Block appropriately: Rather than letting our algorithms determine the "best" version of a document, you may wish to help guide us to your preferred version. For instance, if you don't want us to index the printer versions of your site's articles, disallow those directories or make use of regular expressions in your robots.txt file.
  • Use 301s: If you have restructured your site, use 301 redirects ("RedirectPermanent") in your .htaccess file to smartly redirect users, the Googlebot, and other spiders.
  • Be consistent: Endeavor to keep your internal linking consistent; don't link to /page/ and /page and /page/index.htm.
  • Use TLDs: To help us serve the most appropriate version of a document, use top level domains whenever possible to handle country-specific content. We're more likely to know that .de indicates Germany-focused content, for instance, than /de or de.example.com.
  • Syndicate carefully: If you syndicate your content on other sites, make sure they include a link back to the original article on each syndicated article. Even with that, note that we'll always show the (unblocked) version we think is most appropriate for users in each given search, which may or may not be the version you'd prefer.
  • Use the preferred domain feature of webmaster tools: If other sites link to yours using both the www and non-www version of your URLs, you can let us know which way you prefer your site to be indexed.
  • Minimize boilerplate repetition: For instance, instead of including lengthy copyright text on the bottom of every page, include a very brief summary and then link to a page with more details.
  • Avoid publishing stubs: Users don't like seeing "empty" pages, so avoid placeholders where possible. This means not publishing (or at least blocking) pages with zero reviews, no real estate listings, etc., so users (and bots) aren't subjected to a zillion instances of "Below you'll find a superb list of all the great rental opportunities in [insert cityname]..." with no actual listings.
  • Understand your CMS: Make sure you're familiar with how content is displayed on your Web site, particularly if it includes a blog, a forum, or related system that often shows the same content in multiple formats.
  • Don't worry, be happy: Don't fret too much about sites that scrape (misappropriate and republish) your content. Though annoying, it's highly unlikely that such sites can negatively impact your site's presence in Google. If you do spot a case that's particularly frustrating, you are welcome to file a DMCA request to claim ownership of the content and have us deal with the rogue site.
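The "block appropriately" and "use 301s" tips above can be sketched concretely. The directory name, file paths, and domain below are hypothetical examples, not prescriptions; adjust them to your own site's layout. If printer-friendly copies of your articles live under a single directory, a robots.txt rule can keep them out of the index:

```
# robots.txt -- assumes printer versions live under a /print/ directory
User-agent: *
Disallow: /print/
```

And after a site restructure, a permanent redirect via Apache's mod_alias in .htaccess maps the old URL to the new one:

```
# .htaccess -- hypothetical old and new paths
Redirect permanent /old/shorty-george.htm http://example.com/en/shorty-george.htm
```

`Redirect permanent` sends a 301 status code, so users, Googlebot, and other spiders all end up at the new URL, and any link-based credit is consolidated on the version you prefer.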

In short, a general awareness of duplicate content issues and a few minutes of thoughtful preventative maintenance should help you to help us provide users with unique and relevant content.
Posted in crawling and indexing