xisto Community

# Does Using Copied Content Affect Your SERP?

## Recommended Posts

I have read a lot of articles and posts saying that search engines don't like copied content, that they penalize websites that use it, and so on.
Today, when I was searching for "Best ways to use Google Analytics", I found two websites with the same content word for word, both ranking on the first page of the results.
The URLs of the sites under discussion are:
http://forums.xisto.com/no_longer_exists/
http://forums.xisto.com/no_longer_exists/

Now my question is: isn't all the information posted on different websites saying that GOOGLE DOES NOT LIKE COPIED CONTENT false? I mean, if Google really doesn't like copied content, what are both of these websites doing on the first page of the results?

##### Share on other sites

Maybe Google loves people talking about Google? Maybe the rule "no copied text" is true, except when mentioning Google? And maybe these sites would have a far better rank if they had no copied text? And maybe your site will get no such exception, and if you have copied text your own rank will be poor?

##### Share on other sites

Seems to me like the answer to your question is no. I might be wrong, but as usual, I blame it on it being 4 AM. Anyway, I did some quick research and found two interesting links. The first one is Wikipedia. Not always reliable, but why not give it a shot. It explains what PageRank is and how it is calculated. I'll quote the part about the calculation that I found important.

Assume a small universe of four web pages: A, B, C and D. The initial approximation of PageRank would be evenly divided between these four documents. Hence, each document would begin with an estimated PageRank of 0.25.

In the original form of PageRank, initial values were simply 1. This meant that the sum of all pages was the total number of pages on the web. Later versions of PageRank (see the formulas below) would assume a probability distribution between 0 and 1. Here a simple probability distribution will be used; hence the initial value of 0.25.

If pages B, C, and D each only link to A, they would each confer 0.25 PageRank to A. All PageRank PR( ) in this simplistic system would thus gather to A because all links would be pointing to A.

PR(A) = PR(B) + PR(C) + PR(D).

This is 0.75.

Suppose that page B has a link to page C as well as to page A, while page D has links to all three pages. The value of the link-votes is divided among all the outbound links on a page. Thus, page B gives a vote worth 0.125 to page A and a vote worth 0.125 to page C. Only one third of D's PageRank is counted for A's PageRank (approximately 0.083).
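The quoted calculation can be sketched in a few lines of Python. This is a simplified model with no damping factor, just the raw "split your vote among your out-links" rule from the excerpt; the `links` table encodes the second scenario (B links to A and C, C links to A, D links to all three):

```python
# Simplified PageRank step, following the Wikipedia example above.
# links[p] lists the pages that p links to.
links = {
    "A": [],              # A has no out-links, so its rank "leaks" in this toy model
    "B": ["A", "C"],
    "C": ["A"],
    "D": ["A", "B", "C"],
}

pr = {page: 0.25 for page in links}  # initial estimate: evenly divided

def iterate(pr, links):
    """One PageRank step: each page splits its rank among its out-links."""
    new = {page: 0.0 for page in links}
    for page, outgoing in links.items():
        if outgoing:
            share = pr[page] / len(outgoing)  # the vote is divided evenly
            for target in outgoing:
                new[target] += share
    return new

pr = iterate(pr, links)
# A receives 0.25/2 from B, 0.25 from C, and 0.25/3 from D:
print(round(pr["A"], 3))  # 0.458
```

This matches the numbers in the excerpt: B contributes 0.125 and D contributes roughly 0.083 to A. Real PageRank adds a damping factor and iterates until the values converge.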

If you want to read the rest of the article, here's the link: Wikipedia - PageRank

I also found some interesting facts on how to improve your ranking, written by Google themselves, I think. Here's an interesting quote:

Design and content guidelines

* Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link.

* Offer a site map to your users with links that point to the important parts of your site. If the site map has an extremely large number of links, you may want to break the site map into multiple pages.

* Keep the links on a given page to a reasonable number.

* Create a useful, information-rich site, and write pages that clearly and accurately describe your content.

* Think about the words users would type to find your pages, and make sure that your site actually includes those words within it.

* Try to use text instead of images to display important names, content, or links. The Google crawler doesn't recognize text contained in images. If you must use images for textual content, consider using the "ALT" attribute to include a few words of descriptive text.

* Make sure that your <title> elements and ALT attributes are descriptive and accurate.

* Check for broken links and correct HTML.

* If you decide to use dynamic pages (i.e., the URL contains a "?" character), be aware that not every search engine spider crawls dynamic pages as well as static pages. It helps to keep the parameters short and the number of them few.

* Review our image guidelines for best practices on publishing images.

As you can see, there is nothing about copied text in the entire "content" part. Interesting article, though.
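One of the guidelines quoted above, "check for broken links and correct HTML," is easy to automate. Here is a minimal, hypothetical sketch using only Python's standard library; the URL you pass in is whatever page you want to audit, and a real crawler would also respect robots.txt, handle timeouts, and rate-limit its requests:

```python
# Hypothetical broken-link checker for a single page (standard library only).
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def broken_links(page_url):
    """Return the links on page_url that fail to load."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for href in collector.links:
        target = urljoin(page_url, href)  # resolve relative links
        try:
            urlopen(target)
        except (HTTPError, URLError):
            broken.append(target)
    return broken
```

Running `broken_links("https://example.com/")` would list any dead links on that page, which is exactly the kind of housekeeping the guideline asks for.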

Hope this was some good information.

Regards,

/Feelay


##### Share on other sites

I think you are right.

Maybe the rule "no copied text" is true, except when mentioning Google?

Well, that is an injustice then.

And maybe these sites would have a far better rank if they had no copied text?

There is a possibility.

And maybe your site will have no such exceptions, and if you have copied text your own rank will be poor?

Not at all. In fact, after seeing pages like that, I am planning to start using some copied text on my own pages.

##### Share on other sites


All the information you provided was very helpful. I think it is best to experiment with things like this to learn the truth for yourself. But common sense says Google should dislike copied content, because otherwise the internet would become a mess and there would be no original content left; after all, Google is the master.

##### Share on other sites

The reason two duplicate copies exist on the front page is that Google can't decide which of the two is the original. If the copying site links back to the original source, Google discards the copy in favor of the page it points to. If it doesn't, the algorithm gets confused and ranks the pages on backlinks, social buzz, and other factors that keep the link from a certain domain at the top. AdSense, on the other hand, penalizes duplicated content heavily; you can verify that with HubPages users and how duplicate content affects them. Some people do mix duplicated content from various sites and act like syndicators, but they never gain authority as a search engine's favorite resource. The reason the penalty for duplicated or syndicated content is never spelled out is that people could then build such resources deliberately and knock each other off the front page. So Google never says exactly what earns a penalty, to keep competitors in the market from crushing each other.
