Avoiding duplicate content

18.04.2016


On the Internet, duplicate content is a problem that arises in many different forms. Web users are often faced with identical content when doing online research. Search engines react to identical content with negative rankings, and website operators are frustrated when Google & Co. suddenly stop listing their pages in the search results.

Duplicate content describes a many-faceted phenomenon that occurs on the Internet when identical or similar content appears on various websites. It can involve similar texts or text modules within the same domain (so-called internal duplicate content) or across different web domains (external duplicate content). Duplicate content can quickly make or break the successful operation of a website. Duplicate texts give Internet users the impression that their online research has been pointlessly misrouted, and at the same time, deliberate or unintentional duplicate content can have a negative influence on the ranking assigned by the relevant search engines.

Original content for Web users and search engines

Search engines such as Google have an interest in finding high-quality, unique content on the Internet, delivering it as suitable search results, and thereby meeting the expectations of their online users. Because of the way search engine robots work, they cannot determine which website is more relevant when several websites carry identical content. They therefore usually list only one of the pages in their results, or downgrade the rankings of both. To curb this undesired effect it is necessary to avoid duplicate content, which in addition to text can also include identical images or source code.

Duplicate content within the same web page

Internal duplicate content means that the same website content can be called up via several URL versions. Many web pages can be accessed with or without the www subdomain, or via different protocols (http or https). This inconsistent URL structure causes the pages to be evaluated as separate websites. The same applies to print versions of articles that a web page links to; these also signify duplicate content to search engines. The more duplicate content is found on a website, the greater the risk that Google or Bing will give it a lower ranking. Many webmasters also list the same information several times on their Internet pages; online shops or travel operators, for instance, frequently publish a series of only slightly altered descriptions of products or destinations.
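To illustrate the point, here is a minimal Python sketch using a hypothetical domain (example.com) that shows how several URL variants of the same article all refer to one and the same page; the preference for https, the www subdomain and the removal of a /print suffix are assumptions chosen for this example, not a general rule.

from urllib.parse import urlsplit, urlunsplit

# Hypothetical variants that all serve the identical article, but that a
# crawler would treat as separate pages.
variants = [
    "http://example.com/article",
    "https://example.com/article",
    "http://www.example.com/article",
    "https://www.example.com/article/print",
]

def preferred_form(url: str) -> str:
    """Collapse a URL variant to one preferred form: https, with www, no /print suffix."""
    scheme, host, path, _, _ = urlsplit(url)
    if not host.startswith("www."):
        host = "www." + host
    if path.endswith("/print"):
        path = path[: -len("/print")]
    return urlunsplit(("https", host, path, "", ""))

for url in variants:
    print(url, "->", preferred_form(url))
# All four variants map to https://www.example.com/article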

Mirror sites and duplicate content on external websites

Cases of external duplicate content, i.e., when similar content is found on different web domains, are particularly problematic for websites. Mirror sites, i.e., similar Internet documents that are stored under various domain addresses, may confuse the Googlebot: the search engine cannot recognize which of the duplicate pages is most relevant.
Duplicate content also occurs when the former Internet page continues to exist after a move to another domain. The voluntary distribution of one’s own website content to other Internet pages (so-called syndication) or the unmarked quoting of texts from other operators is likewise regarded as duplicate content. Online shops often use texts from manufacturer sites to provide their customers with relevant product information, and press releases, for example, are frequently published on several media or web portals. The unauthorized copying of content from other websites is a severe error on the part of the administrator: webmasters run the risk that search engines regard this as a deceptive maneuver (for example, to obtain better ranking results) and punish the page accordingly.

Create exciting added value for the user

To avoid duplicate content on one’s own website domains, it is important to refrain from presenting the same texts more than once (for example in various categories), from reusing identical text modules, and from publishing placeholder pages. In addition, one ought to ensure uniform link structures within the web documents. After a website move, visitors who call up the old address can be redirected permanently to the new one by means of a “Redirect permanent” instruction in the server’s root directory. In the case of multiple page versions (print pages, archive pages) it is advisable to mark the preferred online page by means of a canonical tag. Quotes must be identified with a reference to their origin, and image sources must always be provided.
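As an illustration of the two measures just mentioned, the following sketch written in Python with the Flask framework combines a permanent (301) redirect with a canonical tag. The article itself refers to the Apache “Redirect permanent” directive placed in the server’s root directory; this Flask version is only an equivalent example, and the domain www.example.com and the routes are hypothetical.

from flask import Flask, redirect, request

app = Flask(__name__)

CANONICAL_HOST = "www.example.com"  # hypothetical preferred domain

@app.before_request
def redirect_to_preferred_host():
    # Permanently (301) redirect requests that arrive under any other
    # host name, so only one URL version of each page remains reachable.
    if request.host != CANONICAL_HOST:
        target = "https://" + CANONICAL_HOST + request.full_path.rstrip("?")
        return redirect(target, code=301)

@app.route("/article")
@app.route("/article/print")
def article():
    # Both the normal and the print version carry a canonical tag that
    # points search engines to the same preferred URL.
    return (
        "<html><head>"
        '<link rel="canonical" href="https://www.example.com/article">'
        "</head><body>Article text ...</body></html>"
    )

In a classic Apache setup, the same redirect would typically be achieved with a “Redirect permanent” or “RewriteRule” entry in the .htaccess file of the server’s root directory.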

When designing an Internet presence that is ideal for both users and search engines, one must always try to create texts that are unique and appealing and that captivate the reader’s interest. A webmaster who is committed to building user loyalty can create real added value by offering exciting new information instead of repetitive text elements. This fulfills the basic objective of an online offer, increases the reader’s dwell time on the website and, because it appeals to users, is generally rewarded by search engines with higher rankings.

This article was written by Clickblogger on April 18, 2016.


Clickblogger

This blog entry has been written by a Clickworker.