Search engine optimization

Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a search engine's "natural" or un-paid ("organic") search results. In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users. SEO may target different kinds of search, including image search, local search, video search, academic search,[1] news search and industry-specific vertical search engines.

As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience. Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

The plural of the abbreviation SEO can also refer to "search engine optimizers", those who provide SEO services.


Contents


1 History
2 Relationship with search engines
3 Methods
3.1 Getting indexed
3.2 Preventing crawling
3.3 Increasing prominence
4 White hat versus black hat techniques
5 As a marketing strategy
6 International markets
7 Legal precedents
8 See also
9 Notes
10 External links

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[2] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.

Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997.[3] On May 2, 2007,[4] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[5] that SEO is a "process" involving manipulation of keywords, and not a "marketing service." The reviewing attorney accepted the argument that while "SEO" cannot be trademarked when it refers to a generic process of manipulating keywords, it can be a service mark for providing "marketing services...in the field of computers."[6]

Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[7] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[8]
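The crawl-extract-index cycle described at the start of this section can be made concrete with a short sketch. This is a deliberately minimal illustration, not any engine's actual implementation; the seed URL, the in-memory index, and the five-page crawl budget are placeholder assumptions.

```python
# Toy sketch of the early crawl-extract-index pipeline: a "spider" downloads
# a page, an "indexer" records which words it contains and where they occur,
# and extracted links are handed to a scheduler for crawling at a later date.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects outbound links and visible words from one HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

    def handle_data(self, data):
        self.words.extend(data.lower().split())


scheduler = deque(["https://example.com/"])  # placeholder seed URL
index = {}    # word -> list of (url, position): the indexer's output
seen = set()

while scheduler and len(seen) < 5:           # tiny crawl budget for the demo
    url = scheduler.popleft()
    if url in seen or not url.startswith(("http://", "https://")):
        continue
    seen.add(url)
    try:
        page = urlopen(url).read().decode("utf-8", errors="replace")
    except OSError:
        continue                             # unreachable pages are skipped
    parser = PageParser(url)
    parser.feed(page)
    for position, word in enumerate(parser.words):
        index.setdefault(word, []).append((url, position))
    scheduler.extend(parser.links)           # schedule discovered links

print(sorted(index)[:10])                    # a peek at the indexed words
```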


By relying so heavily on factors such as keyword density, which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

Graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[9] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.

Page and Brin founded Google in 1998.[10] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[11] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[12]

By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated that Google ranks sites using more than 200 different signals.[13] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages.
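The random-surfer model behind PageRank, described above, lends itself to a brief worked example. The sketch below runs the standard power iteration over an invented four-page link graph; the damping factor d = 0.85 is the value conventionally quoted for the algorithm, and everything else is an illustrative assumption, not any engine's production code.

```python
# Minimal power-iteration PageRank over a toy link graph, illustrating the
# random-surfer model: PR(p) = (1 - d)/N + d * sum(PR(q)/outlinks(q)) over
# all pages q linking to p. The four-page graph is made up for the example.
links = {          # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

d = 0.85           # probability the surfer follows a link (vs. jumping)
n = len(links)
rank = {page: 1.0 / n for page in links}

for _ in range(50):                     # iterate until ranks stabilize
    new_rank = {}
    for page in links:
        # Weight flowing in from every page q that links to this page.
        inbound = sum(rank[q] / len(links[q])
                      for q in links if page in links[q])
        new_rank[page] = (1 - d) / n + d * inbound
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

In this toy graph, C collects inbound weight from three pages and ends up ranked highest, matching the intuition that a random surfer is most likely to land there and that some links count for more than others.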
Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions.[14] Patents related to search engines can provide information to better understand search engines.[15]

In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged-in users.[16] In 2008, Bruce Clay said that "ranking is dead" because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.[17]

In 2007, Google announced a campaign against paid links that transfer PageRank.[18] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Google Bot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[19] As a result of this change, the use of nofollow leads to evaporation of PageRank. In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the usage of iframes, Flash and JavaScript.[20]

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[21]

On June 8, 2010 a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up quicker on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[22] Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[23]

In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically websites have copied content from one another and benefited in search engine rankings by engaging in this practice; however, Google implemented a new system which punishes sites whose content is not unique.[24] In April 2012, Google launched the Google Penguin update, the goal of which was to penalize websites that used manipulative techniques to improve their rankings on the search engine.[25] In September 2013, Google released the Google Hummingbird update, an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.

Relationship with search engines

By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[26]

In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with search engine optimization and related topics.[27]

Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[28] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[29] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[30]

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. Major search engines provide information and guidelines to help with site optimization.[31][32] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[33] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the crawl rate, and track the web pages' index status.

Methods

Getting indexed

The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Two major directories, the Yahoo! Directory and DMOZ, both require manual submission and human editorial review.[34] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[35] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[36] this was discontinued in 2009.[37]

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[38]

Preventing crawling

Main article: Robots Exclusion Standard

To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results, because those pages are considered search spam.[39]
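As an illustration of the exclusion mechanism just described, the sketch below uses Python's standard urllib.robotparser module; the rules and URLs are invented for the example, and a real robots.txt would be served from the root of the domain.

```python
# Sketch of the Robots Exclusion Standard: a polite crawler parses the
# site's robots.txt and checks each URL before fetching it. The rules and
# URLs here are invented; a real file lives at e.g.
# https://example.com/robots.txt. Pages can also opt out individually with
# a robots meta tag such as <meta name="robots" content="noindex">.
from urllib.robotparser import RobotFileParser

# Rules a webmaster might serve to keep internal search results and the
# shopping cart out of crawlers' reach:
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /search",
    "Disallow: /cart",
])

# A well-behaved spider consults the parsed rules before each request.
print(rp.can_fetch("MyBot", "https://example.com/about"))         # True
print(rp.can_fetch("MyBot", "https://example.com/search?q=seo"))  # False
```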
Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to the most important pages may improve its visibility.[40] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[40] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's meta data, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the canonical link element[41] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score (a short sketch of this appears below, after the discussion of white hat and black hat techniques).

White hat versus black hat techniques

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO or black hat SEO.[42] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[43]

An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[31][32][44] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[45] although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Another category sometimes used is grey hat SEO. This sits between the black hat and white hat approaches: the methods employed avoid the site being penalized, but do not act in producing the best content for users, being instead entirely focused on improving search engine rankings.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for the use of deceptive practices.[46] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.[47]
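Returning to the URL normalization technique mentioned under "Increasing prominence" above, the sketch below collapses common duplicate URL variants into one canonical form. The specific rules (lowercasing the host, dropping default ports and utm_* tracking parameters, trimming trailing slashes) are illustrative assumptions, not a universal standard; in practice the preferred version is declared with a canonical link element in the page head or a 301 redirect.

```python
# Illustrative URL normalization: collapse variants of the same page to one
# canonical form so inbound links are counted together. In HTML, the
# preferred variant would be declared as:
#   <link rel="canonical" href="https://example.com/page">
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode


def canonicalize(url: str) -> str:
    scheme, netloc, path, query, _fragment = urlsplit(url)
    netloc = netloc.lower().removesuffix(":80").removesuffix(":443")
    path = path.rstrip("/") or "/"          # trim trailing slash
    kept = [(k, v) for k, v in parse_qsl(query)
            if not k.startswith("utm_")]    # drop tracking parameters
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))


variants = [
    "https://Example.com:443/page/",
    "https://example.com/page?utm_source=news",
    "https://example.com/page",
]
# All three variants collapse to the same canonical URL, so links to any
# of them can be credited to a single page.
assert len({canonicalize(u) for u in variants}) == 1
print(canonicalize(variants[0]))            # https://example.com/page
```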

As a marketing strategy

SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay per click (PPC) campaigns, depending on the site operator's goals.[48] A successful Internet marketing campaign may also depend upon building high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[49]

SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[50] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic.

According to Google's CEO, Eric Schmidt, in 2010 Google made over 500 algorithm changes – almost 1.5 per day.[51] It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.[52]

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition. In 2003, Danny Sullivan stated that Google represented about 75% of all searches.[53] In markets outside the United States, Google's share is often larger, and Google remains the dominant search engine worldwide as of 2007.[54] As of 2006, Google had an 85–90% market share in Germany.[55] While there were hundreds of SEO firms in the US at that time, there were only about five in Germany.[55] As of June 2008, the market share of Google in the UK was close to 90%, according to Hitwise.[56] That market share is achieved in a number of countries.

As of 2009, there are only a few large markets where Google is not the leading search engine. In most cases, when Google is not leading in a given market, it is lagging behind a local player. The most notable example markets are China, Japan, South Korea, Russia and the Czech Republic, where respectively Baidu, Yahoo! Japan, Naver, Yandex and Seznam are market leaders.

Successful search optimization for international markets may require professional translation of web pages, registration of a domain name with a top level domain in the target market, and web hosting that provides a local IP address. Otherwise, the fundamental elements of search optimization are essentially the same, regardless of language.[55]

Legal precedents

On October 17, 2002, SearchKing filed suit in the United States District Court, Western District of Oklahoma, against the search engine Google. SearchKing's claim was that Google's tactics to prevent spamdexing constituted a tortious interference with contractual relations. On May 27, 2003, the court granted Google's motion to dismiss the complaint because SearchKing "failed to state a claim upon which relief may be granted."[57][58]

In March 2006, KinderStart filed a lawsuit against Google over search engine rankings. KinderStart's website was removed from Google's index prior to the lawsuit, and the amount of traffic to the site dropped by 70%. On March 16, 2007 the United States District Court for the Northern District of California (San Jose Division) dismissed KinderStart's complaint without leave to amend, and partially granted Google's motion for Rule 11 sanctions against KinderStart's attorney, requiring him to pay part of Google's legal expenses.[59][60]
