Search Engine Optimization

First, what is SEO or Search Engine Optimization?

It is the process of getting traffic from the “free”, “organic”, “editorial” or “natural” search results on search engines.

Why is it important?

In today’s competitive market, SEO is more important than ever. Search engines serve millions of users per day looking for answers to their questions or for solutions to their problems.

Whether you run a blog or an online store, SEO can help your business grow and meet its objectives.

Search engine optimization is essential because:

  • The majority of search engine users are more likely to click on one of the top five suggestions in the search engine results pages (SERPs), so to take advantage of this and gain visitors to your web site or customers to your online store, you need to rank in the top positions.
  • SEO is not only about search engines but good SEO practices improve the user experience and usability of a web site.
  • Users trust search engines, and having a presence in the top positions for the keywords a user is searching for increases the web site’s trustworthiness.
  • SEO is good for the social promotion of your web site. People who find your web site by searching Google or Yahoo are more likely to promote it on Facebook, Twitter, Google+ or other social media channels.
  • SEO is important for the smooth running of a big web site. Web sites with more than one author can benefit from SEO both directly and indirectly. The direct benefit is a common framework (checklists) to use before publishing content on the site.
  • SEO can put you ahead of the competition. If two web sites are selling the same thing, the search optimized web site is more likely to have more customers and make more sales.

SEO Guide

How Do Search Engines Operate?

      Search engines have two major functions: crawling and building an index, and providing search users with a ranked list of the websites they’ve determined are the most relevant.

1. Crawling and Indexing

      Imagine the World Wide Web as a network of stops in a big city subway system. Each stop is a unique document (usually a web page, but sometimes a PDF, JPG, or other file). The search engines need a way to “crawl” the entire city and find all the stops along the way, so they use the best path available — links.

      In brief, the engines are responsible for:

      Crawling and Indexing: crawling and indexing the billions of documents, pages, files, news, videos, and media on the World Wide Web.

      Providing Answers: providing answers to user queries, most frequently through lists of relevant pages that they’ve retrieved and ranked for relevancy.

      The link structure of the web serves to bind all of the pages together.

      Links allow the search engines’ automated robots, called “crawlers” or “spiders,” to reach the many billions of interconnected documents on the web.

      Once the engines find these pages, they decipher the code from them and store selected pieces in massive databases, to be recalled later when needed for a search query. To accomplish the monumental task of holding billions of pages that can be accessed in a fraction of a second, the search engine companies have constructed datacenters all over the world.

      These monstrous storage facilities hold thousands of machines processing large quantities of information very quickly. When a person performs a search at any of the major engines, they demand results instantaneously; even a one- or two-second delay can cause dissatisfaction, so the engines work hard to provide answers as fast as possible.

 
2. Providing Answers

      Search engines are answer machines. When a person performs an online search, the search engine scours its corpus of billions of documents and does two things: first, it returns only those results that are relevant or useful to the searcher’s query; second, it ranks those results according to the popularity of the websites serving the information. It is both relevance and popularity that the process of SEO is meant to influence.

How do search engines determine relevance and popularity?

      To a search engine, relevance means more than finding a page with the right words. In the early days of the web, search engines didn’t go much further than this simplistic step, and search results were of limited value. Over the years, smart engineers have devised better ways to match results to searchers’ queries. Today, hundreds of factors influence relevance, and we’ll discuss the most important of these in this guide.

      Search engines typically assume that the more popular a site, page, or document, the more valuable the information it contains must be. This assumption has proven fairly successful in terms of user satisfaction with search results.

      Popularity and relevance aren’t determined manually. Instead, the engines employ mathematical equations (algorithms) to sort the wheat from the chaff (relevance), and then to rank the wheat in order of quality (popularity).

      These algorithms often comprise hundreds of variables. In the search marketing field, we refer to them as “ranking factors.” Moz crafted a resource specifically on this subject: Search Engine Ranking Factors.

How Do People Interact with Search Engines?

      One of the most important elements to building an online marketing strategy around SEO is empathy for your audience. Once you grasp what your target market is looking for, you can more effectively reach and keep those users.

      We like to say, “Build for users, not for search engines.” There are three types of search queries people generally make:

“Do” Transactional Queries: I want to do something, such as buy a plane ticket or listen to a song.
“Know” Informational Queries: I need information, such as the name of a band or the best restaurant in New York City.
“Go” Navigation Queries: I want to go to a particular place on the Internet, such as Facebook or the homepage of the NFL.

 

When visitors type a query into a search box and land on your site, will they be satisfied with what they find? This is the primary question that search engines try to answer billions of times each day. The search engines’ primary responsibility is to serve relevant results to their users. So ask yourself what your target customers are looking for and make sure your site delivers it to them.

 

It all starts with words typed into a small box.

Why Is Search Engine Marketing Necessary?

      An important aspect of SEO is making your website easy for both users and search engine robots to understand. Although search engines have become increasingly sophisticated, they still can’t see and understand a web page the same way a human can. SEO helps the engines figure out what each page is about, and how it may be useful for users.

 

A Common Argument Against SEO

We frequently hear statements like this:

      “No smart engineer would ever build a search engine that requires websites to follow certain rules or principles in order to be ranked or indexed. Anyone with half a brain would want a system that can crawl through any architecture, parse any amount of complex or imperfect code, and still find a way to return the most relevant results, not the ones that have been ‘optimized’ by unlicensed search marketing experts.”

But Wait …

      Imagine you posted online a picture of your family dog. A human might describe it as “a black, medium-sized dog, looks like a Lab, playing fetch in the park.” On the other hand, the best search engine in the world would struggle to understand the photo at anywhere near that level of sophistication. How do you make a search engine understand a photograph? Fortunately, SEO allows webmasters to provide clues that the engines can use to understand content. In fact, adding proper structure to your content is essential to SEO.

      Understanding both the abilities and limitations of search engines allows you to properly build, format, and annotate your web content in a way that search engines can digest. Without SEO, a website can be invisible to search engines.
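      For the family dog photo described above, for example, a webmaster could pass that same description along to the engines as a text clue with ordinary markup. This is only a minimal sketch, and the file name is hypothetical:

      <img src="/images/family-dog.jpg"
           alt="A black, medium-sized dog that looks like a Lab, playing fetch in the park">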

The Basics of Search Engine Friendly Design and Development

Search engines are limited in how they crawl the web and interpret content. A webpage doesn’t always look the same to you and me as it looks to a search engine. In this section, we’ll focus on specific technical aspects of building (or modifying) web pages so they are structured for both search engines and human visitors alike. Share this part of the guide with your programmers, information architects, and designers, so that all parties involved in a site’s construction are on the same page.

Indexable Content

      To perform better in search engine listings, your most important content should be in HTML text format. Images, Flash files, Java applets, and other non-text content are often ignored or devalued by search engine crawlers, despite advances in crawling technology. The easiest way to ensure that the words and phrases you display to your visitors are visible to search engines is to place them in the HTML text on the page. However, more advanced methods are available for those who demand greater formatting or visual display styles:

  1. Provide alt text for images. Assign alt attributes in HTML to images in GIF, JPG, or PNG format to give search engines a text description of the visual content (see the markup sketch after this list).
  2. Supplement search boxes with navigation and crawlable links.
  3. Supplement Flash or Java plug-ins with text on the page.
  4. Provide a transcript for video and audio content if the words and phrases used are meant to be indexed by the engines.
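      The markup sketch below illustrates each of these four options in one place. The file names, URLs, and wording are hypothetical; treat it as a sketch rather than a complete page:

      <!-- 1. Image with descriptive alt text -->
      <img src="/images/brown-shoes.jpg" alt="Brown leather walking shoes, side view">

      <!-- 2. Crawlable navigation link supplementing a search box -->
      <a href="/categories/boots">Browse all boots</a>

      <!-- 3. On-page text supplementing a Flash element -->
      <object data="/media/store-tour.swf" type="application/x-shockwave-flash">
        <p>Take a tour of our store: shoes, boots, and accessories for every season.</p>
      </object>

      <!-- 4. Transcript accompanying video or audio content -->
      <p>Transcript: Welcome to our store. In this video we look at how to care for leather shoes...</p>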

Seeing your site as the search engines do

      Many websites have significant problems with indexable content, so double-checking is worthwhile. By using tools like Google’s cache, SEO-browser.com, and the MozBar, you can see which elements of your content are visible and indexable to the engines. Take a look at Google’s text cache of this page you are reading now. See how different it looks?

“I have a problem with getting found. I built a huge Flash site for juggling pandas and I’m not showing up anywhere on Google. What’s up?”

Whoa! That’s what we look like?
Using the Google cache feature, we can see that to a search engine, JugglingPandas.com’s homepage doesn’t contain all the rich information that we see. This makes it difficult for search engines to interpret relevancy.

Hey, where did the fun go?

Uh oh … via Google cache, we can see that the page is a barren wasteland. There’s not even text telling us that the page contains the Axe Battling Monkeys. The site is built entirely in Flash, but sadly, this means that search engines cannot index any of the text content, or even the links to the individual games. Without any HTML text, this page would have a very hard time ranking in search results.

      It’s wise to not only check for text content but to also use SEO tools to double-check that the pages you’re building are visible to the engines. This applies to your images, and as we see below, to your links as well.
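      In the juggling pandas example, a set of plain HTML links to the individual games, placed alongside the Flash content, would give the engines a crawlable path to those pages. The URL below is hypothetical:

      <a href="/games/axe-battling-monkeys">Axe Battling Monkeys</a>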

Keyword Research

      It all begins with words typed into a search box. Keyword research is one of the most important, valuable, and high-return activities in the search marketing field. Ranking for the right keywords can make or break your website. By researching your market’s keyword demand, you can not only learn which terms and phrases to target with SEO, but also learn more about your customers as a whole. It’s not always about getting visitors to your site, but about getting the right kind of visitors. The usefulness of this intelligence cannot be overstated; with keyword research you can predict shifts in demand, respond to changing market conditions, and produce the products, services, and content that web searchers are actively seeking. In the history of marketing, there has never been such a low barrier to entry in understanding the motivations of consumers in virtually any niche.

How to Judge the Value of a Keyword

      How much is a keyword worth to your website? If you own an online shoe store, do you make more sales from visitors searching for “brown shoes” or “black boots”? The keywords visitors type into search engines are often available to webmasters, and keyword research tools allow us to find this information. However, those tools cannot show us directly how valuable it is to receive traffic from those searches. To understand the value of a keyword, we need to understand our own websites, make some hypotheses, test, and repeat—the classic web marketing formula.

A basic process for assessing a keyword’s value
Ask yourself…

Is the keyword relevant to your website’s content? Will searchers find what they are looking for on your site when they search using these keywords? Will they be happy with what they find? Will this traffic result in financial rewards or other organizational goals? If the answer to all of these questions is a clear “Yes!” then proceed …

Search for the term/phrase in the major engines

      Understanding which websites already rank for your keyword gives you valuable insight into the competition, and also how hard it will be to rank for the given term. Are there search advertisements running along the top and right-hand side of the organic results? Typically, many search ads mean a high-value keyword, and multiple search ads above the organic results often mean a highly lucrative and directly conversion-prone keyword.

Buy a sample campaign for the keyword at Google AdWords and/or Bing Adcenter

      If your website doesn’t rank for the keyword, you can nonetheless buy test traffic to see how well it converts. In Google AdWords, choose “exact match” and point the traffic to the relevant page on your website. Track impressions and conversion rate over the course of at least 200-300 clicks.

Using the data you’ve collected, determine the exact value of each keyword

      For example, assume your search ad generated 5,000 impressions in one day, of which 100 visitors came to your site and three converted for a total profit (not revenue!) of $300. In this case, a single visitor for that keyword is worth $3 to your business. With a #1 organic ranking, those same 5,000 daily impressions could generate a click-through rate of 18-36% (see the Slingshot SEO study for more on potential click-through rates), which would mean 900-1,800 visits per day at $3 each, or roughly $2,700-$5,400 per day, which works out to between 1 and 2 million dollars per year. No wonder businesses love search marketing!

How Usability, User Experience & Content Affect Search Engine Rankings

      The search engines constantly strive to improve their performance by providing the best possible results. While “best” is subjective, the engines have a very good idea of the kinds of pages and sites that satisfy their searchers. Generally, these sites have several traits in common:

  • Easy to use, navigate, and understand
  • Provide direct, actionable information relevant to the query
  • Professionally designed and accessible to modern browsers
  • Deliver high quality, legitimate, credible content
      Despite amazing technological advances, search engines can’t yet understand text, view images, or watch video the same way a human can. In order to decipher and rank content they rely on meta information (not necessarily meta tags) about how people interact with sites and pages, and this gives them insight into the quality of the pages themselves.

    How and Why Great Sites Rise to the Top of Search Engine Rankings

    The Impact of Usability and User Experience on Search Engine Rankings

          There are a limited number of variables that search engines can take into account directly, including keywords, links, and site structure. However, through linking patterns, user engagement metrics, and machine learning, the engines draw a considerable number of inferences about a given site. Usability and user experience are second-order influences on search engine ranking success. They provide an indirect but measurable benefit to a site’s external popularity, which the engines can then interpret as a signal of higher quality. This is called the “no one likes to link to a crummy site” phenomenon.

    Crafting a thoughtful, empathetic user experience helps ensure that visitors to your site perceive it positively, encouraging sharing, bookmarking, return visits, and inbound links—all signals that trickle down to the search engines and contribute to high rankings.

    Growing Popularity and Links

          For search engines that crawl the vast metropolis of the web, links are the streets between pages. Using sophisticated link analysis, the engines can discover how pages are related to each other and in what ways.

          Since the late 1990s search engines have treated links as votes for popularity and importance in the ongoing democratic opinion poll of the web. The engines themselves have refined the use of link data to a fine art, and use complex algorithms to perform nuanced evaluations of sites and pages based on this information.

          Links aren’t everything in SEO, but search professionals attribute a large portion of the engines’ algorithms to link-related factors (see Search Engine Ranking Factors). Through links, engines can analyze not only the popularity of websites and pages, based on the number and popularity of the pages linking to them, but also metrics like trust, spam, and authority. Trustworthy sites tend to link to other trusted sites, while spammy sites receive very few links from trusted sources (see MozTrust). Authority models, like those postulated in the Hilltop Algorithm, suggest that links are a very good way of identifying expert documents on a given subject.

    Thanks to this focus on algorithmic use and analysis of links, growing the link profile of a website is critical to gaining traction, attention, and traffic from the engines. For an SEO, link building is among the top tasks required for search ranking and traffic success.

    Search Engine Tools and Services

          SEOs tend to use a lot of tools. Some of the most useful are provided by the search engines themselves. Search engines want webmasters to create sites and content in accessible ways, so they provide a variety of tools, analytics and guidance. These free resources provide data points and unique opportunities for exchanging information with the engines.
    Below we explain the common elements that each of the major search engines supports and identify why they are useful.

    Common Search Engine Protocols

    1. Sitemaps

    Think of a sitemap as a list of files that give hints to the search engines on how they can crawl your website. Sitemaps help search engines find and classify content on your site that they may not have found on their own. Sitemaps also come in a variety of formats and can highlight many different types of content, including video, images, news, and mobile.

    You can read the full details of the protocols at Sitemaps.org. In addition, you can build your own sitemaps at XML-Sitemaps.com. Sitemaps come in three varieties:

    XML (Extensible Markup Language) (recommended format)

    Pros: This is the most widely accepted format for sitemaps. It is extremely easy for search engines to parse and can be produced by a plethora of sitemap generators. Additionally, it allows for the most granular control of page parameters (a minimal sketch appears after this list).
    Cons: Relatively large file sizes. Since XML requires an open tag and a close tag around each element, file sizes can get very large.

    RSS (Really Simple Syndication or Rich Site Summary)

    Pros: Easy to maintain. RSS sitemaps can easily be coded to automatically update when new content is added.
    Cons: Harder to manage. Although RSS is a dialect of XML, it is actually much harder to manage due to its updating properties.

    Txt (Text File)

    Pros: Extremely easy. The text sitemap format is one URL per line, up to 50,000 lines.
    Cons: Does not provide the ability to add meta data to pages.
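    Returning to the XML variety above, a minimal sitemap following the Sitemaps.org protocol looks like the sketch below; the URLs, dates, and values are hypothetical:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2012-06-01</lastmod>
        <changefreq>weekly</changefreq>
        <priority>1.0</priority>
      </url>
      <url>
        <loc>http://www.example.com/brown-shoes</loc>
        <lastmod>2012-05-20</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>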

    2. Robots.txt

    The robots.txt file, a product of the Robots Exclusion Protocol, is a file stored on a website’s root directory (e.g., www.google.com/robots.txt). The robots.txt file gives instructions to automated web crawlers visiting your site, including search crawlers.

    By using robots.txt, webmasters can indicate to search engines which areas of a site they would like to disallow bots from crawling, as well as indicate the locations of sitemap files and crawl-delay parameters. You can read more details about this at the robots.txt Knowledge Center page.

    The following commands are available:

    Disallow

    Prevents compliant robots from accessing specific pages or folders.

    Sitemap

    Indicates the location of a website’s sitemap or sitemaps.

    Crawl Delay

    Indicates the delay (in seconds) that a compliant robot should wait between successive requests to a server.

    An Example of Robots.txt
    #Robots.txt www.example.com/robots.txt

    # Allow all robots to crawl all content
    User-agent: *
    Disallow:

    # Don't allow spambot to crawl any pages
    User-agent: spambot
    Disallow: /

    Sitemap: http://www.example.com/sitemap.xml
    Warning: Not all web robots follow robots.txt. People with bad intentions (e.g., e-mail address scrapers) build bots that don’t follow this protocol; and in extreme cases they can use it to identify the location of private information. For this reason, it is recommended that the location of administration sections and other private sections of publicly accessible websites not be included in the robots.txt file. Instead, these pages can utilize the meta robots tag (discussed next) to keep the major search engines from indexing their high-risk content.

    3. Meta Robots

    The meta robots tag creates page-level instructions for search engine bots.

    The meta robots tag should be included in the head section of the HTML document.

    An Example of Meta Robots

    <html>
      <head>
        <title>The Best Webpage on the Internet</title>
        <meta name="robots" content="NOINDEX, NOFOLLOW">
      </head>
      <body>
        <h1>Hello World</h1>
      </body>
    </html>

    In the example above, “NOINDEX, NOFOLLOW” tells robots not to include the given page in their indexes, and also not to follow any of the links on the page.

    4. Rel=”Nofollow”

    Remember how links act as votes? The rel=nofollow attribute allows you to link to a resource, while removing your “vote” for search engine purposes. Literally, “nofollow” tells search engines not to follow the link, although some engines still follow them to discover new pages. These links certainly pass less value (and in most cases no juice) than their followed counterparts, but are useful in various situations where you link to an untrusted source.

    An Example of nofollow
    <a href="http://www.example.com" rel="nofollow">Example Link</a>
    In the example above, the value of the link would not be passed to example.com as the rel=nofollow attribute has been added.

    5. Rel=”canonical”

    Often, two or more copies of the exact same content appear on your website under different URLs. For example, the following URLs can all refer to a single homepage:

    http://www.example.com/
    http://www.example.com/default.asp
    http://example.com/
    http://example.com/default.asp
    http://Example.com/Default.asp
    To search engines, these appear as five separate pages. Because the content is identical on each page, this can cause the search engines to devalue the content and its potential rankings.

    The canonical tag solves this problem by telling search robots which page is the singular, authoritative version that should count in web results.

    An Example of rel=”canonical” for the URL http://example.com/default.asp

    <html>
      <head>
        <title>The Best Webpage on the Internet</title>
        <link rel="canonical" href="http://www.example.com" />
      </head>
      <body>
        <h1>Hello World</h1>
      </body>
    </html>

    In the example above, rel=canonical tells robots that this page is a copy of http://www.example.com, and should consider the latter URL as the canonical and authoritative one.

    Myths and Misconceptions About Search Engines

          Over the past several years, a number of misconceptions have emerged about how the search engines operate. For the beginner SEO, this causes confusion about what’s required to perform effectively. In this section, we’ll explain the real story behind the myths.
    Search Engine Submission

          In classical SEO times (the late 1990s), search engines had submission forms that were part of the optimization process. Webmasters and site owners would tag their sites and pages with keyword information, and submit them to the engines. Soon after submission, a bot would crawl and include those resources in their index. Simple SEO!

          Unfortunately, this process didn’t scale very well, and the submissions were often spam, so the practice eventually gave way to purely crawl-based engines. Since 2001, search engine submission has not only been unnecessary, it has become virtually useless. The engines all publicly note that they rarely use submitted URLs, and that the best practice is to earn links from other sites. This will expose your content to the engines naturally.

          You can still sometimes find submission pages (here’s one for Bing), but these are remnants of the past, and are unnecessary in the practice of modern SEO. If you hear a pitch from an SEO offering search engine submission services, run, don’t walk, to a real SEO. Even if the engines used the submission service to crawl your site, you’d be unlikely to earn enough link juice to be included in their indices or rank competitively for search queries.

    Search Engine Assistance

    Meta Tags

          Once upon a time, meta tags (in particular, the meta keywords tag) were an important part of the SEO process. You would include the keywords you wanted your site to rank for, and when users typed in those terms, your page could come up in a query. This process was quickly spammed to death, and was eventually dropped by all the major engines as an important ranking signal.

          Other tags, in particular the title tag and meta description tag (covered previously in this guide), are crucial for quality SEO. Additionally, the meta robots tag is an important tool for controlling crawler access. So, while understanding the functions of meta tags is important, they’re no longer the central focus of SEO.
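          As a quick reminder of where those tags live, the head section below is a minimal sketch; the title and description wording is purely illustrative:

    <head>
      <title>The Best Webpage on the Internet</title>
      <meta name="description" content="A short, human-readable summary that search engines may display as the snippet in their results.">
      <!-- The meta keywords tag is omitted; the major engines no longer treat it as a meaningful ranking signal -->
    </head>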

    Keyword Stuffing

    Ever see a page that just looks spammy? Perhaps something like:

    “Bob’s cheap Seattle plumber is the best cheap Seattle plumber for all your plumbing needs. Contact a cheap Seattle plumber before it’s too late.”

    Not surprisingly, a persistent myth in SEO revolves around the concept that keyword density (the number of times a given keyword appears on a page divided by the total number of words on the page) is used by the search engines for relevancy and ranking calculations.

    Despite being disproved time and again, this myth has legs. Many SEO tools still feed on the concept that keyword density is an important metric. It’s not. Ignore it and use keywords intelligently and with usability in mind. The value from an extra 10 instances of your keyword on the page is far less than earning one good editorial link from a source that doesn’t think you’re a search spammer.
    Paid Search Helps Bolster Organic Results

    Put on your tin foil hats; it’s time for the most common SEO conspiracy theory: spending on search engine advertising (pay per click, or PPC) improves your organic SEO rankings.

    In our considerable experience and research, we’ve never seen evidence that paid advertising positively affects organic search results. Google, Bing, and Yahoo! have all erected walls in their organizations specifically to prevent this type of crossover.

    At Google, advertisers spending tens of millions of dollars each month have noted that even they cannot get special access or consideration from the search quality or web spam teams. So long as the search engines maintain this separation, the notion that paid search bolsters organic results should remain a myth.

    Measuring and Tracking Success

          They say that if you can measure it, then you can improve it. In search engine optimization, measurement is critical to success. Professional SEOs track data about rankings, referrals, links, and more to help analyze their SEO strategy and create road maps for success.

    RECOMMENDED METRICS TO TRACK

    Although every business is unique, and every website has different metrics that matter, the following list is nearly universal. Here we’re covering metrics critical to SEO; more general metrics are not included. For a more comprehensive look at web analytics, check out Choosing Web Analytics Key Performance Indicators by Avinash Kaushik.

    1. Search Engine Share of Referring Visits

    Every month, keep track of the contribution of each traffic source for your site, including:

    Direct Navigation: Typed in traffic, bookmarks, email links without tracking codes, etc.
    Referral Traffic: From links across the web or in trackable email, promotional, and branding campaign links
    Search Traffic: Queries that sent traffic from any major or minor web search engine

    Knowing both the percentage and exact numbers will help you identify weaknesses and give you a basis for comparison over time. For example, if you see that traffic has spiked dramatically but it comes from referral links with low relevance, it’s not time to get excited. On the other hand, if search engine traffic falls dramatically, you may be in trouble. You should use this data to track your marketing efforts and plan your traffic acquisition efforts.

    2. Search Engine Referrals

    Three major engines make up 95%+ of all search traffic in the US: Google, plus Bing and Yahoo! (the Yahoo!-Bing alliance). For most countries outside the US, 80%+ of search traffic comes solely from Google (with a few notable exceptions including Russia and China). Measuring the contribution of your search traffic from each engine is useful for several reasons:

    Compare Performance vs. Market Share

    Compare the volume contribution of each engine with its estimated market share.

    Get Visibility Into Potential Drops

          If your search traffic should drop significantly at any point, knowing the relative and exact contributions from each engine will be essential to diagnosing the issue. If all the engines drop off equally, the problem is almost certainly one of accessibility. If Google drops while the others remain at previous levels, it’s more likely to be a penalty or devaluation of your SEO efforts by that singular engine.

    Uncover Strategic Value

          It’s very likely that some efforts you undertake in SEO will have greater positive results on some engines than on others. For example, we’ve observed that on-page optimization tactics like better keyword inclusion and targeting reap greater benefits with Bing and Yahoo! than with Google. On the other hand, gaining specific anchor text links from a large number of domains has a more positive impact on Google than the others. If you can identify the tactics that are having success with one engine, you’ll better know how to focus your efforts.

    3. Visits Referred by Specific Search Engine Terms and Phrases

          The keywords that send traffic are another important piece of your analytics pie. You’ll want to keep track of these on a regular basis to help identify new trends in keyword demand, gauge your performance on key terms, and find terms that are bringing significant traffic that you’re potentially under-optimized for.

          You may also find value in tracking search referral counts for terms outside the top terms and phrases—those that are most valuable to your business. If the trend lines are pointing in the wrong direction, you know efforts need to be undertaken to course-correct. Search traffic worldwide has consistently risen over the past 15 years, so a decline in the quantity of referrals is troubling. Check for seasonality issues (keywords that are only in demand certain times of the week/month/year) and rankings (have you dropped, or has search volume ebbed?).

    4. Conversion Rate by Search Query Term/Phrase

          When it comes to the bottom line for your organization, few metrics matter as much as conversion. For example, 5.80% of visitors who reached Moz with the query “SEO Tools” signed up to become members during that visit. This is a much higher conversion rate than most of the thousands of keywords used to find our site. With this information, we can now do two things:

    1. Checking our rankings, we see that we only rank #4 for “SEO Tools.” Working to improve this position will undoubtedly lead to more conversions.
    2. Because our analytics will also tell us what page these visitors landed on (mostly https://moz.com/free-seo-tools), we can focus our efforts on improving the visitor experience on that page.

    The real value from this simplistic tracking comes from the low-hanging fruit: finding keywords that continually send visitors who convert to paying customers, and increasing focus on rankings and on improving the landing pages that visitors reach. While conversion rate tracking from keyword phrase referrals is certainly important, it’s never the whole story. Dig deeper and you can often uncover far more interesting and applicable data about how conversion starts and ends on your site.

    5. Number of pages receiving at least one visit from search engines

          Knowing the number of pages that receive search engine traffic is an essential metric for monitoring overall SEO performance. From this number, we can get a glimpse into indexation—the number of pages from our site the engines are keeping in their indexes. For most large websites (50,000+ pages), mere inclusion is essential to earning traffic, and this metric delivers a trackable number that’s indicative of success or failure. As you work on issues like site architecture, link acquisition, XML sitemaps, and uniqueness of content and meta data, the trend line should rise, showing that more and more pages are earning their way into the engines’ results. Pages receiving search traffic is, quite possibly, the best long tail metric around.

          While other analytics data points are of great importance, those mentioned above should be universally applied to get the maximum value from your SEO campaigns.

    Google’s (not provided) Keywords

          In 2011, Google announced it would no longer pass keyword query data through its referrer string for logged-in users. This meant that instead of showing organic keyword data in Google Analytics, visits from users logged into Google would show the keyword query as “(not provided).” At the time, Google said they expected this to affect less than 10% of all search queries. But soon webmasters reported up to 20% of their search queries were from keywords (not provided).

          Over the ensuing two years, webmasters began reporting much higher volumes of (not provided) keywords as more and more searches were performed using encrypted search (i.e., the https:// version of Google). With the launch of Google+, more logged-in users pushed this number even higher. Over time, smart SEOs have identified methods to contend with the (not provided) situation and shared tips on reclaiming some of this data.
