Penguin 4.0: Was It Worth the Wait?

Posted by Dr-Pete

For almost two years (707 days, to be precise), one question has dominated the SEO conversation: “When will Google update Penguin?” Today, we finally have the answer. Google announced that a Penguin update is rolling out and that Penguin is now operating in real-time.

September has been a very volatile month for the SERPs (more on that later in the post), but here’s what we’re seeing in MozCast for the past two weeks, including last night:

In a normal month, a temperature of 82°F would be slightly interesting, but it’s hardly what many people were expecting, and September 2016 has been anything but a normal month. It takes time to refresh the entire index, though, so it’s likely Penguin volatility will continue for a few days. I’ll update this graph over the next few days if anything more interesting happens.

What happened in September?

September has been the most volatile month for SERPs since I started tracking temperatures in April of 2012 (just a couple of weeks before Penguin 1.0). To the best of my knowledge at this time, the volatility during the rest of September was not due to the Penguin 4.0 roll-out.

There are no official statements (currently) about other updates, but we’re aware of two things. First, many local SEOs saw major shifts around September 1st, when MozCast tracked a high of 108°F. This has been dubbed the Possum Update, and reports are that local pack URLs also moved substantially (MozCast does not track this data). We did see an overall drop in local pack presence in our data set on that day (about 7.3% day-over-day).

Second, between September 13th and 14th there was a massive drop in SERPs with image (vertical) results on page 1 in our data set. This caused substantial volatility, as image results occupy an organic position and so those SERPs got an extra organic result on page 1. The temperature that day was 111°F. Here’s the two-week graph of SERPs with image results on page 1:

SERPs with images in our data set dropped 49% overnight and have not recovered. I’ve hand-checked dozens of these results and have verified the drop. In some cases, images moved to deeper pages. It’s unclear if other vertical/universal results were affected.

Were you affected by Penguin 4.0?

I’ve often said that measuring algorithm flux is like tracking the unemployment rate. It’s interesting to the economy at large if the rate is 5% or 6%, but ultimately you either have a job or you don’t. If you were hit by an algorithm update, it’s little comfort that the MozCast temperature was low on that day.

Hopefully, if you were impacted by Penguin in the past and have made changes, those changes have been rewarded (or soon will be). The good news is that, now that Penguin is real-time, we shouldn't have to wait another two years for a major refresh.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/penguin-40-was-it-worth-wait.html

How to Appear in Google’s Answer Boxes – Whiteboard Friday

Posted by randfish

Featured snippets are the name of the rankings game. Often eclipsing organic results at the top of the SERPs, “ranking zero” or capturing an answer box in Google can mean increased clicks and traffic to your site. In today’s Whiteboard Friday, Rand explains the three types of featured snippets and how you can best position yourself to grab those coveted spots in the SERPs.

http://fast.wistia.net/embed/iframe/fyb386b5c1?seo=false&videoFoam=true

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week, we’re going to chat about answer boxes, those featured snippets that Google puts in ranking position zero, oftentimes above the rest of the organic results, usually below some of the top ads, and sometimes they can draw a ton of the clicks away from the rest of the 10 results that would normally appear in Google’s organic ranking.

Now, thanks to our friends up at STAT in Vancouver — Rob Bucci specifically, who did a great presentation at MozCon and delivered some really interesting research — we know a little bit more about the world of featured snippets. Specifically, there are three kinds of featured snippets or answer boxes, if you prefer, that appear in Google’s results on both mobile and desktop. Now, Rob’s research was desktop-based, but I checked through all the examples that I could find, and the same featured snippets that we saw on desktop were replicated on mobile. So I think this is a pretty one-to-one ratio that’s going on here.

The three were paragraphs, lists, and tables. I’ll show you examples of all of those. But globally, we’re talking about 15% of all queries in STAT’s database that came up with one of these answer boxes.

Paragraphs

So I did a search here for “Istanbul history.” You can see that Wikipedia is not just ranking number one, they’re also ranking number zero. So they have this nice featured snippet. It’s got a photo or an image that’ll appear on the right-hand side on desktop or on top of the text in mobile, and then the snippet, which essentially tries to give you a brief answer, a quick answer to the question. Now, of course, this query is pretty broad, I probably want to know a lot more about Istanbul’s history than the fact that it was a human settlement for 3,000 years. But if you want just that quick answer, you can get those.

There are paragraph answers for all sorts of things. About 63% of all answer boxes are in paragraph format.

Lists

Lists look like this. So I search for “strengthen lower back,” I get, again, that image and then I get — this is from wikiHow, so quality, questionable — but back strengthening exercises. They say, number one, do pelvic tilting. Number two, do hip bridges. Number three, do floor swimming. Number four, do the bird dog exercise. That sounds exciting and painful. This is from an article called “How to Strengthen Lower Back,” and it’s on wikiHow’s URL there. These lists are usually numbered, but they can also be in bullet-point format; either one can appear. They’re about 19% of answers.

Tables

And then finally, we have ones like this. I searched for “WordPress hosting comparison.” These tables show up in a lot of places where you see a comparison or a chart-type of view. In this case, there actually was a visual of an actual graph, and then performance of the best WordPress hosting companies, the name, the account type, the cost per month. This is from wpsitecare.com. Again, this was ranking, I believe, number two or number three and also ranking number zero. So this is sort of great. I can’t remember who was ranking number one, but they’re ranking ahead of the number one spot, as well, by being in this position zero.

These are about 16% of answers, so really close on tables and lists. This is via STAT’s featured snippet research, which I will link to. It’s a great PDF document that you can check out from Rob that I’ll point to in the Whiteboard Friday.

In addition to knowing this about featured snippets (that, hey, it’s a fairly substantive quantity of things, it can also jump you above the rest of the results, and there are these three different formats), we had a bunch of questions, and we keep getting them: “How do I get in there?” I actually have some great answers for you. So not only have Rob and his team been doing some research, but we’ve done some research and some testing work here at Moz, and Dr. Pete has done a bunch. So I do have some suggestions, some recommendations for you if you’re going to try and get into these featured snippets.

Best practices to appear in the answer box/featured snippet

1. Identify queries in KW research that, implicitly or explicitly, ask a question.

You actually need to do your keyword research and identify those queries that implicitly or explicitly are asking a question. The question needs to be slightly broader than what Google can deliver directly out of Knowledge Graph.

So for example, if you were to ask, “How old is Istanbul,” they might say “3,000 years old.” They might not even give any citation at all to Wikipedia or any other website. If I were to ask, “How old is Rand Fishkin,” they might put in 37, and they might give absolutely no citation, no link at all, no credit to any page of mine on the web. Again, very frustrating.

So these are essentially queries that we’re looking for in our keyword research that are slightly broader than a single line or single piece of knowledge, but that still pose a question demanding an answer. You can find those in your keyword research pretty easily. If you go into Keyword Explorer, for example, and you use the suggestions filter for questions, virtually all of those qualify. But many things, like Istanbul history, are implicit questions, not explicit ones. So you can get featured snippets for those as well.

2. Seek out queries that already use the answer box. If the competition’s doing a poor job, these are often easy to grab.

You want to seek out queries that already use the answer box. So again, if you’re using a tool like Keyword Explorer or something — I believe STAT does this as well — where they will identify the types of results that are in the query. You’re looking for these answer box- or featured snippets-types of results. If they are in there and someone else already owns it, that means you can usually leapfrog them by providing a better-formatted, more accurate, more complete, or higher-ranking answer.

So if you’re ranking number three or number four and the number two or number one result is producing that answer box and you reformat your content (and I’ll talk about how we can do that in a sec), you reformat your content to meet one of these items, the correct one, whichever one is being triggered, you can leapfrog them. You can take that position zero away from your competition and earn it for yourself. It’s especially easy when they’re doing a poor job. If they’ve got a weak result in there, and there are a lot of these that are very weak today, you can often take them away.

3. Ranking #1 can help, but isn’t required! Google will pull from any first page result.

Ranking number one is helpful, but it is not required. Google will pull from any first-page result. In fact, you can test this for yourself. Very frequently, if you do a query that pulls up an answer box and then you take the query string and you add “&num=100”, or you change your settings in Google Search such that Google shows 50 or 100 results, they are often going to pull from a lower-down result, sometimes in the bottom 30 or 40 results rather than the top 10. So Google is essentially triggering this answer result from anything that appears on page one of the query, which is awesome for all of us because it means that we could be ranking number 6, 7, 8, 9, 10, and still get the answer box if we do other things correctly, like…
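A minimal sketch of that URL tweak (Python here purely for illustration; the query is just an example):

```python
# Build a Google search URL that returns up to 100 results on one page,
# so you can spot how deep the answer box's source page actually ranks.
from urllib.parse import urlencode

query = "strengthen lower back"  # any query that triggers an answer box
url = "https://www.google.com/search?" + urlencode({"q": query, "num": 100})
print(url)  # open this in a browser and compare to the normal 10-result SERP
```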

4. Format and language are essential! Match the paragraph, list, or table format, and use the logical answer to the query terms in your title/caption/label/section header.

Format and language. These are essential. The language means the language used. We need to use the terms and phrases a little more literally than we would with a lot of other types of keyword targeting, because Google really, really seems to like, if I search for “strengthen lower back,” they are showing me an article called “strengthen lower back,” not “back strengthening for newbies” or that kind of thing. They are much more literal in most of these than we’ve seen them be, thanks to technologies like RankBrain and Hummingbird, with other kinds of queries.

We also need to make sure that we’re matching the paragraph, the list, or the table format and that we’re using a logical answer to those query terms. That answer can be in the title of your web page, but it can also be in the caption of an image, the label of a section, or a section header. In this case, for example, part three of this article was back strengthening exercises. That’s where they’re pulling from. In this case, they have “City of Istanbul” and then they have history and that’s the section. In this case, it’s the performance chart that’s shown right at the top of the web page. But they will pull from inside a document. So as long as you’re structured in one section or in the document as a whole correctly, you can get in there.

5. Be accurate. Google tends to favor stronger, more correct responses.

You want to be accurate. Google actually does tend to favor more accurate results. I know you might say, “How do I know I’m being accurate? Some of this information is very subjective.” It is true. Google tends to look at sources that they trust, looking for words and phrases and structured information that match up many, many times over across many trusted sites, and then they will show results that match what’s in those trusted sites more often.
So for example, many folks point out, “What about in political spheres where there might be arguments about which one is correct?” Google will tend to prefer the more accurate one from a scientific consensus-type of basis or from trusted resources, like an NPR or a Wikipedia or a census.gov or those kinds of things. Not necessarily from those domains, but information that matches what is on those domains. If your census numbers don’t match what’s on the actual census.gov, Google might start to trust you a little less.

6. Entice the clicks by using Google’s maximum snippet length to your advantage.

This is less about how to rank there, but more about how to earn traffic from it. If you’re ranking in position zero, you might be frustrated that Google is going to take those clicks away from you because the searcher is going to get the answer before they ever need to click on your site, thus you don’t earn the traffic.

We’ve seen this a little bit, but, in fact, most of the time when we rank number zero, we see that we get more traffic than just ranking number one by itself. You’re essentially getting two, because you rank number zero plus whatever normal or organic position you’re in. You can entice the click by using Google’s maximum snippet length to your advantage. Meaning, they are not going to put all the different numbered answers in the lists here from wikiHow, they’re only going to put the first four or five. Therefore, if you have a list that is six or seven or eight items long, someone has to click to see them all. Same thing with the paragraph. They’re only going to use a certain number of characters, and so if you have a paragraph that leads into the next paragraph or that goes long with the character count or the word count, you can again draw that click rather than having Google take that traffic away.

With this information at your disposal, you should be armed and ready to take over some of those result number zeros, get some answer boxes, some featured snippets on your side. I look forward to hearing your questions. I would love to hear if you’ve got some examples of featured snippets, where you’re ranking, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Use Moz Pro to track which SERP features drive traffic to your site.


from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/how-to-appear-in-googles-answer-boxes.html

How to Fix Crawl Errors in Google Search Console

Posted by Joe.Robison

A lot has changed in the five years since I first wrote about what was Google Webmaster Tools, now named Google Search Console. Google has unleashed significantly more data that promises to be extremely useful for SEOs. And because Google Analytics long ago stopped providing sufficient keyword data, we’ve come to rely on Search Console more than ever. The “Search Analytics” and “Links to Your Site” sections are two of the top features that did not exist in the old Webmaster Tools.

While we may never be completely satisfied with Google’s tools and may occasionally call their bluffs, they do release some helpful information (from time to time). To their credit, Google has developed more help docs and support resources to aid Search Console users in locating and fixing errors.

Despite the fact that some of this isn’t as fun as creating 10x content or watching which of your keywords have jumped in the rankings, this category of SEO is still extremely important.

Looking at it through Portent’s epic visualization of how Internet marketing pieces fit together, fixing crawl errors in Search Console fits squarely into the “infrastructure” piece:

If you can develop good habits and practice preventative maintenance, weekly spot checks on crawl errors will be perfectly adequate to keep them under control. However, if you fully ignore these (pesky) errors, things can quickly go from bad to worse.

Crawl Errors layout

One change that has evolved over the last few years is the layout of the Crawl Errors view within Search Console. The Crawl Errors report is divided into two main sections: Site Errors and URL Errors.

Categorizing errors in this way is pretty helpful because there’s a distinct difference between errors at the site level and errors at the page level. Site-level issues can be more catastrophic, with the potential to damage your site’s overall usability. URL errors, on the other hand, are specific to individual pages, and are therefore less urgent.

The quickest way to access Crawl Errors is from the dashboard. The main dashboard gives you a quick preview of your site, showing you three of the most important management tools: Crawl Errors, Search Analytics, and Sitemaps.

You can get a quick look at your crawl errors from here. Even if you just glance at it daily, you’ll be much further ahead than most site managers.

1. Site Errors

The Site Errors section shows you errors from your website as a whole. These are the high-level errors that affect your site in its entirety, so don’t skip these.

In the Crawl Errors dashboard, Google will show you these errors for the last 90 days.

If you have some type of activity from the last 90 days, your snippet will look like this:

If you’ve been 100% error-free for the last 90 days with nothing to show, it will look like this:

That’s the goal — to get a “Nice!” from Google. As SEOs we don’t often get any validation from Google, so relish this rare moment of love.

How often should you check for site errors?

In an ideal world you would log in daily to make sure there are no problems here. It may get monotonous since most days everything is fine, but wouldn’t you kick yourself if you missed some critical site errors?

At the extreme minimum, you should check at least every 90 days to look for previous errors so you can keep an eye out for them in the future — but frequent, regular checks are best.

We’ll talk about setting up alerts and automating this part later, but just know that this section is critical and you should be 100% error-free in this section every day. There’s no gray area here.

A) DNS Errors

What they mean

DNS errors are important — and the implications for your website, if you have severe versions of these errors, are huge.

DNS (Domain Name System) errors are the first and most prominent error because if Googlebot is having DNS issues, it means it can’t connect with your domain, due to either a DNS timeout or a DNS lookup failure.

Your domain is likely hosted with a common domain company, like Namecheap or GoDaddy, or with your web hosting company. Sometimes your domain is hosted separately from your website hosting company, but other times the same company handles both.

Are they important?

While Google states that many DNS issues still allow Google to connect to your site, if you’re getting a severe DNS issue you should act immediately.

There may be high latency issues that do allow Google to crawl the site, but provide a poor user experience.

A DNS issue is extremely important, as it’s the first step in accessing your website. You should take swift and violent action if you’re running into DNS issues that prevent Google from connecting to your site in the first place.

How to fix

  1. First and foremost, Google recommends using their Fetch as Google tool to view how Googlebot crawls your page. Fetch as Google lives right in Search Console.

    If you’re only looking for the DNS connection status and are trying to act quickly, you can fetch without rendering. The slower process of Fetch and Render is useful, however, to get a side-by-side comparison of how Google sees your site compared to a user.

  2. Check with your DNS provider. If Google can’t fetch and render your page properly, you’ll want to take further action. Check with your DNS provider to see where the issue is. There could be issues on the DNS provider’s end, or it could be worse.
  3. Ensure your server returns a 404 or 500 error code. Instead of having a failed connection, your server should return a 404 (not found) code or a 500 (server error) code. These codes are more accurate than a DNS error.

Other tools

  • ISUP.me – Lets you know instantly if your site is down for everyone, or just on your end.
  • Web-Sniffer.net – Shows you the current HTTP(s) request and response header. Useful for point #3 above; a scripted version is sketched below.
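If you’d rather script the checks above, here’s a minimal sketch in Python (the domain is a placeholder; substitute your own):

```python
# Minimal sketch: confirm the domain resolves via DNS, then confirm the
# server answers with a real status code (200/404/500) rather than hanging.
import socket

import requests

domain = "example.com"  # placeholder domain

try:
    ip = socket.gethostbyname(domain)  # the DNS lookup Googlebot must perform
    print(f"DNS OK: {domain} -> {ip}")
except socket.gaierror as err:
    print(f"DNS lookup failed: {err}")
else:
    try:
        resp = requests.get(f"http://{domain}/", timeout=10)
        print(f"HTTP status: {resp.status_code}")  # expect 200, 404, or 500
    except requests.RequestException as err:
        print(f"Request failed: {err}")
```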

B) Server Errors

What they mean

A server error most often means that your server is taking too long to respond, and the request times out. The Googlebot that’s trying to crawl your site can only wait a certain amount of time to load your website before it gives up. If it takes too long, the Googlebot will stop trying.

Server errors are different than DNS errors. A DNS error means Googlebot can’t even look up your URL because of DNS issues, while a server error means that although Googlebot can connect to your site, the server responds too slowly or fails to serve the page.

Server errors may happen if your website gets overloaded with too much traffic for the server to handle. To avoid this, make sure your hosting provider can scale up to accommodate sudden bursts of website traffic. Everybody wants their website to go viral, but not everybody is ready!

Are they important?

Like DNS errors, a server error is extremely urgent. It’s a fundamental error, and harms your site overall. You should take immediate action if you see server errors in Search Console for your site.

Making sure the Googlebot can connect to the DNS is an important first step, but you won’t get much further if your website doesn’t actually show up. If you’re running into server errors, the Googlebot won’t be able to find anything to crawl and it will give up after a certain amount of time.

How to fix

In the event that your website is running fine at the time you encounter this error, that may mean there were server errors in the past. Though the error may have been resolved for now, you should still make some changes to prevent it from happening again.

This is Google’s official direction for fixing server errors:

“Use Fetch as Google to check if Googlebot can currently crawl your site. If Fetch as Google returns the content of your homepage without problems, you can assume that Google is generally able to access your site properly.”

Before you can fix your server errors, you need to diagnose specifically which type of server error you’re getting, since there are many types:

  • Timeout
  • Truncated headers
  • Connection reset
  • Truncated response
  • Connection refused
  • Connect failed
  • Connect timeout
  • No response

Addressing how to fix each of these is beyond the scope of this article, but you should reference Google Search Console help to diagnose specific errors.

C) Robots failure

A Robots failure means that the Googlebot cannot retrieve your robots.txt file, located at [yourdomain.com]/robots.txt.

What they mean

One of the most surprising things about a robots.txt file is that it’s only necessary if you don’t want Google to crawl certain pages.

From Search Console help, Google states:

“You need a robots.txt file only if your site includes content that you don’t want search engines to index. If you want search engines to index everything in your site, you don’t need a robots.txt file — not even an empty one. If you don’t have a robots.txt file, your server will return a 404 when Googlebot requests it, and we will continue to crawl your site. No problem.”

Are they important?

This is a fairly important issue. For smaller, more static websites without many recent changes or new pages, it’s not particularly urgent. But the issue should still be fixed.

If your site is publishing or changing new content daily, however, this is an urgent issue. If the Googlebot cannot load your robots.txt, it’s not crawling your website, and it’s not indexing your new pages and changes.

How to fix

Ensure that your robots.txt file is properly configured. Double-check which pages you’re instructing the Googlebot to not crawl, as all others will be crawled by default. Triple-check the all-powerful line of “Disallow: /” and ensure that line DOES NOT exist unless for some reason you do not want your website to appear in Google search results.

If your file seems to be in order and you’re still receiving errors, use a server header checker tool to see whether your file is returning a 200 or a 404 response.
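If you want to script both checks, here’s a minimal sketch (the domain is a placeholder, and the parsing is deliberately naive: it ignores user-agent sections):

```python
# Minimal sketch: fetch robots.txt, report its status code, and warn if a
# blanket "Disallow: /" rule is present anywhere in the file.
import requests

resp = requests.get("http://example.com/robots.txt", timeout=10)
print(f"robots.txt status: {resp.status_code}")  # 200 or 404 are both fine

if resp.status_code == 200:
    for line in resp.text.splitlines():
        if line.strip().lower() == "disallow: /":
            print("WARNING: this file blocks your entire site from crawling")
```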

What’s interesting about this issue is that it’s better to have no robots.txt at all than to have one that’s improperly configured. If you have none at all, Google will crawl your site as usual. If you have one returning errors, Google will stop crawling until you fix this file.

For being only a few lines of text, the robots.txt file can have catastrophic consequences for your website. Make sure you’re checking it early and often.

2. URL Errors

URL errors are different from site errors because they only affect specific pages on your site, not your website as a whole.

Google Search Console will show you the top URL errors per category — desktop, smartphone, and feature phone. For large sites, this may not be enough data to show all the errors, but for the majority of sites this will capture all known problems.

Tip: Going crazy with the amount of errors? Mark all as fixed.

Many site owners have run into the issue of seeing a large number of URL errors and getting freaked out. The important thing to remember is a) Google ranks the most important errors first and b) some of these errors may already be resolved.

If you’ve made some drastic changes to your site to fix errors, or believe a lot of the URL errors are no longer happening, one tactic to employ is marking all errors as fixed and checking back up on them in a few days.

When you do this, your errors will be cleared out of the dashboard for now, but Google will bring the errors back the next time it crawls your site over the next few days. If you had truly fixed these errors in the past, they won’t show up again. If the errors still exist, you’ll know that these are still affecting your site.

A) Soft 404

A soft 404 error is when a page displays as 200 (found) when it should display as 404 (not found).

What they mean

Just because your 404 page looks like a 404 page doesn’t mean it actually is one. The user-visible aspect of a 404 page is the content of the page. The visible message should let users know the page they requested is gone. Often, site owners will have a helpful list of related links the users should visit or a funny 404 response.

The flipside of a 404 page is the crawler-visible response. The header HTTP response code should be 404 (not found) or 410 (gone).

A quick refresher on how HTTP requests and responses look:

Image source: Tuts Plus

If you’re returning a 404 page and it’s listed as a Soft 404, it means that the header HTTP response code does not return the 404 (not found) response code. Google recommends “that you always return a 404 (not found) or a 410 (gone) response code in response to a request for a non-existing page.”
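One way to test this yourself is to request a page that can’t possibly exist and inspect the response code. A minimal sketch (the URL is deliberately nonsense):

```python
# A healthy server answers a request for a non-existent page with 404 or
# 410; a 200 here is the classic soft 404 signature.
import requests

url = "http://example.com/this-page-should-not-exist-123"  # placeholder
resp = requests.get(url, allow_redirects=False, timeout=10)

if resp.status_code in (404, 410):
    print("Good: a true not-found response")
elif resp.status_code == 200:
    print("Soft 404 risk: the missing page is served with 200 (found)")
else:
    print(f"Server responded with {resp.status_code}")
```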

Another situation in which soft 404 errors may show up is if you have pages that are 301 redirecting to non-related pages, such as the home page. Google doesn’t seem to explicitly state where the line is drawn on this, only making mention of it in vague terms.

Officially, Google says this about soft 404s:

“Returning a code other than 404 or 410 for a non-existent page (or redirecting users to another page, such as the homepage, instead of returning a 404) can be problematic.”

Although this gives us some direction, it’s unclear when it’s appropriate to redirect an expired page to the home page and when it’s not.

In practice, from my own experience, if you’re redirecting large amounts of pages to the home page, Google can interpret those redirected URLs as soft 404s rather than true 301 redirects.

Conversely, if you were to redirect an old page to a closely related page instead, it’s unlikely that you’d trigger the soft 404 warning in the same way.

Are they important?

If the pages listed as soft 404 errors aren’t critical pages and you’re not eating up your crawl budget by having some soft 404 errors, these aren’t an urgent item to fix.

If you have crucial pages on your site listed as soft 404s, you’ll want to take action to fix those. Important product, category, or lead gen pages shouldn’t be listed as soft 404s if they’re live pages. Pay special attention to pages critical to your site’s moneymaking ability.

If you have a large amount of soft 404 errors relative to the total number of pages on your site, you should take swift action. You can be eating up your (precious?) Googlebot crawl budget by allowing these soft 404 errors to exist.

How to fix

For pages that no longer exist:

  • Allow the page to 404 or 410 if it’s gone and receives no significant traffic or links. Ensure that the server header response is 404 or 410, not 200.
  • 301 redirect each old page to a relevant, related page on your site.
  • Do not redirect large numbers of dead pages to your home page. They should 404 or be redirected to appropriate similar pages (a quick verification script is sketched below).
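Here’s a minimal sketch of that verification (every URL in it is a placeholder):

```python
# Confirm each old URL 301s to its intended related page, not to the home
# page (home-page redirects risk being treated as soft 404s).
import requests

HOME = "http://example.com/"
redirects = {
    "http://example.com/old-product": "http://example.com/new-product",
}

for old_url, expected in redirects.items():
    resp = requests.get(old_url, timeout=10)  # follows redirects by default
    if not resp.history:
        print(f"{old_url}: no redirect ({resp.status_code})")
    elif resp.history[0].status_code != 301:
        print(f"{old_url}: {resp.history[0].status_code} redirect, not a 301")
    elif resp.url == HOME:
        print(f"{old_url}: redirects to the home page (soft 404 risk)")
    elif resp.url != expected:
        print(f"{old_url}: lands on {resp.url}, expected {expected}")
    else:
        print(f"{old_url}: OK (301 -> {expected})")
```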

For pages that are live pages, and are not supposed to be a soft 404:

  • Ensure there is an appropriate amount of content on the page, as thin content may trigger a soft 404 error.
  • Ensure the content on your page doesn’t appear to represent a 404 page while serving a 200 response code.

Soft 404s are strange errors. They lead to a lot of confusion because they tend to be a strange hybrid of 404 and normal pages, and what is causing them isn’t always clear. Ensure the most critical pages on your site aren’t throwing soft 404 errors, and you’re off to a good start!

B) 404

A 404 error means that the Googlebot tried to crawl a page that doesn’t exist on your site. Googlebot finds 404 pages when other sites or pages link to that non-existent page.

What they mean

404 errors are probably the most misunderstood crawl error. Whether it’s an intermediate SEO or the company CEO, the most common reaction is fear and loathing of 404 errors.

Google clearly states in their guidelines:

“Generally, 404 errors don’t affect your site’s ranking in Google, so you can safely ignore them.”

I’ll be the first to admit that “you can safely ignore them” is a pretty misleading statement for beginners. No — you cannot ignore them if they are 404 errors for crucial pages on your site.

(Google does practice what it preaches, in this regard — going to google.com/searchconsole returns a 404 instead of a helpful redirect to google.com/webmasters.)

Distinguishing between times when you can ignore an error and when you’ll need to stay late at the office to fix something comes from deep review and experience, but Rand offered some timeless advice on 404s back in 2009:

“When faced with 404s, my thinking is that unless the page:

A) Receives important links to it from external sources (Google Webmaster Tools is great for this),
B) Is receiving a substantive quantity of visitor traffic, and/or
C) Has an obvious URL that visitors/links intended to reach

It’s OK to let it 404.”

The hard work comes in deciding what qualifies as important external links and substantive quantity of traffic for your particular URL on your particular site.

Annie Cushing also prefers Rand’s method, and recommends:

“Two of the most important metrics to look at are backlinks to make sure you don’t lose the most valuable links and total landing page visits in your analytics software. You may have others, like looking at social metrics. Whatever you decide those metrics to be, you want to export them all from your tools du jour and wed them in Excel.”

One other thing to consider, not mentioned above, is offline marketing campaigns, podcasts, and other media that use memorable tracking URLs. It could be that your new magazine ad doesn’t come out until next month, and the marketing department forgot to tell you about an unimportant-looking URL (example.com/offer-20) that’s about to be plastered in tens of thousands of magazines. Another reason for cross-department synergy.

Are they important?

This is probably both the trickiest and the simplest of all the errors. The vast quantity of 404s that many medium to large sites accumulate is enough to deter action.

404 errors are very urgent if important pages on your site are showing up as 404s. Conversely, like Google says, if a page is long gone and doesn’t meet our quality criteria above, let it be.

As painful as it might be to see hundreds of errors in your Search Console, you just have to ignore them. Unless you get to the root of the problem, they’ll continue showing up.

How to fix 404 errors

If your important page is showing up as a 404 and you don’t want it to be, take these steps:

  1. Ensure the page is published from your content management system and not in draft mode or deleted.
  2. Ensure the 404 error URL is the correct page and not another variation.
  3. Check whether this error shows up on the www vs non-www version of your site and the http vs https version of your site (a quick check is sketched after this list). See Moz canonicalization for more details.
  4. If you don’t want to revive the page, but want to redirect it to another page, make sure you 301 redirect it to the most appropriate related page.
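For step 3, a minimal sketch that checks one path across all four protocol/subdomain variants (domain and path are placeholders); ideally, every non-canonical variant returns a 301 pointing at the canonical one:

```python
# Check one path across the www/non-www and http/https variants.
import requests

path = "/important-page"  # placeholder path from your crawl errors report
for base in ("http://example.com", "http://www.example.com",
             "https://example.com", "https://www.example.com"):
    try:
        resp = requests.get(base + path, allow_redirects=False, timeout=10)
        location = resp.headers.get("Location", "")
        print(f"{base + path}: {resp.status_code} {location}")
    except requests.RequestException as err:
        print(f"{base + path}: request failed ({err})")
```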

In short, if your page is dead, make the page live again. If you don’t want that page live, 301 redirect it to the correct page.

How to stop old 404s from showing up in your crawl errors report

If your 404 error URL is meant to be long gone, let it die. Just ignore it, as Google recommends. But to prevent it from showing up in your crawl errors report, you’ll need to do a few more things.

As yet another indication of the power of links, Google will only show the 404 errors in the first place if your site or an external website is linking to the 404 page.

In other words, if I type in your-website-name.com/unicorn-boogers, it won’t show up in your crawl errors dashboard unless I also link to it from my website.

To find the links to your 404 page, go to your Crawl Errors > URL Errors section:

Then click on the URL you want to fix:

Search your page for the link. It’s often faster to view the source code of your page and find the link in question there:

It’s painstaking work, but if you really want to stop old 404s from showing up in your dashboard, you’ll have to remove the links to that page from every page linking to it. Even other websites.

What’s really fun (not) is if you’re getting links pointed to your URL from old sitemaps. You’ll have to let those old sitemaps 404 in order to totally remove them. Don’t redirect them to your live sitemap.

C) Access denied

Access denied means Googlebot can’t crawl the page. Unlike a 404, Googlebot is prevented from crawling the page in the first place.

What they mean

Access denied errors commonly block the Googlebot through these methods:

  • You require users to log in to see a URL on your site, therefore the Googlebot is blocked
  • Your robots.txt file blocks the Googlebot from individual URLs, whole folders, or your entire site
  • Your hosting provider is blocking the Googlebot from your site, or the server requires users to authenticate by proxy

Are they important?

Similar to soft 404s and 404 errors, if the pages being blocked are important for Google to crawl and index, you should take immediate action.

If you don’t want this page to be crawled and indexed, you can safely ignore the access denied errors.

How to fix

To fix access denied errors, you’ll need to remove the element that’s blocking the Googlebot’s access:

  • Remove the login from pages that you want Google to crawl, whether it’s an in-page or popup login prompt
  • Check your robots.txt file to ensure the pages listed on there are meant to be blocked from crawling and indexing
  • Use the robots.txt tester to see warnings on your robots.txt file and to test individual URLs against your file
  • Use a user-agent switcher plugin for your browser, or the Fetch as Google tool, to see how your site appears to Googlebot (a scripted version is sketched below)
  • Scan your website with Screaming Frog, which will prompt you to log in to pages if the page requires it
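A minimal sketch of the user-agent approach (the URL is a placeholder; note that a spoofed user-agent only approximates Googlebot, since servers that verify crawlers by IP or reverse DNS may treat the request differently):

```python
# Request a URL with Googlebot's user-agent string; a 401 or 403 response
# suggests the page is access-denied to crawlers.
import requests

url = "http://example.com/members-only-page"  # placeholder
headers = {"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                         "+http://www.google.com/bot.html)"}
resp = requests.get(url, headers=headers, timeout=10)
print(f"Status as Googlebot: {resp.status_code}")
```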

While not as common as 404 errors, access denied issues can still harm your site’s ranking ability if the wrong pages are blocked. Be sure to keep an eye on these errors and rapidly fix any urgent issues.

D) Not followed

What they mean

Not to be confused with a “nofollow” link directive, a “not followed” error means that Google couldn’t follow that particular URL.

Most often these errors come about from Google running into issues with Flash, JavaScript, or redirects.

Are they important?

If you’re dealing with not followed issues on a high-priority URL, then yes, these are important.

If your issues are stemming from old URLs that are no longer active, or from parameters that aren’t indexed and are just an extra feature, the priority level on these is lower — but you should still analyze them.

How to fix

Google identifies the following as features that the Googlebot and other search engines may have trouble crawling:

  • JavaScript
  • Cookies
  • Session IDs
  • Frames
  • DHTML
  • Flash

Use either the Lynx text browser or the Fetch as Google tool, using Fetch and Render, to view the site as Google would. You can also use a Chrome add-on such as User-Agent Switcher to mimic Googlebot as you browse pages.

If, as the Googlebot, you’re not seeing the pages load or not seeing important content on the page because of some of the above technologies, then you’ve found your issue. Without visible content and links to crawl on the page, some URLs can’t be followed. Be sure to dig in further, diagnose the issue, and fix it.

For parameter crawling issues, be sure to review how Google is currently handling your parameters. Specify changes in the URL Parameters tool if you want Google to treat your parameters differently.

For not followed issues related to redirects, be sure to fix any of the following that apply:

  • Check for redirect chains. If there are too many “hops,” Google will stop following the redirect chain (a quick hop-counter is sketched after this list)
  • When possible, update your site architecture to allow every page on your site to be reached from static links, rather than relying on redirects implemented in the past
  • Don’t include redirected URLs in your sitemap, include the destination URL
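And here’s the hop-counter mentioned above, as a minimal sketch (placeholder URL):

```python
# Count the redirect hops for a URL; aim for a single 301 straight to the
# destination, since Googlebot gives up on long chains.
import requests

resp = requests.get("http://example.com/old-url", timeout=10)
for hop in resp.history:
    print(f"{hop.status_code}  {hop.url}")
print(f"Final after {len(resp.history)} hop(s): {resp.status_code}  {resp.url}")
```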

Google used to include more detail on the Not Followed section, but as Vanessa Fox detailed in this post, a lot of extra data may be available in the Search Console API.


E) Server errors & DNS errors

Under URL errors, Google again lists server errors and DNS errors, the same sections that appear in the Site Errors report. Google’s direction is to handle these the same way you would handle the site-level server and DNS errors, so refer to those two sections above.

They would differ in the URL errors section if the errors were only affecting individual URLs and not the site as a whole. If you have isolated configurations for individual URLs, such as minisites or a different configuration for certain URLs on your domain, they could show up here.


Now that you’re the expert on these URL errors, I’ve created this handy URL error table that you can print out and tape to your desktop or bathroom mirror.

Conclusion

I get it — some of this technical SEO stuff can bore you to tears. Nobody wants to individually inspect seemingly unimportant URL errors, or conversely, have a panic attack seeing thousands of errors on your site.

With experience and repetition, however, you will gain the mental muscle memory of knowing how to react to the errors: which are important and which can be safely ignored. It’ll be second nature pretty soon.

If you haven’t already, I encourage you to read up on Google’s official documentation for Search Console, and keep its help pages handy for future questions.

We’re simply covering the Crawl Errors section of Search Console. Search Console is a data beast on its own, so for further reading on how to make the best use of this tool in its entirety, check out the many other guides that cover it.

Google has generously given us one of the most powerful (and free!) tools for diagnosing website errors. Not only will fixing these errors help you improve your rankings in Google, it will also provide a better user experience to your visitors and help you meet your business goals faster.

Your turn: What crawl errors issues and wins have you experienced using Google Search Console?


from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/how-to-fix-crawl-errors-in-google.html

Should You Implement That New Google Feature?

Posted by dohertyjf

I remember when I first started in SEO full-time, back in 2010. It feels like forever ago and yesterday at the same time.

I was constantly plugged into the SEO Twitter firehose of information. I subscribed to the popular SEO blogs of the day, soaking up information about SEO that wasn’t even relevant to my day job at the time building links. While I read plenty of content about link acquisition, I also went deep into the geeky sides of technical SEO because it appealed to my web developer background.

Every week or two, Google was announcing something new. Some new feature, some new snippet, some new ad type, some new way of getting your pages/sites indexed faster and making them stand out from the crowd.

I remember SMX 2012 in New York City, where I sat in on a session in which now-former Mozzer Matt Brown spoke on Schema.org and counseled all of us to hop on the Schema bandwagon because it was the future of search. You can see that presentation here; I’ll reference it a few times in this post.

Five years later, I can look back and say, “Yes, they were right. Schema has stuck around and proven to be a stronger and stronger part of search algorithms and you should learn it and implement it, if you haven’t already.”

It works and we know that now in 2016, but back in 2012 it was new and took a lot of effort to implement. And so many people simply didn’t.

So how can you, as either a small business owner dabbling in SEO (while also doing all the things as the owner) or a professional SEO/digital marketer, know when you should implement something that’s brand-new, or whether you should wait on it until you have more data?


Is there a history of it?

Google is almost twenty years old, if you can believe that. They’ve been around a long time, built a huge business, and changed the way the world’s information is organized, found, and consumed. Google is a once-in-a-lifetime company, and I say that as someone with a love/hate relationship with them (alongside many other SEOs/digital marketers).

In spite of their growth and current size, their mission has always been the same:

Google’s mission is to organize the world’s information and make it universally accessible and useful.
Source

This is at Google’s core.

Google has moved into other areas, such as social, but hasn’t seen great success because they’re better at organizing content than creating it. Check out this slide from Matthew Brown’s talk:

The Authorship program was killed in 2014 (post here on SEL), though the idea behind it (identifying who wrote what and where online) lives on to help Google organize the world’s information better.

This is a great example of something that everyone said you *should* do (and maybe short-term helped with clickthrough rates), but which Google eventually killed because it was a new initiative. You would have been much better served to spend your time writing around the Internet and marketing your company than just trying to get an image in the SERPs.


Are others already implementing it?

I hate the United States’ culture of consumerism and keeping up with the Joneses. Why do we feel the need to spend money that we don’t have to buy things we don’t want to impress people we don’t really like (paraphrase from here)?

The same thing happens in digital marketing. If we see someone implementing something, we should rightly ask “Why are they doing that?”, then make our own decisions.

The interesting thing — just like with impressing our neighbors — is that sometimes (but not always) they will have the inside line on something great that a) you can afford (aka get done for your company) and b) is in line with your personal strategy and values (aka you’re true to yourself).

HTTPS is one such example. If you’re a business with customers (which all of you are, because how do you make money without customers? If you can, I’d like to speak with you), then you care about them and want them to be safe and happy. While HTTPS takes time to deploy on large websites, and can have very real challenges as Wired is learning the hard way, on smaller sites it can be much simpler and can be implemented more quickly. You may not see a bump in rankings, traffic, or revenue right away, but you can be sure that HTTPS is something Google wants to and is beginning to reward.

Finally, if you see something rolled out and not many people are implementing it, ask why. If it’s because it’s difficult technically but you can get it done fast and it’s true to your strategy, then get it done — it’ll help you get ahead of the pack. If it requires a huge undertaking, however, take your time and wait until the barrier to entry is lower or until the search engines finally start making good on their promises.


Is it a continuation or a new initiative for Google?

Earlier I mentioned Google’s core mission of organizing the world’s information. This is why Google was initially created, and it’s what they still do incredibly well. Over time, they’ve (finally) taken the user into account and realized that offering a great user experience benefits their bottom line. User experience (and design!) has become part of their core.

There are a few things that Google is terrible at, such as social or content. They’re also terrible at launching software that works really well and can displace incumbents. Google Flights is great, but online travel agencies (OTAs) like Expedia are still winning, even as Google puts themselves above the organic results.

That’s just one example. If it’s a brand-new initiative that Google has not previously gone after, be very suspicious. I like the “hurry up and wait” approach here — hurry up to learn all that you can about it, but wait on implementing it, especially if you’re a small company with a million things to do already. Stay true to your strategy.

If it’s a continuation of something they’ve already been doing and received traction on, then you should take more notice and seriously consider how you can implement it for your company.

Take, for example, the recent rollout of AMP (Accelerated Mobile Pages), which essentially allows Google to display a cached version of your page to mobile users so that it loads quickly and makes users happy(er).

Google has said for years that they want above-the-fold content on mobile sites to load in under one second. AMP is a continuation of something they’ve been conveying for quite a while, a promised initiative they’re finally making good on. Within mobile search results, we now see how sites that load quickly tend to rank better than they otherwise would. I’ve witnessed this firsthand on some of the sites I’ve touched — when engineers care about speed, your site makes both search engines and users very happy.


Is it passive or active?

Sometimes Google creates new initiatives within search that require no implementation on your end. They run tests all the time (SERPtests is a great resource from Conrad O’Connell) that affect the way your site shows up.

Don’t assume that just because they’ve changed something, it’s in your best interest. Google is a business, and they exist to make themselves money, not you.

As an SEO, you are not Google’s friend.

So, once again, we hurry up to learn and then decide whether we should take action (adjust your meta descriptions, add Schema, etc.) or just sit back and let the data accumulate to inform better decisions. The answer will always be different depending on your business, and I can’t tell you whether you’ll benefit from specific changes or not. But you’re empowered to make that decision.

If a new feature requires active development from your end, take the time to figure out why Google’s made the change, what it might mean for the future, and how much work it’s going to take to achieve the expected outcome. If you’re a consultant and you’re not helping your clients prioritize their work based on predicted impact and the amount of effort, you’re not doing your job. And if you’re an in-house SEO in this boat, same message: you’re not doing your job.


Does it fit with your current SEO strategy?

I’ve touched on this point a few times, but I consider it so important that it merits its own section.

I’ve been a consultant since 2011. I’ve worked with businesses of all kinds and run marketing directly for a few bigger brands as well. I’ve seen companies with zero SEO strategy, where we built it from scratch, and I’ve seen companies with an SEO strategy that was set years ago and hasn’t changed since. Neither of these is good.

An SEO strategy should be set, to a degree. You should know what your business needs to do in order to rank and drive the business results needed from organic search. However, your strategy should not be so set that you’re unable to implement new things that are both true to your business and will move that metrics needle.

Know where you’re going with your strategy and what your metrics are. By having those goals in mind and by putting in place processes that allow you to grow passively, you can confidently say “yes” or “no” to new features that may move the needle or may be a distraction.


What about first-mover advantage?

Now, I know there are a lot of people who believe that being a first mover is a great thing. And when you’re launching a new business, this mindset is incredibly pervasive. Everyone wants to “find the niche where no one is and be there to be the first mover.”

The problem is that first-mover advantage doesn’t always exist. From a Harvard Business Review article:

First-mover status can confer advantages, but it does not do so categorically. Much depends on the circumstances.

I don’t really believe in first-mover advantage, and as an entrepreneur, going into a completely new realm where no one else has gone before feels too risky to me. I’d rather take my time to learn from others who are trying to do something similar, figure out the unique angle on the business (whether the vertical or the business model), and then build something that users really want. This is called being wise (listening to others) not just smart (figuring it all out on your own).


SEO is a constantly shifting industry. We’re built on the back of a computer algorithm, after all. Because of this, things will change constantly, and all digital marketers need to develop a rubric through which they can decide whether a new feature or opportunity is worth their time, effort, and change of strategy in the long term.


from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/should-you-implement-that-new-google.html

Generate 100+ Blog Topic Ideas in Seconds

Posted by BrianChilds

Coming up with blog titles and topics can be a struggle. Most small businesses aim to publish blogs 3-10 times a month and then use these blog articles to populate everything from newsletters to conversion funnels. When you publish content on a regular basis, it’s easy to burn through your initial list of blog titles in a few months. Coming up with good titles also takes a lot of time, and when you work on a team, defining what’s “good” becomes subjective.

Because regular blogging has such a positive impact on inbound traffic, the process of coming up with ideas shouldn’t be a burden. Never worry about blog topics again: I’ll show you how to generate 100+ long-tail blog title ideas that include estimates of search volume and competitiveness.

What makes a good blog title?

Before jumping into how to generate 100+ blog topics quickly, let’s discuss the importance of having good titles.

I think of blog content development as having two parts: blog articles that form the core of my SEO or inbound marketing strategy, and a backup list of blog ideas I can pull from in a pinch. Both types benefit from having great titles.

Good topics generally follow some basic rules, including:

  • Your posts should answer common/valuable questions.
  • They should focus on your target buyer’s search intent.
  • They should tap into sufficient organic traffic to make them worth blogging about.

When it comes to generating a great backup list of blog topics quickly, it can be hard to identify titles that meet those criteria without succumbing to clickbait. There are several blog title generator tools available, but I find that they tend toward clickbait or “catchy” titles that are more useful for paid channels rather than the long-term value expected from organic search.

Some of the more popular blog title generators are:

HubSpot’s Blog Topic Generator

Impact’s BlogAbout Title Generator

Portent’s Content Idea Generator

Linkbait Generator

It should come as no surprise that there’s been a backlash against clickbait titles recently.

I recommend against using traditionally clickbait titles since they often result in only one type of beneficial metric: page views. To positively impact both search rank position and on-site conversions you need to focus on valuable content that delivers high engagement measured by things like better-than-average time on page, good page depth, and low bounce rates. Clickbait titles and content generally do not provide this.

A better way to generate

Okay, so let’s take a look at a quick way to generate blog titles. Read it, try it, and time it.

  1. In Moz Pro, navigate to Keyword Explorer and enter in your target keyword. (Even if you don’t have a subscription, you can try it free or get Moz Pro free for 30 days.)
  2. On the Overview page, click on Keyword Suggestions.
  3. Use the “Display keyword suggestions that” dropdown to select “are questions.”
  4. Here’s your list of potential blog titles for your topic. Note: The “Relevancy” column shows how closely the search term matches the initial query you used, and the “Volume” column displays estimates of monthly organic search traffic.
  5. Access Difficulty and Opportunity scores for your search queries by selecting all the relevant check boxes and clicking the “Add selected to” drop-down to create or add them to a Keyword List in Moz Pro. (Rand put together a great presentation on how to do this.) In the Keyword List, you’re able to view, segment, and sort your blog titles by all the factors available in the Keyword Explorer Overview.

Boom! There you have it. Never hunt for blog titles again. You’ve created a list you can choose from in a pinch, knowing you have quality titles based on search volume, difficulty, and opportunity.
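If you’d rather script this step than click through the UI, here’s a minimal Python sketch of the same “are questions” filter, run against a CSV export of your keyword suggestions. The file name and column headers here are assumptions, not Moz’s actual export format; adjust them to match whatever your export contains.

```python
# Minimal sketch: filter an exported keyword-suggestions CSV down to
# question-style queries and rank them by volume.
# "keyword_suggestions.csv", "Keyword", and "Monthly Volume" are
# assumed names; rename them to match your actual export.
import csv

QUESTION_WORDS = {"how", "what", "why", "when", "where", "which",
                  "who", "can", "should", "do", "does", "is", "are"}

def question_titles(path, limit=100):
    """Return up to `limit` question-style keywords, highest volume first."""
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            keyword = row["Keyword"].strip().lower()
            first_word = keyword.split()[0] if keyword else ""
            if first_word in QUESTION_WORDS:
                volume = int(row["Monthly Volume"].replace(",", "") or 0)
                rows.append((keyword, volume))
    return sorted(rows, key=lambda r: r[1], reverse=True)[:limit]

for title, volume in question_titles("keyword_suggestions.csv"):
    print(f"{volume:>7,}  {title}")
```

Dropping the output into a spreadsheet alongside the Difficulty and Opportunity scores from your Keyword List gives you the same sortable backlog described in step 5.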

See how fast you can create a great list of blog titles!

More tips for professional marketers

As you analyze results from the Keyword Suggestions feature in Keyword Explorer, here are some additional things you can do to learn about your target customers:

Look for trends in the questions people ask. Do most questions center on a specific pain point, such as cost, quality, or ease of use? Consider segmenting your users based on these different pain points and their associated value drivers.

Find the “best question.” In your list of blog titles, look for the one question that best aligns with your target customer. Then run a Keyword Explorer query on that question by selecting the magnifying glass icon on the right side of the webpage. Often, these results will display an even longer, more targeted list of questions to choose from.

Hope this helps your blogging efforts! Tell us about your experience using Keyword Explorer to generate targeted blog titles. If you want to keep mastering keywords and blog titles after your Moz Pro free trial ends, check out Moz Pro Medium or Keyword Explorer standalone subscriptions.


from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/generate-100-blog-topic-ideas-in-seconds.html

Google Is Grouping Keyword Volumes – What Does This Mean for SEO?

Posted by sam.nemzer

As of June this year, Google is grouping keyword volumes for similar keywords in Keyword Planner. I wanted to investigate whether or not this is having an impact on the pages that rank for these similar, grouped keywords. My hypothesis is that, given Google associates these keywords closely enough to group their volumes, we should expect their search results to be very similar too.

What has Google changed and why does it matter?

The grouping of keyword volumes is a problem for anyone working in search because Keyword Planner is the primary source of the volume data we use in keyword research, whether that comes from Keyword Planner directly or through a third-party tool that takes Keyword Planner data as its input, such as SEMRush, BrightEdge, or SearchMetrics.

By “grouping keyword volumes,” we mean that keywords that are slightly different (but generally convey the same meaning) are each given the same reported volume, representing the combined volume of every variation. For example, if (hypothetically) [SEO] is searched 21,000 times per month in the UK, and [Search Engine Optimisation] is searched 12,100 times per month, once these keywords are grouped, each will be reported as receiving the total of the two: 33,100 searches per month.
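To make the pitfall concrete, here’s a minimal Python sketch, using the hypothetical UK figures above, of how naively summing reported volumes across grouped variants double-counts real demand:

```python
# Hypothetical figures from the example above -- not real Keyword
# Planner data.
true_volumes = {"seo": 21_000, "search engine optimisation": 12_100}

# After grouping, Keyword Planner reports the combined total for
# *each* variant in the group.
grouped_total = sum(true_volumes.values())            # 33,100
reported = {kw: grouped_total for kw in true_volumes}

# Summing the reported figures across variants double-counts demand.
naive_sum = sum(reported.values())                    # 66,200
print(f"Reported per keyword: {grouped_total:,}")
print(f"Naive total across both: {naive_sum:,} (true total: {grouped_total:,})")
```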

On top of this, in the last few weeks Google have also been reducing access to Keyword Planner data for some accounts. Earlier this month, it was announced that Keyword Planner data will be given only in very broad buckets for advertisers with “lower monthly spend” (although some ways around this have been found). This is a separate change from the volume grouping, which is the main focus of this article.

The fact that Google is grouping keyword volumes in this way implies that they see these keywords as equivalent, at least to some extent. The questions that this raised for me were:

  • Does this mean that we should see keywords with grouped volumes as identical?
  • From an SEO point of view, should we focus our targeting efforts on any one of the grouped keywords, given that Google is seeing them as the same?

There is further reason to think this way given the simple fact that Google is always getting smarter. As well as Parsey McParseface, the English language parser that Google released to the public, much of the research output that we see in patents and journal articles from Google relates to natural language processing, so it is clear that this is an area that Google see as a priority for their research.

One way to test whether or not Google does indeed consider grouped keywords to be identical is to look at search results. The theory is that if keywords are viewed identically, we should see exactly the same pages ranking for the keywords.

What’s going on in the SERPs?

I did a similar analysis a few months ago, which was focused more on general distinctions between keywords within a topic. This analysis is much more focused on the types of variations of keywords that we are seeing being grouped. These types of variations were categorised by, among others, Jennifer Slegg at The SEM Post.

The five types of variations that I’ve looked into for this analysis are the following:

  1. Initialisms/abbreviations. For example, comparing SERPs for [BBC] and [British Broadcasting Corporation].
  2. Plurals. For example, [waffle maker] and [waffle makers].
  3. Verb stems with and without suffixes. For example, [calculate], [calculated], and [calculating].
  4. Keywords with and without punctuation. For example, [midnight’s children] and [midnights children].
  5. Keywords with and without typos. For example, [heart rate monitor] and [heart rat monitor].

For each of these five categories, I put together a list of 50-100 keywords, along with a variation for each. Within these keyword pairs I investigated whether or not Keyword Planner reported the same volume, and also used the rank tracking tool STAT to see what pages are ranking for each keyword.

From that analysis, I was able to measure the prevalence of grouping keyword volumes within each category (i.e. the percentage of keyword pairs that have grouped volumes), and the similarity of the SERPs (the number of top ten results that were shared between the two keywords) for grouped and ungrouped keyword pairs.
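As a rough illustration of those two measurements, here’s a minimal Python sketch. The URLs and helper functions are hypothetical; this shows the shape of the comparison, not the author’s actual STAT pipeline.

```python
def shared_results(top10_a, top10_b):
    """SERP similarity: how many top-ten results appear in both sets."""
    return len(set(top10_a) & set(top10_b))

def identical_serps(top10_a, top10_b):
    """True only when both SERPs show the same pages in the same order."""
    return top10_a == top10_b

# Hypothetical rank-tracker exports for a plural keyword pair.
waffle_maker = ["example.com/best", "example.com/reviews", "shop.example/makers"]
waffle_makers = ["example.com/reviews", "shop.example/makers", "other.example/guide"]

print(shared_results(waffle_maker, waffle_makers))   # 2
print(identical_serps(waffle_maker, waffle_makers))  # False
```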

Results

The results for those metrics are the following:

[Chart: prevalence of volume grouping and average SERP similarity for each of the five categories]

I also looked at how common it is for SERPs to be exactly identical, that is, for the top ten results to be the same pages in the same order. This showed an interesting pattern. There are only two categories with significant numbers of identical SERPs—punctuation and typos. In the case of keywords with and without punctuation, you are more likely to see identical SERPs (implying that Google sees the pair of keywords as identical) if the keyword volumes are grouped than if they are not. This is not a hard-and-fast rule, though; there are still some ungrouped keyword pairs that have identical SERPs.

In the case of typos, there are no grouped keyword pairs at all that have identical SERPs. Given also the low prevalence of grouped keywords in this category, it appears that the identical SERPs come from “showing results for” SERPs, where Google replaces the results for the mistyped keyword with those for the correctly spelled one.

What conclusions can we draw?

  1. The prevalence of keyword grouping is highest for plurals, and very low for typos.

    This may be a result of the sample of keywords used in this study, but overall, around 50% of the keyword pairs in the sample are grouped. This indicates that, although volume grouping is a growing phenomenon in Keyword Planner data, it is not yet applied consistently across all keyword variants.

  2. There is not a lot of difference between keywords that are grouped by Keyword Planner and those that aren’t.

    This is a surprising result. The motivation for conducting this study was to confirm the suspicion that Google associating keyword volumes means that it also associates the search intent. This is comprehensively disproven by this data. There is no significant difference between grouped and ungrouped keyword pairs when it comes to SERP similarity.

    The one group where there is a larger difference is the verb stems category. This is likely because there are many verbs where the present and past tense mean very different things, indicating different search intent. For example, the keywords [march] and [marched] have completely different intents due to the multiple meanings of the word ‘march.’ This means that there’s no chance that these SERPs will be similar. On the other hand, some verbs have little intent difference between past and present forms (for example [admire] and [admired]). These types of keyword pairs generally have grouped volume, and also have more similar SERPs.

  3. Overall, there is not a very high rate of SERP similarity between similar keywords.

    When starting this analysis, I expected to see much higher rates of similarity between very similar keywords. This is not the case, and to me that is surprising for two reasons. The first is that, as mentioned above, I saw the grouping of keyword volumes as a clear signal that Google treats the keywords as having identical intent. This appears not to be the case.

    The other reason is that I have a lot of faith in how smart Google is. Its developments in natural language processing and intent assessment give me the impression that it is able to associate similar keywords in the results it shows.

    It may be that things are heading in this direction, but it’s too early for the change to have been fully implemented. The alternative explanation is that Google really is that smart, and is deliberately serving different results because it can interpret subtle differences in intent between even very similar keywords.

What should we take away from this?

What does this mean for SEOs doing keyword research? Rank tracking companies such as STAT are looking into ways of splitting keyword volumes between the constituent keywords, so there is hope for at least semi-accurate volume data. What it does mean is that we shouldn’t let grouped volumes drive targeting decisions—just because keywords are given the same volume, it doesn’t mean you shouldn’t target them individually on your site.
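As a purely illustrative sketch (not STAT’s actual method), one naive approach would be to apportion the grouped figure across its variants using an independent signal, such as click counts from Search Console:

```python
# Purely illustrative -- not STAT's actual method. Split a grouped
# Keyword Planner figure across its variants in proportion to an
# independent signal (here, hypothetical Search Console clicks).
def split_grouped_volume(grouped_volume, signal):
    total = sum(signal.values())
    return {kw: round(grouped_volume * share / total)
            for kw, share in signal.items()}

clicks = {"seo": 850, "search engine optimisation": 490}  # hypothetical
print(split_grouped_volume(33_100, clicks))
# {'seo': 20996, 'search engine optimisation': 12104}
```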

On a wider scale, this tells us something about how the anthropomorphised “Google” thinks and works. There are two very separate factors at work here—what Google tells us, and what we actually see. This is something Rand picked up on in his recent Whiteboard Friday, and it applies across all of search—Google tells us one thing, but search rankings don’t necessarily behave the same way. This backs up my belief that you should never take anything at face value, and should always do your own research.

Do these results surprise you as much as they do me? Let me know in the comments.


from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/google-is-grouping-keyword-volumes-what.html

How To Support Data with Real-Life Interviews – Whiteboard Friday

Posted by rcancino

With all the data that today’s marketers can access, there’s often still no substitute for the quality of information you can get from interviewing real people. In today’s Whiteboard Friday, we welcome Rebekah Cancino — a partner at Phoenix-based Onward and #MozCon 2016 speaker — to teach us the whys and hows of great interviews.

http://fast.wistia.net/embed/iframe/d8wzkdu36u?seo=false&videoFoam=true



Video Transcription

Hi, Moz fans. I’m Rebekah Cancino. I’m a partner at Onward, and I lead content strategy and user experience design. Today I’m here to talk to you about how to support the data you have (your keyword data, data around search intent, your analytics) with real-life user interviews.

So recently, Rand has been talking a little more about the relationship between user experience design and SEO, whether it’s managing the tensions between the two or the importance of understanding the path to customer purchase. He said that in order to understand that path, we have to talk to real people. We have to do interviews, whether that’s talking to actual users or maybe just people inside your company that have an understanding of the psychographics and the demographics of your target audience, so people like sales folks or customer service reps.

Now, maybe you’re a super data-driven marketer and you haven’t felt the need to talk to real people and do interviews in the past, or maybe you have done user interviews and you found that you got a bunch of obvious insights and it was a huge waste of time and money.

I’m here to tell you that coupling your data with real interviews is always going to give you better results. But having interviews that are useful can be a little bit tricky. The interviews that you do are only as good as the questions you ask and the approach that you take. So I want to make sure that you’re all set and prepared to have really good user interviews. All it takes is a little practice and preparation.

It’s helpful to think of it like this. So the data is kind of telling us what happened. It can tell us about online behaviors, things like keywords, keyword volume, search intent. We can use tools, like KeywordTool.io or Ubersuggest or even Moz’s Keyword Explorer, to start to understand that.

We can look at our analytics: entry and exit pages, bounces, pages that get a lot of views. All of that stuff is really important, and we can learn a lot from it. But with our interviews, what we’re learning about is the why.

This is the stuff that online data just can’t tell us. This is about those offline behaviors, the emotions, beliefs, attitudes that drive the behaviors and ultimately the purchase decisions. So these two things working together can help us get a really great picture of the whole story and make smarter decisions.

So say, for example, you have an online retailer. They sell mainly chocolate-dipped berries. They’ve done their homework. They’ve seen that most of the keywords people use tend to be something like “chocolate dipped strawberries gifts” or “chocolate dipped strawberries delivered.” And they’ve done the work on their on-page optimization and are doing a lot of other smart things with that data too.

But then they also noticed that their Mother’s Day packages and their graduation gifts are not doing so well. They’re starting to see a lot of drop-offs around that product description page and a higher cart abandonment rate than usual.

Now, given the data they had, they might make decisions like, “Well, let’s see if we can do a little more on-page keyword optimization to reflect what’s special about the graduation and Mother’s Day gifts, or maybe we can refine the user experience of the checkout process.” But if they talk to some real users — which they did, this is a real story — they might learn that people who send food gifts worry about things like: Is the person I’m sending the gift to going to be home when it arrives? Because this is a perishable item, like chocolate-dipped berries, will it melt?

Now, this company, they do a lot of work to protect the berries. The box that they arrive in is super insulated. It’s like its own cooler. They have really great content that tells that story. The problem is that content is buried in the FAQs instead of on the pages in places it matters most — the product detail, the checkout flow.

So you can see here how there’s an opportunity to use the data and the interview insights together to make smarter decisions. You can get to insights like that for your organization too. Let’s talk about some tips that are going to help you make smarter interview decisions.

So the first one is to talk to a spectrum of users who represent your ideal audience. Maybe, like with this berry example, their ideal customer tends to skew slightly female. You would want that group of people, that you’re talking to, to skew that way too. Perhaps they have a little more disposable income. That should be reflected in the group of people that you’re interviewing and so forth. You get it.

The next one is to ask day-in-the-life, open-ended questions. This is really important. If you ask typical marketing questions like, “How likely are you to do this or that?” or, “Tell me on a scale of 1 to 10 how great this was,” you’ll get typical marketing answers. What we want is real nuanced answers that tell us about someone’s real experience.

So I’ll ask questions like, “Tell me about the last time you bought a food gift online. What was that like?” We’re trying to get that person to walk us through their journey, from the minute they’re considering something, to how they vet the solutions, to actually making that purchase decision.

Next is don’t influence the answers. You don’t want to bias someone’s response by introducing an idea. So I wouldn’t say something like, “Tell me about the last time you bought a food gift online. Were you worried that it would spoil?” Now I’ve set them on a path that maybe they wouldn’t have gone on to begin with. It’s much better to let that story unfold naturally.

Moving on: dig deeper. Uncover the why; this is really important. Maybe when you’re talking to people you realize that they like to cook, and by sharing a food gift with someone who’s far away, they can feel closer to them. Maybe they like gifts to reflect how thoughtful they are or what good taste they have. You always want to uncover the underlying motives behind the actions people are taking.

So don’t be too rushed in skipping to the next question. If you hear something that’s a little bit vague or maybe you see a point that’s interesting, follow up with some probes. Ask things like, “Tell me more about that,” or, “Why is that? What did you like about it?” and so on.

Next, listen more than you talk. You have maybe 30 to 45 minutes max with each one of these interviews. You don’t want to waste time by inserting yourself into their story. If that happens, it’s cool, totally natural. Just find a way to back yourself out of that and bring the focus back to the person you’re interviewing as quickly and naturally as possible.

Take note of the phrases and words they use. Do they say things like “dipped berries” instead of “chocolate-dipped strawberries”? You want to pay attention to the different words and phrases they use. Are there regional differences? What kinds of words do they use to describe your product, service, or experience? Are the berries fun, decadent, luxurious? By learning what kind of language and vocabulary people use, you can write copy, meta descriptions, and emails that take that into account and reflect it.

Find the friction. So in every experience that we have, there’s always something that’s kind of challenging. We want to get to the bottom of that with our users so we can find ways to mitigate that point of friction earlier on in the journey. So I might ask someone a question like, “What’s the most challenging thing about the last time you bought a food gift?”

If that doesn’t kind of spark an idea with them, I might say something even a little more broad, like, “Tell me about a time you were really disappointed in a gift that you bought or a food gift that you bought,” and see where that takes them.

Be prepared. Great interviews don’t happen by accident. Coming up with all these questions takes time and preparation, and you want to put a lot of thought into them. As you draft questions that get at the nature of the whole journey, be clear about your priorities. Know which questions are most important to you, and know which ones are must-have pieces of information. That way you can use your time wisely while still letting the conversation flow where it takes you.

Finally, relax and breathe. The people you’re interviewing are only going to be as relaxed as you are. If you’re stiff or overly formal or treating this like it’s a chore and you’re bored, they’re going to pick up on that energy and they’re probably not going to feel comfortable sharing their thoughts with you, or there won’t be space for that to happen.

Make sure you let them know ahead of time, like, “Hey, feel free to be honest. These answers aren’t going to be shared in a way that can be attributed directly to you, just in aggregate.”

And have fun with it. Be genuinely curious and excited about what you’re going to learn. They’ll appreciate that too.

So once you’ve finished and wrapped up those interviews, take a step back. Don’t get too focused or caught up on just one of the results. You want to look at the qualitative data in aggregate and let it talk to you.

What stories are there? Are you seeing any patterns or themes you can take note of, like the theme around people being worried about the berries melting? Then you can organize those findings and make sure you summarize and synthesize them in a way that the people who have to use those insights can make sense of.

Make sure that you tell real stories and humanize this information. Maybe you recorded the interviews, which is always a really good idea. You can go back and pull out little sound bites or clips of the people saying these really impactful things and use that when you’re presenting the data.

So going back to that berry example, if you recall, we had that data around: Hey, we’re seeing a lot of drop-offs on the product description page. We’re seeing a higher cart abandonment rate. But maybe during the user interviews, we noticed a theme of people talking about how they obsessively click the tracking link on the packages, or wait for the gift recipients to send them a text message to say, “Hey, I got this present.” As you unraveled the why, you noticed it had to do with the fact that these berries might melt, and people are worried about that.

Well, now you can elevate the content that you have around how those berries are protected in a little cooler-like box on the pages and the places it matters most. So maybe there’s a video or an animated GIF that shows people how the berries are protected, right there in the checkout flow.

I hope this encourages you to get out there and talk to real users, find out about their context, and use that information to really elevate your search data. It’s not about having a big sample size or a huge survey. It’s much more about getting to the real-life experiences around your product or service that add depth to the data you have. In doing that, hopefully you’ll be able to increase some conversions and maybe even improve behavioral metrics, those UX metrics that, I don’t know, theoretically could lead to higher organic visibility anyway.

That’s all for now. Thanks so much. Take care.

Video transcription by Speechpad.com


from Blogger http://jake-bennett-business-blog.blogspot.com/2016/09/how-to-support-data-with-real-life.html