Not so long ago, tracking a “United States” SERP could give you an accurate depiction of what a searcher would see, regardless of where in the country they were. Now, unless a searcher is actively hiding their whereabouts, Google always knows where they are and serves results that are heavily influenced by their precise surroundings.
In other words, the “national” or “market” level SERP is dead. Long live the “precise location” SERP.
Of course, even though we know this to be true, we wouldn’t be us if we didn’t validate it with a little (okay, a lot of) data. And when it comes time to set up your keyword tracking strategy, we want you to care as much about location as Google does.
Since proper keyword segmentation is essential to making sense of your SERP data (and more than one data point is always preferable when proving points), we divvied up our queries into two categories:

- Keywords with general intent
- Keywords with explicit local intent
Then came the tracking. We took each group of keywords, stuffed them into STAT, and tracked them in the centre of specific Portland and New York City ZIP codes, as well as in the English-speaking US market as a whole. So, for example, one SERP was geo-located to “10038 New York, NY” while another was only located as far as “US-en.”
Once we’d gathered all our SERPs (just over 600,000 of them), we did a bunch of side-by-side comparisons. We went keyword by keyword and looked at whether search results were present on both the ZIP code and market-level SERPs, and then if they appeared in the same order — is it here? Is it there? Are they both in rank four?
The answer to all of the above is: Yes, yes, and yes, okay, similar.
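If you want to run the same sanity check against your own ranking data, the comparison itself is simple. Here’s a minimal sketch, assuming each SERP is just an ordered list of its top 20 organic URLs (the function and sample data are hypothetical, not pulled from STAT):

```python
# Minimal sketch: compare a ZIP-code SERP against a market-level SERP.
# Each SERP is assumed to be an ordered list of its top organic URLs;
# the sample data below is made up for illustration.

def serp_similarity(local_serp, market_serp):
    """Return (% of shared URLs, % of URLs holding the same rank)."""
    shared = set(local_serp) & set(market_serp)
    shared_pct = len(shared) / len(local_serp) * 100

    same_rank = sum(1 for a, b in zip(local_serp, market_serp) if a == b)
    same_rank_pct = same_rank / len(local_serp) * 100
    return shared_pct, same_rank_pct


# Tiny made-up example: three of four URLs overlap, only one holds its rank.
zip_serp = ["a.com/pizza", "b.com/menu", "c.com/hours", "d.com/blog"]
us_serp  = ["a.com/pizza", "c.com/hours", "b.com/menu", "e.com/news"]
print(serp_similarity(zip_serp, us_serp))  # (75.0, 25.0)
```

The first number answers “is it here and there?”; the second answers “is it in the same spot?” Those two measures sit behind every percentage that follows.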
But the biggest question on our minds: Are market-level SERPs accurate enough to trust?
Diving into our general intent keywords first, we were surprised to find a somewhat high similarity between ZIP code and market-level SERPs.
For starters, they shared 83 percent of their top 20 organic URLs. Of course, while this is higher than we expected, it still means that if we track our keywords without putting a searcher somewhere on the map, we kiss 17 percent of real-life search results goodbye, which is a lot.
What important insights might we lose out on — are we ranking and don’t know it? Are we letting a competitor sneak up on us?
| Result type | Similarity: NYC vs. no location | Similarity: Portland vs. no location |
|---|---|---|
| Organic | 83.11% | 83.14% |
| SERP features | 73.69% | 67.02% |
When we looked at whether any of those results showed up in the same spot from one SERP to the next, things got even more concerning. Only 28 percent of organic results appeared in the same ranking position on both the local and national SERPs. So, even if we decide that doing without 17 percent of the results that our searchers see is fine by us, we can’t depend on the rank of the remaining results to be accurate.
Moving our attention over to SERP features, we found that only 70 percent appeared on both ZIP code and market-level SERPs, a lower share than we saw for organic results. Local packs are a good example of just how much you can miss: 31 percent of Portland’s SERPs and 27 percent of NYC’s returned a local pack, whereas only 12 percent of national SERPs produced one. So, even though these keywords don’t necessarily require a physical location, when Google knows that there’s a real searcher standing in a real spot, it will err on the side of local intent and adjust its results accordingly.
As for SERP feature rankings, they were only marginally more consistent than the organic results, with 33 percent appearing in the same position from SERP to SERP: hardly enough to make up for the larger share of mismatched results.
Next, it was time to look at our keywords with explicit local intent — how were they faring?
The answer was: worse. Much, much worse. Remember how we could count on about 83 percent similarity for our general intent keywords? Here, national and ZIP code SERPs were only 32 percent similar when we compared the organic results on each.
| Result type | Similarity: NYC vs. no location | Similarity: Portland vs. no location |
|---|---|---|
| Organic | 33.35% | 29.94% |
| SERP features | 23.46% | 20.09% |
So, if we only track at the national level for queries that are looking for local businesses, we get a SERP where 68 percent of the results are different from what a searcher actually sees. Things took another dramatic turn when we compared ranking positions: only four percent of organic results had the same rank.
And just like before, similarity took a big hit when it came to SERP features. Market and ZIP code-level SERPs only shared around 22 percent of their SERP features, and just over nine percent of those features showed up at the same rank.
Think about something like the jobs result type from Google’s perspective: it’s very specifically meant for hyper-local audiences. If you type [jobs] into Google (because this SEO business is just too much) and it doesn’t know where you are, what’s a search engine to do?
Not give you accurate local job listings, that’s what.
| Tracked location | % of SERPs returning a jobs result |
|---|---|
| 97204 Portland | 0.09% |
| 10038 New York | 0.10% |
| US-en market | 0.01% |
As you can see above, only 0.01 percent of our market-level SERPs returned a jobs result type compared with 0.09 percent of our Portland SERPs and 0.10 percent of our New York SERPs.
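If you’re recreating this kind of table from your own tracking data, the appearance rate is just a count over SERPs. Here’s a minimal sketch, assuming each SERP is represented as the set of result types it returned (the data below is invented for illustration):

```python
# Minimal sketch: what share of tracked SERPs contain a given result type?
# Each SERP is assumed to be a set of the feature types it returned;
# the sample data is made up for illustration.

def appearance_rate(serps, feature):
    """Percentage of SERPs containing the given feature type."""
    if not serps:
        return 0.0
    hits = sum(1 for features in serps if feature in features)
    return hits / len(serps) * 100

portland_serps = [{"local_pack", "jobs"}, {"local_pack"}, set(), {"images"}]
print(f"{appearance_rate(portland_serps, 'jobs'):.2f}%")  # 25.00%
```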
So, what does all of this mean?
To sum up: you need to track hyper-locally if you want to nab accurate results. After all, one SERP that searchers actually see in the hand is worth two make-believe market-level SERPs in the bush.
Want a detailed walkthrough of STAT? Say hello (don’t be shy) and request a demo.