If you’ve ever pulled a Google Ads report, looked at Store Visits, then compared it to what actually happened in your stores, you’ve probably had this moment:
“Wait…that can’t be right.”
According to Google, visits are up 22%, but your people counter shows a slight dip.
So, what’s going on?
Short answer: Google Store Visits are not a literal count of people walking through your doors. They’re modeled estimates. Directional. Useful in the right context. Dangerous in the wrong one.
Let’s break down why they’re noisy, where they fall apart, and how smart teams actually use them without letting them wreck optimization or credibility.
First: What Google Store Visits Actually Are (And Aren’t)
A common misconception is that Store Visits are some kind of invisible door counter.
They’re not.
Google doesn’t see every person who walks into your store. It sees a subset of users:
- Who have location history turned on
- Who are carrying a device Google can observe
- Who generate usable GPS, Wi-Fi, or Bluetooth signals
- Who meet Google’s privacy thresholds
From that partial signal, Google then models the rest using machine learning.
That means Store Visits are:
- Inferred
- Scaled
- Smoothed
- And filled in where data is missing
Google is pretty explicit about this if you read the fine print: Store Visits are meant to be directionally accurate, not exact counts. They’re not designed to reconcile cleanly with POS data, people counters, or turnstiles.
If you expect them to match perfectly, you’re setting yourself up for frustration.
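To make the extrapolation concrete, here’s a minimal sketch of the basic idea: scale up the visits you can observe by the share of visitors you believe you can see. The numbers and the coverage rate are entirely hypothetical, and Google’s actual model is far more complex than a single division, but the core dependency is the same: the estimate is only as good as the coverage assumption.

```python
# Illustrative sketch (NOT Google's actual model): extrapolating total
# visits from a partially observed panel. All numbers are hypothetical.

observed_visits = 180    # visits seen from opted-in, observable devices
assumed_coverage = 0.12  # assumed share of all visitors that are observable

# Scale the observed subset up to an estimate of the whole population.
estimated_total = observed_visits / assumed_coverage
print(round(estimated_total))  # 1500
```

Notice that if the true coverage in your market is 0.10 instead of 0.12, the same observed data yields 1,800 estimated visits instead of 1,500. That sensitivity is where much of the drift comes from.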
Why the Numbers Drift from Reality
Even when everything is “working”, there are structural reasons Store Visits diverge from what actually happens in stores.
Modeling Has Limits (Especially at Smaller Scales)
Because Google only sees a fraction of real-world visitors, it has to extrapolate.
That introduces:
- Sampling error
- Geographic bias
- Demographic skew
- Device-type skew
If location services adoption is lower in certain areas or among certain customer segments, Google fills the gaps with statistical assumptions. Sometimes those assumptions hold. Sometimes they don’t.
This is why Store Visits tend to look:
- More stable at large chains
- More volatile at individual locations
- More believable over months than days
At small scales, the math just gets shaky.
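A quick simulation shows why. Assume (these assumptions are mine, not Google’s) that each visitor is independently observable with some fixed probability, and the observed count is scaled back up. The same extrapolation that is tight for a chain-level total swings wildly for a single store:

```python
# Illustrative simulation: extrapolation noise at small vs. large scale.
# Coverage rate and visit counts are hypothetical.
import random

random.seed(7)

def estimate(true_visits, coverage=0.12):
    # Each visitor is independently "observed" with probability `coverage`,
    # then the observed count is scaled back up to an estimate.
    observed = sum(random.random() < coverage for _ in range(true_visits))
    return observed / coverage

small = [estimate(100) for _ in range(1000)]       # one quiet location
large = [estimate(100_000) for _ in range(1000)]   # a whole chain

def rel_spread(samples, truth):
    # Standard deviation of the estimates, relative to the true count.
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return (var ** 0.5) / truth

print(f"single store: about ±{rel_spread(small, 100):.0%} per period")
print(f"whole chain:  about ±{rel_spread(large, 100_000):.1%} per period")
```

Under these toy assumptions the single-store estimate bounces around by roughly a quarter of its true value, while the chain-level estimate stays within about a percent. Aggregation over locations and time is what makes the number believable.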
Dense Locations Create Attribution Confusion
Malls. Urban streets. Strip centers. Multi-tenant buildings.
These are Store Visit nightmare scenarios.
Google assigns a visit when it sees a device near your location that appears to stay long enough. But when stores share walls, entrances, or parking structures, things get fuzzy fast.
Edge cases pile up:
- Someone shops next door
- An employee clocks a full shift
- A delivery driver waits nearby
- A customer walks the mall but never enters your store
Even with dwell-time filters, misclassification happens, and it happens more often in exactly the environments where many retailers operate.
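A simple way to see the problem: any dwell-time rule is just a threshold, and a threshold can’t tell these cases apart. The rule and cutoff below are hypothetical, but they illustrate why the edge cases above slip through:

```python
# Hypothetical dwell-time rule (the threshold is illustrative, not
# Google's actual logic): count a nearby device as a "visit" if it
# stays at least MIN_DWELL_MIN minutes.
MIN_DWELL_MIN = 5

def looks_like_visit(dwell_minutes):
    return dwell_minutes >= MIN_DWELL_MIN

# Cases the rule cannot distinguish:
print(looks_like_visit(45))   # shopper next door, sharing a wall -> True
print(looks_like_visit(480))  # employee working a full shift     -> True
print(looks_like_visit(30))   # mall walker who never enters      -> True
print(looks_like_visit(2))    # quick real visit, missed          -> False
```

Real systems layer on more signals than dwell time alone, but in dense, multi-tenant environments the underlying ambiguity never fully goes away.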
Privacy Creates Blind Spots (By Design)
This part isn’t a bug; it’s a feature.
If someone:
- Turns off location tracking
- Uses airplane mode
- Restricts background app activity
- Opts out of location history
Google can’t observe that visit directly.
So again, the model fills in the blanks.
On top of that, Google enforces privacy thresholds. If there isn’t enough data for a store, campaign, or time window, Store Visits may:
- Be suppressed
- Appear late
- Swing wildly week to week
This is why low-volume locations often see numbers that feel random or “wrong”.
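The suppression behavior can be sketched as a simple gate (the threshold value here is an assumption for illustration; Google doesn’t publish the actual cutoffs):

```python
# Illustrative privacy-threshold logic. The cutoff is hypothetical --
# the real thresholds are not disclosed.
MIN_REPORTABLE = 50

def report(modeled_visits):
    # Below the minimum volume, the metric is withheld entirely
    # rather than reported as a small (potentially identifying) number.
    return modeled_visits if modeled_visits >= MIN_REPORTABLE else None

print(report(120))  # 120  -- reported normally
print(report(30))   # None -- suppressed for a low-volume store
```

A store that hovers near the threshold can flicker between reported and suppressed from week to week, which is one reason low-volume locations look erratic.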
Reporting Volatility
If you’ve ever seen Store Visits jump or dip in ways that don’t line up with reality, you’re not imagining it.
Modeled metrics are inherently volatile, especially:
- Over short time windows
- At the campaign or keyword level
- During seasonal transitions
And historically, Google has had platform bugs that affected Store Visits and Store Sales reporting for specific date ranges.
The tricky part? These issues don’t always come with alerts. If you’re not watching closely, they can quietly distort ROAS, CPA, and bidding decisions.
It’s a Black Box
You don’t get to see:
- The exact signals used
- How much weight each signal carries
- How credit is assigned to campaigns or keywords
- Where confidence intervals break down
That’s fine for directional insights. It’s risky when Store Visits are:
- Auto-enabled
- Set as a primary conversion
- Used to drive Smart Bidding decisions
In those cases, Google may optimize toward modeled visits that don’t actually align with your real revenue or store performance.
When that happens, teams chase a metric they can’t independently validate.
So, Are Google Store Visits Useless?
No, but they’re also not truth.
Used correctly, Store Visits are:
- A trend signal
- A directional indicator
- A way to compare channels or tactics at a high level
Used incorrectly, they become:
- A false source of confidence
- A noisy optimization target
- A credibility problem when numbers don’t line up internally
When Store Visits Work Best
Store Visits tend to be more reliable when:
- You’re a large chain
- Locations have high foot traffic
- You’re analyzing month-over-month or quarter-over-quarter trends
- You’re comparing relative lift, not absolute counts
They’re weakest when:
- You’re looking at individual stores
- You’re slicing by small campaigns or keywords
- You’re making week-to-week decisions
- You expect them to reconcile to POS exactly
The Smart Way to Use Them
The best teams don’t treat Store Visits as a single source of truth.
They use them alongside:
- POS transactions
- Loyalty data
- People counters
- Wi-Fi logins
- Third-party foot-traffic panels
When Store Visits move with those baselines, confidence goes up. When they diverge, teams know to ask questions instead of blindly optimizing.
Most importantly, they keep Store Visits directional, not definitive.
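That kind of cross-check can be as simple as comparing direction of movement, not absolute counts. Here’s a minimal sketch with made-up weekly numbers, flagging weeks where the modeled metric and the door counter disagree:

```python
# Hypothetical sanity check: do modeled Store Visits and a people counter
# move in the same direction week over week? All data is made up.
store_visits = [1200, 1350, 1100, 1500]  # modeled estimates per week
people_count = [950, 1010, 880, 1120]    # counted at the door per week

def weekly_direction(series):
    # "up"/"down" for each consecutive pair of weeks.
    return ["up" if b > a else "down" for a, b in zip(series, series[1:])]

modeled = weekly_direction(store_visits)
counted = weekly_direction(people_count)

for week, (m, c) in enumerate(zip(modeled, counted), start=2):
    flag = "ok" if m == c else "investigate"
    print(f"week {week}: modeled {m}, counted {c} -> {flag}")
```

Agreement on direction builds confidence in the trend; a divergence is a prompt to dig in, not a verdict on which number is right.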
The Bottom Line
Google Store Visits aren’t lying to you.
They’re modeled estimates, built from partial data, shaped by privacy constraints, and optimized for scale, not precision.
If you treat them like a door counter, you’ll lose trust fast. If you treat them like a trend signal and validate them against reality, they can still be useful.
The goal isn’t to get rid of Store Visits.
It’s to stop asking them to be something they were never designed to be.