Published: July 2017
In the world of Wi-Fi troubleshooting, two metrics dominate conversations: RSSI (Received Signal Strength Indicator) and SNR (Signal-to-Noise Ratio). While they often appear together in dashboards and support tools, understanding their differences is critical to diagnosing wireless issues correctly. Misinterpreting them leads to wasted effort, unnecessary site visits, and poor user experiences.
By 2017, Wi-Fi engineers and support teams have widespread access to signal diagnostics via controller dashboards, mobile tools, and even end-user apps. But all too often, tickets still state “low signal” based on RSSI alone—without consideration for SNR, which more accurately reflects signal quality. Let’s explore how each metric works and how they interact in practice.
RSSI measures the strength of the signal as received by the client device. The 802.11 standard leaves RSSI as a vendor-relative index, but most tools report it in negative dBm values, usually ranging from -30 dBm (excellent) to -90 dBm (unusable). Many vendors also normalize it onto a 0–100 scale, but the raw dBm value is more consistent and useful for engineers.
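As a rough illustration, here is how a tool might bucket raw dBm readings and apply one common normalization heuristic (a linear map of -100…-50 dBm onto 0–100). The exact thresholds and mapping vary by vendor; these values are illustrative only:

```python
def rssi_quality(rssi_dbm: int) -> str:
    """Rough quality buckets for a raw RSSI reading in dBm (illustrative thresholds)."""
    if rssi_dbm >= -50:
        return "excellent"
    if rssi_dbm >= -60:
        return "good"
    if rssi_dbm >= -70:
        return "fair"
    return "poor"

def rssi_to_percent(rssi_dbm: int) -> int:
    """One common heuristic: map -100..-50 dBm linearly onto 0..100, clamped."""
    return max(0, min(100, 2 * (rssi_dbm + 100)))
```

Note that the percentage hides information: -52 dBm and -48 dBm both read "100%", which is one reason engineers prefer the raw dBm value.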
What RSSI does well:
- Mapping coverage: RSSI is the natural metric for validating AP placement and cell edges during site surveys.
- Informing roaming: clients use received signal strength as a primary input when deciding which AP to associate with.
- Quick triage: a very weak reading is a reliable sign of a genuine coverage gap.
But RSSI has critical limitations. It doesn’t account for interference, environmental noise, or signal corruption. A strong signal in a noisy environment is still a bad signal. And this is where SNR comes in.
SNR is the ratio of received signal power to background noise. Because both are measured on the logarithmic dBm scale, it works out to a simple subtraction, expressed in dB; higher values indicate a cleaner, more usable connection. A client with -60 dBm RSSI and a -90 dBm noise floor has an SNR of 30 dB, which is excellent. If the noise floor rises to -70 dBm, the same RSSI yields an SNR of just 10 dB, enough to cause retransmissions and latency.
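The arithmetic in the example above is just subtraction, since both values already live on a logarithmic scale:

```python
def snr_db(rssi_dbm: float, noise_floor_dbm: float) -> float:
    """SNR in dB is the gap between signal and noise floor, both in dBm."""
    return rssi_dbm - noise_floor_dbm

# The two scenarios from the text:
snr_db(-60, -90)  # quiet environment: 30 dB
snr_db(-60, -70)  # noisy environment: 10 dB, same RSSI
```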
Why SNR matters:
- It determines usable data rates: higher-order modulation needs a cleaner signal, so low SNR forces clients down to slower rates.
- It predicts real-world behavior: retransmissions, latency, and jitter track the noise floor, not bars on a screen.
- It captures what RSSI ignores: interference and environmental noise that corrupt an otherwise strong signal.
Vendors vary in how they report SNR, but most modern platforms highlight it in client health or AP association summaries. It’s especially critical in high-density environments like schools, hospitals, or stadiums—where spectrum reuse and interference dominate.
A classic support trap: a user calls IT and says “I have full bars, but my Zoom call keeps freezing.” The dashboard shows RSSI of -55 dBm—great. But SNR is only 9 dB due to adjacent AP overlap, microwave interference, or neighboring SSIDs. The result? A strong signal that’s nearly useless.
Another example: A hallway shows solid RSSI across its length, but users report packet loss. An onsite visit reveals an unshielded HDMI extender emitting wideband noise—dropping SNR to single digits. Raising AP power doesn’t help; only removing the interference restores function.
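Both anecdotes reduce to the same check: the signal level looks healthy, but the gap to the noise floor does not. A minimal sketch of that check follows; the function name, the -67 dBm coverage threshold, and the 20 dB SNR floor are illustrative defaults, and the -64 dBm noise figure is an assumption consistent with the 9 dB SNR in the Zoom anecdote:

```python
def misleading_signal(rssi_dbm: float, noise_dbm: float,
                      rssi_ok_dbm: float = -67, snr_min_db: float = 20) -> bool:
    """True when RSSI alone looks fine but the SNR tells a different story."""
    snr = rssi_dbm - noise_dbm
    return rssi_dbm >= rssi_ok_dbm and snr < snr_min_db

# The "full bars, frozen Zoom" case: -55 dBm RSSI, assumed -64 dBm noise -> SNR 9 dB
misleading_signal(-55, -64)   # True: strong signal, unusable link
misleading_signal(-55, -90)   # False: strong signal, clean spectrum
```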
1. Use RSSI to define coverage zones: Validate that clients see -67 dBm or stronger for standard data and -65 dBm or stronger for voice.
2. Set SNR thresholds in monitoring tools: Alert when SNR drops below 20 dB. Anything below 15 dB deserves investigation.
3. Conduct spectrum analysis: Use tools like Ekahau or AirMagnet to visualize noise sources and identify non-802.11 interferers (microwave ovens, Zigbee, DECT phones).
4. Educate support teams: Help frontline staff understand that “5 bars” does not equal performance. Build SNR into scripts and user guides.
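The thresholds in item 2 translate directly into a monitoring hook. A minimal sketch, using the 20 dB alert and 15 dB investigation levels from the list above:

```python
def snr_alert(snr_db: float) -> str:
    """Classify a client's SNR against the monitoring thresholds (20 dB / 15 dB)."""
    if snr_db < 15:
        return "investigate"  # link quality is actively degrading traffic
    if snr_db < 20:
        return "warn"         # below target; watch for retransmissions
    return "ok"

snr_alert(30)  # "ok"
snr_alert(17)  # "warn"
snr_alert(9)   # "investigate"
```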
RSSI is foundational, but incomplete. SNR reveals the true experience. Together, they offer the full picture—but only when interpreted correctly. Engineers who use both metrics avoid false positives, streamline diagnostics, and deliver better outcomes for users relying on Wi-Fi to stay connected and productive.