Made radio-frequency performance visible in real time. I led end-to-end UX for a map-based analytics feature that plots call quality and signal strength across time and geography. The design unified multiple stakeholder needs—proactive monitoring and historical comparison—into one intuitive view with difference overlays, thresholds, and drill-downs. Released as part of a cloud platform for public-safety communications infrastructure, the feature now helps agencies detect coverage gaps faster and validate fixes with live telemetry instead of delayed reports.
In public safety, a missed radio call can mean a missed moment to save a life. Yet admins often learned about RF coverage issues after the fact, through delayed tickets or anecdotal reports, if they were reported at all. Despite rich telemetry from thousands of devices, there wasn’t an intuitive way to see performance on a map and compare how it changed over time.
Different stakeholder groups prioritized different lenses for RF data: some needed configurable, aggregate analysis to spot patterns, while others needed a continuously updated, mobile-friendly view for operational awareness. I aligned these needs into a single map experience that supports both proactive monitoring and historical comparison.
Led the end-to-end UX for a dynamic, map-based analytics feature that continuously plots radio performance (RSSI, BER) across geography and time.
Aligned stakeholder needs and delivered clarity for users operating in complex, high-risk environments.
I reviewed stakeholder input, prior learnings, and public references. The patterns were consistent:
Admins struggled to pinpoint when and where issues occurred.
Reports were delayed, inconsistent, or anecdotal.
Drive testing was costly and reactive.
These gaps made it hard to separate device issues from network coverage problems, which led to guesswork and delayed fixes.
A typical path looked like this: a first responder misses a call; it’s reported much later (if at all); a technician tries to infer location/time/cause, then drives out to test, often without reproducing the issue. We needed a tool that proactively surfaces and validates signal anomalies, without waiting for user complaints.
Different groups prioritized different lenses for RF data:
Trend & analysis: configurable, aggregate views to spot patterns.
Operational awareness: a continuously updated, mobile-friendly view for day-to-day monitoring.
I facilitated workshops to synthesize these needs and defined a unified interaction model so the same map supports both historical comparison and ongoing monitoring.
Grounded in user and organizational goals:
“I need to see where signal quality drops below acceptable levels in a continuously updated view.”
“I want to compare this week’s coverage to last week’s to spot changes.”
“I need to drill into an area to understand what’s driving issues there.”
Early concepts explored:
Time-range filters and quick presets
Zoomable geospatial overlays (hex-tiled regions)
A clear legend for signal-quality states (Good / Fair / Poor)
Threshold toggles to highlight areas breaching defined limits (the classification logic is sketched after this list)
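To ground the legend and threshold concepts, here is a minimal TypeScript sketch of classifying a signal sample into the Good / Fair / Poor states. The metric (RSSI in dBm), the numeric cutoffs, and names like `classifyRssi` are illustrative assumptions, not shipped values; the real ranges were calibrated with subject-matter experts, as noted below.

```typescript
// Illustrative only: actual thresholds were calibrated with RF subject-matter experts.
type SignalState = "good" | "fair" | "poor";

interface RssiThresholds {
  good: number; // at or above this dBm -> "good"
  fair: number; // at or above this dBm (but below good) -> "fair"; anything lower -> "poor"
}

// Example defaults; the UI exposed these as adjustable controls with presets.
const DEFAULT_THRESHOLDS: RssiThresholds = { good: -85, fair: -100 };

function classifyRssi(rssiDbm: number, t: RssiThresholds = DEFAULT_THRESHOLDS): SignalState {
  if (rssiDbm >= t.good) return "good";
  if (rssiDbm >= t.fair) return "fair";
  return "poor";
}

// Threshold toggle: highlight only areas whose signal falls below a user-defined limit.
function breachesLimit(rssiDbm: number, limitDbm: number): boolean {
  return rssiDbm < limitDbm;
}
```

Keeping classification a pure function of the sample and the user's thresholds is what makes it cheap to offer presets alongside manual input.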
I partnered with:
Subject-matter experts to calibrate meaningful ranges for key RF indicators in the UI
Product managers to prioritize scenarios
Engineering leads to validate feasibility and delivery paths
Design-system owners to establish reusable, accessible mapping patterns
Data partners to explore geospatial signals that could enrich the experience
I produced multiple clickable prototypes in Figma to evaluate:
Heatmap overlays vs. segmented region tiles (tile aggregation is sketched after this list)
Static vs. adjustable thresholds
Desktop vs. mobile interaction models
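The heatmap-versus-tiles comparison is really a question of how raw samples are aggregated before rendering. Below is a minimal sketch of tile aggregation under assumed types (`RfSample`, `TileStats`) and a simple square grid; the shipped design used hex tiles, which a real implementation would typically derive from a geospatial indexing library rather than hand-rolled keys.

```typescript
interface RfSample {
  lat: number;
  lng: number;
  rssiDbm: number;
  timestamp: number; // epoch milliseconds
}

interface TileStats {
  sampleCount: number;
  meanRssiDbm: number;
}

// Illustrative square-grid binning; the production design used hexagonal tiles.
function tileKey(sample: RfSample, cellSizeDeg = 0.01): string {
  const row = Math.floor(sample.lat / cellSizeDeg);
  const col = Math.floor(sample.lng / cellSizeDeg);
  return `${row}:${col}`;
}

function aggregateByTile(samples: RfSample[]): Map<string, TileStats> {
  const tiles = new Map<string, TileStats>();
  for (const s of samples) {
    const key = tileKey(s);
    const prev = tiles.get(key);
    if (!prev) {
      tiles.set(key, { sampleCount: 1, meanRssiDbm: s.rssiDbm });
    } else {
      const n = prev.sampleCount + 1;
      tiles.set(key, {
        sampleCount: n,
        // Running mean avoids holding every sample per tile in memory.
        meanRssiDbm: prev.meanRssiDbm + (s.rssiDbm - prev.meanRssiDbm) / n,
      });
    }
  }
  return tiles;
}
```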
Because interactive mapping patterns were limited, I:
Designed the legend and filter interactions for map views
Created color-blind-safe visualization palettes (sample tokens follow this list)
Documented components for reuse across products
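As one concrete example of the documented patterns, the signal-state palette can be captured as small design tokens. The hex values below are taken from the widely used Okabe-Ito color-blind-safe palette and are illustrative, not the shipped colors.

```typescript
// Illustrative tokens (Okabe-Ito palette). The underlying rule is the important part:
// never encode state by hue alone -- pair each color with a text label in the legend.
const SIGNAL_STATE_TOKENS = {
  good: { fill: "#009E73", label: "Good" }, // bluish green
  fair: { fill: "#E69F00", label: "Fair" }, // orange
  poor: { fill: "#D55E00", label: "Poor" }, // vermillion
} as const;

type SignalState = keyof typeof SIGNAL_STATE_TOKENS; // "good" | "fair" | "poor"

function legendEntry(state: SignalState): string {
  const token = SIGNAL_STATE_TOKENS[state];
  return `${token.label} (${token.fill})`;
}
```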
Walkthroughs with product and engineering validated feasibility and surfaced early refinements.
Design peers stress-tested readability, flow, and terminology. We learned:
Legends needed clearer explanations of the RF measures they encode
Threshold controls benefited from presets plus manual input
With representative users in mission-critical contexts, we evaluated scenarios like:
“Spot coverage regressions over the past month.”
“Find areas with elevated call failures last week.”
“Compare performance before and after a tower change.”
Key refinements
Added a Difference View to compare two timeframes visually (sketched after this list)
Introduced drill-down panels with area-level trends and stats
Made filters resettable with one click
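Under the hood, the Difference View reduces to a per-tile comparison of two aggregates. A minimal sketch, reusing the assumed `TileStats` shape from the earlier binning example; the function name and the choice of delta metric are illustrative.

```typescript
interface TileStats {
  sampleCount: number;
  meanRssiDbm: number;
}

// Positive delta = the comparison window is stronger than the baseline window.
interface TileDelta {
  deltaDbm: number;
  baselineSamples: number;
  comparisonSamples: number;
}

function diffTiles(
  baseline: Map<string, TileStats>,
  comparison: Map<string, TileStats>,
): Map<string, TileDelta> {
  const result = new Map<string, TileDelta>();
  const keys = new Set([...baseline.keys(), ...comparison.keys()]);
  for (const key of keys) {
    const b = baseline.get(key);
    const c = comparison.get(key);
    result.set(key, {
      // Tiles present in only one window get NaN so the UI can flag "no comparison"
      // instead of silently hiding a new or vanished coverage area.
      deltaDbm: (c?.meanRssiDbm ?? NaN) - (b?.meanRssiDbm ?? NaN),
      baselineSamples: b?.sampleCount ?? 0,
      comparisonSamples: c?.sampleCount ?? 0,
    });
  }
  return result;
}
```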
“Seeing coverage like this makes it easier to justify where we spend our time. We’ve wanted something like this for years.” — System administrator
| Feature | What it does | Why it matters |
|---|---|---|
| Continuously Updated Map | Plots call-quality and signal-strength data on a geographic map. | Surfaces problems as they emerge. |
| Historical Views | Select time windows (e.g., last 7 days) to view trends over time. | Helps detect slow degradation and validate long-term fixes. |
| Difference View | Compare any two date ranges visually on the map. | Powerful for regression detection or post-fix verification. |
| Threshold Alerts | Emphasize areas breaching defined limits. | Enables proactive, user-defined monitoring. |
| Drill-Down Panels | Reveal contextual details and trends for the selected region. | Makes the map actionable with less guesswork. |
| Accessible Palette | Color-blind-friendly visualization. | Ensures every admin can interpret signal quality at a glance. |
| Responsive Layouts | Optimized for desktop and field use. | Enables monitoring outside the control room. |
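As a complement to the table, the Historical Views and Drill-Down Panels rows boil down to filtering telemetry to a selected area and time window, then summarizing it per day. A minimal sketch with hypothetical types and names (`RfSample`, `dailyTrend`, an `inArea` predicate supplied by the map layer); the shipped implementation differed in detail.

```typescript
interface RfSample {
  lat: number;
  lng: number;
  rssiDbm: number;
  timestamp: number; // epoch milliseconds
}

// Daily mean RSSI for samples inside a selected area and time window.
// `inArea` is whatever spatial predicate the map provides (tile membership, polygon hit-test, ...).
function dailyTrend(
  samples: RfSample[],
  inArea: (s: RfSample) => boolean,
  windowStartMs: number,
  windowEndMs: number,
): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const s of samples) {
    if (s.timestamp < windowStartMs || s.timestamp >= windowEndMs || !inArea(s)) continue;
    const day = new Date(s.timestamp).toISOString().slice(0, 10); // "YYYY-MM-DD" (UTC)
    const bucket = sums.get(day) ?? { total: 0, count: 0 };
    bucket.total += s.rssiDbm;
    bucket.count += 1;
    sums.set(day, bucket);
  }
  const trend = new Map<string, number>();
  for (const [day, { total, count }] of sums) {
    trend.set(day, total / count);
  }
  return trend;
}
```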
Delivered production designs and specifications that were adopted in the live release of the RF Analytics Map. Early user evaluations and internal telemetry reviews indicated:
Faster pinpointing of coverage gaps
Fewer manual drive tests and anecdotal tickets
Clearer visibility into network health trends
Improved collaboration between RF engineers and operations teams
Established reusable geospatial patterns for future products
Raised accessibility standards for map-based data visualization
Aligned stakeholders via shared scenarios and a single interaction model
This project transformed an ambiguous, data-rich problem into a live, high-impact product.
It sharpened how I:
Align divergent needs through shared user stories
Translate domain-specific data (RF metrics) into actionable UI
Design for accessibility, scale, and clarity in complex tools
Ultimately, thoughtful map-based UX made RF performance observable and actionable when it mattered most—now powering live visibility across thousands of radio systems.
Authenticity & Confidentiality
This case study reflects real design work. Certain labels, visuals, and data are anonymized, generalized, or reconstructed from public references. No non-public, confidential, or proprietary information, nor any third-party proprietary information, is disclosed.