
How Star Ratings Provide a Better Experience for Users & Increase Visibility into Data

June 15, 2021 by João Rosa


Let’s set the scene: It’s 2020.


All of a sudden, you’re asked to leave your familiar, comfortable office—with desk phones, business-level Internet connections and finely-tuned Wi-Fi—to work from home.  


Home: Where the quietest spot usually has the worst internet connection.  


Home: Where you're constantly interrupted by the construction happening in your neighbor's yard, and getting much better acquainted with the mute button.  


Home: Where you’re competing with pre-teens for internet, and negotiating timing so your video streams don’t drop.  


Conference rooms, office internet connections, and desk phones became temporarily obsolete—and you became your own IT expert, running a home office that demands corporate-level audio and video streaming conditions.  


It’s pretty easy to deduce that all of these new stressors, skills, and hardware demands could lead to an uptick in bad experiences and frustration. At a time like this—and moving forward—our Fuze administrators needed more visibility into how their users were coping, so they could proactively solve end users’ work-from-home issues.  


At Fuze, we love our IT administrators. We know what an essential part of the Fuze experience they are, and how hard they work to help their users and organization get the most out of Fuze. Being able to identify potential user issues, proactively troubleshoot, and provide training and support to end-users is one of the most powerful services we can provide to our administrators—and the shift to full time remote work highlighted this like never before.  


Knocking on IT’s Door


At Fuze, we are constantly gathering platform performance data, both qualitatively and quantitatively. After each call or meeting, users are prompted to rate their experience on a scale of one to five stars (one being very problematic, five being excellent), and users have the ability to add context around any issues that may have occurred. We use this data to identify and proactively improve platform stability and UX issues.
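To make the data we collect concrete, here is a minimal sketch of what one of these post-event feedback records might look like. The names and fields are illustrative, not Fuze’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class EventRating:
    """One post-call/meeting feedback prompt result (illustrative schema)."""
    event_id: str
    occurred_at: datetime
    stars: Optional[int] = None  # 1 (very problematic) .. 5 (excellent); None if the prompt was dismissed
    comment: str = ""            # optional free-text context from the user

    def __post_init__(self):
        if self.stars is not None and not 1 <= self.stars <= 5:
            raise ValueError("stars must be between 1 and 5")
```

Note the `stars` field is optional: as discussed below, most prompts go unanswered, and a record of “no rating” is itself a meaningful part of the dataset.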




With the shift to work from home, we began to see the value of this data in a new light and for a new purpose: it’s a communication from an end-user to their administrator, an early warning sign or ask for help. “Please, my audio doesn’t work and I have no idea why” or “My network can’t keep up—what’s going on?”


In the old world of the office, our users would knock on IT’s door and ask for help with their software problems. We realized that in the world of remote work, star ratings became the digital “knock” at the IT admins’ office door. They were an invitation for an IT partner to jump in and help solve their issues, and it was our job to make sure their administrators would be there to answer the door.




Proactive Support


Delivering user feedback directly to administrators creates a proactive and personal support channel. Admins can reach out right when the frustration glass starts to fill, before it overflows. As a product team, we set out to understand the overall mood of our user base, why individuals might be struggling, and how they might benefit from some personalized help.


Proactive support keeps frustrations from spilling over—something we want to avoid here at Fuze.



A star rating is a powerful data point when talking about satisfaction and frustration. It’s as close to the source as possible—asking real people about real events, immediately after the fact, in a simple and direct manner, without leading questions or forced interaction. When a user goes out of their way to rate an event, it means they feel strongly enough about it to go the extra mile (or extra click) to let you know what’s going on. This is invaluable data for a UX team, and for an IT administrator who can reach out and troubleshoot.


Experiencing audio issues? Let’s make sure your headset’s Bluetooth is set up correctly.


Your meeting dropped? Let’s check your router.


A simple rating and some context makes for the perfect raised hand to signal IT for help. But is it too simple?  


Data Flaws = Design Challenge


Picture this: your organization has positive ratings all around and everyone is having good experiences. A couple of bad ratings pop up and you, the proactive and empowered IT admin, drill down to see what’s going on. At a glance, you may see a satisfaction trend, and act quickly and confidently on downturns by targeting the individuals reporting issues in real time.


But here is the flaw with this assumption: we discreetly and unobtrusively collect ratings in the Fuze apps (specifically Fuze Desktop & Web). By design, the prompt is out of the way and vanishes relatively quickly if the user doesn’t interact with it. It’s also dismissible, and can even be turned off entirely in the user or organization settings. This behavior is intentional: we believe Fuze should never get in the way of people doing their best work—and we’re not about to change that mantra any time soon.


But what does this mean for our rating data? The customizable, unobtrusive nature of the prompt does make “star ratings” an inconsistent dataset on a good day, and an empty set on a bad one. Maybe our perfect solution wasn’t as perfect as we thought.


Our Process


Creating a complete experience from inconsistent data, and making sure visualizations of that data tell the true story is a design challenge. Here are some principles that guided us to the answers to some of the toughest UX questions out there.


Using Real Data


We looked at real user data to get a sense of the possible pitfalls in exposing it. We used this data in the first iterations of the visual design process, creating “mock” graphs and visualizations to closely resemble a possible “final product”. This helped the team notice the problems that an “at-a-glance” check could cause, like misunderstanding the true volume of ratings relative to the volume of events, or having a single data point drown out all the others.


Talking to Real Users


With a preliminary draft, we interviewed users with specific experience troubleshooting problems from raw data. I know it sounds like UX 101, but I can’t overstate how much informed insight helps tailor a tool for a specific purpose. These interviews transformed our data into an empowering set of indicators that help administrators take action.


Defining Thresholds


“Users with the worst overall average ratings are considered at risk.”


Imagine a user with 1,000 events in a month, where 998 have no rating and 2 have negative ratings. They would have a terrible average! But with only 2 in 1,000 events (0.2%) rated badly, their actual experience is probably quite positive, or at the very least fair and not noteworthy most of the time. To tell a more accurate story, we chose to use the total number of ratings to define what’s “at risk” in the first iteration.
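The arithmetic above can be sketched in a few lines. This is an illustrative calculation, not Fuze’s production logic, and the thresholds are made up for the example—it simply contrasts the naive average with a rule based on how many events were actually rated poorly:

```python
def naive_average(ratings):
    """Mean of the ratings that exist, ignoring unrated (None) events."""
    rated = [r for r in ratings if r is not None]
    return sum(rated) / len(rated) if rated else None

def at_risk(ratings, min_low_ratings=3, low_threshold=2):
    """Flag a user only when enough events were rated poorly.

    Thresholds here are illustrative, not Fuze's real values.
    """
    low = sum(1 for r in ratings if r is not None and r <= low_threshold)
    return low >= min_low_ratings

# The user from the example: 1,000 events, 998 unrated, 2 rated one star.
ratings = [None] * 998 + [1, 1]
print(naive_average(ratings))  # 1.0 -- a "terrible" average
print(at_risk(ratings))        # False -- only 2 of 1,000 events (0.2%) were rated badly
```

The naive average screams “at risk,” while counting low ratings keeps the 0.2% of bad events in proportion to the other 99.8%.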


Emphasizing What Matters


With real data insights and interviews in hand, we took on the task of telling the true data story, highlighting the most useful pieces for the admin and helping them identify where to spend their time and what the biggest addressable issues were in their user base. To achieve this, we connected the organization’s “big picture” to individual user insights.


Keeping it Simple


I think it was Albert Einstein who said, “If you can't explain it simply, you don't understand it well enough.” That’s our user experience mantra here at Fuze. Here are some ways we achieved simplicity for this feature:

  • Broke concepts into multiple, simple graphs painting a clear, concise picture.
  • Gave special attention to interactivity when outliers in the data could skew the graphs.
  • Provided toggles and filters for personalized views, so users could draw their own conclusions and focus on what they feel is notable.


Special care was taken for interactivity and customization, so that all of our customers can get the most out of this feature.



For all Users and Abilities


At Fuze, we make it a priority to create inclusive and accessible tools that work for all users and abilities. Here are some of the unique measures we took for this feature:

  • Each graph can also be viewed as a table, making it easier for screen readers to work with the raw data values.
  • Color accessibility and contrast for shapes and type—a must have for graphs!


Let us know what you think! Star Ratings is just the first step in our troubleshooting story, and for now we are proud to say that users are knocking at the door and our IT admins are there to answer it. We set up this survey for IT admins who have used this feature. We’d love to know what you think of it and what you’d like to see next.

João Rosa

João is a Product Designer at Fuze, leading our administrative experiences & tools as well as our Design Systems.

