Having a customer satisfaction survey doesn’t mean you have insight into what frustrates your customers. The score from your key performance indicator, whether it’s a Net Promoter Score, Customer Effort Score, or something else, is a nice barometer, but it doesn’t explain itself. To act on the score effectively, we need some context from the customer. Not all follow-up questions are created equal. If you’re not careful, asking the wrong follow-up questions can degrade the survey experience and the entire customer experience.
The traditional method of getting more specific details is to ask a series of rating scale questions, each evaluating a different attribute assumed to be important to the experience. For instance, transactional surveys in the fast-food industry might begin with an overall performance or satisfaction question and then ask “Please rate your satisfaction with the friendliness of the crew,” “Please rate your satisfaction with the accuracy of your order,” and so on. For each of these follow-up questions, respondents are presented with the same 5-point scale, which might range from “highly dissatisfied” to “highly satisfied.”
This method of collecting feedback demands a ton of work from the respondent. Consider employee performance appraisals; many of them ask managers to rate their employees on similar scales. Many managers find these ratings tedious because it takes so much effort to evaluate employees on so many different attributes thoughtfully. Forcing customers to evaluate us this way is equally demanding on their time.
If a customer isn’t motivated to take a survey, they might abandon it when they see a long list of questions. If respondents are motivated by a reward offered at the end of a survey, they might hastily answer every question with the same rating. Perhaps the customer is taking a survey because they have something particular they want to share. If it’s a complaint or unresolved issue, they’re likely to mark the lowest level of satisfaction for everything to get to an open-ended question where they can write freely. Each of these problems reduces the value of the follow-up questions; they aren’t effective in these cases.
Even if customers faithfully answer every question as if they were under oath, taking action based on the results can still be challenging. An average of the ratings is often the go-to method of analyzing these types of responses, but averages don’t tell the whole story. Wildly different response sets can produce the same average: a batch of straight 3s and a batch split evenly between 1s and 5s both come out to 3. Averages don’t provide a clear picture of the experience or how frequently problems are occurring.
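To make the problem concrete, here’s a minimal sketch (mine, not the author’s) showing two very different sets of 5-point ratings that produce identical averages:

```python
# Two response sets with the same mean but very different stories.
from collections import Counter
from statistics import mean, stdev

consistent = [3, 3, 3, 3, 3, 3]  # every customer lukewarm
polarized = [1, 1, 1, 5, 5, 5]   # customers split between angry and delighted

for name, ratings in [("consistent", consistent), ("polarized", polarized)]:
    print(f"{name}: mean={mean(ratings):.1f}, "
          f"spread={stdev(ratings):.2f}, "
          f"distribution={dict(Counter(ratings))}")

# Both sets average 3.0, yet the experiences behind them are nothing alike.
```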
One of the most noticeable shifts in modern customer surveying is to ask a single rating scale question followed by a single open-ended question, often asking why a respondent chose a particular score. This method attempts to alleviate the problems caused by a long series of rating scale questions, and in general, it makes responding much easier. When customers have a strong opinion that led to a particular score, they’re often glad to share that with us. What about customers who aren’t that passionate?
When designing customer surveys, we can’t focus only on highly engaged customers, the ones who love us or hate us. There are many customers in the middle! Assuming we’re lucky enough for them to take our survey, it’s not easy for them to tell us why they’re indifferent. They might not even have a reason in mind. These customers aren’t the ones writing essays about why we’re awesome or terrible, and they need a little nudge to help them share meaningful feedback.
Consider the last time you used a water cooler or water fountain in your office. It’s something we often do without much thought. If I asked how to make the experience better, would you have much to say? If not, that’s alright; not everyone needs to have a strong opinion about everything! (Full disclosure: I don’t take that advice myself.) If it were your job to improve water cooler experiences, though, you might be a little frustrated. Because no one gives it a second thought, it can be hard to discover pain points.
I’ll help you out. Choose any of the following that affect your enjoyment of the water cooler: provided drinkware, bottle compatibility, temperature, flavor, purity, height, location, or ease of operation. That could have been eight tedious rating scale questions, but you’ve finished already! Best of all, if you didn’t have a strong opinion before, this might have jogged your memory about a lingering frustration.
I implement the same concept in transactional surveys using checkbox or multiple-selection questions. It’s as easy as replacing a rating scale for “How satisfied were you with the technician’s courteousness and professionalism?” with “Did any of the following affect the rating you selected? (Choose all that apply.)” and then listing “Technician courteousness and professionalism” as one of many choices. I also allow the respondent to write in their own choice. No, we don’t learn any more from particularly happy or angry customers; they’re going to write us a thorough account regardless. But it does make responding easier for them, effectively getting out of the way of their original purpose for taking our survey. And for customers who haven’t spent much time contemplating their apathy towards us, it makes it easier to point us in the right direction. It’s more respectful of everyone’s time and purpose for responding.
Unlike unhelpful averages, the response count for each attribute makes it easy to identify causes of friction and spot trends over time. There are no more guessing games about whether a 4 out of 5 is good or bad; if customers select an attribute, it clearly matters to them. During a busy time of year, the number of times customers select an option representing speed of service might jump significantly. That jump stands out more than a change in an average, and it makes it easier to quantify the impact on customers. You can even monitor the percentage of responses in which each attribute was selected and compare that rate across different reporting periods, as in the sketch below.
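As an illustration (the attribute names and data here are hypothetical, not from the article), tallying checkbox responses into counts and selection rates might look like this:

```python
# Turn "choose all that apply" responses into counts and selection rates.
from collections import Counter

# Each response is the set of attributes one customer ticked.
responses = [
    {"speed of service", "order accuracy"},
    {"speed of service"},
    set(),  # ticking nothing is a valid response, too
    {"crew friendliness"},
]

counts = Counter(attr for resp in responses for attr in resp)
total = len(responses)

for attr, n in counts.most_common():
    print(f"{attr}: selected {n} times ({n / total:.0%} of {total} responses)")
```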
The best part of using the checkbox method, as opposed to a series of rating scales, is that customers don’t feel compelled to check every box. Rating scales expect a response (or N/A), so customers feel the need to assign a rating to every attribute, whether it’s accurate or not. By allowing customers to pick one or more factors from a list, they pinpoint the most significant causes of frustration. When the feedback we receive is specific and targeted, we can respond to it more deliberately and precisely.
The survey experience is part of the overall customer experience, and it’s important to design surveys in a way that respects customers’ time and effort. We’re asking them for a favor, after all. Following up the key performance question in the right way has a big impact on how customers feel about your survey and on how much, and how quickly, you can learn from it. Asking the right questions is better for customers, and it’s better for the business.
Andrew Gilliam is an HDI-certified IT Support Center Analyst at a public university, an ICMI Featured Contributor, and one of ICMI’s Top 50 Customer Experience Thought Leaders to follow on Twitter. He speaks and writes about Voice of the Customer strategy, employee and customer experience innovation, and contact center best practices. Andrew has developed employee portals, created effective surveys, and built silo-busting collaboration systems. Learn more at andytg.com, follow @ndytg on Twitter, and connect with him on LinkedIn.