Most service and support organizations make improvements only incrementally, through trial and error. As a result, fewer than 10% ever achieve world-class performance. We define a world-class support organization as one that meets the following performance criteria:

- Top-quartile performance on customer satisfaction
- Bottom-quartile performance on cost per ticket
What world-class performers have in common is that almost all of them benchmark their performance at least once a year. So rather than improving incrementally, at an evolutionary pace, these top performers leverage benchmarking to build upon the industry’s proven best practices. This allows them to bypass the incrementalism associated with trial-and-error improvement, thereby achieving world-class performance at a revolutionary pace!
When it comes to KPIs for service and support, less really is more! Although there are literally hundreds of service and support metrics, the short list of metrics that really matter for a level 1 service desk includes the following:

- Cost per Ticket
- Customer Satisfaction
- Agent Utilization
- First Contact Resolution Rate
- Agent Job Satisfaction
- Balanced Score
These metrics represent the 80/20 rule for service desk KPIs: 80% of the value you receive from performance measurement and management in your service desk can be derived from these six simple metrics! With these metrics, you can measure and manage a service desk very effectively. The same principle applies to desktop support, where the short list of metrics that really matter includes those listed above for the service desk, plus one additional metric: Mean Time to Resolve (MTTR), which measures the elapsed time from when a ticket is opened until it is closed.
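Because MTTR is defined by elapsed calendar time rather than agent handle time, it can be computed directly from ticket open and close timestamps. A minimal sketch in Python (the ticket records are illustrative, not drawn from any particular ticketing tool):

```python
from datetime import datetime
from statistics import mean

def mean_time_to_resolve(tickets):
    """Average elapsed time, in hours, from ticket open to ticket close.

    `tickets` is a list of (opened_at, closed_at) datetime pairs.
    Note that MTTR counts elapsed wall-clock time, not handle time.
    """
    hours = [(closed - opened).total_seconds() / 3600
             for opened, closed in tickets]
    return mean(hours)

# Hypothetical tickets: one resolved in 8 hours, one in 24 hours.
tickets = [
    (datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 2, 17, 0)),
    (datetime(2023, 1, 3, 10, 0), datetime(2023, 1, 4, 10, 0)),
]
print(mean_time_to_resolve(tickets))  # → 16.0 hours
```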
Almost all support organizations track some KPIs. Likewise, almost all support organizations do some reporting. But very few go beyond this “observational mode” with their KPIs and actively leverage their metrics for continuous improvement. Yet the true value of KPIs can only be unlocked when they are used not just to track and trend performance, but also to diagnose strengths and weaknesses and then act upon those insights to close or mitigate any performance gaps.
In my experience, there are two primary barriers that prevent support organizations from moving beyond metrics as a mere visual tool to leveraging metrics for world-class performance. The first barrier is that most do not understand the cause-and-effect relationships of the KPIs. The second barrier is that most do not know how to establish performance targets because they don’t have any idea what constitutes poor, acceptable, above average, or outstanding performance. But there is a straightforward solution to both dilemmas.
The KPIs for service and support are interconnected through a linkage that is illustrated in the cause-and-effect diagram below. The metrics in red are the ones on our short list of metrics that really matter. This linkage demonstrates that any time one metric moves, they all move! This is a critical concept to understand, as it empowers you to effect positive change in your support organization. Increasing agent utilization, for example, will reduce your cost per ticket. Likewise, increasing first contact resolution rate will increase customer satisfaction. And if you are trying to improve agent job satisfaction (and hence morale), you can achieve that objective through training, coaching, and career pathing.
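The utilization-to-cost linkage can be made concrete with one line of arithmetic. If utilization is the fraction of paid agent time spent handling tickets, then paid hours per ticket equal handle time divided by utilization, so cost per ticket falls as utilization rises. A sketch under that assumption (the rate and handle time below are hypothetical):

```python
def cost_per_ticket(hourly_rate, handle_hours, utilization):
    """Paid hours per ticket = handle time / utilization, so
    cost per ticket = hourly labor cost * handle time / utilization."""
    return hourly_rate * handle_hours / utilization

# Hypothetical figures: $30/hour burdened rate, 15-minute handle time.
# Raising utilization from 40% to 50% cuts cost per ticket by 20%.
print(cost_per_ticket(30.0, 0.25, 0.40))  # → 18.75
print(cost_per_ticket(30.0, 0.25, 0.50))  # → 15.0
```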
What about establishing performance goals and then achieving those goals? The empirical evidence suggests that benchmarking is the single most effective tool for realizing the benefits of continuous service improvement and ultimately achieving world-class performance. In fact, there is a virtual one-to-one correspondence between support organizations that are world-class and support organizations that benchmark their performance annually.
Benchmarking is a well-established tool for measuring and managing your performance. Effective benchmarking enables an organization to quantify its performance, compare itself to others in the industry, identify performance gaps, and define the actions necessary to close the gaps. Although benchmarking is a rigorous analytical process, conducting a benchmark is relatively straightforward. The basic approach is illustrated below.
Let me demonstrate with a brief case study how benchmarking empowers organizations to achieve transformational improvement. The table below shows actual data from a benchmark that included a total of 42 service desks. As you can see, the cost per ticket is well above average, and customer satisfaction is well below average (both metrics are highlighted in the table). It is worth pointing out that before conducting this benchmark, the service desk in our case study had no idea that their performance was poor. In fact, they thought they were doing quite well because both metrics, cost per ticket and customer satisfaction, were trending in the right direction. But a positive trend, by itself, does not necessarily mean that you are performing well. It is only by benchmarking against a valid, external peer group that you can objectively and definitively judge your performance.
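The core arithmetic of a benchmark comparison is simple: express each of your KPIs as a percentage gap against the peer-group average. A minimal sketch (the figures below are hypothetical, not the case-study data):

```python
def benchmark_gaps(own, peer_avg):
    """Percent gap vs the peer-group average for each KPI.

    Positive = above the peer average; whether that is good or bad
    depends on the metric (high cost is bad, high CSAT is good).
    """
    return {kpi: round(100.0 * (own[kpi] - peer_avg[kpi]) / peer_avg[kpi], 1)
            for kpi in own}

# Hypothetical figures for illustration only.
own      = {"cost_per_ticket": 28.17, "customer_satisfaction": 0.63}
peer_avg = {"cost_per_ticket": 22.96, "customer_satisfaction": 0.79}
print(benchmark_gaps(own, peer_avg))
# → {'cost_per_ticket': 22.7, 'customer_satisfaction': -20.3}
```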
We know from our KPI cause-and-effect diagram that agent utilization is a primary driver of cost per ticket. Additionally, we know that service levels have a significant impact on agent utilization. The data from our case study shows that utilization is below the peer group average (hence the high cost), and that service levels are extremely aggressive (hence the low utilization). This simple diagnosis, enabled by our understanding of how the KPIs are interconnected and the external data from the benchmark, tells us everything we need to know about how to close the cost gap for this service desk.
This organization is quite simply overstaffed. That’s why their utilization is low and their costs are high. Moreover, they are overstaffed because they are pursuing very aggressive service level targets. The remedy is quite simple: reduce headcount through attrition until agent utilization reaches the benchmarking peer group average. While this will reduce costs, it will also cause the service levels (ASA and abandonment rate) to increase. But that’s an acceptable tradeoff because service levels have very little impact on customer satisfaction in a service desk. The service desk in question implemented these measures, and within six months their cost per ticket was well below the peer group average.
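The attrition-based remedy above can be sized with simple arithmetic: for a fixed ticket workload, total handle hours are constant, so required headcount scales inversely with utilization. A sketch under that assumption (the staffing figures are hypothetical):

```python
def target_headcount(current_agents, current_util, target_util):
    """For a fixed ticket workload, total handle hours are constant,
    so paid hours (and therefore headcount) scale as 1 / utilization."""
    return current_agents * current_util / target_util

# Hypothetical: 30 agents at 40% utilization; peer average is 50%.
# Attrition down to 24 agents brings utilization up to the peer average.
print(target_headcount(30, 0.40, 0.50))  # → 24.0
```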
What about their low customer satisfaction? Following the same logic we used to diagnose their high costs, we see that first contact resolution rate (FCR), a primary driver of customer satisfaction, is low. We also see that training hours, a key driver of FCR, are also low. Here again, the remedy is straightforward. Increasing training hours will have the effect of improving FCR, and that, in turn, will have the effect of improving customer satisfaction. The service desk in our case study implemented the change by increasing new-hire and annual training hours, and in short order their FCR and customer satisfaction had surpassed the peer group average.
Now, let’s create some performance goals. Recall that our definition of world-class performance states that you must be in the top quartile on customer satisfaction and in the bottom quartile on cost per ticket. Although the quartiles are not shown in the data table above, we know from the benchmark that the top quartile for customer satisfaction ranges from 89.1% to 96.6%, and the bottom quartile for cost per ticket ranges from $6.59 to $16.11. Assuming that the goal is world-class performance, one could reasonably set a goal of $16.11 for cost per ticket, and a goal of 89.1% for customer satisfaction.
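Goal-setting from a benchmark reduces to reading off quartile boundaries from the peer data. A minimal sketch using Python's statistics module (the peer-group samples are made up; a real benchmark would supply them):

```python
from statistics import quantiles

def world_class_targets(peer_costs, peer_csat):
    """World-class = bottom quartile on cost per ticket and top
    quartile on customer satisfaction, so the goals are the 25th
    percentile of cost and the 75th percentile of CSAT."""
    cost_goal = quantiles(peer_costs, n=4)[0]  # Q1 (25th percentile)
    csat_goal = quantiles(peer_csat, n=4)[2]   # Q3 (75th percentile)
    return cost_goal, csat_goal

# Hypothetical peer-group samples.
costs = [10, 12, 14, 16, 18, 20, 22]
csat  = [0.70, 0.75, 0.80, 0.85, 0.88, 0.90, 0.95]
print(world_class_targets(costs, csat))  # → (12.0, 0.9)
```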
The power of benchmarking, combined with an understanding of the cause-and-effect relationships among service and support KPIs, cannot be overstated. Together they allow you to:

- Objectively assess your performance against an external peer group
- Diagnose the root causes of any performance gaps
- Set informed, achievable performance targets
- Define and prioritize the actions needed to achieve world-class performance
Finally, if you have never benchmarked your service desk or desktop support functions before, you should not be intimidated by the process. As mentioned previously, the benchmarking methodology is very straightforward. Moreover, an effective benchmark does not have to cost a lot of time or money. In fact, benchmarking has the highest ROI of any tool in the continuous improvement toolbox!
Jeff Rumburg is the winner of the 2014 Ron Muns Lifetime Achievement Award, and was named to HDI’s Top 25 Thought Leaders list for 2016. As co-founder and CEO of MetricNet, Jeff has been retained as an IT service and support expert by some of the world’s largest corporations, including American Express, Hewlett Packard, Coca-Cola, and Sony. He was formerly CEO of the Verity Group and Vice President of Gartner. Jeff received his MBA from Harvard University and his MS in Operations Research from Stanford University. Contact Jeff at firstname.lastname@example.org . Follow MetricNet on Twitter @MetricNet.