“What gets measured, gets managed.”

— Peter Drucker

Great products and great marketing attract customers. However, no matter how good a product is, the customer's journey is often full of hiccups and misunderstandings.

Customer support must then help the user — to keep them using the product, to retain them as a paying customer, and to encourage them to become evangelists for the product. This is especially true for competitive markets, products with significant onboarding commitments, and customers with high lifetime values (LTVs).

Every company, then, must invest in developing a customer support team to increase its revenue. But, just as sales teams are judged on whether they meet their quotas, how might we measure the performance of a customer support or service team? Let's first consider the factors at play.

Factors

There are several types of metrics we may want to consider, such as:

  1. Operational metrics: Metrics that determine how efficient customer support is in processing requests.
  2. Customer satisfaction metrics: Metrics of customer engagement, churn, satisfaction, and promotion of the product.
  3. Support agent satisfaction metrics: An oft-neglected dimension, these metrics consider the happiness of the team at the core of the support operations.

As when tracking the performance of other systems, there are leading and lagging metrics. Lagging metrics are often the ultimate success metrics that we care about, but leading metrics are more easily and immediately tracked. Both types must be tracked and used to improve customer service, and we'll consider each type in this post.

Yet another trade-off is between cost and quality. For example, a generic self-serve website may be cheap to run (despite a high initial investment) but insufficient and low-quality. A service where a customer is immediately connected to a support agent who is told to fulfill every request may be high quality, but not cost-effective. The trade-offs that make sense will vary for each company, depending on whether support requests tend to be short and transactional, or more full-service.

Finally, when setting metrics in general, it’s important to choose metrics that fulfill certain criteria. A popular mnemonic for metrics and goals is SMART (specific, measurable, assignable, realistic, and time-bound).

Let’s start by considering operational metrics.

Operational metrics

Operational metrics can be specified at the level of the agent and at the level of individual teams. For the most part, operational metrics are leading metrics. For more details, we encourage you to read this GlowTouch article, from which this operational metrics section borrows heavily.

We’ll try to divide operational metrics into those that allow the company to control costs, versus those that affect downstream customer satisfaction, though many are tied to both.

Cost

Operational metrics that can be used to manage costs include:

  • Ticket volume: The number of requests per hour or day will determine the size of the support team necessary. Unfortunately, volume is not constant, and surges and quiet spells will lead to inefficiencies. Thus it’s useful to examine volume by different time periods. It’s also important to look at volume by channel, to determine the specific channels and areas where more resources should be devoted.
  • Availability: Availability is the portion of the time that agents can be reached; unless agents are available 24/7, availability will be less than 100%. In addition to reducing costs, increasing availability during off-work hours is one of the reasons why many companies look to outsourced support teams, which may be the topic of a future post.
  • Occupancy and concurrency: Occupancy measures the fraction of the time that an agent is serving customers. Concurrency considers the number of customers an agent can service at one time.
  • Handle time (by activity and channel): The amount of time an agent spends interacting with a customer, including wait times. Segmenting activities into more fine-grained buckets can provide insight into which steps of the process need improvement. For example, agents may spend a great deal of time clicking through the knowledge base, in which case a better query and retrieval tool should be developed; or they may spend much of their time composing messages, in which case a tool like Sapling may be helpful.

Using the above metrics, you can estimate the expected throughput of your support agents. Each can be used as a knob to control cost while maintaining satisfactory quality.
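As a rough illustration, here is a minimal Python sketch of how these knobs combine into a throughput and headcount estimate. Every number below is a hypothetical assumption, not a benchmark, and real staffing models also need to account for volume surges, breaks, and shrinkage.

```python
# Back-of-the-envelope throughput and staffing estimate.
# All values are hypothetical; substitute your own measurements.

shift_hours = 8.0        # scheduled hours per agent per day
occupancy = 0.75         # fraction of time an agent is serving customers
concurrency = 2.0        # average simultaneous conversations (e.g. chat)
avg_handle_hours = 0.25  # average handle time per ticket, in hours

# Tickets one agent can work through per day under these assumptions.
tickets_per_agent = shift_hours * occupancy * concurrency / avg_handle_hours

daily_ticket_volume = 480
agents_needed = daily_ticket_volume / tickets_per_agent

print(f"Throughput: {tickets_per_agent:.0f} tickets/agent/day")
print(f"Agents needed for {daily_ticket_volume} tickets/day: {agents_needed:.1f}")
```

Next, we'll discuss measures of quality.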

Quality

For most of these metrics, a simple way to report the metric is as an average. However, percentiles are more informative. For example, if most customers receive a response within a few minutes but a few wait half a day, the average will be skewed high even though service is satisfactory for the majority.
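As a quick illustration with hypothetical reply times, the mean can look alarming while percentiles show that most customers are served quickly and only a small tail of cases is slow:

```python
import numpy as np

# Hypothetical first-reply times in minutes: most are fast, two take hours.
reply_times = np.array([2, 3, 3, 4, 5, 5, 6, 8, 310, 342])

print(f"Mean: {reply_times.mean():.0f} min")              # pulled up by the outliers
print(f"Median (p50): {np.percentile(reply_times, 50):.0f} min")
print(f"p90: {np.percentile(reply_times, 90):.0f} min")   # surfaces the slow tail
```

With that reporting caveat in mind, the quality metrics to track include: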

  • (First) Reply time (or time to engage): To prevent a customer from becoming frustrated while waiting for support, it's important to measure how long they must typically wait. One cost-effective way to reduce response time is to set up an automatic workflow (or, in the case of calls, an interactive voice system) to get a customer started. Once these have been implemented, however, you'll want to consider response times after the first when trying to improve the quality of service. Note that reasonable response times vary significantly between channels (e.g., chat, phone, and email).
  • Service level: Related to response time, service level measures the percentage of customers who receive a response within a target time after being placed in a queue.
  • Number of interactions per ticket: While this could also be viewed as a cost measure, a high number of interactions may leave a customer very dissatisfied, depending on the quality of service delivered. In general, the number of interactions should be low.
  • Resolution time: Although we place resolution time in the quality section, note that at the other extreme, tickets being closed before a customer feels their issue has been given due consideration can seriously hurt customer satisfaction. As previously mentioned, it’s useful to segment this further into each of the steps until resolution.
  • First contact resolution rate (FCRR): FCRR measures the percentage of tickets that are resolved in the first interaction with a support agent. In many cases, FCRR can be the metric most correlated with customer satisfaction. When measuring FCRR, make sure your definition of a resolved ticket handles cases where tickets are reopened or resumed in new tickets; one such definition appears in the sketch after this list.
  • Abandonment rate: The fraction of customers who stop responding after contacting customer support. This could be because responses are too slow or because the quality of service is poor.
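To make these definitions concrete, here is a minimal sketch that computes service level, FCRR, and abandonment rate from a small ticket log. The record fields and the 15-minute service-level target are assumptions for illustration; your ticketing system's schema will differ.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    first_reply_minutes: float  # time until the first agent response
    interactions: int           # agent-customer exchanges on the ticket
    resolved: bool              # closed with the issue actually addressed
    reopened: bool              # customer came back about the same issue
    abandoned: bool             # customer went silent before resolution

# Hypothetical ticket log.
tickets = [
    Ticket(3, 1, True, False, False),
    Ticket(12, 4, True, True, False),
    Ticket(45, 2, False, False, True),
    Ticket(5, 1, True, False, False),
]

n = len(tickets)
service_level = sum(t.first_reply_minutes <= 15 for t in tickets) / n
# Count a ticket as first-contact resolved only if it was resolved in a
# single interaction and never reopened.
fcrr = sum(t.resolved and t.interactions == 1 and not t.reopened for t in tickets) / n
abandonment = sum(t.abandoned for t in tickets) / n

print(f"Service level (reply within 15 min): {service_level:.0%}")
print(f"FCRR: {fcrr:.0%}")
print(f"Abandonment rate: {abandonment:.0%}")
```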

Customer satisfaction metrics

We now consider the ultimate success metrics for evaluating the effect of support on customer relationships. Many of these are lagging metrics of quality. As we previously mentioned, lagging metrics are often more difficult to measure, in large part because they are based on surveys. It can be difficult both to get a representative sample and to get customers to respond at all, with the result that responses often come disproportionately from customers who are extremely unhappy with the product or service.

In the previous sections we have considered each support case to be equally important; however, especially for companies selling to businesses and enterprises, this will not be the case, as accounts should be prioritized based on account value.

  • Customer satisfaction score (CSAT): Unfortunately, there is no universally agreed-upon method for computing CSAT. One method is to survey users about their satisfaction with the product on a scale of 1 to 10 and take the average over responses. Other companies, such as Zendesk, simply ask the customer whether an interaction was good or bad (a binary choice). Due to the surveying issues mentioned above, the importance of how the question is phrased, and the non-specificity of the term “satisfaction”, CSAT can be difficult to get right. Maintaining high CSAT, however, is crucial for preventing churn, the percentage of customers who discontinue their relationship with the business. Here's a useful guide to surveying from SurveyMonkey: Smart Survey Design. It's also useful to calibrate your satisfaction score against benchmarks in your industry.
  • Net promoter score (NPS): Net promoter score is computed by asking: “How likely is it that you would recommend our company/product/service to a friend or colleague?” on a scale of 0 to 10. Respondents are divided into promoters (9–10), passives (7–8), and detractors (0–6), and the score is the percentage of promoters minus the percentage of detractors, giving a value between -100 and 100. Promoters encourage more users to use the product, while detractors discourage other users from trying it. The sketch after this list shows how both CSAT and NPS can be computed.
  • Customer effort score (CES): CES is similar to CSAT in that it is survey-based, but it instead asks the customer how much effort their support transaction required of them. The assumption is that the effort required of the customer is a more direct and actionable measure of the support experience, as well as of the customer's loyalty to the company.
  • Up-sells and cross-sells: The frequency with which customers buy higher-priced or related products from the company. This is an indirect but useful measure of satisfaction and how engaged a customer is with a product.
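Here is a minimal sketch of the two survey computations described above: NPS from 0-10 recommendation scores, and a Zendesk-style binary CSAT. The responses are hypothetical.

```python
# Hypothetical answers to "How likely are you to recommend us?" (0-10).
nps_scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(s >= 9 for s in nps_scores)
detractors = sum(s <= 6 for s in nps_scores)
nps = 100 * (promoters - detractors) / len(nps_scores)
print(f"NPS: {nps:.0f}")  # ranges from -100 to +100

# Binary CSAT: the share of interactions rated "good".
ratings = ["good", "good", "bad", "good", "good"]
csat = sum(r == "good" for r in ratings) / len(ratings)
print(f"CSAT: {csat:.0%}")
```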

Support agent satisfaction metrics

A crucial piece of the puzzle, but one that is sometimes neglected when considering metrics, is the happiness of the agents providing the support. Happier, more motivated agents provide better support. Further, retaining team members helps maintain support quality and reduces time-consuming onboarding of new hires.

  • Agent satisfaction (ASAT): Like CSAT, ASAT is measured through a survey that asks support agents to rate their satisfaction on a scale. According to the Zendesk customer support metrics guide, it may also ask which aspects of their job they like and which could be improved, which makes the ratings more actionable. Surveys should be administered to everyone on the team.
  • Number of escalations: A high number of escalations relative to other members of the team may indicate that an agent has not been well trained to respond to support requests. Helping educate agents to be more productive is one step toward improving agent satisfaction. A simple way to surface agents who may need this help is sketched below.
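As a sketch of that comparison, the snippet below flags agents whose escalation rate sits well above the team average. The counts and the 1.5x threshold are hypothetical; tune them for your team.

```python
# Hypothetical per-agent escalation and ticket counts.
escalations = {"agent_a": 4, "agent_b": 21, "agent_c": 6}
handled = {"agent_a": 180, "agent_b": 160, "agent_c": 200}

team_rate = sum(escalations.values()) / sum(handled.values())
for agent, count in escalations.items():
    rate = count / handled[agent]
    if rate > 1.5 * team_rate:  # arbitrary threshold
        print(f"{agent}: escalation rate {rate:.0%} vs. team average {team_rate:.0%}")
```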

Conclusion

There are metrics we have not discussed here, such as metrics that provide insight into which aspects of the product may be problematic, as well as metrics that try to respond to viral trends such as on social media. These may be the topics of future posts.

Any feedback or other metrics that should have been included? What metrics are most important to you and your team? Please leave a comment below. ■

About Sapling

At Sapling, we’re building the intelligence layer for chats, tickets, and emails. Our team has over a dozen years of experience in machine learning and deep learning at the Berkeley AI Research Lab, the Stanford AI Lab, and Google’s Brain Team. The Sapling product suite is used by teams supporting startups as well as several Fortune 500 companies.

The Sapling Blog describes our learnings from developing solutions for customer-facing teams using the latest AI technology.

If you want us to email you when we publish new essays, sign up for our newsletter below (we’ll ping you biweekly or monthly, no more than that).
