Do You React to Bad News or Respond to It?

See no evil. As a third-party provider of Customer Satisfaction surveys and Quality Assessments, our group has delivered a tremendous number of reports to a wide array of companies over the past 25 years. We've delivered a lot of good news (which is always fun). More often than not, we make presentations that start with a variation on the theme: "there's some good news, and some bad news." Once in a while, we're stuck presenting information that we know the client doesn't want to hear.

When the data paints a bleak picture, you discover the true character of your client. Ultimately, I've witnessed one of two responses:

  1. Deny, Dispute, Deflect, & Ditch. It usually begins with disputing the data and the methodology, progresses to pointing blame elsewhere, and ends with the data getting buried in "the circular file." It's always fascinating to watch. I believe, in most cases, it's hard for people to get past the initial fearful reaction to the information. Experience tells me that this course rarely ends well, and never ends as well as it could.
  2. Accept, Aim, Arrange, & Act. Good data, whether it's from a customer survey or a Quality Assessment, is priceless information. It gives you a clear picture of where you stand and the course you need to take. I believe, in most cases, people who take this course are able to get past their initial fearful reaction and consciously respond to the information. I'm always excited to watch someone take the data, even the bad news, and realize that it's a road map for their success. Watching people use the data to improve their service, their customers' satisfaction, and their own future helps to keep me motivated each day.

The next time the data doesn't look good, catch yourself reacting and choose to respond.

Creative Commons photo courtesy of Flickr and billyrowlinson

Coffee Time Links

15 minutes.
1 cup of coffee.
5 great links.

  • Top Tips for Increasing Telesales Conversions
  • 12 Lessons from the Best Customer Service Companies
  • Keeping Pace with Customers
  • Can We Prove Customer-Centered is Better?
  • Tweet, Tweet, a Little Birdie Told Me You’d Better Pay Attention to Customer Service

Would You Make Your Customer Experience Public?

This commercial from Zappos was picked up in our group's internal weekly Items of Interest (IOI) email [thanks, Wendy!], which in turn picked it up from Service Untitled and AdGabber. It's a method originally used successfully by OnStar, in which actual customer service calls are used to showcase the power of the customer's experience.

Is your company delivering a customer experience that you'd be willing to make public? Or is your company delivering a customer experience you're praying your customers won't make public?

"Passionate About You"

Many thanks to Matthew over at Conversations With Life for sending me this YouTube video from Brussels Airlines. What a great conversation starter for your call center or customer service team:

  • What are you passionate about?
  • What is your company passionate about?
  • Who are you passionate about?
  • What do our actions reveal about our passions?

When You've Already Dropped the Ball

They call because something has gone wrong. One of the more subtle service skills employed by world-class service providers is recognizing when customers are expressing their dissatisfaction. Customers often call because something has already gone wrong in a previous interaction with the company. Perhaps the customer called before and never received the promised call back or follow-up. Perhaps they tried to self-serve on the web or through the IVR and couldn't get their issue resolved. It's important for Customer Service Representatives to listen for the words and phrases that indicate the company has already dropped the ball:

  • "I haven't received..."
  • "I called yesterday. Someone was supposed to call me back..."
  • "I'm still waiting on..."
  • "I tried to find it on-line, but..."
  • "I was in your automated system, and..."

The customer is trying to tell you something: "You didn't meet my expectations. Something went wrong. I've already been inconvenienced. You already dropped the ball."

As soon as you hear it, employ an empathy/resolution statement. Quickly and sincerely apologize for exactly what went wrong, then focus on what you can and will do for the customer.

  • "I'm sorry you didn't receive it. I will be happy to check on that."
  • "I apologize we didn't get back to you. Let me look that up and we'll get this resolved for you."
  • "I'm sorry for the delay. I can check the status of that for you."
  • "Sorry for the confusion. I'll help you find what you're looking for."
  • "I apologize for the trouble you had. Let's get that information for you."

This simple technique quickly acknowledges that you've heard the customer's dissatisfaction, communicates that you empathize with their inconvenience, and then focuses on the number one priority: resolving the issue.
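If your QA team works from recorded calls or transcripts, you can even flag these calls for priority review. Here's a minimal, hypothetical Python sketch of that idea; the phrase list and the transcript format are illustrative assumptions, not a description of any particular monitoring system.

    # Hypothetical sketch: flag call transcripts containing "dropped ball"
    # phrases so they can be prioritized for QA review or coaching.
    DROPPED_BALL_PHRASES = [
        "i haven't received",
        "supposed to call me back",
        "i'm still waiting on",
        "i tried to find it on-line",
        "i was in your automated system",
    ]

    def flag_dropped_ball(transcript: str) -> list[str]:
        """Return any dropped-ball phrases found in a call transcript."""
        text = transcript.lower()
        return [p for p in DROPPED_BALL_PHRASES if p in text]

    # Example: this call would be flagged for review.
    call = "I called yesterday. Someone was supposed to call me back."
    print(flag_dropped_ball(call))  # ['supposed to call me back']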

Creative Commons photo courtesy of Flickr and realestateclientreferrals

Why Monitor Your Company's Phone Calls?

While it seems that everyone is monitoring phone calls these days, and it is certainly the norm in the call center industry, the reality is that many small to mid-sized companies have not entered the world of call monitoring. Some companies are unaware that the technology exists and is easily accessible even to companies with just a few people serving customers on the phone. For others, the idea of call monitoring and Quality Assessment (QA) seems daunting. The thought of recording and assessing phone calls raises several uncomfortable questions, and many executives and managers are overwhelmed by the prospect of figuring out what to do with the data and how to make it work for them. Others prefer to remain blissfully unaware.

Nevertheless, for a business of any size, there is value in call monitoring. When it's done well, the recording and assessment of customer interactions provides:

  • Valuable Knowledge. Monitoring and analyzing calls between your business and your customers is far more than playing Big Brother and grading the performance of your agents. Within those recorded conversations is a wealth of valuable information. From monitoring calls you find out why your customers are calling, what problems your customers are commonly experiencing with products and services, what customers are saying about your business, and who your customers are. You discover clear opportunities to improve efficiency, productivity, and your customers' experience.
  • Accountability. Call monitoring also provides you and your employees with accountability, ensuring that your brand is being consistently communicated and your people are performing to their potential. Monitoring calls and performance allows you to reward those who you know are contributing to your success and address those who are impeding it. Without call monitoring, you're blind to the hundreds or thousands of "moments of truth" that impact your customers' satisfaction and future purchase intent every day.
  • Tactical Improvement. When our group performs employee satisfaction surveys for our clients, we find employees consistently desiring more communication and feedback from their superiors. The vast majority of employees want to know how they are doing and how they can improve. Call monitoring provides a company with the means to make that communication and feedback happen. A successful Quality Assessment (QA) process gives employees specific, behavioral goals for improvement, tracks their progress, and gives managers the data they need for productive performance management discussions.

There has never been a greater opportunity for businesses of every shape and size to benefit from the available technology to record, monitor, and analyze conversations with their customers. Companies who take advantage of the resulting data will find themselves a step ahead of others who continue to trust their gut.

100,000 Call Maintenance

Regular maintenance required. I'm a believer in regular maintenance on my car. Through the years I've owned several cars and I've found that following the maintenance schedule makes a huge difference in the life and durability of the vehicle. My car doesn't need the exhaustive 100,000 mile maintenance every 3,000 miles. It only needs an oil change.

I'm also a firm believer in an exhaustive assessment of service quality. Not just subjectively considering how I thought a Customer Service Representative (CSR) did on a particular call, but a systematic, statistically valid and comprehensive analysis of what is, and what is not, happening when a CSR picks up the phone to talk with a customer.

I watch companies who, for the sake of time/energy/ease, will assess their agents on a small list of fairly subjective elements. It's fine. It's good. It serves the purpose of checking the oil and keeping the quality effort lubricated. However, sometimes there's a crack in the brake line that will prove costly, even deadly, if it's not caught and remedied. Pulling the oil dipstick every 3,000 miles isn't going to warn you of a problem with the brakes.

I understand you don't have the resources for a comprehensive tune-up every day, but an exhaustive and objective assessment of the call center on a regular, if periodic, basis is a great way to ensure your service delivery continues running smoothly over time. It can help you catch little problems and blind spots before they become tragic and costly issues.

Who QA's the QA Team?

It's a classic dilemma. The Quality Assessment (QA) team, whether it's a supervisor or a separate QA analyst, evaluates calls and coaches Customer Service Reps (CSRs). But how do you know that they are doing a good job with their evaluations and their coaching? Who QA's the QA team?

The question is a good one, and here are a couple of options to consider:

  • QA Data Analysis. At the very least, you should be compiling the data from each supervisor or QA analyst. With a little up-front time spent setting up some tracking in a spreadsheet program, you can, over time, quantify how your QA analysts score (see the sketch after this list). How do the individual analysts compare to the average of the whole? Who typically scores high? Who is the strictest? Which elements does this supervisor score more strictly than the rest of the group? The simple tracking of data can tell you a lot about your team and give you the tools you need to help manage them.
  • CSR survey. I hear a lot of people throw this out as an option. While a periodic survey of CSRs to get their take on each QA coach or supervisor can provide insight, you want to be careful how you set this up. If the CSR is going to evaluate the coach after every coaching session, then it puts the coach in an awkward position. You may be creating a scenario in which the coach is more concerned with how the CSR will evaluate him/her than providing an objective analysis. If you're going to poll your CSR ranks, do so only on a periodic basis. Don't let them or the coaches know when you're going to do it. Consider carefully the questions you ask and make sure they will give you useful feedback data.
  • Third-party Assessment. Our team regularly provides a periodic, objective assessment of a call center's service quality. By having an independent assessment, you can reality test and validate that your own internal process is on-target. You can also get specific, tactical ideas for improving your own internal scorecard.
  • QA Audit. Another way to periodically get a report card on the QA team is through an audit. My team regularly provides this service for clients as well. Internal audits can be done, though you want to be careful of any internal bias. In an audit, you have a third party evaluate a valid sample of calls that have already been assessed by the supervisor or coach. The auditor becomes the benchmark, and you see where there are deviations in the way analysts evaluate the calls. In one recent audit, we found that one particular member of the QA team was more consistent than any other member of the QA and supervisory staff. Nevertheless, there was one element of the scorecard that this QA analyst never scored down (while the element was missed on an average of 20% of phone calls). Just discovering this one "blind spot" helped an already great analyst improve his accuracy and objectivity.
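To make the data analysis concrete, here is a minimal sketch, assuming your QA scores live in a simple table of analyst/element/score rows; the column names, analysts, and scores are hypothetical. It computes each analyst's average score per scorecard element and the deviation from the group average; an element an analyst never scores down shows up as a consistently positive deviation, like the "blind spot" described above.

    # Hypothetical sketch: compare each QA analyst's element averages to
    # the group average to surface outliers and potential blind spots.
    import pandas as pd

    scores = pd.DataFrame({
        "analyst": ["Ann", "Ann", "Bob", "Bob", "Cam", "Cam"],
        "element": ["greeting", "empathy"] * 3,
        "score":   [1, 1, 1, 0, 1, 1],  # 1 = credited, 0 = scored down
    })

    # Average score each analyst gives, per scorecard element.
    by_analyst = scores.pivot_table(index="element", columns="analyst",
                                    values="score", aggfunc="mean")
    group_avg = scores.groupby("element")["score"].mean()

    # Positive deviations = more lenient than the group; a stubbornly
    # positive deviation on one element suggests a blind spot.
    deviation = by_analyst.sub(group_avg, axis=0)
    print(deviation)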

Any valid attempt you make to track and evaluate the quality of your call analysis is helpful to the entire process. Establishing a method for validating the consistency of your QA team will bring credibility to the process, help silence internal critics, and establish a model of continuous improvement.

If you think our team may be of service in helping you with an objective assessment or audit, please drop me an e-mail. I'd love to discuss it with you.

One Way to Save Time, Save Money & Get Better Data

Loads of worthless data. Many companies ask their Customer Service Representatives (CSRs) to enter a "wrap code" at the end of a call so that the company can track what types of callers and what types of calls are being handled. In over 15 years of working with corporate call centers, I have yet to see a "wrap code" process that worked well and yielded valid data. Here's the problem:

  • It's a hassle. Most CSRs have to enter the wrap codes during their After Call Work (ACW) and before the next call comes in. Because most call centers are routinely slammed with call volume, CSRs can't finish their ACW before the next call pops onto their desktop, and the wrap code becomes an annoyance secondary to the priority of the next customer who is on the line. They will ignore the wrap code or click on anything just to move on.
  • Poor options. Call Type codes are too vague (the codes available don't match the actual reasons customers call) or too specific (there are hundreds or thousands of call types that become a quagmire for CSRs to weed through).
  • It's perceived as useless. While many call centers gather this call type data, few share the data or why it's important. If CSRs don't understand why it's important and what the data is used for, they have no motivation to care about coding calls correctly.
  • It is useless. Managers who understand that CSRs aren't faithful in coding the calls appropriately don't trust the data. Others simply don't know what to do with it, and it becomes another way we waste time gathering data we will never use (think of the thousands of seconds spent clicking drop-down boxes, looking for codes, and thinking about what to put).

Getting an idea of who is calling, and why they are calling can, indeed, be a valuable piece of information. It can help you segment your call volumes to various specialists to be more efficient. It can help you address call routing and IVR issues which will save your call center time and money while improving customer satisfaction. It can even give you early warning of larger impending issues.

However, asking CSRs to code calls on the fly could very well be an expensive, inefficient way to gather loads of invalid data.

Here are two options to consider:

  • If you have a QA team that is analyzing a large, random sample of calls, you just might get better call type data by asking them to code the caller and call type in their analysis. Be careful. Ensure that your QA team is, indeed, analyzing a random sample of all calls. QA teams are notorious for selectively excluding calls from the sample ("That one is too long, I don't want to take the time to score that one.")
  • Having a person or a team do a periodic call type analysis may be the most efficient and effective route to go. Randomly select a valid sample of calls from your recording pool and go through them simply to mark "Who called?" and "What were they calling about?" You can get through a ton of calls in a short period of time and will likely get better data than having CSRs code every call (a quick sketch of this approach follows this list).
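As a rough illustration of the second option, here is a minimal sketch of pulling a random sample from the recording pool and tallying the codes an analyst assigns; the call IDs and categories are hypothetical, and the random codes below merely stand in for an analyst's hand-coding.

    # Hypothetical sketch: draw a valid random sample from the recording
    # pool and tally hand-coded call types.
    import random
    from collections import Counter

    recording_pool = [f"call_{i:05d}" for i in range(25_000)]
    sample = random.sample(recording_pool, k=400)  # random, not cherry-picked

    # In practice an analyst listens to and codes each sampled call; the
    # random choice below only simulates those codes for the example.
    call_types = Counter(
        random.choice(["billing", "order status", "tech support"])
        for _ in sample
    )
    print(call_types.most_common())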

Creative Commons photo courtesy of Flickr and hand-nor-glove

Managing Appeals & Challenges in QA

A process of appeal. Special thanks to one of our readers, Sarah M., who sent an email asking about the process of a CSR challenging their Quality Assessment (QA) evaluation. Unless you've gone the route of having speech analytics evaluate all of your calls (which has inherent accuracy challenges of its own), your QA process is a human affair. Just as every CSR will fall short of perfection, so will every QA analyst. No matter how well you set up the process to ensure objectivity, mistakes will be made.

Because QA is a human affair, you will also be evaluating individuals who do not respond positively to having their performance questioned or criticized. There are myriad reasons for this, and I won't delve into that subject here. The reality is that some individuals will challenge every evaluation.

So, we have honest mistakes being made, and we have occasional individuals who will systematically challenge every evaluation no matter how objective it is. How do you create a process of appeal that acknowledges and corrects obvious mistakes without bogging down the process in an endless bureaucratic system of appeals, similar to the court system?

Here are a couple of thoughts based on my experience:

  • Decide on an appropriate "Gatekeeper." A front line supervisor, or a similar initial "gatekeeper," is often the key to managing the chaos. There should be a person who hears the initial appeal and either rightfully acknowledges an honest mistake or a worthy calibration issue, or dismisses the appeal outright. Now we've quickly addressed the two likeliest possibilities: the honest mistake is quickly corrected, or the appeal without standing is quickly dismissed.
  • Formulate an efficient process for appeal. If an appeal is made that requires more discussion, then it needs to go a step further. I have seen this work successfully in many different setups. The "gatekeeper" might take it to the QA manager for a quick verdict. A portion of regular calibration sessions might be given to addressing and discussing the issues raised by appeals. Two supervisors might discuss it and, together, render a quick decision.
  • Identify where the buck stops. When it comes to QA, my mantra has always been that "managers should manage." A process of appeal becomes bogged down like a political process when you try to run it democratically. The entire QA process is more efficient, including the process of appeal, when a capable manager, with an eye to the brand/vision/mission of the company, can be the place where the buck stops.

Those are my two cents worth. What have you found to be key to handling challenges and appeals in your QA program?
