Building a support quality assurance program is a big but worthy mission.
From the beginning, it lays the groundwork for a uniform support experience. As it evolves, QA fuels continuous improvement, helping organizations understand the ROI of customer service.
But quality assurance is also challenging. You can bake all the right policies into your rubric and train agents to bring their best, but without a system to evaluate it, your team’s efforts still come down to guesswork.
That’s why the most forward-thinking companies embrace QA processes—to drive better customer relationships, grow and refine their teams, and earn long overdue bragging rights within their organizations.
Their motto? Hold the guesswork, pass the recurring revenue.
To join their ranks, read on.
Conduct a conversation audit
When building a support QA program from scratch, there’s no better information source than your existing well of agent-customer conversations.
Quality assurance programs are most effective when support leaders understand what their team struggles with, what customers struggle with, and how agents and customers react to various issues and communication styles.
To unearth these insights:
Choose a balanced sample of customer interactions to review
Pay special attention to tickets at both extremes of the CSAT scale
Make note of what agents should start, stop, and continue
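The sampling step above can be sketched in a few lines of Python. This is a minimal illustration, not a prescription: the ticket fields and the 1–5 CSAT scale are assumptions, and `balanced_sample` is a hypothetical helper.

```python
import random

def balanced_sample(tickets, per_bucket=10, seed=42):
    """Draw an even sample of tickets from each CSAT bucket,
    so the audit isn't dominated by one kind of interaction."""
    rng = random.Random(seed)  # fixed seed keeps the audit set reproducible
    buckets = {}
    for t in tickets:
        buckets.setdefault(t["csat"], []).append(t)
    sample = []
    for score, group in sorted(buckets.items()):
        k = min(per_bucket, len(group))
        sample.extend(rng.sample(group, k))
    return sample

# Toy data: 100 tickets spread evenly across CSAT scores 1-5
tickets = [{"id": i, "csat": (i % 5) + 1} for i in range(100)]
audit_set = balanced_sample(tickets, per_bucket=3)
# 5 CSAT buckets x 3 tickets each = 15 tickets to review
```

Reviewing an equal number of tickets per score (rather than a random slice) is what surfaces both the delight-driving and churn-driving conversations.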
By the end of this process, you should have a sense of what makes an interaction good (or not good) given a range of circumstances.
You should also have some ideas about what agents can do to avoid negative interactions, particularly when a customer’s ideal outcome is out of support’s hands.
Establish service quality standards
Once you understand your support team’s strengths and weaknesses, it’s time to turn insights into expectations. What does good support look like—for the business and your team?
Armed with various customer service scenarios, your first step is to identify their corresponding customer outcomes.
Step two is defining which agent actions are most likely to move customers as close to their ideal outcome as possible. For instance, product issues and bugs may be beyond your team’s control, but how agents communicate this news can make all the difference.
In situations like these, an ideal customer outcome might be: The customer is confident our team is doing everything we can to resolve the problem as quickly as possible.
Generally, customer service quality standards should serve to:
Ensure support reps know how to best help customers meet their goals
Make customers feel valued—especially when barriers impede their goals
Here are some guiding questions to help craft your customer service credo:
What kind of language should agents use to maximize empathy and effectiveness across all interactions?
What tools and resources should agents use to serve customers more efficiently? Does the order of operations matter?
What steps should agents take to increase the odds of customers walking away with a positive impression of your company given an inconvenience or failure?
What steps should agents take when they’ve exhausted their options but a customer is still upset?
What strategies can agents employ to defuse conflict with angry customers?
Documenting an ideal vision of support means identifying common scenarios and then arming agents with the right strategies, workflows, and escalation paths to handle them effectively.
Build a rubric that won’t get in anyone’s way
Your team’s quality assurance rubric should be a direct reflection of what good support looks like for your team.
But translating an “ideal vision” into an objective scorecard presents its own problems. Too often, quality assessment fails to produce actionable insights because the rubric is either:
Too vague or generic
Not linked to coaching
The key to effective conversation scoring is being specific without being too prescriptive. Striking that balance is a matter of asking the right questions.
Instead of asking: Did the agent display empathy?
Ask: Did the agent reflect an understanding of the customer’s concerns that the customer agreed with?

Instead of asking: Were all of the customer’s issues resolved?
Ask: Did the agent provide actionable solutions or a timeframe for follow-up if solutions aren’t immediately possible?

Instead of asking: Was the agent transparent?
Ask: Did the agent use explicit language to convey their next steps (e.g., “I’d like to place you on hold so I can double-check on this”)?
Remember: Ambiguous questions can create difficulties for conversation reviewers. If they have to think too hard about how an agent performed, you’ll wind up with a biased grading system that harms morale and fails to create meaningful insights.
When writing a rubric, don’t be cute. Be concrete. Link key agent behaviors (X) to ideal customer outcomes (Y), so that if an agent says or does X, then a customer can Y.
If you’re struggling to find the specificity in a dimension like “friendliness” or “personalization,” do as NiceReply suggests: Keep asking how until you arrive at behaviors you can actually measure.
In terms of scoring agent-customer conversations, using a binary (Yes/No) scale is the simplest way to get started.
You can expand your scale as your QA program matures. Just make sure everyone understands what constitutes each score—especially if your process grows to include more than one reviewer or quality assurance analyst.
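A binary scale also keeps the math honest: a score is just the share of rubric items marked Yes. Here's a minimal sketch; the rubric items are illustrative examples, not a recommended set.

```python
# Hypothetical rubric: each item is answered Yes (True) or No (False).
RUBRIC = [
    "Reflected an understanding the customer agreed with",
    "Provided actionable solutions or a follow-up timeframe",
    "Used explicit language to convey next steps",
]

def qa_score(answers):
    """Return the percentage of rubric items marked Yes."""
    if len(answers) != len(RUBRIC):
        raise ValueError("Provide one Yes/No answer per rubric item")
    return round(100 * sum(answers) / len(answers), 1)

score = qa_score([True, True, False])  # 2 of 3 items met -> 66.7
```

Because every reviewer answers the same Yes/No questions, scores stay comparable across reviewers as the program grows.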
Earn buy-in from key stakeholders
No, not your executive team.
When building a quality assurance program, getting support from agents—and whoever’s charged with evaluating their conversations—is a non-skippable step that can infuse some much-needed momentum into the process.
But getting people to rally around new business initiatives can be tough. Especially when the initiative at hand involves critiquing their work.
To build enthusiasm around an initiative like QA, Harvard Business Review recommends:
Being transparent about your motives: Ask yourself why you’re pushing for QA, what you hope to accomplish, and how it will benefit others.
Making the goal as specific as possible: What exact issue are you trying to solve—and how will a quality assurance program help to solve it?
Tailoring your sales pitch to the audience: How will QA impact your agents? What objections or concerns are they likely to have and how can you combat or allay them?
Suggesting a test run of your plan: Pilot programs reduce the risk associated with new initiatives, offering participants the opportunity to share feedback during the process.
While each takeaway is important, customizing your sales pitch might be the most impactful.
Among support reps, quality assurance gets a bad rap. Too often, it’s treated as a reaction to an isolated event or as a way to penalize agents for making mistakes.
Your challenge is to change these perceptions. And you won’t do it by pretending they don’t exist.
So while you’re selling agents on support QA’s many benefits, make sure they know it’s not about forcing them to stick to a script or writing them up for uttering too many “Umms.”
Prepare your tech stack
Quality assurance tools make QA activities easier, but not having QA software won’t doom your program.
Without a purpose-built tool, the key to a successful launch is figuring out how to best funnel and centralize conversations from each channel so that conversation reviewers (and in some cases, agents) have reliable access to the information they need.
This typically involves:
Keeping well-organized records of conversations and agent QA scores
For many teams new to QA, this data lives in spreadsheets—which raises the question: what should the review process itself look like?
With cloud-based software like Aircall, you can handle QA call reviews in a few ways depending on what works best for your team:
Option 1: Listen to and review calls within your phone system; manually log them in a spreadsheet as you go
Option 2: Use filters (tag, date range, agent, team, etc.) to run targeted exports that include call recording URLs; listen and review on the spot or save them for later
Option 3: Automate the process with webhooks or Zapier so call recordings are deposited directly into Google spreadsheet(s) with minimal human intervention
Option 3—the most scalable choice—requires above-average technical fluency, but the effort invested upfront will save your team a ton of time in the long run.
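To make Option 3 concrete, here's a minimal sketch of a webhook receiver that appends call metadata to a CSV log a reviewer can open as a spreadsheet. The payload fields (`call_id`, `agent`, `recording_url`) are assumptions about what a phone system might send, not Aircall's actual schema.

```python
import csv
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_PATH = "call_reviews.csv"

def log_call(payload, path=LOG_PATH):
    """Append one call's metadata to the QA log (CSV)."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            payload.get("call_id"),
            payload.get("agent"),
            payload.get("recording_url"),
            "",  # blank QA score column, filled in by the reviewer
        ])

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        log_call(payload)
        self.send_response(204)  # acknowledge with no body
        self.end_headers()

# To run the receiver:
# HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

A no-code tool like Zapier accomplishes the same thing without hosting anything yourself; the trade-off is flexibility over which fields land in the sheet.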
As for other channels, here’s what we recommend:
Email: Take a cue from Fiverr. Use your CRM to create QA-specific views that pull tickets handled per agent or agent group—then funnel the information into spreadsheets with manual data entry, exports, or a tool like Zapier.
Live chat: Like email and phone, live chat data will need to be exported, reviewed, and organized. You may also be able to automate this depending on your live chat provider. Make sure to purge accidental and spam chats to keep data clean and accurate. Tagging helps tremendously here.
If you’re inclined to put numbers to trends without QA software, take advantage of Excel’s conditional formatting functionality. Once set up, you’ll be able to track negative or positive changes to QA data without cross-referencing various documents.
Commit to a regular cadence
Consistent customer service is one of the primary benefits of good quality assurance. Agents will know exactly what’s expected to deliver a uniform support experience, and when new or unforeseen issues arise, they’ll receive prompt coaching to help them nail it the next time.
But seeing these rewards will require sustained effort. If you perform and apply QA inconsistently, your results will be compromised.
Like any other well-intentioned undertaking, you’ll need to commit to drive results. Which means figuring out:
How often you’ll review conversations
How soon you’ll meet with agents to discuss results or offer feedback—and how often
How much time your team can invest in coaching and other group training activities
One of the most challenging aspects of support QA is finding the time to collect and make sense of the data without tying up significant resources. Feel free to adjust the frequency of your activities as you become more comfortable with the QA process.
Being consistent with QA is less about being tethered to a schedule than it is about being able to:
Enable meaningful measurement
Create individual and team accountability
Establish a reputation
Maintain your message
Building a successful support QA program is about considering your people, your process, and your technology.
With these core components in place, success is a matter of staying the course, measuring your results, identifying trends, and optimizing based on what you learn.
It’s not enough to write a rubric and cross your fingers. Maintaining a scalable and repeatable quality assurance process requires data-driven, actionable insights powered not only by technology but by a motivated team—the ultimate in competitive advantage.
Published on January 2, 2024.