Player Support

Agent Performance Management Tool

Introduction

This tool’s purpose is to help a player support team understand the individual performance of each agent, facilitating valuable management conversations for further action. Used correctly, it helps a team push its performance upwards without setting hard KPI targets or paying cash bonuses tied to specific performance metrics. At its heart is the desire to see a high quality of fair, transparent people management owned and driven by the team itself. Practically, it is designed to make itself obsolete when the average “low performer” is at the same level as a new hire. When you can no longer hire new people who will automatically do better than your “low performers”, the system as described here must be retired.

Here is a summary of the intended benefits:

  1. Fair Work Evaluation - All managers are required to thoroughly understand their people’s performance, which is a mix of automatically captured performance metrics and evidenced qualitative and quantitative feedback on other activities that contribute to value delivery.

  2. Observation of Management Behaviors for Improvement - Supervisors above the direct people managers of the agents will get to observe how their juniors are getting on, their strengths and weaknesses, and can take notes for both individual and group opportunities. You may even want to tweak your systems, training, and processes as a result.

  3. Focus of Management Energies - The stack ranking results will divide the agents into three general groups: top performers (15-20%), performers (60-70%), and low performers (15-20%). The range is arbitrary, but can be experimented with to see what creates a reasonable workload for your managers. Top performers are people who need acknowledgement, challenge, promotions, more responsibility, and additional rewards. Performers may need motivation, training, and encouragement. Low performers need help and that’s okay, so long as they improve.

The Tool

This information will be used to populate a spreadsheet, accessible to all of your managers, that lists every agent in your operation alongside their respective performance data. Please feel free to use it as it is, but these columns are just examples of what you could put in your system. The intention was to be comprehensive, fact-based, fair, and thoughtful.

The following are some explanations to help you either use this and/or make your own:

  • Dimensions of Work - the categories at the very top are the most important ideas here, as they capture what dimensions of work you’re going to focus your agent evaluations on.

  • Productivity - this is how your operation measures general productivity. Notice that we have both the total number of issues resolved and another throughput metric. Some people call it ‘touches per hour’ (TPH), others ‘contacts per case’ (CPC). Use whatever works. Having multiple metrics avoids some of the imbalances and gaming that can occur if you’re only looking at one of them. The delta sign is to help you understand performance from one measurement period to the next.

  • Quality - this is how your operation defines the general quality of your responses. I have generally dismantled vanilla BPO quality programs and distributed their functions to senior agents, but know that many are uncomfortable with the idea, so I’ve left this here. The delta sign is to help you understand performance from one measurement period to the next.

  • CSAT/NPS/CES - one or a mixture of these is usually how your operation will define success in the eyes of the customers, because they’re the ones rating your service. Each has its own emphasis and value. In the past, I’ve mainly focused on providing email and various chat options as service channels. However many you have, and however their rating systems work, you’ll want to track these numbers. The “%1s” is the percentage of an agent’s reviews in a specific channel that receives the lowest customer rating possible. These are of special note for pain analysis.

  • Relevancy - this means data relevancy, and it determines how much weight you want to place on an agent’s CSAT/NPS/CES numbers. If they have only been reviewed five times because they’re new, then even a perfect CES score isn’t very relevant, despite being a fantastic start. Conversely, if they’ve handled hundreds of tickets with a perfect score, then there’s definitely something worth paying attention to.

  • Additional Work - these are the skills the person brings to the table and the added value some of those skills deliver through projects. There are many things agents do besides tickets that are very valuable, and these should be captured and appropriately recognized. It is worth giving credit for certain operational activities outside of ticket work, so that people are not unfairly penalized for engaging in them. Some examples are self-service articles, the agent knowledge base, quickly training other agents on a process, building an automation, and more. Please be careful to document these efforts well and to note the results with strong supporting data.

  • Behavior - there are behaviors that make work a better place and that matter to a lot of us. We want to encourage these things. Then there are behaviors that make work hell. It’s up to each team how it wants to deal with these, but I’m a fan of putting some of it explicitly into the way we think about work performance. Remember, though: just like everything else, whatever you use must be backed by hard evidence and documented.
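The relevancy idea above can be made concrete with a simple shrinkage calculation: pull an agent’s score toward the team average when the sample is small. This is a minimal illustrative sketch, not part of the original tool, and the `prior_strength` value is an assumption you would tune for your operation.

```python
def weighted_csat(agent_score, n_reviews, team_avg, prior_strength=30):
    """Shrink an agent's rating toward the team average when the sample is small.

    agent_score:    the agent's raw average rating (any consistent scale)
    n_reviews:      how many reviews that average is based on
    team_avg:       the operation-wide average on the same scale
    prior_strength: pseudo-review count; higher = more skepticism of small samples
    """
    weight = n_reviews / (n_reviews + prior_strength)
    return weight * agent_score + (1 - weight) * team_avg

# A new hire with five perfect reviews barely moves off the team average...
new_hire = weighted_csat(agent_score=100, n_reviews=5, team_avg=80)
# ...while a veteran with 300 reviews keeps almost all of their own score.
veteran = weighted_csat(agent_score=100, n_reviews=300, team_avg=80)
```

The same pattern works for NPS or CES; only the scale and the chosen prior change.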

The Process

  1. Assuming you have your individual agent performance data, plug it into your spreadsheet.

  2. Stack rank your agents according to the KPIs and/or categories that hold the most weight.

  3. Each people manager then needs to find where their teammates fall in the ratings. If they’ve been doing a good job supporting their people, there shouldn’t really be many surprises.

  4. For those people managers who believe they have a case for one of their ‘performer’ agents to be a ‘top performer’, they should prepare themselves to advocate for that person during the review sessions with all the other managers. For those people managers that have people in the ‘low performer’ bracket that they believe should be moved up, they should also prepare.

  5. Leaders need to set a series of meetings where all people managers attend to review the stack-ranked list together. The conversations here are important, and through them you will be building a centralized understanding of what ‘good’ looks like. Focus on the borders, where some managers will have disagreements. Remember not to fudge where you draw the lines between groups too much. There have to be cutoff points, otherwise the definitions will lose their meaning. Everyone does not get a trophy.

  6. Once the conversations have been settled and the new stack rank is decided, people managers will review where their people fall in the list and support accordingly. It is important not to treat this like a high school exam, only to procrastinate until the next review period. The results and the conversations are fuel for you to act, so create plans for each of your teammates you’re responsible for. Managers must help their people grow.

A Few Notes:

  • Frequency - You can set the cadence however you please, but I personally prefer the trimester. Quarters tend to cause too much scrambling for my taste and there’s a lot of preparation that needs to be done. I also tend to work with outsourced partners, so I sympathize with the generally heavy amounts of paperwork they have to do.

  • Do a Test Run - Besides training and socialization, I would also make the first run (or two) a trial to help people get comfortable with it and to answer questions.

  • Statistics - I have found that a lot of first-time people managers could use statistics training, especially around the basics of inferential statistics.

  • Continued Low Performance - Eventually, every team will have to decide where the limit is for low performance. Define a required time period before issuing a PIP or whatever performance boosting mechanism you have. Please remember that PIPs are meant to help a person do well, so design and administer them with great care.

  • Additional Motivation - Be mindful not to ruin the intrinsic motivation of your best people by focusing only on extrinsic rewards. There’s a lot that can be done here that’s meaningful and shows a continued investment in their future.

  • Beware Stray Narratives - Always be talking to your agents to understand whether their perception of your performance reviews matches yours. If it doesn’t, try to understand why and make shifts as you can. Look to them for improvement ideas and open a real dialogue. Avoid the reflex to just tell others that they’re ‘getting it wrong.’

Conclusion

Every team needs a way to measure, understand, and talk about performance to create fairness at work. This is a basic outline of how a Player Support team could do it, one that you can modify to suit your needs. In the end you just need to find a way that’s true and fair, and takes the right things (that you decide) into account.

I would like to acknowledge my friends and colleagues who did a lot of the heavy lifting to build this tool and carry it through to execution. In no particular order: Tony Adams, Karol Potrykus, Dwayne Jenkins, James Urand, Pierluigi Paglione, my old teams at Riot Games and Epic Games - thank you so very much for all your hard work and trust.

If you have any questions, please feel free to send me an email at tony@playersupport.com or just leave a comment below.