
Everyone dreads performance reviews. Here's how to fix them


No one — especially HR pros — likes performance reviews. They are time-consuming, ineffective and dreaded by managers and staff alike. But because they are the basis for compensation and promotions, they live on.

How did we get here? Hierarchical performance reviews — the most common type used today, where managers record their opinions of staff — were created by the military after World War I. The technique has hardly changed in the century since, even though many find it arbitrary and heavily influenced by individual biases and company politics. The 360-degree review style emerged in the 1930s, but it mostly added more subjectivity to the process: candidates generally picked their own reviewers, so politics and popularity still played a big role.


More recently, HR software companies emerged to help businesses automate performance reviews, but that only made a rotten process more efficient — not more effective. 

A better approach would account for these factors:

  • Most people now work in a networked rather than a hierarchical way. That means they collaborate across teams, not just with a direct manager, so reviews should solicit input from those networks in addition to direct managers and first-degree peers. At Confirm, we've found in our performance cycles that top performers impact dozens of coworkers from all around the company.
  • Remote work has eliminated water-cooler talk and "management by walking around." Managers need other ways to get visibility and evaluate performance. Many conversations that used to happen in the open now happen in closed point-to-point channels like Slack and Zoom. Collecting multiple views on a person's performance helps create a mosaic using a larger sample size of data.
  • The U.S. has moved toward a knowledge worker economy. There isn't much variation in how quickly a person can install a part on an assembly line. But there is a lot of variation in how well an engineer on a team can contribute to a new software feature. As work has become more complex and collaborative, those distinctions are harder to measure and demand new approaches.
  • Bias is unavoidable. Research has shown that more than 60% of a typical performance rating can be attributed to the idiosyncrasies of the manager. Unconscious bias is one such idiosyncrasy, distorting evaluations based on race, gender, age and other irrelevant factors. A broader group of reviewers can act as a check on any one manager's bias.

Using data to evaluate employee performance 
Organizational Network Analysis (ONA) is a technique for mapping employee relationships and patterns. We see an application for ONA in performance reviews. By asking employees a few specific questions, and then applying ONA techniques, companies can quickly surface which team members are making the most impact on the company, and which are not. Example ONA questions include:

  • Who do you consider to be a top contributor at the company?
  • Who do you believe needs additional support or attention?
  • Who at the company do you go to for help and advice?

The questions are presented via online survey and take just a few minutes to answer. This enables companies to assess performance more than once a year without burdening employees — many do it quarterly.
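The survey-to-score step can be sketched in a few lines. This is a minimal illustration, not Confirm's actual methodology: the response data, question keys and weights below are all hypothetical, and a production ONA system would use richer network analysis than simple weighted nomination counts.

```python
from collections import defaultdict

# Hypothetical survey responses: (respondent, nominee, question_key).
# The question keys mirror the example ONA questions above.
responses = [
    ("alice", "jane", "top_contributor"),
    ("bob",   "jane", "top_contributor"),
    ("carol", "jane", "go_to_for_help"),
    ("dave",  "john", "needs_support"),
    ("erin",  "john", "needs_support"),
]

# Illustrative weights: positive signals raise a score, support flags lower it.
WEIGHTS = {"top_contributor": 2, "go_to_for_help": 1, "needs_support": -2}

def ona_scores(responses):
    """Aggregate weighted nominations into a per-employee impact score."""
    scores = defaultdict(int)
    for _respondent, nominee, question in responses:
        scores[nominee] += WEIGHTS[question]
    return dict(scores)

print(ona_scores(responses))  # {'jane': 5, 'john': -4}
```

Because every nomination comes from a different coworker, the score reflects the whole network's view rather than a single manager's opinion.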


ONA helps companies identify their superstars — the people they absolutely need to retain — including whether they are a flight risk. It also identifies underperformers who may need assistance or may eventually be managed out. Importantly, ONA provides quantitative data based on a large sample size, not just one person's opinion, and that gives every person a fair shot at a high score and a raise or promotion. It tends to amplify the quieter voices in the room, and reduces the need for people to manage up or sell themselves (something not everyone is good at).

ONA in action
Often, ONA is used alongside manager ratings as a sanity check or to calibrate. Here's a real example: consider Jane and John, who are both are software developers in an engineering department. They are both in the same job function, job level and compensation band, and both received "Strong" ratings from their managers, making them eligible for the same raise and promotion. However, ONA ratings (i.e., what the network thinks) identify Jane as an exceptional performer and John as an underperformer. This indicates a breakdown in the system. In this case, both reviews merited a deeper look. After the employer calibrated with ONA ratings, Jane was promoted and John was eventually managed out of the company.


It's not unusual for manager ratings and ONA ratings to differ. In fact, Confirm data shows that it happens approximately half of the time. Because ONA relies on larger sample sizes, it can improve fairness by applying the same process to every person, and control for managers' various interpretations of performance criteria. It enables companies to base decisions about career advancement, retention and pay raises on hard data. It also saves hundreds of hours previously spent on a broken performance review process.

While software and data have transformed many areas of HR — think applicant tracking systems, payroll and learning & development — performance reviews have been frozen in time. It's about time that changed. 
