Australian tech darling Atlassian is overhauling its approach to annual performance reviews in a bid to broaden its definition of success in the workplace.
The changes were made in response to the phenomenon of ‘brilliant jerks’, workers who smash formal targets or key performance indicators, but do so while being high maintenance or disruptive.
Bek Chee, Atlassian's global head of talent, said the overhaul also addresses cognitive biases in traditional review processes that can unfairly affect women and underrepresented communities, both in tech and in the wider corporate sector.
“For example, we know that women who exhibit leadership behaviors are often characterised as ‘bossy’ while their male peers showing the same characteristics are considered ‘leaders’ and often times promoted as a result,” Chee said in a blog post.
Atlassian's new scheme weighs three areas equally to build performance ratings:
- Expectations of the role
- Contribution to the team
- Demonstration of the company's values
A brilliant jerk might tick the first box, but a sub-par contribution to a healthy team environment would take them down a peg.
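Atlassian hasn't published the mechanics of its rating formula, but an equal three-way weighting amounts to a simple average, which can be sketched as follows. The function name, score scale, and example numbers are illustrative assumptions, not the company's actual system.

```python
def performance_rating(role: float, team: float, values: float) -> float:
    """Illustrative sketch: each of the three areas contributes one third
    of the final rating. A 1-5 scale is assumed; Atlassian's real scale
    and weighting are not public."""
    return (role + team + values) / 3

# A 'brilliant jerk': strong on role expectations, weak on team and values.
jerk = performance_rating(role=5, team=2, values=2)        # → 3.0

# A solid all-rounder ends up higher overall despite a lower role score.
all_rounder = performance_rating(role=4, team=4, values=4) # → 4.0
```

Under this kind of scheme, no single area can carry a rating on its own, which is the point of weighting the three equally.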
“Our goal was to create a system that recognizes and rewards employees for work that often goes unnoticed or unaccounted for,” Chee said.
“This can be especially true for traditionally underrepresented groups who tend to volunteer for (and in some cases, are assigned) a greater percentage of work related to team building such as planning team offsites or leading community lunches for employees of colour.
“By including team contribution into our assessment, we’re ensuring that everyone is equally incentivised to contribute to their team environment and are recognised for doing so.”
Chee argued that traditional review processes can be “too little, too late” to identify or mitigate such behaviour, and that a more holistic, structured review process also helps guard against reviewers’ unconscious bias.
“Research shows that unstructured or non-specific review criteria are most likely to create biased assessments, so we introduced detailed descriptions of the behaviours and impact associated with each performance level.
“Our experiments also showed that by rating values, role, and team contribution separately, we avoid the halo effect, where performance in one area unnecessarily influences ratings in other areas.”
The rollout of the new review process has been staggered over 12 months to help the company calibrate its reviews and confirm that the assessments are fair and lead to actionable insights.
Chee said early feedback has shown modest but notable gains in the quality of review outcomes: a 10 percent improvement in how employees rate the assessment of individual performance, and a nine percent increase in the feeling that feedback will actually help them improve their performance.