When Managers Misbehave

By Linnea Gandhi (TGG Group)

“Are you sure you really want to be a consultant?” My manager’s words rang harshly in my ears during my first review. I was a new consultant with limited Excel skills, and my performance could certainly have used improvement—but was it bad enough that I should have been considering a different line of work?

Six years and many more consulting projects later, I’ve gained enough distance and experience to realize that my overall performance was far from the only factor affecting my manager’s judgment. Maybe he was right, and based on my lackluster Excel skills alone there was a high probability I would fail in consulting. Maybe, though, he was misbehaving, influenced by factors that should have been irrelevant (or at least not weighted quite so strongly) in his evaluation.

Was it due to inappropriate anchoring?  His reference point may have been the senior consultant on the project, a man he clearly esteemed and even rated as one of the best he’d ever worked with. Next to this more seasoned colleague, I may have looked inadequate even if my skills were on par with my level. How about recency bias? The timing of the review was less than ideal: just the day before, my manager had caught a major mistake of mine only minutes before a big meeting. Or perhaps the affect heuristic played a role? The whole team had been working overtime to meet a deadline, and we were all beginning to feel the effects of sleep deprivation.

As much as I might question my manager’s judgment, I can’t blame him. Chances are pretty high that I – and you as well – similarly misbehave when evaluating our colleagues. Our judgments of their performance and potential are swayed by systematic bias and random noise, even when we are aware of these tendencies and try to correct for them. Recent events and nearby reference points (as above), memorable successes and failures, and even uninformed first impressions stick with us easily and imperceptibly, unduly shaping our later reflections on overall performance.

The impact of biases on our professional judgment doesn’t stop with performance evaluation. When it comes to organizational decision-making, one of the richest areas in which to spot misbehaving is HR. Recruiting talented individuals, evaluating their performance over time, and then compensating them in a fair and motivating way—all on the basis of limited or even erroneous information—is a subjective and incredibly complex undertaking. In such a context, biases can easily take over our decision-making.

But smart companies are starting to do something about it. Over recent years, and particularly in the last few months, organizations have begun to test solutions to these biases and, in the process, “reinvent” or even “blow up” HR (as the Harvard Business Review put it, rather dramatically).

Emerging Examples

  • In 2013, Microsoft very publicly abandoned forced (or “stack”) ranking, a system that rigidly ranked employees against one another from best to worst across a fixed distribution (i.e., a set percentage of employees had to land in each performance bucket, regardless of the true shape of the team’s performance). Under forced ranking, an employee’s fate often depends on who else happens to be on the team or at a given level (a toy sketch of these mechanics follows this list). Dropping this system, as Microsoft did, can help reduce the influence of irrelevant or arbitrary reference points and prevent superstar outliers from overshadowing solid performers on a small team—not to mention encourage greater collaboration and a focus beyond the current evaluation cycle. Expedia, Adobe, and others have similarly moved away from forced ranking, or even from ratings in general.
  • As described in detail in Laszlo Bock’s Work Rules!, published this spring, Google has experimented with its recruiting and performance evaluation systems for years. In addition to educating employees about potential biases, the company has developed processes to de-bias its decisions and to gather the best predictors of future job performance. These include work sample tests (used for all technical hires), tests of general cognitive ability, and structured interviews (every interviewer follows a consistent set of questions and rating scales). Together, these tools help ensure that when candidates receive different scores, it is because of actual differences in their performance, not differences between interviewers.
  • This April, Deloitte boldly announced its adoption of an evaluation approach aimed at reducing rater bias (e.g., the fact that one rater may be more or less lenient than another). Instead of soliciting raters’ beliefs about an employee’s skills, the new review asks about the rater’s planned actions with respect to the employee. Raters respond to a series of four statements, including “This person is ready for promotion today” and “Given what I know about this person’s performance, I would always want him or her on my team.” Deloitte believes that raters will be more consistent when reporting their own intentions than when reporting their beliefs about others. Though this approach has yet to be rigorously tested, its success would be notable for the HR community, given the potential gains in accuracy, simplicity, and efficiency.
  • My current employer, TGG Group, has tested and integrated decision aids into its recruiting process, as have other behaviorally minded organizations such as ideas42, where experimentation is embedded in the culture. Two notable characteristics of these aids at TGG Group are independent assessments (i.e., recruiters evaluate candidates on their own before discussing them together) and comparative judgments on a flexible, non-forced distribution (i.e., recruiters rate relative performance on those qualities where absolute performance is likely to be measured inconsistently).
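
To make the forced-ranking mechanics in the first example concrete, here is a toy Python sketch of my own (not drawn from Microsoft or any company above); the team, scores, and quota percentages are all hypothetical.

```python
# Toy illustration of forced ranking: bucket membership is set by fixed
# quotas and rank order, not by any absolute standard of performance.
# All names, scores, and quotas below are hypothetical.

def forced_rank(scores, quotas=(0.2, 0.7, 0.1)):
    """Label each person 'top', 'middle', or 'bottom' under fixed quotas."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    n_top = max(1, round(quotas[0] * n))
    n_bottom = max(1, round(quotas[2] * n))
    labels = {}
    for i, person in enumerate(ranked):
        if i < n_top:
            labels[person] = "top"
        elif i >= n - n_bottom:
            labels[person] = "bottom"
        else:
            labels[person] = "middle"
    return labels

# A hypothetical five-person team where everyone performs well (scores out of 100).
team = {"Asha": 91, "Ben": 90, "Chen": 89, "Dana": 88, "Eli": 87}
print(forced_rank(team))
# Someone still lands in the "bottom" bucket even though all five scores sit
# within four points of each other -- the quota, not performance, decides.
```

Swap in a different set of teammates and the same person’s bucket can change, which is exactly the reference-point problem the Microsoft example highlights.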

What can you do?

As examples like these continue to emerge – and there are many more out there – it may be tempting to jump on the bandwagon and adopt the latest HR trend. But before doing so, it’s critical to contemplate not only “Will this work?” but also “Will this work for my particular organization?” When we’re applying behavioral science outside of the lab, context matters. Is the performance cycle project-based? Are there quantifiable metrics of performance for all employees? How much variation among employees do existing compensation systems assume, and should those systems evolve as well? Given the novelty of behaviorally-informed solutions and the natural diversity of our organizations, we need to re-design our HR processes with care, transition at the right pace for the culture, collect data on what works and what doesn’t, and then re-design and re-test all over again.

Lest I leave you on too cautionary a note, here’s one tactical solution you can implement immediately to mitigate your own misbehaving when evaluating colleagues: create a feedback log. Once or twice a week, dedicate ten minutes to reflecting on specific examples of each colleague’s accomplishments and areas for improvement, and write them down. (As my professor Linda Ginzel always says: if you don’t write it down, it doesn’t exist!) For good measure, you can even set up a recurring calendar invite to remind you to reflect. You’re now officially collecting data, so when your next feedback cycle or even an ad hoc chat comes around, you can review your log for actual patterns to compliment or constructively critique, rather than be biased by whatever comes easily to mind. And you’ll have taken a critical step toward what our beloved misbehaving author (Professor Richard H. Thaler) would call evidence-based management.
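
If you prefer to keep the log digitally rather than on paper, here is a minimal Python sketch of what it could look like; the file name, note format, and example entries are my own assumptions, not part of the recommendation above.

```python
# Minimal feedback log: append dated notes as you observe them, then pull a
# colleague's history before a review or ad hoc chat. File name and entries
# are hypothetical.
from datetime import date

LOG_FILE = "feedback_log.txt"  # assumed location for the log

def log_feedback(colleague, note):
    """Append one dated observation about a colleague."""
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(f"{date.today().isoformat()}\t{colleague}\t{note}\n")

def review(colleague):
    """Return every logged note about one colleague, oldest first."""
    with open(LOG_FILE, encoding="utf-8") as f:
        return [line.strip() for line in f if f"\t{colleague}\t" in line]

# Example usage with hypothetical entries:
log_feedback("Jordan", "Caught a modeling error before the client meeting.")
log_feedback("Jordan", "Slide deck needed two rounds of edits for clarity.")
print("\n".join(review("Jordan")))
```

A plain notebook works just as well; the point is simply that the notes are dated, specific, and reviewed before the conversation rather than reconstructed from memory.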

Here’s to ridding the world of misbehaving managers…including ourselves!

Image credit: Flazingo Photos