
Are there any objective objectives?

image by gadl from flickr

My company is currently undergoing an initiative to improve our performance appraisal process. Some people don’t believe bonuses and increases are fairly paid. Our solution to this problem has been to train managers in objective setting and roll out an initiative where objectives set for the senior management team are rolled down to the departments under them so that everyone understands what they need to do to perform well, and how their objectives relate to the performance of the company.

Can you guess how this has been received? No prizes for guessing that we have some very unhappy and vocal people. I know that managing by metrics can drive the wrong behaviour, because as humans we will always game the metric. I’ve read all the articles suggesting that performance appraisals are bad and do more damage than good, but I’m faced with a few stumbling blocks in throwing it all away.

1. I work for a global mid-size corporate company. So far we do Scrum in one of our three development offices, covering less than 5% of the total company staff. The company is fairly invested in having a performance appraisal system, and I don’t have the influence to make it go away.

2. I myself have never really worried about performance appraisals. I’ve pretty much always been successful in my job, and usually performance appraisals are just a case of “you’re doing great, keep doing it”. So I can’t really relate to the people who are upset by this.

3. I feel uncomfortable going with Joel’s advice of giving everyone a glowing review, because unfortunately I believe we have a few lemons and I would prefer for them to realise sooner rather than later that they are not a good fit for our organisation, and move on.

4. We need to base pay increases on something. People believe what we do today is not fair. How can we implement something that is more fair and transparent? Dare I say something ‘objective’?

What we have done is set general objectives, with measurements specific to the products teams are working on. For example, we want to improve our unit test coverage. We use a tool to run all our unit tests on a nightly build. On one product the current coverage is 20%, so we set an objective to increase the coverage to 30% by the end of the year. I would expect the team to do this by aiming for higher coverage on new functionality, and by targeting areas of the old code to add unit tests to as they refactor those areas, but that’s not specified; it’s up to them how they want to achieve this. It’s a metric that’s easily measurable, and the team can see their own progress daily.

Hopefully this kind of metric will drive the behaviour of being more conscious of the number of unit tests they write, and of whether those tests effectively cover all code paths. I’m sure there is a way to game this metric which I haven’t thought of yet, but to me it is an honest expression that we are committed to quality and believe that developers should be writing unit tests. It is also a metric that teams can work on together, so that instead of doing Scrum where we promote teamwork and then set individual objectives, we have focused on objectives that teams can achieve together.
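To make the “can see their own progress daily” idea concrete, a nightly job could compare measured coverage against the pace needed to hit the year-end target. This is only a sketch of the kind of check I mean; the linear-pace assumption, the function name, and the numbers are illustrative, not what we actually run:

```python
# Hypothetical nightly check: is coverage growing fast enough to hit
# the year-end target? Assumes a simple linear pace, which is an
# illustration, not a claim about how progress actually accrues.

def coverage_on_track(current_pct: float, start_pct: float,
                      target_pct: float, months_elapsed: int,
                      total_months: int = 12) -> bool:
    """Return True if coverage has grown at least linearly toward the target."""
    expected = start_pct + (target_pct - start_pct) * months_elapsed / total_months
    return current_pct >= expected

# Example: starting at 20%, aiming for 30% by year end.
# Halfway through the year the linear pace expects at least 25%.
print(coverage_on_track(24.0, 20.0, 30.0, months_elapsed=6))  # prints False
print(coverage_on_track(26.0, 20.0, 30.0, months_elapsed=6))  # prints True
```

The point of a check like this isn’t to police the team; it’s that the same number everyone is measured on is visible to everyone, every day, instead of appearing once a year at appraisal time.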

The complaints we’ve had about the objectives are mostly around 3 topics:
1. How do I excel as an individual if my objectives are team-based?

2. How do I know that the targets are achievable?

3. I don’t have control over this.

My answers to these have been as follows:
1. Your individual performance is evaluated by your team members as your contribution to the overall team goal, and the way you go about your job. This is measured by feedback from your peers. The appraisal has 2 parts. How your team did vs targets, and how you did in your team. Both are important. If you are awesome, but your team sucks then you aren’t awesome enough to influence your team. If your team is awesome, but they think you suck, chances are your team did most of the work, but you are probably better that the guy who sucks, in the team that sucks, because at least you didn’t prevent your team from being awesome, and they haven’t thrown you out yet.

2. We don’t know either. This is the first time we’ve set measurable objectives. But we are reasonable, and will measure this on an ongoing basis and adjust it if the right behaviour is present, but the target is tougher than expected. Hopefully next year we will have a better view of what’s possible.

3. Yes you do. If your Product Owner tells you to ship it without unit tests, it is absolutely your right to stand up to him, or ask your Scrum Master to stand up to him. If you aren’t doing this, you aren’t doing your job as a team member of owning quality.

However, there are still lots of rumblings. I don’t expect they will go away until we actually go through an appraisal cycle with these objectives and people can judge for themselves whether it is truly better or not. My biggest concern is that it is currently derailing some planning meetings because it’s all anyone can talk about. Clearly the problem is not solved. Any advice out there from someone who has implemented something like this successfully?

11 Comments
  1. Sam #

    I was leaving a comment but then it became a bit long … so please read my thoughts here: http://www.inevitable.co.za/2010/02/annual-reviews-objectives-and-all-the-objections/

    February 23, 2010
  2. annu #

    A quick one that makes sense for me as a product owner, and is very easy to track: how many times does an acceptance test fail for a developer, and is there a pattern to it? Worth thinking about :-)

    February 23, 2010
    • I like it as a measure of how clear your communication about requirements is; however, I’m not sure this is solely in the control of the developers. If you have an absent Product Owner who specifies vague acceptance criteria and then only shows up at the review, there is not much the developer can do to improve this.

      February 23, 2010
  3. Damon #

    I’m not a fan of the unit test coverage metric on its own. It is too easy to game (test all setters/getters, simple code, etc.). I have seen code with 100% test coverage that was terrible code. What you need to do is have measures related to purpose. For example, the purpose of unit testing (and TDD) is to improve quality and make it easier for future change. I’d be interested therefore in whether cyclomatic complexity decreases as test coverage increases. Does the number of bugs encountered by customers decrease? (Note: not the number of bugs found by testers, as this is a false metric and again easily gamed.)

    But I’d always be wary of linking any kind of measure to a target, especially if said target is related to performance appraisal. As John Seddon says, “Targets make performance worse, always”. This is because there is then always an incentive to game the system. I’d rather have my team’s creativity focused on improving the system rather than cheating the system. Measures should be used by the team to help them identify problems with the system and see if their improvements have a measurable impact.

    I understand your situation however, as it is one many face, so my advice would be to work with the team to identify measures related to purpose, start collecting data related to those measures, and then allow the team to make some changes that they believe will improve the system. Once they get an insight into what the impacts are, they will be better able to agree on an achievable target that they feel in control of, so will have no need to game the system.

    February 24, 2010
    • Thanks for the advice. I like the idea of trying out a few metrics with the teams first and seeing which ones drive the right behaviour and which they believe they are actually in control of. Will give that a try.

      February 24, 2010
  4. Bob #

    I hope I am not one of the bad lemons. Then again, is it a poor fit for the lemons, or perhaps poor management of the lemons?

    March 2, 2010
    • You pose an interesting question. I might even blog about the very topic soon. But I use the term lemon to mean someone who is not delivering value in their current environment. It could be for a number of reasons, including that they are badly managed, but hopefully we can all agree it is not a desirable state for anyone, including the lemon. My point in this post is that I believe if a person is not delivering value, it really should be discussed openly with them, not in a blaming way, but in order to understand how to change that.

      March 3, 2010

