Throughout its history, Microsoft frowned upon developers who were "just doing their jobs". Its review model used to have the following grades:
- 2.5 - "below expectations" (you're about to be fired)
- 3.0 - "at the level's expectations"
- 3.5 - "exceeds some level expectations"
- 4.0 - "exceeds most expectations"
- 4.5 - "one of a kind" (I've only seen this given out 3 times throughout my career as a manager, across all the teams I've managed and all the sister teams)
- 5.0 - "special service to the company" (I have never seen or heard of this given to a developer)
These numbers were graded on a curve. For levels below 65, roughly a third of all people would get a 4.0, a third a 3.0, and the rest a 3.5. Ratings above and below this range were extremely rare.
Although the HR mantra was that grading on the curve was completely unacceptable for smaller teams, and that only teams of 25 people or more should fit the curve, it was extremely difficult for managers to reconcile performance across groups (and worse, across disciplines). So in reality, most teams at or below the level of a discipline manager (dev manager, test manager, or GPM) were forced onto the curve even when they were smaller.
Another HR mantra was that 3.0 was a good, passing grade, but in reality the bonus and merit-raise numbers spoke for themselves. I've seen (although only once) a person fired for being stuck at a level with a string of 3.0s. You couldn't get promoted until you got into 3.5-4.0 land and stayed there for a while. And, of course, finding a new job was a lot easier if you were a 3.5-4.0 developer rather than a 3.0 one.
The year before I left the company, the review system changed to remove any ambiguity. The stock ratings (which were originally A through D and not revealed to employees, or even to first-level managers) became "limited", "strong", and "outstanding", and focused strictly on promotability.
The distribution was 20-70-10: 20% of people were supposed to be "outstanding", 70% "strong", and 10% "limited". The problem was that although the ratings were awarded for promotion potential, they were labeled "contribution", so if you had reached your (manager-perceived) zenith, you were told that your contribution was "limited".
I remember being absolutely furious at the time: I would have to tell some of my most senior (and most productive) people that their contribution was "limited". WTF!? Who invented this stupid review system? That person was definitely "limited"! (I was told Ballmer invented these labels, but I don't know for sure.)
So this review period (which, thankfully, I didn't have to do!) they renamed the buckets to just "20", "70", and "10" to eliminate the stigma. I don't know if it worked.
In other words, MSFT does not have a strict "up or out" rule that says once you are no longer promotable you should get out, but it sends a strong hint. Which means that once you're at the top of your career, you're not particularly welcome there.
Incidentally, most full-time test engineer jobs (as opposed to developer-in-test jobs) were eliminated and transferred to contract status, in part because there was not much career potential at Microsoft for engineers who were not developers.
The results of all this aren't particularly great. It saps the morale of both the most senior (and often most productive) contributors and their immediate managers. The elimination of testers unquestionably hurt the quality of the products, as well as the productivity of the test developers, who now had to do both automation AND manual testing (at a much higher pay scale, of course, than the manual testers used to cost).
I wonder what the top execs were thinking when they created this. The Peter Principle, perhaps? Something else? Did this effect just slip through the cracks when the company was young and most people retired before reaching their maximum level?
What do you think?