Cadence

Performance

What Your Performance Review Ceiling Is Actually Telling Your Team

Capping performance scores to avoid grade inflation doesn't protect the system. It lies to your best people, and they notice.

Sean Davis
Founder at Cadence · April 15, 2026 · 8 min read

I sat across from a manager once and received a 4 out of 5.

I had led my team through a reorg that year. Hit every goal we'd set in January. Got unsolicited praise from two other department heads. By every measure we'd agreed on, it had been a strong year. My manager walked through the form, gave me specific, positive feedback on most of it, and said: "I gave you a 4. You're definitely at the top."

I thanked him. Then I asked what a 5 would have looked like. He paused. "Honestly? It's more of a formality. Nobody really gets a 5. That's just how we do it here."

I didn't argue. But I thought about that conversation for months. Not because I needed a different number. Because I couldn't figure out what it was supposed to tell me. I had done exceptional work by every standard we'd both agreed on. The score said "very good." The policy said: very good is the ceiling.

I wasn't angry. I was confused. And confused is a dangerous state for a high performer, because it starts them asking questions you don't want them asking.

That's the annual review trap: a policy that looks like sound management, designed to keep ratings honest, but ends up doing the exact opposite.

Where this policy came from

The "nobody gets a perfect score" rule has real roots. Jack Welch popularized what he called the "vitality curve" at GE in the 1980s: a forced distribution where the top 20% of employees were rewarded, the middle 70% were developed, and the bottom 10% were let go. The logic was that performance follows a bell curve, and managers needed to stop inflating ratings to avoid hard conversations.

Microsoft adopted a version called stack ranking, where teams competed for a fixed number of top spots regardless of how they actually performed against an objective standard. The result, by Microsoft's own internal account, was what employees described as a "game of personal destruction," where the incentive was to outmaneuver colleagues rather than do excellent work. Microsoft dropped the system in 2013. GE officially walked away from the vitality curve in 2015.

The bell curve assumption only holds at scale, and even then it's often wrong. The deeper problem: forced distribution assumes poor performers are present on every team in predictable proportions. But after a few years under the system, the poor performers have already left. The people who remain are mostly performing well. The quota still requires someone at the bottom, so a high-performing, motivated person gets designated a low performer, not because their work is poor, but because the math requires it.

For a small team of five or eight or ten people, the bell curve is mathematically indefensible. You might hire well, manage well, and build a team where three people have an exceptional year. Nothing about that is impossible. The policy forbids it anyway.

What the rating actually communicates

A performance rating, at its best, is honest information. It tells someone: here is how your work measured against the standard we agreed on at the start of the year.

When the top rating means "exceptional by any reasonable measure" and someone has done exceptional work by every reasonable measure, giving them the second-highest rating isn't calibration. It's a false signal. You are telling them, in formal writing, that their work was very good when you both know it was exceptional. They will notice the gap between what you said all year and what the document says in November.

Kim Scott writes in Radical Candor about what she calls "ruinous empathy," the impulse to stay quiet or soften the truth to protect someone's feelings, which ends up hurting them more in the long run. The "nobody gets a perfect score" policy is a structural version of that same failure. It doesn't protect the employee. It protects the organization from the discomfort of admitting that it doesn't always have something more to demand from its best people.

A second-place rating when someone deserved first doesn't motivate them to reach higher. It teaches them that the system isn't honest. Once they believe the system isn't honest, that belief doesn't stay contained to the review. It spreads.

"The people most likely to notice when the system isn't telling the truth are the people you can least afford to lose."

The people this hurts most

Artificial rating caps don't discourage your underperformers. They penalize your best ones.

Cornell University research published in 2025 found that high achievers are significantly more likely to leave organizations when top rankings are restricted. They're not leaving over a number. They're leaving because the number communicates something about whether exceptional work actually matters here.

Liz Wiseman writes in Multipliers about the difference between managers who grow their team's intelligence and those who just use it. A rating policy that systematically withholds recognition from your best people sends a clear signal: we'll use what you bring, but we won't fully acknowledge it. That's not an environment where exceptional people stay. It's an environment where they start doing math.

The math isn't complicated. A recruiter reaches out on LinkedIn. They think about whether to respond. If their last review told them they were very good in a system where very good is the ceiling, they don't have a strong internal argument for staying. You've handed them a reason to leave while thinking you were managing your review process responsibly.

Devon had been on my team for two years when I made this mistake. I knew the policy. I gave him the second-highest rating. He said thank you and left the meeting. Six months later he gave notice. In his exit conversation he said something I've thought about since: "I assumed that if I'd done everything right and still didn't get the top rating, either the bar was different than I thought, or it didn't really matter." He wasn't wrong to think that. I had told him exactly that. Just not in words.

What an honest review actually requires

The review should never be a surprise. If it is, the problem isn't the rating. It's the year of conversations that should have happened before the review arrived.

A few things that make reviews work:

  1. Rate against the standard, not the curve. At the start of the year, agree on what each rating level actually looks like for this role. Then rate against that definition. If two people hit exceptional, two people get the top rating. If no one does, no one does. The number means something because it connects to a real definition, not a quota.

  2. Separate the rating from the compensation conversation. One reason organizations cap ratings is that ratings drive compensation, and managers don't want to make promises the budget can't keep. That's a real constraint. The fix is to have two separate conversations: here's how I rated your performance, and separately, here's what that means for compensation and here's what it doesn't. Don't let a budget problem corrupt the honesty of the rating.

  3. If it's not the top, say exactly why. "You hit every goal we set and I gave you the second-highest rating. To get to the top, I need to see you leading work that crosses into other teams and building visibility with senior leadership" is useful. "Nobody really gets the highest score" is not. One gives someone something to work toward. The other tells them the ceiling is fixed regardless of what they do.

  4. Ask them to rate themselves first. Before you share your rating, ask them to assess their own performance. The gaps between their self-assessment and yours are where the real conversation is. If they say top rating and you say second, you now have to explain the difference specifically. That's not a comfortable conversation, but it's an honest one. Honest is what the review is supposed to be.

Common mistake: Using the annual review to deliver feedback for the first time. If someone is hearing in November about something that happened in March, the timing failed, not the review format. The review should summarize a year of conversations that already happened, not replace them.

The policy is protecting the wrong thing

Organizations that cap ratings are usually trying to solve a real problem. Grade inflation is real. When every manager gives everyone high scores to avoid hard conversations, the rating scale becomes meaningless and the organization loses its ability to differentiate at all. That's worth preventing.

But the solution isn't an arbitrary ceiling. The solution is to make the standards clear enough that ratings actually mean something. A top rating handed out carelessly is meaningless. A top rating that means "you did exactly what we both agreed exceptional looks like" is valuable, and withholding it when it's been earned is a form of dishonesty that compounds quietly over time.

Patrick Lencioni writes in The Five Dysfunctions of a Team that trust is the foundation of everything, not trust as warmth, but trust as the belief that the people around you will tell you the truth. A rating policy that systematically withholds honest scores doesn't build that foundation. It erodes it, one review at a time, until your best people decide they'd rather work somewhere that levels with them.

I built Cadence partly because the annual review problem is mostly a year-round problem in disguise. When 1:1 notes, goals, and feedback all live in the same place, the review stops being a moment where you surface eleven months of thinking. It becomes what it was always supposed to be: a summary of the honest conversation you've been having all along. The rating at the end is just the last sentence of a story the person already knows.

If someone on your team had an exceptional year, tell them. Write it down. Give them the top rating. They earned it. And if the policy says otherwise, the policy is wrong.


And if you want a tool that keeps the thread between your weekly conversations and your annual reviews, try Cadence free at app.cadencehq.co: 1:1 notes, goal tracking, and team visibility in one place. No credit card required.

Sean Davis
Founder at Cadence

Sean Davis leads operations across multifamily, commercial, and mixed-use real estate portfolios. After years managing teams without the right tools, he built Cadence. He writes about clarity, accountability, and what it actually takes to lead well.

Make your reviews mean something.

Cadence keeps your 1:1 notes, goals, and feedback in one place so the annual review reflects the year, not just the moment. 14-day free trial.

Start free