Most teams start with one customer experience metric, then add another, then a third, until the dashboard sprouts numbers nobody quite knows what to do with. The choice between NPS vs CSAT vs CES isn't really three options. It's a question of where each one belongs in the experience you're trying to manage.
NPS, CSAT, and CES all measure something real, but they measure different things. Used well, they each play a distinct role in a measurement system. Used badly, they produce overlapping signals that nobody acts on.
What follows is a practical guide to deciding where each metric belongs, when to add a second one, and what to do when a score moves.
## NPS vs CSAT vs CES at a glance
NPS, CSAT, and CES are the three most common customer experience metrics, each focused on a different aspect of the relationship.
- Net Promoter Score (NPS) measures relationship-level loyalty. How likely a customer is to recommend you.
- Customer Satisfaction (CSAT) measures satisfaction with a specific experience or interaction.
- Customer Effort Score (CES) measures how much effort a customer had to put in to get something done.
If you're picking one to start with, NPS works for relationship health, CSAT for moments that matter, and CES for friction in specific tasks.
| Metric | Focus | Scale | When to ask | Sample question |
|---|---|---|---|---|
| NPS | Relationship loyalty | -100 to +100 | Quarterly or biannual | How likely are you to recommend us to a friend or colleague? |
| CSAT | Specific experience satisfaction | 1-5 (most common) | After a key event | How satisfied were you with [interaction or experience]? |
| CES | Effort or friction in a task | 1-5 or 1-7 (agreement) | After a task completes | [Company] made it easy for me to [resolve or complete the task]. |
Each one answers a different question. The rest of this post is about where each belongs and what each should trigger when the score moves.
## What each metric actually measures
Here's what each metric is and how it's calculated. The differences matter because they determine where each metric works and where it doesn't.
### Net Promoter Score (NPS)
NPS asks one question. How likely are you to recommend [company or product] to a friend or colleague?
Customers respond on a 0-10 scale. The math is simple. Subtract the percentage of detractors (0-6) from the percentage of promoters (9-10). Passives (7-8) don't count. Scores range from -100 to +100, with most healthy ranges falling between 0 and 50 depending on industry.
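The arithmetic can be sketched in a few lines of Python (a minimal illustration, not tied to any particular survey tool):

```python
def nps(responses):
    """Net Promoter Score from raw 0-10 responses:
    % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) are counted in the total but score zero."""
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return (promoters - detractors) / n * 100

# 5 promoters, 1 passive, 2 detractors out of 8 responses
print(nps([10, 9, 9, 8, 6, 2, 10, 9]))  # 37.5
```

Note that passives still appear in the denominator, which is why a flood of 7s and 8s drags the score toward zero without registering as detraction.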
NPS captures the strength of the relationship over time. It doesn't tell you why the score is what it is, just whether the customer is on your side. Run it quarterly or biannually. More often produces noise without signal.
### Customer Satisfaction Score (CSAT)
CSAT asks how satisfied a customer was with a specific experience.
The phrasing varies. A common version is "How satisfied were you with [your purchase, the support interaction, your onboarding]?" on a 1-5 or 1-10 scale. The most common calculation is top-box, the percentage of respondents scoring at or near the top of the scale.
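A top-box calculation looks like this (a sketch assuming a 1-5 scale where 4s and 5s count as the top box; some teams count only 5s):

```python
def csat_top_box(responses, top_box=(4, 5)):
    """Top-box CSAT: percentage of respondents scoring at or
    near the top of the scale. Which scores count as 'top box'
    is a team convention, passed in here as a parameter."""
    hits = sum(1 for r in responses if r in top_box)
    return hits / len(responses) * 100

print(csat_top_box([5, 4, 3, 5, 2]))  # 60.0
```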
CSAT tells you whether a specific moment worked. Did support resolve the issue? Did onboarding deliver? It doesn't tell you about long-term loyalty or whether the customer renews.
Send it shortly after the experience it's measuring. The closer to the moment, the more accurate the response.
### Customer Effort Score (CES)
CES asks how easy something was.
There are two common phrasings: a direct ease question, "How easy was it to [complete this task]?" on a 1-5 scale, or an agreement question, "[Company] made it easy for me to [resolve my issue]" on a 1-7 agreement scale. The calculation is usually an average score or a top-box percentage of agreement.
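Both calculations can be reported from the same response set. A sketch for the 1-7 agreement variant (the 5-and-up agreement threshold is an assumption; it's a common choice on a 7-point scale but not a standard):

```python
from statistics import mean

def ces(responses, agree_threshold=5):
    """CES on a 1-7 agreement scale. Returns both the average score
    and the share of respondents who agree (score >= threshold)."""
    avg = mean(responses)
    agree_pct = sum(1 for r in responses if r >= agree_threshold) / len(responses) * 100
    return avg, agree_pct

avg, agree = ces([7, 6, 5, 3, 6])
print(avg, agree)  # 5.4 80.0
```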
CES tells you where the friction is. It's the most actionable of the three because the answer often points directly at a specific UX or process problem. Gartner research has shown that low effort correlates strongly with future loyalty in transactional contexts, particularly support.
## Relational vs transactional
Keep all three metrics straight by remembering that NPS is relational and the other two are transactional.
NPS measures the relationship at a point in time. The question doesn't ask about a specific experience. It asks how the customer feels about you overall.
CSAT and CES are transactional. They measure a specific moment, a stage, a step, an interaction.
**NPS (relational)**

- Measures the relationship at a point in time
- Asks about overall sentiment, not a specific moment
- Captures advocacy and loyalty signal
- Run on a cadence

**CSAT and CES (transactional)**

- Measures a specific moment or interaction
- Asks about a stage, step, or task
- Captures whether that moment worked
- Triggered by an event
This split tells you where each metric belongs. A relational metric without transactional ones gives you the temperature of the relationship but never the cause. NPS dropped, but you don't know why. Transactional metrics without a relational one give you readings on every interaction without an overall direction. Customers rated their support 4 out of 5, but are they staying or leaving?
Most measurement systems need at least one of each. A relational anchor, usually NPS, plus one or two transactional probes for the moments that matter most.
## Where each metric belongs on a customer journey
Once you accept the relational/transactional split, the next question is where each metric goes in your customer journey. The answer maps neatly to journey structure.
Customer journeys are layered. The whole relationship sits at the top. Stages sit underneath, things like awareness, onboarding, first value, renewal. Steps and touchpoints sit underneath the stages, the specific actions a customer takes. Each metric naturally belongs at one of these layers.
NPS at the relationship layer. Run it on a cadence, usually quarterly. The question is broad, and the answer reflects the entire experience over time. Don't tie it to a single stage or step.
CSAT at stages or after key interactions. After onboarding completes, after a customer reaches first value, or after renewal. The question captures whether the stage delivered what it promised. Trigger it once the stage has wrapped, not in the middle.
CES at steps and touchpoints. Signup, password reset, checkout, support ticket close, self-service flows. The question points at a specific step, and the answer points at a specific fix.
**NPS (relationship layer)**

- Quarterly or biannual cadence
- Broad question, no specific moment
- Captures overall direction
- Reflects the relationship over time

**CSAT (stage layer)**

- Triggered by stage completion
- Captures whether the stage worked
- Useful after onboarding, first value, renewal
- Tied to a specific stage outcome

**CES (step layer)**

- Triggered by task completion
- Captures friction at a specific point
- Useful for signup, support, self-service
- Points directly at a step-level fix
Take a B2B SaaS onboarding journey as an example. CES on the signup step (was creating the account easy?). CES again on the first-value-action step (was activating the integration easy?). CSAT after the onboarding stage as a whole (did onboarding deliver what we promised?). NPS at the 30- or 60-day mark (how does the customer feel about the relationship now?).
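That placement can be written down as a simple event-to-survey mapping. The event names below are hypothetical, illustrating the onboarding journey above rather than any real product's schema:

```python
# Hypothetical survey plan for the B2B SaaS onboarding journey.
# Keys are journey events; values say which survey fires and at what layer.
SURVEY_PLAN = {
    "signup_completed":      {"metric": "CES",  "layer": "step"},
    "integration_activated": {"metric": "CES",  "layer": "step"},
    "onboarding_completed":  {"metric": "CSAT", "layer": "stage"},
    "day_30_relationship":   {"metric": "NPS",  "layer": "relationship"},
}

def survey_for(event):
    """Look up which survey, if any, an event should trigger."""
    return SURVEY_PLAN.get(event)

print(survey_for("onboarding_completed"))  # {'metric': 'CSAT', 'layer': 'stage'}
```

The value of writing the plan down explicitly is that every survey has a declared trigger and layer, which makes overlap and gaps visible before customers see a single questionnaire.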
Knowing how to measure customer experience starts with knowing where each measurement belongs in the experience you're managing.
## How to choose where to start
Most ranking posts on this topic eventually land on "use all three." That's accurate but not useful. If you're starting from zero, deploying three surveys at once produces noise, not insight.
A better question is which one to start with based on what you actually need to know.
Start with NPS if you need an executive-readable signal of relationship health and have no current measurement program. It's the most benchmarkable across industries and the easiest to explain at the leadership level.
Start with CSAT if you have specific touchpoints, support, sales handoff, onboarding, where you need to know whether the experience worked. CSAT surfaces problems faster than NPS but tells you less about overall direction.
Start with CES if your immediate concern is friction. Support is overloaded, churn is unexplained, or a specific journey step is dropping off. CES is usually the fastest to act on because it points directly at a fixable interaction.
A practical sequence for a maturing CX program:
- Implement one relational metric first, usually NPS, on a cadence
- Add CSAT to two or three high-stakes stages where you need to know the experience worked
- Add CES to specific steps where friction is suspected or where churn correlates
Each step adds operational cost. Stacking all three on day one without a plan produces a data pipeline nobody owns and a dashboard nobody reads.
## How to combine all three without burning out customers
Survey fatigue is real. Response rates fall, the customers who answer skew toward the most engaged or the most annoyed, and the data degrades. The honest constraint of layering NPS, CSAT, and CES is that you have to coordinate them.
A few rules of thumb that hold up in practice:
- One survey per customer in a given short window. For active customers, a reasonable ceiling is one per month. For low-touch customers, less.
- Coordinate placements so they don't overlap. If NPS goes out at the 30 day mark, push the next CSAT trigger to a later point.
- Sample for relational surveys when your customer base is large. You don't need every customer answering every NPS round.
- Tie each survey to a specific decision or follow-up. If nobody is going to act on the data, don't ask the question.
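The first rule, one survey per customer per window, is the easiest to enforce in code. A minimal sketch, assuming you track the last survey date per customer and use a 30-day window (tune the window per segment):

```python
from datetime import date, timedelta

def eligible(last_surveyed, today, window_days=30):
    """True if the customer hasn't been surveyed within the window.
    last_surveyed is None for customers never surveyed."""
    if last_surveyed is None:
        return True
    return today - last_surveyed >= timedelta(days=window_days)

print(eligible(date(2024, 1, 1), date(2024, 1, 20)))  # False (19 days)
print(eligible(date(2024, 1, 1), date(2024, 2, 15)))  # True (45 days)
```

Run this check at every survey trigger point, regardless of which metric is firing, so an NPS round and a CSAT trigger can't both land on the same customer in the same month.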
This is part of a broader customer experience strategy. Each survey should live somewhere specific on the customer journey, with a clear purpose, and connect to a decision. Generic touchpoint check-ins, the "how are we doing" surveys that arrive without context, get ignored.
## What each metric should trigger when it moves
Tracking a metric is half the work. The other half is knowing what to do when it changes. Most measurement programs stop at the dashboard, which is the main reason teams lose interest in their own data.
NPS drops at the relationship level. Treat as an executive-level signal. The drop usually doesn't tell you the cause, so the next step is a stage-by-stage diagnostic. Pull recent CSAT and CES movement by stage. Run a few qualitative interviews. Cross-reference with churn and renewal data. NPS is the alarm bell. The investigation happens elsewhere.
CSAT drops at a stage. Trigger a stage-level review. Pull recent verbatims. Check whether a recent product change, pricing change, or process change correlates with the dip. Look at CES at the underlying steps to see where the friction sits. CSAT tells you where the problem is, CES tells you what the problem is.
CES drops at a step. Trigger a touchpoint or step-level fix. CES drops are usually the easiest to act on because they point at a specific UX or process problem. Capture the finding as a portfolio item, a pain point or opportunity, so the fix lands in delivery rather than a slide deck.
A score that doesn't trigger anything is just a number. Teams that get value from these metrics treat each score change as a trigger for a specific kind of investigation, with a specific kind of follow-up.
## Common mistakes

Six patterns we see often:
- **Treating NPS as a customer service metric.** NPS measures relationship health, not service quality. Use CSAT or CES for service.
- **Running NPS too often.** Quarterly or biannual is enough. Monthly produces noise and accelerates fatigue without revealing more.
- **Asking CSAT immediately after a long, frustrating interaction.** The score reflects mood as much as experience. Give a small buffer.
- **Reporting CES as a single average.** The point of CES is identifying which steps are hard. A rolled-up number tells you "service is somewhat easy" and nothing else useful.
- **Comparing absolute scores to industry benchmarks without context.** Trends within your own data over time tell you more than benchmarks. A 5-point gain on your own NPS matters more than knowing you're 3 points below an industry average.
- **Picking a metric to make leadership comfortable.** Choosing NPS because the CEO understands it, when CES would actually answer the question, is a common trap. Pick the metric that fits the question, then explain it to leadership.
Picking between NPS, CSAT, and CES isn't really a choice between three options. It's a question of where each one belongs and what each one is supposed to trigger.
Get the placement right and the metrics work as a system. NPS at the relationship layer tells you whether the customer is staying. CSAT at the stage tells you whether the experience worked. CES at the step tells you which interaction needs fixing. Each score points to a specific kind of investigation and a specific kind of decision.
That's the difference between a measurement system that drives change and a dashboard that just grows.
## Frequently asked questions

### Which metric matters most, NPS, CSAT, or CES?

None universally. NPS measures relationship strength. CSAT measures moment-level satisfaction. CES measures friction in specific tasks. Most mature CX programs use all three at different layers of the journey.

### Can you use just one metric?

Yes, especially when starting out. NPS is the most common starting point because it's easy to benchmark and easy to explain at the executive level. Layer in CSAT or CES later to diagnose what's driving the relational signal.

### How often should you survey for each?

NPS works on a cadence, quarterly or biannually. CSAT is triggered by specific events like post-purchase or post-support. CES is triggered by task completion, after a support ticket closes or an onboarding step finishes. Avoid stacking surveys in the same short window or response rates will fall.

### Does CES really predict loyalty better than CSAT?

Gartner research has shown that low effort correlates strongly with future loyalty in transactional contexts, particularly support. CES is especially useful for support and self-service journeys, where reducing friction has a measurable impact on retention.

### How do these metrics fit into a customer journey map?

NPS sits at the lifecycle or relationship layer. CSAT sits at the stage level. CES sits at the step or touchpoint level. Mapping them this way means each score has a clear location in the experience and a clear set of decisions it informs.

### What's the difference between B2B and B2C measurement?

B2B usually runs NPS at the account level, one score per customer relationship. B2C typically runs higher-volume transactional CSAT and CES surveys. The frameworks are the same. The cadence and segmentation are what differ.