[Image: emotional journey of three personas]

Ask Marc – measuring service design impact

February 16, 2021

How do you know if your service design project has been successful? How do you prove that your changes and improvements had an impact? How can you use numbers to convince others of the effect of past service design activities so you get budget for the next ones? In this session, we'll talk about measuring service design, the horrible experiences NPS and driver analysis can cause, and the importance of impact controlling.

This series was initiated as a place for you to learn more about service design and journey mapping software. Our co-founder Marc Stickdorn and the Smaply team share their experience on how to embed and scale service design in organizations. The sessions usually kick off with a short introduction to the focus topic to get everybody on the same page, followed by your questions and in-depth discussions of best-practice examples.

On this page you will find the recording as well as the transcript. This session is also available as a podcast on Spotify, iTunes and Google Podcasts.


Transcript

Introduction

Nicole: Today we are going to talk about measuring the outcomes of service design. We're going to talk a little bit about some of the measures out there that don't always work well, e.g. Net Promoter Score, and some other things. We're also going to answer your questions. I think it's going to be a hot topic – we had a lot of people sign up for this session. As a practitioner, I know it can be hard and it can be slippery.

[Image: examples of how service design can impact ROI]

Marc: I thought it might be nice to start with a few examples from recent years – from Touchpoint, the service design magazine by the Service Design Network, and from the Service Design Award winners of 2018 and 2019. Just three examples: what do we hear about the outcomes of service design projects?

It can be cost, like in the first example here from the Touchpoint journal: a 240% return on investment. With an investment of £180,000, the client saved £435,000 (£435,000 ÷ £180,000 ≈ 2.4, i.e. roughly 240% when the return is computed as savings over investment).

The next one: a 77% decrease in waiting time, which resulted in a drop in shopping cart abandonment rates – and that obviously translates into higher revenue.

The last example: 41% more complaints could be resolved, and the time taken to resolve them dropped by 63%. That can be translated into a value like cost savings and, of course, a better customer experience. But we're going to get to that.

[Image: annual growth through outperformance by design]

What do we actually measure? What do companies value when we talk about the impact of service design in general? We all know the McKinsey Quarterly report from 2018, The Business Value of Design, showing a correlation between how much organizations invest into design – how design-led they are – and revenue or returns to shareholders. We also know these numbers from employee experience. Here is a little overview from Harvard Business Review, from research published in 2017. A quote I like there is:

Companies that invest in employee experience outperform those that don't.
Harvard Business Review

They looked at different organizations, how much they invested into employee experience, and whether there are any correlations with growth, with average profit and revenue per employee, and so on. Also, if you look at an entire market, you see correlations. If you look at the development of stock prices and put different companies into different indices – like the Fortune best companies to work for, the Glassdoor best companies to work for, or the selection they used for this research – you see a clear correlation. Now, the problem with all these general findings is that they always show correlation, not causality. This makes it hard to argue with a very critical person who might agree up to a point, but could just as well claim that only very successful companies can afford to invest in service design in the first place.

[Image: metrics in which companies that invest in employee experience outperform those that don't]

That's the problem when we talk about impact at such an abstract, high level. The more concrete we get, the easier it is to actually talk about the outcome of a project.

[Image: how investing in employee experience affects stock prices – indices shown: Experiential, Glassdoor, Fortune, NASDAQ, S&P 500]

We try to improve customer satisfaction and employee satisfaction. Just as a reminder: there actually is an academic model behind that; it is called the confirmation–disconfirmation paradigm.

We compare our expectation with our experience: if they match, we're satisfied. If the expectation is too high, we're dissatisfied, and if the experience exceeds our expectation, we are very satisfied. When we measure customer satisfaction, it is always a comparison of these different values, and we need to keep that in mind. To increase customer satisfaction we can work both on the experience side and on the expectation side. Some companies do that strategically – increasing satisfaction by decreasing the expectation instead of increasing the experience.

What I wanted to end up with is the emotional journey, because reality is a bit more complex than this simple theory. At every moment of the customer journey we are comparing our expectation with our experience, and the result is the emotional journey. This is a lane that we typically add to a journey map. Now, imagine three potential emotional journeys, and suppose we measure customer satisfaction only once and try to compress everything into that one value. My question is: when do we typically measure customer satisfaction? Rather at the end of an experience, right? If you stayed at a hotel, or received a service of whatever kind, you get a little questionnaire asking you to please rate the experience, or whether you'd recommend the service.
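To make the mechanics concrete, here is a minimal sketch (not from the session, with made-up ratings on a -2 to +2 scale) of how a per-touchpoint comparison of expectation and experience yields the emotional journey lane:

```python
# Confirmation–disconfirmation per touchpoint: emotion = experience - expectation.
# Touchpoints and ratings are hypothetical, for illustration only.
touchpoints = ["booking", "check-in", "room", "breakfast", "check-out"]
expectation = {"booking": 1, "check-in": 1, "room": 2, "breakfast": 0, "check-out": 1}
experience  = {"booking": 2, "check-in": -1, "room": 2, "breakfast": 1, "check-out": 0}

emotional_journey = {tp: experience[tp] - expectation[tp] for tp in touchpoints}

for tp, emotion in emotional_journey.items():
    # > 0: expectation exceeded, 0: confirmed, < 0: disappointed
    print(f"{tp:10s} {emotion:+d}")
```

A single end-of-journey rating would collapse these five values into one number and hide the disappointing check-in entirely.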

[Image: how expectation management can improve customer satisfaction]

The problem is picking one moment in time and aggregating all the data into just one value – the average, in this very simple example. Customers who rate it very positively will cancel out others who rate it very negatively. Then the average rating is zero, and that doesn't help us to improve. We often use KPIs and measurements to understand whether we are decreasing or increasing in the long run. That also has implications: can we actually tie it back to one particular project? Perhaps customer satisfaction in general is too vague, too big, to attribute to one specific project.
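A minimal sketch (with made-up ratings) of how the average hides a polarized customer base, and why the distribution is more useful:

```python
from statistics import mean

ratings = [+2, +2, +2, -2, -2, -2]  # half delighted, half frustrated

print(mean(ratings))  # 0.0 – looks "neutral" and suggests nothing needs fixing
print({r: ratings.count(r) for r in sorted(set(ratings))})  # {-2: 3, 2: 3} – the split is visible
```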

How can emotional impact best be measured, especially when thinking about the emotional journey in a customer journey map? As a service designer it's also about generating those signature wow moments, and I was wondering how emotions – when changing negative experiences into positive ones – can be measured. I'm not referring to analyzing emotions through face analysis or similar, but to something more tangible that we can use in our daily business.

Marc: You already mentioned one approach; there are loads of very interesting research approaches on how to really quantify that. I know of research projects using sensors placed on the skin to track your emotions. Of course we also know about eye tracking and the like, and face recognition, as you mentioned.

However, in day-to-day business this is not what we typically use, because we either don't have the budget or the technology to do this in a proper, academically sound quantitative way.

A few ideas – what we're actually talking about here is research.

And in research we always talk about triangulation of methods. If we just use one method, we'll probably fall into the trap of certain biases.

We should use different methods looking at the same topic. If it is about the experience of going through a theme park, we should use different methods to come up with this emotional journey. Then we triangulate between the different methods and see if they align.

For example, you can do a co-creative workshop where you put up the big phases of a day, or the big steps people go through, and then ask your clients to add their ratings, positive or negative, retrospectively. We've done that in a ski resort: at the end of a slope we had a big board with smileys from positive to negative and the phases of a day. Over the course of a day we went through this journey with 100 people, always in groups of three to five. Just by listening to the reasoning why some thought something was positive – like parking the car – while others thought it was horrible, we learned not only what the rating was but also the reasoning behind it.
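A minimal sketch (with hypothetical votes) of tallying such retrospective ratings per phase of the day, keeping both the average for the journey lane and the raw spread:

```python
from collections import defaultdict
from statistics import mean

# Each vote: (phase of the day, smiley rating from -2 to +2). Hypothetical data.
votes = [
    ("parking", -2), ("parking", +1), ("parking", -1),
    ("lift queue", -1), ("lift queue", -2),
    ("slopes", +2), ("slopes", +2), ("slopes", +1),
]

by_phase = defaultdict(list)
for phase, rating in votes:
    by_phase[phase].append(rating)

for phase, ratings in by_phase.items():
    # the mean feeds the emotional journey lane; the raw list keeps the spread visible
    print(f"{phase:12s} mean {mean(ratings):+.1f}  ratings {ratings}")
```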

A co-creative workshop could be one approach to do that retrospectively. We should also add a contextual research method, for which we could use shadowing: the classic ethnographic approach where we actually follow customers like a shadow – with their consent, of course.

Throughout the experience, and while we're doing that, we keep track. You can think of it like a think-aloud usability study: someone sits in front of the computer and navigates through a website or a piece of software, and while they're doing that, they say out loud what they're thinking, why they click where they click, and how they feel about it.

We can use the same in contextual research, where we do shadowing and the research participants think aloud while walking through the experience. It might not always be exactly in the moment, because they interact with other people and can't just turn around and talk to you at the same time. But at least it's retrospective, right after the moment has happened, while the memory is still fresh.

The third one could be a self-ethnographic approach: a diary study, autoethnography, or mobile ethnography.

If you put all three in the mix, you have real triangulation between research methods and you get very sound results for the emotional journey.

Nicole: We really do need the qualitative along with the quantitative to truly understand what's going on across our journeys. If we're really just looking at a quantitative measure, I think we're going to be missing something.

Marc: What I described you can even do with representative sample sizes. And if you then compare an emotional journey from before a project with one from after the project, to actually show the impact, it's a very visual but pretty sound way to do that. Think about approaches like mobile ethnography, where you can easily have hundreds of people in your sample. With the co-creative workshop, done as I just described, we covered a hundred people in just one day. If you have the budget to do it for three, four, five days, you have 500 people – that's representative.

In working with some of your clients, do you have any insight into what a reasonable expectation is for a company in the first year of their journey mapping journey? We're in the process of creating our 2021 goals and objectives, and I'm trying to develop some reasonable performance outcomes.

Marc: That describes one of the biggest problems we have. We often get the question: what is the return on investment of service design? What can we expect when we put this amount of money into doing service design, or into a specific tool like journey mapping? I always like to answer with a counter question: what is the return on investment of management in general? What is the return on investment of marketing in general? If you talk to a marketer and ask this question, you will get a very specific answer, but it will sound something like “Our three best performing campaigns last year were…”. They will tell you exactly how much money they put into a campaign and what the outcome was. That is the logic we should use in service design as well.

You can measure the impact of a specific project, but it's hard to say in general what the return on investment of service design is.

So answer these questions: what do they expect? What are the use cases for using a journey map? Why do they use a journey map, what do they want to improve, what do they want to impact? Those would be the interesting numbers.

Nicole: I think it's interesting because often, when you start doing a journey mapping exercise in an organization for the first time and you try to set up some of these measures, you realize that a lot of the existing data doesn't fit. That's because there isn't a good initial understanding of the journey, and you have to figure some things out just to get a quick baseline measure. Maybe there's data missing and you're starting from this place of not knowing.

Linn Vizard is a great service designer from Canada. She sent out a little email blog post the other day in which she talks a lot about the outcomes of service design and how they can be slippery. And I think it's totally valid to measure that our people went from not understanding our customer journey to understanding it. People in our organization now know what our customers go through, and they didn't before. That's a valid measure – it doesn't necessarily translate directly into revenue right away, but I think it's a really good starting place. It's also about realizing what's missing.

Marc: You have a really good point there, because you need to invest first into a baseline to understand your organization and your customers better. But as soon as you start a concrete project to fix concrete pain points, you should think about a measurable impact. Before you start the project, think about what it actually pays into. As a rough reference or guideline, something we can learn from startups and how they measure success is the pirate metrics – so called because the abbreviation is AARRR: A for acquisition, A for activation, R for retention, R for referral and R for revenue.

That follows a high-level customer journey map – and the more you zoom into a very specific moment of the journey, the easier it often is to understand what we need to measure and what a project actually pays into.
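A minimal sketch (with hypothetical counts) of the AARRR funnel as stage-by-stage conversion over a high-level journey:

```python
# Pirate metrics (AARRR) as a funnel; counts are made up for illustration.
funnel = [
    ("acquisition", 10_000),  # e.g. visitors reached
    ("activation",   3_000),  # e.g. signed up / first successful use
    ("retention",    1_200),  # e.g. came back within 30 days
    ("referral",       300),  # e.g. invited someone else
    ("revenue",        240),  # e.g. paying customers
]

for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage:12s} {count:6d}  ({count / prev:.0%} of previous stage)")
```

A project that targets one specific moment of the journey can then be tied to the one funnel step it should move.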

What do you think about the frequency of surveys sent to users after they get or use a service? What would you consider convenient and useful, in your experience?

Marc: Speaking from a customer perspective, I would say the answer is zero. Who, as a customer, actually likes to answer these? Those phone calls you get once you've brought your car to a shop to have something repaired. You add a step to the customer journey, and it's a step that only gives value to the company and none to the customer. What do I, as a customer, get for sharing my information and investing my time? So from a customer perspective, I'd say zero is reasonable. From a company perspective, you have to ask yourself: how much pain can I add to the customer to get the numbers I need? And what do I do with these numbers – are they really helping me? Organizations are often too focused on certain KPIs: on collecting them, having more of them, and building a bigger dashboard, without actually using it. I usually ask my clients what they actually do with these KPI dashboards.

Nicole: I think there are also opportunities to bake it in. I'm working on a project right now – it's a digital, very UX-focused context – and we've been thinking about ways we can get what we need through mechanisms that actually improve the experience. We're adding good and getting good back.

Marc: At least add value for the customer. And if you do put a survey after a service – say, a phone call where people are asked how the service was – be aware that whatever you measure gets done in your organization.

I always tell the story of my car. When I brought it to the shop a while ago, the whole place was plastered with posters and stickers saying “We provide five-star service”, and they would always point that out. After you collected your car, you would get a phone call and had to rate the service in stars, from one to five. Obviously everyone was pushing you towards five stars. When I picked up the car, the person who handed me the key specifically asked to be rated five stars, as their bonus depended on it.

When I paid my invoice, that person again asked me for a five-star rating. Looking at that from the customer's perspective, it is horrible. Everything is focused on the KPI and not on the actual experience of the customer, and that happens too often. I'd suggest finding a way around it, without disturbing customers and adding pain points just for the sake of KPIs.

Nicole: My local hardware store has a handwritten note taped up by the service desk saying “Fill out our survey and get a free ice cream!” That's when you know they're getting pressure to get those surveys answered.

I’m designing services for involuntary consumers. Do you have any tips on what kind of approach one should follow for conducting research in such an environment? Consumers are debtors and we provide debt repayment and collection services.

Marc: You want to measure your impact. There are always two sides: on the one hand, you ask yourself if customers are satisfied – do they come back, do they increase revenue? From an organizational perspective, you think about either increasing revenue or decreasing cost. At the end of the day, all the work we're doing impacts that, and for governmental services too, cost is an important factor. We should try to convert that into a number, into a graspable KPI that makes sense for the organization. In the end it will, unfortunately, be a financial number – for any project we try to convert the impact into a financial number, because that is the universal language of an organization. Perhaps it's not an increase in revenue but a decrease in cost. Or it is customer lifetime value.
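As one way to make that concrete, here is a minimal sketch (hypothetical values, deliberately simple model) of customer lifetime value as average margin per year times the expected number of years a customer stays:

```python
margin_per_year = 120.0  # average contribution margin per customer per year (assumed)
retention_rate = 0.80    # share of customers retained each year (assumed)

# With a constant retention rate, the expected lifetime is 1 / (1 - retention).
clv = margin_per_year / (1 - retention_rate)
print(f"CLV ≈ {clv:.2f}")  # 600.00

# If a project lifts retention to 85%, the same model quantifies the impact:
clv_after = margin_per_year / (1 - 0.85)
print(f"CLV after ≈ {clv_after:.2f}, uplift ≈ {clv_after - clv:.2f} per customer")
```

Real CLV models are more involved (discounting, varying margins), but even this simple version turns an experience improvement into the financial language the organization speaks.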

Nicole: In government we talk a lot about failure waste – ferreting out whether a certain way of doing something just isn't working for customers or users, so that they default to a more expensive channel. Understanding the costs behind that channel can help you find opportunities for cost savings off the back of improvements.

When trying to report out qualitative / emotional data to clients who are more engineering / quantitatively oriented, do you have advice for sharing information in a way that is compelling for them?

Marc: Do not quantify qualitative data! Don't say “three of our five respondents said that”, because then people will immediately switch to a quantitative mindset. This may result in people telling you that asking five people is not enough.

I saw a meme this morning, I think it was brilliant. It was a picture of a street with a big hole in it. Three people around it pointing towards the hole. If there are three people pointing towards a hole in this street, what should you do? Should you ask 97 more if there is a hole in the street or should you fix the damn hole?

The answer is: show the hole. Instead of talking about your data in an abstract, quantified way, show the problem. That is often what makes people click – when they see a customer trying to use your product and failing, or when they listen in on a call-center phone call where the customer ends up in tears because the system doesn't allow the agent to do something. That is what really makes a difference.

That also means that we as researchers carry quite some responsibility – ethical responsibility. Which data do we show? We can nudge people in a certain direction. That's a topic for a whole other discussion, but ethics in service design is a really interesting area; I just wanted to mention it.

Would you measure the service design impact on employees in the same way that you would do for customers?

Marc: Again we're getting into ethics here. If you have an organization with just a few dozen employees, this has real ethical implications, because it probably means that in the end you'll be measuring things like how long one person needs for something versus another.

In general, for projects, yes, I would do that. But we probably need different metrics, and you need to be careful about how personalized the information can get. You should absolutely also measure the impact on employee experience.

If there are several related improvement projects ongoing with overlapping KPIs, while the service designer's input is only on one of those projects, how do you measure the value of the service designer's contribution?

Marc: It's called real life – it always happens. In academia we have ceteris paribus, which means only one thing changes, while every other condition stays the same. That really helps you see the effect. But that never happens in real life. You will always have different things impacting something.

What can help you is prototyping, where you focus on really specific aspects. Through prototyping you might get rather clean data. Again, we can learn from usability studies: do A/B tests. If you change something, try not to change too much at the same time – leave some time in between, or make a change, change it back, and then make another change, if possible; it always depends on the project. But in the end you will always have the issue of overlaps, or that someone could argue: “That's nice, but it doesn't come from this project, it comes from a completely different initiative” – and it's really hard to trace it back.

Prototyping helps you because there you can create a safe environment to measure impact.
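A minimal sketch (with made-up numbers) of reading out such an A/B test: a two-proportion z-test comparing conversion before and after a prototyped change, using only Python's standard library:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for conversion rate A vs. B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 12.0% conversion before, 15.6% after, 1,000 users each.
z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=156, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the change, not noise, moved the KPI
```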

In the pre-service phase what are some practical ways that companies can manage customer expectations?

Marc: I just want to clarify what I mean by managing expectations real quick. I don't mean that if you're a company selling pencils, you should promote your pencils as if they were a bad product – because then no one will buy your pencils.

Managing expectations means that you check, in the end, what people were dissatisfied with. If you then look at the level of experience and the level of expectation, you might realize the dissatisfaction is there because the expectation was set too high. Then you can tweak the expectation instead: try to understand where this high expectation actually comes from. Does it come from a certain campaign? From experiences shared by previous customers?

A practical way could also be to look at the success metrics within your organization. These kinds of problems often arise when parts of the organization are heavily siloed. Sometimes marketing is measured by how many people they bring into the shops or onto the website, and sales is measured by how many sales they close – not by whether people actually stay, come back, or promote the company afterwards, but only by the sale itself.

If marketing and sales are distinct silos, measure success only within those silos, and get their bonuses accordingly, strange things might happen, because people then try to increase their own KPIs instead of looking at the longer run – at the fact that you need satisfied customers who come back and buy again.

Often these problems also come from a systemic thing: how the organization is set up and how you measure success in the different teams.

Would you say that service design must always have some measurable impact or are there moments when it can be justified by knowing that a redesign is important?

Marc: Often when you know that a redesign is important, there's also a measurable effect, I would say. The question is more: do you want to measure it or not? Do you want to put the effort into measuring it? And frankly, there are topics where, if you can see it's clearly broken, we should just fix it – like the hole in the street. You probably know the Kano model: how well is something implemented, and how much does it contribute to satisfaction or dissatisfaction?

What we should always do is fix the basics first. When you look out there, it becomes clear that most services are broken, and we actually still need to fix the basics.

Fixing the basics, however, means you need management buy-in and people in your organization who understand it and back you up. But in a small organization you don't talk around these things, like whether you should measure that. If there is a big problem, you just try to solve it.

Nicole: Sometimes, too, I think getting buy-in for this work takes you from unknowing to knowing. All of a sudden you go out, you talk to some people, you do research in a way you haven't before, and you realize that there are problems you hadn't seen. And that in itself is a measure: we now know something we didn't know, and now we can make a difference. We can fix it – that's like measuring your impact.

Marc: It really depends on the organization, on its maturity, and on how large it is. It also depends on how much impact you've already had, and how strong your standing is.

Think about how this one project fits into your longer roadmap of embedding and scaling service design within the organization. How can this one project actually contribute to that? What is your goal, what are you working towards, what is missing? If management buy-in is missing, then you should probably focus on measuring impact. If you already have that, and the clear focus is on fixing something with speed of delivery, then measuring probably matters less. So check your organization, check your roadmap.


And now, what's next?

Check out the other Ask Marc sessions on different topics of human-centered work, like multi-persona maps, creating CX insight repositories, and many more.