The following information was created in collaboration with the Data Collection and Measurement Working Group of the DHS CX Steering Committee.
As CX practitioners, we identify outcomes that matter for our customers. CX metrics help us create benchmarks and measure the progress our organization makes in achieving better outcomes for the people we serve.
Defining the desired outcomes of our customers is often iterative. By applying a human-centered approach and using tools from the CX Toolkit, we can stay in tune with our customers' needs and preferences and adapt our metrics to do the same.
Getting Started
After identifying your customers and their desired outcomes, you can pinpoint metrics that show whether they are achieving those outcomes.
Start this process by working with what you have already collected. There may be more to your existing operational data than you expected.
Ask yourself:
- What data are we already collecting that speaks to the customer’s experience?
Once you have identified some potential metrics, then ask yourself:
- What story does this tell me about my customer’s experience?
- Is this story useful and, if so, to whom?
In 2023, the CX Steering Committee’s Data Collection and Measurement Working Group identified two areas of focus for CX Metrics:
- Wait times (including time to decision or next step)
- Requests for service (including requests for information or participation)
CX metrics that reflect your customers' desired outcomes bring you closer to being a customer-centric organization. Effective metrics play a critical role in ensuring your products and services are meeting your customers' needs.
Vanity Metrics
Avoid vanity metrics. These are metrics that may look good for the organization without saying anything useful about the customer experience.
Indication & Anti-Indication Metrics
The Data Collection and Measurement Working Group identified metrics that might show movement toward or away from desired outcomes. These are sometimes called indication and anti-indication metrics.
For indication metrics, ask yourself:
- If we were moving towards our goal, what would the metrics show?
For anti-indication metrics, ask yourself:
- If we were moving away from our goal, what would the metrics show?
Direct and Indirect Metrics
Metrics can include both direct and indirect measures. Direct metrics specifically measure something you’re interested in. User research done through observations and surveys can offer direct metrics about your customer’s experience. That research can show specific moments of user frustration or confusion. Indirect metrics look at separate but related areas that still offer useful information, such as your service’s drop-off rates. High drop-off rates don’t tell you directly that your website has usability problems, but they may signal in that direction. Indirect metrics are largely what you explore when it comes to operational data.
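To make the drop-off idea concrete, here is a minimal sketch of computing a drop-off rate from operational session records. The data and field names (`started`, `completed`) are hypothetical, not drawn from any specific agency system:

```python
# Hypothetical session records for an online service.
# "started" = the customer began the process; "completed" = they finished it.
sessions = [
    {"id": 1, "started": True, "completed": True},
    {"id": 2, "started": True, "completed": False},
    {"id": 3, "started": True, "completed": False},
    {"id": 4, "started": True, "completed": True},
]

started = sum(1 for s in sessions if s["started"])
completed = sum(1 for s in sessions if s["completed"])

# Share of customers who began the process but did not finish it.
drop_off_rate = (started - completed) / started

print(f"Drop-off rate: {drop_off_rate:.0%}")  # → Drop-off rate: 50%
```

A high value here does not prove a usability problem on its own; as an indirect metric, it flags where to look with direct methods such as observation or surveys.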
Returning to the ice cream store example, indirect metrics for customer interest could include things such as:
- Fingerprints that curious customers leave behind on windows or glass casings
- Flavors that are running low or need replacing
- The number of samples requested
- The time customers spent in the store
On their own, none of these metrics paints a complete picture. However, together these metrics can offer helpful insights into your customers’ experience.
Balancing Metrics
Metrics can sometimes do more harm than good by causing a metrics fixation. When this happens, organizations celebrate numbers without substance. Employees can also feel pressure to distort their work in ways that do not truly benefit customers.
For example, a company decides to set a target of one-minute call times at its help desk. Its logic is that faster service benefits customers. The company’s help desk employees are evaluated and rewarded based on their call times. To secure a reward, help desk employees begin to hang up or transfer calls just before the one-minute deadline. While the company meets its target on paper, the actual customer experience suffers. This example shows why balanced metrics grounded in customer outcomes are so important.
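One way to spot a gamed metric like the one above is to look at the distribution of call durations rather than just the average: if an unusual share of calls ends just under the target, the metric may be hiding hang-ups and premature transfers. This is an illustrative sketch with made-up durations (in seconds), not a prescribed detection method:

```python
# Hypothetical call durations in seconds. Most cluster suspiciously
# just under the 60-second target.
call_durations = [58, 59, 57, 240, 58, 55, 310, 59, 56, 58]

target = 60   # the one-minute target from the example
window = 5    # "just under the target" band, in seconds

just_under = [d for d in call_durations if target - window <= d < target]
share = len(just_under) / len(call_durations)

print(f"{share:.0%} of calls end within {window}s of the target")
if share > 0.5:
    # A cluster this tight suggests the number is being gamed; pair the
    # call-time metric with a customer-outcome measure before celebrating it.
    print("Warning: durations cluster just under the target")
```

The threshold of 50% is arbitrary; the point is that a balanced set of metrics (call time plus, say, first-call resolution or customer satisfaction) is harder to game than any single number.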
To avoid metrics fixation, aim for a balanced environment of metrics. Balanced metrics are harder to game, avoid harmfully distorting employee work, and offer shared value to both leadership and employees.
Here are some exercises that might help you identify CX outcomes and metrics.
Zoom Out
If you are having trouble thinking of metrics, outcomes, or customers to focus on, try zooming out. Ask yourself:
- What are the outcomes for the people you’re making software/services for?
- If you do the best possible job of designing and delivering your software/service, how are you improving users’ lives?
Note: Our focus at the CXD is on the customers’ experience with government services (beyond the software development life cycle).
Example: Digitizing Case Management
A government agency has struggled with its call centers being constantly overwhelmed with questions from applicants about three big things:
- What’s the status of my case?
- How do I update my mailing address with the agency?
- Can I change my appointment for biometrics?
None of those tasks is handled digitally, yet they account for a large percentage of the calls coming into the call center.
This agency decides to make it possible for applicants to log in to an account to:
- See the status of their case
- Keep their address up to date
- Change their biometrics appointment date, time, and place
The assumption is that calls to the call center will decrease, freeing the qualified officers who staff it to help more people with complex questions or challenges.
In theory, this change should lower administrative burden at the call center. The service centers will also benefit because they’re no longer doing data entry on forms.
Allowing these actions to be done online offers a better experience for applicants. Their requests happen almost in real time; they don’t have to wait days or weeks for processing. Applicants also have more information about their cases (which helps with expectation setting).
- What metrics could tell the agency whether administrative burden is decreasing?
- How would you describe the customer outcome(s) the agency is seeking to attain?
Example: Digitizing Personnel Management
An agency is putting a paper personnel management process online. This change also gives personnel more control of their work- and performance-related data, arguably creating a more equitable workplace.
Previously, some staff felt intimidated about asking their boss or a chief of staff for their own file. They limited themselves and their job growth because they were too scared to ask for their SF-50, a form federal employees need when applying for another federal job.
- What outcomes matter for the personnel who now have more control of their work?
- What metrics could indicate that staff experience is improving?
Now, apply that thinking to the services and processes that your own agency is improving.
Consider looking at complaint data. Every Component has at least one mechanism for allowing the public to complain about its services. Some relevant questions might be:
- Where do complaints come in?
- What do they say?
- What gets done about them?
- Do we tell the complainant when we have acted on a complaint and what has changed because of the complaint? Do we tell the public?
- How well are we responding to public complaints and feedback?
- What do complaint and feedback data tell us about the experiences of our customers?