The Sensemaker’s Guide to Measurement
Ah, measurement. Just saying the word can make a room of smart people start to squirm. We’ve all been there – trying to prove that our sensemaking work matters while grappling with metrics that don’t quite capture the whole story.
Maybe you’ve watched a team’s eyes glaze over during a metrics review, or felt your stomach drop when someone asked you to “quantify the impact” of your information architecture work.
The truth is, measurement in sensemaking isn’t just about numbers in a spreadsheet or checkboxes on a task list. It’s about noticing when people start asking different questions, spotting the moment teams begin to share understanding, and yes, sometimes even counting things.
Let’s break this down piece by piece, starting with the basics of what we mean by measurement in sensemaking work.
What is Measurement in Sensemaking?
Before we dive into the how-to, let’s talk about what we mean when we say “measurement” in sensemaking work. After all, you can’t measure what you haven’t defined, and you can’t improve what you haven’t measured.
Measurement is the practice of tracking both what we can count and what we can observe to understand if our work is helping people make better sense of things. Most folks think measurement means numbers in spreadsheets. But in sensemaking work, we’re tracking both the things we can count and the things that matter most – like when teams start speaking the same language or people stop getting lost in their work.
Measurement in sensemaking combines clear metrics (like how long it takes to find something) with careful observation of how people work and think. It means watching for tiny shifts that signal big changes – like when team meetings get shorter because everyone’s using the same terminology, or when your inbox gets quieter because people are finding answers on their own.
Think of it as gathering evidence of change, not just collecting data. Sometimes your best measure might be a story about how someone used information differently, or a pattern in the questions people stop asking. The trick is knowing what to track, when to look for impact, and how to spot the signs that your work is making a real difference.
Reasons to Measure
Teams measure their sensemaking work to understand what’s helping and what isn’t. Whether you’re organizing content, building new systems, or helping groups work better together, here’s why tracking changes matters:
To Know Where You’re Starting
You need to understand what’s really happening now before you can make it better. This means capturing how things work today – messy parts and all.
To Spot What’s Actually Changing
Change sneaks up on you. Sometimes it’s obvious, like when your bounce rate drops. But often it’s subtle, like when you notice people starting to use the same words to describe their work, or when the new hire figures something out without asking for help.
To Show Your Work Matters
We all need to prove our work has impact. Good measurement gives you concrete ways to show how sensemaking work makes a difference – not just in numbers, but in real stories about how work gets easier or problems get solved faster.
To Know When to Adjust Course
Measurement tells you if your changes are helping or if you need to try something different. It’s like having a compass – it helps you know if you’re heading in the right direction or if it’s time to redraw the map.
To Build Trust With Teams
When you measure thoughtfully, you show teams you care about what actually helps them work better, not just what looks good in a report. This builds the trust you need to keep making improvements. For instance, when a team tells you they’re struggling with a new process, measuring both their efficiency metrics and their actual experience shows you’re listening – not just checking boxes.
The point isn’t to measure everything – it’s to track enough of the right things to know if you’re making work better for real people. Sometimes that means counting things, but often it means noticing how work changes and collecting stories about what’s different.
Common Measurement Use Cases
Listen, measuring stuff in organizations is messy. We often track things just because we can, not because we should. I’ve spent years watching teams collect numbers that nobody uses and create reports nobody reads. Let’s talk about what really matters when you’re trying to figure out if your work made things better or worse.
System Changes and Migrations
Here’s the thing about system changes – they’re not really about the system. They’re about people trying to get their work done. Think of measurement like a before-and-after photo of how work happens. You need to understand both the messy reality of how people work around the old system and their struggles with the new one. Don’t get caught up in uptime metrics and server stats if people still can’t find the “save” button.
Process Improvements
Processes are just the paths people take to get work done. When you measure process changes, you’re really measuring if you’ve made those paths clearer or more confusing. The trick is to watch how work actually flows, not how it’s supposed to flow on paper. People are really good at finding workarounds – your measurements should help you understand why they need them.
Knowledge Management Initiatives
Knowledge management is a fancy way of saying “helping people find stuff they need to know.” Don’t get lost measuring the size of your knowledge base or how many documents you have. What matters is whether people can find answers when they need them, and if they trust what they find. Watch for the signs that tell you if you’re making the puzzle easier or harder to solve.
Training and Onboarding Programs
New people joining your organization are like visitors trying to navigate a city without a map. Your measurements should tell you if you’re giving them good directions or sending them in circles. Forget about training completion rates – focus on understanding if people feel lost or confident after you’ve tried to help them find their way. Watch for moments that matter: when a new hire completes their first project without asking for help, when their questions shift from “where do I find this?” to “how can we improve this?”, or when they start helping others find their way around.
Cross-team Collaboration Efforts
Teams are just groups of people trying to build something together. When you measure collaboration, you’re really measuring if you’ve made it easier for people to understand each other and work together. Look for signs that teams are speaking the same language and building trust, not just meeting deadlines.
Remember, measurement isn’t about proving you’re right – it’s about understanding if you’re helping. Sometimes the most important measurements are the stories people tell about how their work has changed. Don’t get so caught up in collecting data that you forget to listen to what people are actually telling you about their experience.
Types of Measurement
We spend so much time reinventing wheels in our organizations that we can forget to look around and see what others have already figured out. Back in 2019, when I was working with Kristin Skinner and Kamdyn Moore on the DesignOps Summit, we noticed everyone was struggling with the same thing: how do we measure if our work matters?
So we did what sensemakers do – we dug into the mess. We gathered insights from hundreds of people doing design operations work and created a measurement framework that anyone could use. Not because we’re measurement wizards, but because someone needed to help people stop starting from scratch every time.
Let me share the eight types of measurement that kept showing up in our work back in 2019. Unsurprisingly, six years later, I still haven’t needed to invent any new wheels:
Output
Output measurements track what we produce and its direct impact. This includes both positive results and areas where we reduce waste or inefficiency.
Examples:
- Revenue generated from new product features
- Number of successfully completed projects per quarter
- Reduction in customer support tickets after documentation updates
- Cost savings from process improvements
Cost
Cost measurements look at both direct expenses and hidden costs that accumulate over time.
Examples:
- Monthly operating expenses for team tools and software
- Training and onboarding costs for new team members
- Technical debt from temporary solutions
- Resource allocation across projects
Sentiment
Sentiment measurements capture how people feel about and respond to changes, products, or processes.
Examples:
- Customer satisfaction scores for new features
- Employee feedback on process changes
- Social media sentiment analysis
- Internal team satisfaction surveys
Adoption
Adoption measurements track how readily people accept and use new tools, processes, or systems.
Examples:
- Percentage of team members using new collaboration tools
- Time to reach X% user adoption of new features
- Spread of new practices across departments
- Training completion and implementation rates
Engagement
Engagement measurements focus on ongoing interaction and sustained use over time.
Examples:
- Repeat usage rates for new tools
- Active participation in team processes
- Continued adherence to new workflows
- Regular contribution to shared resources
Time
Time measurements focus specifically on identifying and reducing wasted effort.
Examples:
- Time saved by automating manual processes
- Meeting time reduced through better coordination
- Task completion time before and after changes
- Time spent searching for information
Attrition
Attrition measurements track where and why we lose people, resources, or momentum.
Examples:
- Customer drop-off points in new processes
- Team member turnover rates
- Abandoned projects or initiatives
- Declining usage of tools or systems
Extensibility
Extensibility measurements evaluate how well solutions can adapt and grow over time.
Examples:
- Ability to scale processes with team growth
- Adaptation of systems to new requirements
- Compatibility with other tools and processes
- Flexibility in handling unexpected changes
Approaches to Measurement
Now that we have covered the types of measurement you might use, let’s talk about how to actually do this measurement thing. There’s no one right way, but there are some approaches that tend to work better than others.
Quantitative Methods
Sure, you can count things. Sometimes you should! But don’t get stuck thinking numbers are the only way to show impact. When you do count things, make sure you’re counting stuff that matters:
- Track how long it takes people to find things before and after changes
- Count how many times people ask the same questions
- Measure how many steps it takes to complete common tasks
- Note how often people need help or get stuck
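To make that concrete, here’s a minimal sketch in Python of what comparing find-times before and after a change could look like. All of the numbers, names, and the 120-second “stuck” threshold are made up for illustration:

```python
from statistics import median

# Hypothetical find-times in seconds, logged before and after an IA change.
before = [95, 140, 210, 80, 185, 160, 120]
after = [40, 55, 75, 30, 90, 60, 45]

def summarize(label, times, stuck_threshold=120):
    """Report the median find-time and how often searches ran long
    enough that someone probably got stuck (an assumed threshold)."""
    stuck = sum(1 for t in times if t > stuck_threshold)
    print(f"{label}: median {median(times)}s, "
          f"{stuck}/{len(times)} searches over {stuck_threshold}s")

summarize("Before", before)
summarize("After", after)
```

The point of the sketch isn’t the arithmetic – it’s that you’re tracking the same simple number at two moments in time, so the change is visible.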
Qualitative Methods
This is where the real gold often lives. It’s about watching, listening, and noticing patterns:
- Pay attention to the language people use to describe their work
- Notice when questions start to change (or stop coming altogether)
- Watch how people move through their work
- Listen for stories about what’s different
Hybrid Approaches
The sweet spot is usually somewhere in the middle. Mix your counting with your observing:
- Combine usage stats with user stories
- Track both completion times and confidence levels
- Notice both what people do and how they feel about it
- Look for patterns in both numbers and narratives
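One way to keep numbers and narratives together is to record them side by side. This is a sketch, not a prescription – every field and value below is an assumption for illustration:

```python
# A hybrid measurement record: each observation pairs a metric with a story.
observations = [
    {"task_minutes": 12, "confidence": 4,
     "story": "Found the style guide without asking in Slack."},
    {"task_minutes": 35, "confidence": 2,
     "story": "Gave up searching and pinged a teammate."},
    {"task_minutes": 9, "confidence": 5,
     "story": "New nav labels matched the words I'd have used."},
]

# The counting half: an average completion time.
avg_minutes = sum(o["task_minutes"] for o in observations) / len(observations)

# The observing half: the stories behind low-confidence moments.
low_confidence = [o["story"] for o in observations if o["confidence"] <= 2]

print(f"Average task time: {avg_minutes:.1f} minutes")
for story in low_confidence:
    print("Low-confidence story:", story)
```

Keeping the story attached to the number means that when a metric looks off, you already have the context explaining why.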
Tips for Getting Started
Listen, I know measurement can feel overwhelming. Here’s how to begin without losing your mind:
Step 1: Start with Your Intention
- Write down what you want to do, but be super specific and time-bound
- Make it something you can actually wrap your head around, not some vague wish
- Pro tip: Start the statement with an action verb

Example: “I intend to reduce the number of confusing terms in our product documentation by 50% by March 1st”
Step 2: Get Real About Your Why
- Fill in the “because” part with something that actually matters to real people
- Skip the corporate speak – what’s the human reason?
- Think about who benefits and how
Step 3: Ask a Measurable Question
- This is your big “how will I know?” question
- Make it something you can actually answer with data
- Avoid yes/no questions – they’re usually too simple

Example: “How many support tickets include requests for basic term clarification?”
Step 4: Set Up Your Measurement
- Pick ONE thing you can count or track (your metric)
- Write down where you are now (your baseline)
- Write down where you want to be (your goal)
Step 5: Create Your Warning System
- Think about what could go wrong or what you need to watch for
- Set up two specific flag conditions
- For each flag, write down exactly what you’ll do when it happens
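The five steps above fit on an index card, but if you like working in code, here’s a minimal sketch of the same plan as a small data structure. Every field value is a made-up example, not a recommendation:

```python
from dataclasses import dataclass, field

@dataclass
class Flag:
    condition: str  # what to watch for
    response: str   # exactly what you'll do when it happens

@dataclass
class MeasurementPlan:
    intention: str  # Step 1: specific, time-bound, starts with a verb
    why: str        # Step 2: the human reason it matters
    question: str   # Step 3: answerable with data, not yes/no
    metric: str     # Step 4: the ONE thing you track
    baseline: float # where you are now
    goal: float     # where you want to be
    flags: list = field(default_factory=list)  # Step 5: warning system

plan = MeasurementPlan(
    intention="Reduce confusing terms in product docs by 50% by March 1st",
    why="New hires waste hours decoding jargon instead of doing their work",
    question="How many support tickets ask for basic term clarification?",
    metric="Clarification tickets per week",
    baseline=24,
    goal=12,
    flags=[
        Flag("Tickets rise two weeks in a row",
             "Interview recent ticket authors"),
        Flag("Doc page views drop sharply",
             "Check whether people stopped trusting the docs"),
    ],
)

print(f"Tracking '{plan.metric}': {plan.baseline} -> {plan.goal}")
```

Writing the plan down in one place, whatever the format, is what keeps the metric, the baseline, and the warning flags from drifting apart.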
Remember: The goal isn’t to create a perfect measurement system – it’s to make something that helps you see what’s actually happening so you can make it better.
Measurement Hot Takes
After years of helping teams figure this out, here are some of the truthiest truths I’ve learned:
- Perfect metrics don’t exist. Stop looking for them. Focus on “good enough to help us make better decisions.”
- The best insights often come from the margins. Pay attention to the unexpected patterns and the stories that don’t quite fit. Edge cases may be small in number, but they are so often the canary in the coal mine for other use cases you haven’t run into yet.
- Context beats numbers every time. A story about how someone’s work got easier is worth more than a dozen charts showing efficiency improvements.
- Small signals matter. Sometimes the biggest changes start with tiny shifts in how people talk about their work.
Answering the Tough Questions
Here’s what people often ask me about measurement:
“How long should we measure baseline performance?” Long enough to see normal patterns, including the ugly parts. Usually at least a few weeks, sometimes months.
“When should we adjust our metrics?” When they stop telling you useful things about your work. Don’t keep measuring stuff just because you always have.
“What if our measurements show unexpected results?” Celebrate! That’s where the learning happens. Unexpected results often point to things we didn’t know we needed to know.
“How do we handle conflicting metrics?” Dig into the conflict – it’s usually telling you something important about how different parts of your organization see success differently. Take a documentation project where page views are up but so are help desk tickets. One metric says success, the other says problem. The conflict tells you people might be finding the docs but not understanding them – that’s useful information you’d miss if you only looked at one number.
Avoiding the Common Pitfalls
Let me save you some pain. Here’s what not to do:
- Don’t measure everything You’ll drown in data and miss the important stuff. Be picky about what you track.
- Don’t ignore the lag Change takes time to show up in measurements. Be patient and keep watching.
- Don’t forget the humans Behind every metric is a person trying to get their work done. Keep that in perspective.
- Don’t skip the context Documentation without context is just numbers on a page. Capture the story behind the changes.
The Path Forward
Remember, the goal isn’t to become a measurement expert. The goal is to understand if your sensemaking work is actually helping people make better sense of things. Sometimes that means fancy metrics, but more often it means paying attention and asking good questions.
Start small, stay curious, and always remember – you’re measuring to make things better, not to make things perfect.
—
If you want to learn more about my approach to measurement, consider attending my workshop, “Reality Checks & Ripple Effects: How to Measure Before Change,” on February 21st from 12 PM to 2 PM ET. This workshop is free to premium members of the Sensemakers Club, along with a new workshop each month.
Thanks for reading, and stay tuned for our focus area in March – Conducting a Heuristic Evaluation