Learning and Measurement
How to know if your knowledge engineering is actually working
If you can’t measure it, you can’t improve it. But if you only measure what’s easy, you’ll improve the wrong things.
The Measurement Problem
Knowledge engineering initiatives often die not from failure, but from an inability to prove success.
Six months in, someone asks: “Is this actually working?” And the project team realizes they can’t answer. They have anecdotes. They have happy users (and unhappy ones). They have a sense that things are better. But they don’t have proof.
This happens because knowledge systems produce value in ways that are hard to measure directly. You can’t easily count “good decisions made” or “knowledge successfully transferred.” The outcomes are diffuse, delayed, and entangled with other factors.
The solution isn’t to give up on measurement — it’s to get smarter about what and how you measure.
Three Layers of Measurement
Effective measurement for knowledge engineering operates at three distinct layers, each answering different questions:
```mermaid
graph TB
    TITLE["`MEASUREMENT LAYERS`"]
    TITLE --> L3
    L3["`LAYER 3
    BUSINESS OUTCOMES
    ────────
    Is the organization performing better?
    ⚖️ Hardest to attribute
    💎 Most valuable
    ⏱️ Longest lag time`"]
    L2["`LAYER 2
    OPERATIONAL METRICS
    ────────
    Is the knowledge system being used effectively?
    📊 Easier to measure
    🔮 Leading indicator
    ✅ Actionable`"]
    L1["`LAYER 1
    KNOWLEDGE QUALITY
    ────────
    Is the knowledge itself good?
    🎯 Most direct control
    🏗️ Foundational
    ⚠️ Easiest to game`"]
    L1 --> L2
    L2 --> L3
    style TITLE fill:#fef2f2,stroke:#dc2626,stroke-width:4px,color:#991b1b
    style L1 fill:#fef2f2,stroke:#dc2626,stroke-width:3px
    style L2 fill:#fff7ed,stroke:#f97316,stroke-width:3px
    style L3 fill:#d1fae5,stroke:#10b981,stroke-width:4px
```
Layer 1 tells you if your knowledge assets are healthy. Layer 2 tells you if people and systems are actually using them. Layer 3 tells you if all of this is making a difference.
Most organizations only measure Layer 1 — and then wonder why stakeholders aren’t impressed by “we documented 500 decision rules!”
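One way to keep all three layers in view is to make them explicit in whatever reporting structure you use, so a report can't quietly omit a layer. Here's a minimal sketch in Python; the metric names are illustrative assumptions, not a prescribed set:

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementSnapshot:
    """Metrics for one reporting period, with all three layers side by side."""
    period: str  # e.g. "2025-06"
    # Layer 1 - knowledge quality: is the knowledge itself good?
    knowledge_quality: dict = field(default_factory=dict)   # e.g. {"coverage": 0.82, "avg_staleness_days": 41}
    # Layer 2 - operational metrics: is the system being used effectively?
    operational: dict = field(default_factory=dict)         # e.g. {"weekly_active_users": 140, "answer_rate": 0.73}
    # Layer 3 - business outcomes: is the organization performing better?
    business_outcomes: dict = field(default_factory=dict)   # e.g. {"avg_resolution_hours": 6.5}

    def missing_layers(self) -> list[str]:
        """Name any layer that went unmeasured this period."""
        layers = {
            "knowledge_quality": self.knowledge_quality,
            "operational": self.operational,
            "business_outcomes": self.business_outcomes,
        }
        return [name for name, values in layers.items() if not values]
```

The point of `missing_layers` is cultural as much as technical: Layer 3 sitting empty month after month is itself a finding worth reporting.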
The Learning Loop
Measurement without action is just monitoring. The point isn’t to know how you’re doing — it’s to get better.
Effective knowledge systems build in feedback loops that convert measurement into improvement:
```mermaid
graph TB
    TITLE["`THE LEARNING LOOP`"]
    TITLE --> M
    M["`📊 MEASURE
    Collect data on all three layers`"]
    M --> A["`🔍 ANALYZE
    Identify patterns, gaps, issues`"]
    A --> D["`🎯 DECIDE
    Prioritize improvements`"]
    D --> AC["`⚡ ACT
    Update knowledge, retrain, refine`"]
    AC -->|REPEAT| M
    style TITLE fill:#fef2f2,stroke:#dc2626,stroke-width:4px,color:#991b1b
    style M fill:#fef2f2,stroke:#dc2626,stroke-width:3px
    style A fill:#fff7ed,stroke:#f97316,stroke-width:3px
    style D fill:#fef2f2,stroke:#dc2626,stroke-width:3px
    style AC fill:#d1fae5,stroke:#10b981,stroke-width:3px
    linkStyle 4 stroke:#dc2626,stroke-width:4px,color:#991b1b
```
Each stage of the loop serves a distinct purpose: Measure collects the data, Analyze finds the patterns, Decide sets priorities, and Act closes the loop with concrete improvements.
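In code, the loop is just four functions called in sequence, with each pass feeding the next. A sketch under the assumption that you supply the four callables yourself; this is not any specific framework's API:

```python
def run_learning_loop_once(measure, analyze, decide, act):
    """One pass of Measure -> Analyze -> Decide -> Act.

    measure()         -> raw data from all three layers
    analyze(data)     -> findings: patterns, gaps, issues
    decide(findings)  -> a prioritized improvement plan
    act(plan)         -> applies changes (update knowledge, retrain, refine)
    """
    data = measure()
    findings = analyze(data)
    plan = decide(findings)
    act(plan)
    return findings, plan  # keep a record so the next pass can compare

# The loop only earns its name when it repeats on a schedule
# (e.g. monthly), rather than running once after an incident.
```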
What Good Looks Like
Organizations that excel at learning and measurement share several characteristics:
They define success upfront
Before the pilot starts, they answer these questions (a sketch of writing the answers down follows the list):
- What metrics will we track?
- What’s the baseline?
- What would “success” look like in numbers?
- When will we measure?
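Writing the answers down in a machine-checkable form keeps them from drifting later. A hypothetical example; every metric name, baseline, target, and date below is invented for illustration:

```python
# Success criteria agreed before the pilot starts.
SUCCESS_CRITERIA = {
    "metrics": ["answer_rate", "avg_resolution_hours", "user_satisfaction"],
    "baseline": {"answer_rate": 0.55, "avg_resolution_hours": 9.0, "user_satisfaction": 3.1},
    "targets":  {"answer_rate": 0.75, "avg_resolution_hours": 6.0, "user_satisfaction": 4.0},
    "measure_on": ["2025-03-01", "2025-06-01"],  # when we will measure
}

def pilot_succeeded(observed: dict) -> bool:
    """Success in numbers: every tracked metric meets or beats its target.
    Note that resolution time improves downward; the others improve upward."""
    t = SUCCESS_CRITERIA["targets"]
    return (observed["answer_rate"] >= t["answer_rate"]
            and observed["avg_resolution_hours"] <= t["avg_resolution_hours"]
            and observed["user_satisfaction"] >= t["user_satisfaction"])
```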
They instrument everything
The knowledge system is built to generate the data needed for measurement. This isn't an afterthought; it's part of the architecture (a minimal logging sketch follows the list):
- Every query is logged
- Every recommendation is tracked
- Every user action is captured
- Every feedback signal is recorded
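A minimal sketch of what "instrument everything" can mean in practice: one append-only event log that every part of the system writes to. The event kinds and field names are assumptions for illustration:

```python
import json
import time
import uuid

def log_event(kind: str, payload: dict, log_path: str = "events.jsonl") -> None:
    """Append one instrumentation event as a JSON line.

    kind is one of: "query", "recommendation", "user_action", "feedback".
    """
    event = {"id": str(uuid.uuid4()), "ts": time.time(), "kind": kind, **payload}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Hypothetical call sites, one per bullet above:
# log_event("query", {"user": "u42", "text": "refund policy for EU orders?"})
# log_event("recommendation", {"query_id": "q-118", "item_id": "R-204"})
# log_event("user_action", {"query_id": "q-118", "action": "copied_answer"})
# log_event("feedback", {"query_id": "q-118", "item_id": "R-204", "signal": "helpful"})
```

A flat JSONL log is deliberately boring: it defers analysis decisions to the Analyze stage instead of baking them into capture.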
They create feedback mechanisms
Users can easily report issues:
- “This answer was wrong”
- “This didn’t address my situation”
- “This was helpful” (positive feedback matters too)
And there's a process to act on that feedback, not just collect it. One minimal shape for that process is sketched below.
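A sketch, assuming feedback events carry an `item_id` and a `signal` field as in the logging example above; the three-report threshold is an arbitrary placeholder:

```python
from collections import Counter

ACT_THRESHOLD = 3  # assumed: three independent reports warrant a task

def triage_feedback(feedback_events: list[dict]) -> list[str]:
    """Convert raw feedback signals into concrete follow-up actions."""
    counts = Counter((e["item_id"], e["signal"]) for e in feedback_events)
    actions = []
    for (item_id, signal), n in sorted(counts.items(), key=lambda kv: -kv[1]):
        if signal == "wrong_answer" and n >= ACT_THRESHOLD:
            actions.append(f"Open correction task for {item_id} ({n} 'wrong answer' reports)")
        elif signal == "not_my_situation" and n >= ACT_THRESHOLD:
            actions.append(f"Investigate coverage gap around {item_id} ({n} reports)")
    return actions
```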
They review regularly
Not just when things go wrong: scheduled reviews that ask (a sketch for assembling the numbers follows the list):
- What’s the knowledge quality score this month?
- What are users telling us?
- Are the business metrics moving?
- What should we prioritize next?
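If the system logs to a single event file as sketched earlier, the first two questions can be answered mechanically. A sketch reusing the assumed `events.jsonl` format; Layer 3 numbers typically live in other systems and get added separately:

```python
import json
from collections import Counter
from datetime import datetime

def monthly_review(log_path: str, year: int, month: int) -> dict:
    """Assemble the numbers the scheduled review asks about."""
    events = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            e = json.loads(line)
            ts = datetime.fromtimestamp(e["ts"])
            if ts.year == year and ts.month == month:
                events.append(e)

    kinds = Counter(e["kind"] for e in events)
    feedback = Counter(e["signal"] for e in events if e["kind"] == "feedback")
    return {
        "queries": kinds["query"],
        "recommendations": kinds["recommendation"],
        "what_users_tell_us": dict(feedback),  # raw signal counts
        # Business metrics (Layer 3) and next priorities come from the
        # review discussion itself; this report is the input, not the verdict.
    }
```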
They celebrate and communicate
When measurement shows success, they tell stakeholders. When it shows problems, they show how they’re addressing them. Transparency builds trust and support.
The Meta-Learning
The best organizations don’t just learn from their knowledge systems — they learn about how to build better knowledge systems.
After each project, they capture:
- What extraction methods worked best for which knowledge types?
- Which metrics were most useful? Which were misleading?
- What surprised us about how users engaged?
- What would we do differently next time?
This meta-learning compounds. Each project makes the next one better. This is how organizations build genuine capability in knowledge engineering — not by running projects, but by learning from them.
The Honest Truth
Many knowledge initiatives fail to measure effectively because the sponsors are afraid of what they’ll find. What if it’s not working? What if the investment wasn’t worth it?
This fear creates a vicious cycle: poor measurement → no evidence of value → reduced support → failure.
The alternative is to embrace measurement as a learning tool, not a judgment. Early indicators that something isn’t working are gifts — they let you course-correct before it’s too late.
The question isn’t “are we succeeding?” It’s “what are we learning, and what will we do with it?”
Want to go deeper?
Understanding what to measure and how is specific to your use case and organization. In a Design Workshop, we help you define success metrics, design feedback mechanisms, and plan for the learning loops that will make your knowledge system continuously better.