Module 2 · 6 min

Human vs. LLM vs. Agentic Knowledge

Three fundamentally different types of knowledge that must work together

Tags: fundamentals · ai-agents · llm · tacit-knowledge · knowledge-types


Your senior expert, ChatGPT, and an AI agent walk into a meeting. They’re each brilliant — and they’re each blind to what the others know.


The Three Knowledge Problem

When organizations think about “AI and knowledge,” they usually imagine a simple transfer: take what humans know, give it to AI, done.

This fundamentally misunderstands both human knowledge and how AI systems actually work.

There are three distinct types of knowledge in play — each with different characteristics, different strengths, and different failure modes. Getting knowledge engineering right means understanding all three.


Human Knowledge: Deep, Contextual, Fragile

Human experts carry knowledge that took decades to accumulate. It’s embedded in experience, shaped by context, and often impossible to articulate fully.

What human knowledge does well:

  • Handles truly novel situations
  • Integrates emotional and social intelligence
  • Adapts instantly to changing context
  • Knows when something “feels wrong”

Where human knowledge fails:

  • Can’t be copied or scaled
  • Walks out the door when people leave
  • Inconsistent under pressure or fatigue
  • Hard to audit or explain

The most valuable human knowledge is often tacit — an expert like Maria doesn’t just know what to do, she knows what to do in this specific situation with this specific client, given what happened last quarter. Ask her to write it down and you’ll get a fraction of what she actually knows.

```mermaid
graph TB
    TITLE["HUMAN KNOWLEDGE BREAKDOWN"]

    TITLE --> EXPLICIT["<b>EXPLICIT ~15%</b><br/>Documented"]
    TITLE --> TACIT["<b>TACIT ~85%</b><br/>Undocumented"]

    EXPLICIT --> E1["Procedures"]
    EXPLICIT --> E2["Policies"]
    EXPLICIT --> E3["Training Materials"]
    EXPLICIT --> E4["Process Docs"]

    TACIT --> T1["Intuition"]
    TACIT --> T2["Judgment Calls"]
    TACIT --> T3["Exception Handling"]
    TACIT --> T4["Relationship Knowledge"]
    TACIT --> T5["Feel for Situations"]

    style TITLE fill:#fef2f2,stroke:#dc2626,stroke-width:4px,color:#991b1b,font-size:18px
    style EXPLICIT fill:#fff7ed,stroke:#f97316,stroke-width:3px,color:#c2410c
    style TACIT fill:#fef2f2,stroke:#dc2626,stroke-width:4px,color:#991b1b
    style E1 fill:#fff7ed,stroke:#f97316
    style E2 fill:#fff7ed,stroke:#f97316
    style E3 fill:#fff7ed,stroke:#f97316
    style E4 fill:#fff7ed,stroke:#f97316
    style T1 fill:#fef2f2,stroke:#dc2626
    style T2 fill:#fef2f2,stroke:#dc2626
    style T3 fill:#fef2f2,stroke:#dc2626
    style T4 fill:#fef2f2,stroke:#dc2626
    style T5 fill:#fef2f2,stroke:#dc2626
```

The uncomfortable ratio: Most knowledge management initiatives capture maybe 15% of what experts actually know — the explicit, documentable part. The 85% that drives real performance stays locked in their heads.


LLM Knowledge: Broad, Probabilistic, Stateless

Large Language Models like GPT-4 or Claude have consumed vast amounts of human knowledge — essentially the documented output of civilization. They can reason, synthesize, and generate with remarkable fluency.

What LLM knowledge does well:

  • Broad general knowledge across domains
  • Pattern recognition at scale
  • Language understanding and generation
  • Available 24/7, infinitely scalable

Where LLM knowledge fails:

  • No knowledge of your specific context
  • Can’t learn from your proprietary data (without fine-tuning)
  • Confidently wrong when knowledge is missing
  • No memory between conversations (stateless)

An LLM knows what “good customer service” looks like in general. It doesn’t know that your top client, Acme Corp, has a standing agreement to expedite all orders, that their CFO hates being called “Robert,” or that last month’s billing error means you should be extra accommodating right now.

The context gap: LLMs have world knowledge but not your knowledge. Every organization has proprietary context that shapes how general knowledge should be applied. Without that context, LLMs give you generic — often wrong — answers.
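
A toy sketch of that gap in Python: `build_prompt` and the Acme facts below are hypothetical stand-ins for whatever proprietary context your systems actually hold. The same question only yields a useful prompt when that context is injected.

```python
# A minimal sketch of the context gap. `build_prompt` and the Acme facts
# are hypothetical; substitute whatever proprietary context your systems
# actually hold.

def build_prompt(question: str, org_context: list[str] | None = None) -> str:
    """Prepend organization-specific context to a question for an LLM."""
    if not org_context:
        # Without context, the model answers from world knowledge alone.
        return question
    context_block = "\n".join(f"- {fact}" for fact in org_context)
    return (
        "Answer using the organization-specific context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )

acme_context = [
    "Acme Corp has a standing agreement to expedite all orders.",
    "Acme's CFO goes by 'Bob', never 'Robert'.",
    "Last month's billing error means we are being extra accommodating.",
]

question = "How should we handle Acme Corp's delayed order?"
print(build_prompt(question))                # generic: world knowledge only
print(build_prompt(question, acme_context))  # grounded: your knowledge
```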


Agentic Knowledge: Operational, Integrated, Evolving

AI agents are different from both humans and standalone LLMs. They operate within systems, take actions, and maintain state across interactions. Their knowledge needs are fundamentally operational.

What agentic knowledge requires:

  • Understanding of your specific business rules
  • Access to real-time data from your systems
  • Memory of past interactions and decisions
  • Ability to know when to escalate

Where agentic knowledge differs:

  • Must be explicit enough to be encoded
  • Needs clear boundaries and guardrails
  • Requires continuous updating as context changes
  • Must integrate with human oversight

```mermaid
graph TB
    TITLE["AGENTIC KNOWLEDGE REQUIREMENTS"]

    TITLE --> WHAT["<b>KNOW WHAT</b>"]
    TITLE --> WHEN["<b>KNOW WHEN</b>"]

    WHAT --> W1["Business Rules"]
    WHAT --> W2["Decision Logic"]
    WHAT --> W3["Process Steps"]

    WHEN --> WH1["To Act"]
    WHEN --> WH2["To Escalate"]
    WHEN --> WH3["To Wait"]

    W1 --> HOW["<b>KNOW HOW</b>"]
    W2 --> HOW
    W3 --> HOW
    WH1 --> HOW
    WH2 --> HOW
    WH3 --> HOW

    HOW --> H1["Access Data"]
    HOW --> H2["Use Tools"]
    HOW --> H3["Verify Results"]

    style TITLE fill:#fef2f2,stroke:#dc2626,stroke-width:4px,color:#991b1b,font-size:18px
    style WHAT fill:#fff7ed,stroke:#f97316,stroke-width:3px,color:#c2410c
    style WHEN fill:#fff7ed,stroke:#f97316,stroke-width:3px,color:#c2410c
    style HOW fill:#fef2f2,stroke:#dc2626,stroke-width:3px,color:#991b1b
    style W1 fill:#fef2f2,stroke:#dc2626
    style W2 fill:#fef2f2,stroke:#dc2626
    style W3 fill:#fef2f2,stroke:#dc2626
    style WH1 fill:#fef2f2,stroke:#dc2626
    style WH2 fill:#fef2f2,stroke:#dc2626
    style WH3 fill:#fef2f2,stroke:#dc2626
    style H1 fill:#fff7ed,stroke:#f97316
    style H2 fill:#fff7ed,stroke:#f97316
    style H3 fill:#fff7ed,stroke:#f97316
```

An agent handling customer requests needs to know what your policies are, when to apply exceptions versus escalate, and how to pull the relevant data from your systems. This is operational knowledge — less about understanding and more about doing.
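
A minimal sketch of that know-what / know-when / know-how split, in Python. Every rule, threshold, and customer name here is an illustrative assumption, not a real policy.

```python
# A sketch of the know-what / know-when / know-how split for one agent
# decision. Every rule, threshold, and name is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Request:
    customer: str
    order_value: float
    issue: str  # e.g. "refund", "expedite"

# KNOW WHAT: business rules encoded explicitly enough for an agent to apply.
AUTO_REFUND_LIMIT = 200.00            # assumed policy threshold
EXPEDITE_CUSTOMERS = {"Acme Corp"}    # assumed standing agreements

def decide(req: Request) -> str:
    """KNOW WHEN: act, escalate, or wait, with explicit boundaries."""
    if req.issue == "refund" and req.order_value > AUTO_REFUND_LIMIT:
        return "escalate"  # beyond the agent's authority: hand to a human
    if req.issue == "expedite" and req.customer not in EXPEDITE_CUSTOMERS:
        return "wait"      # no standing agreement: needs human approval
    return "act"

# KNOW HOW would live around this function: a CRM lookup to populate
# Request before deciding, and an order-status check to verify afterwards.

print(decide(Request("Acme Corp", 450.00, "refund")))   # -> escalate
print(decide(Request("Acme Corp", 80.00, "expedite")))  # -> act
```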


The Integration Challenge

Here’s where it gets interesting: effective knowledge engineering requires all three types working together.

| Knowledge Type | Contribution | Limitation |
| --- | --- | --- |
| Human | Provides judgment, handles exceptions, trains the system | Doesn’t scale, eventually leaves |
| LLM | Provides reasoning, language, general knowledge | Lacks your specific context |
| Agentic | Executes consistently, scales infinitely, never forgets | Only as good as what it’s given |

The knowledge engineering task is to:

  1. Extract tacit human knowledge and make it explicit
  2. Contextualize LLM capabilities with your proprietary information
  3. Encode what agents need to operate autonomously
  4. Design feedback loops so all three improve together
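
One way to make those four steps concrete is a knowledge asset that carries its human source, its retrievable context, its machine-evaluable rule, and its usage feedback. The schema below is a sketch under those assumptions, not a standard.

```python
# A sketch of a knowledge asset that serves all four steps. The schema is
# an assumption for illustration, not a standard or a real product's model.

from dataclasses import dataclass, field

@dataclass
class KnowledgeAsset:
    # 1. Extracted: tacit expertise captured as an explicit statement
    statement: str
    source_expert: str
    # 2. Contextualized: prose an LLM can retrieve alongside the statement
    context_notes: str = ""
    # 3. Encoded: a condition an agent can evaluate without interpretation
    rule: str = ""
    # 4. Feedback loop: usage data that triggers review and revision
    times_applied: int = 0
    exceptions_logged: list[str] = field(default_factory=list)

asset = KnowledgeAsset(
    statement="Large refunds for strategic accounts need a human decision.",
    source_expert="Maria",
    context_notes="Strategic accounts are reviewed quarterly by sales ops.",
    rule="issue == 'refund' and order_value > 200 -> escalate",
)
print(asset.statement)
```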

What This Means For Your Organization

You can’t just “train the AI on our documents.” Documents capture only the explicit fraction of human knowledge. The tacit knowledge — the judgment, the exceptions, the “feel” — requires deliberate extraction methods.

You can’t just “plug in an LLM.” Without your context, LLMs give you generic answers dressed up as expertise. Retrieval-Augmented Generation (RAG) helps, but only if you have the right knowledge assets to retrieve.

You can’t just “deploy an agent.” Agents without proper knowledge foundations are automation of ignorance. They’ll do the wrong thing consistently, at scale, with confidence.
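
To ground the RAG point from above: a deliberately tiny retrieve-then-generate loop. The keyword-overlap retriever and the snippets are toy assumptions; production RAG systems typically use embedding search over curated knowledge assets, but the shape is the same: retrieve your context, then hand it to the model.

```python
# A deliberately tiny retrieve-then-generate loop. The keyword-overlap
# retriever and the snippets are toy assumptions; production RAG systems
# typically use embedding search over curated knowledge assets.

KNOWLEDGE_BASE = [
    "Acme Corp orders are always expedited per a standing agreement.",
    "Refunds above $200 require manager approval.",
    "Billing errors entitle the customer to a goodwill credit.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank snippets by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(question: str) -> str:
    """Combine retrieved context with the question; an LLM call would follow."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("Should we expedite the Acme Corp order?"))
```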


The Path Forward

The organizations winning at this recognize that knowledge engineering is a discipline, not a one-time project. They’re building systematic capabilities to:

  • Surface tacit human knowledge before it walks out the door
  • Structure that knowledge so LLMs can use it effectively
  • Operationalize it so agents can act on it autonomously
  • Evolve it as the business and context change

This isn’t about replacing humans with AI. It’s about creating a knowledge architecture where humans, LLMs, and agents each contribute what they do best.


Want to go deeper?

Understanding these three knowledge types is the first step. The next is mapping which types your specific use cases require — and designing the extraction and integration approach.

That’s exactly what our Use Case Assessment does. In 10 minutes, you’ll see how your situation maps to this framework.

Take the Assessment →
