// SLIDE 01 — HOOK
You are already superhuman at the wrong things.
YOU ARE
ALREADY
SUPERHUMAN
// AT THE WRONG THINGS

INFO 7375 — Computational Skepticism for AI  ·  Nik Bear Brown  ·  Irreducibly Human

// SLIDE 02 — LEARNING OUTCOMES
By the end of this deck, you will be able to do three things.
Outcome · Bloom's Level
Distinguish between Tier 1 AI fluency and the irreducibly human capacities this course develops · Analyze
Identify at least one supervisory capacity you already exercise and one you need to develop · Apply
Explain why using AI tools is necessary but not sufficient for succeeding in the AI economy · Evaluate
// SLIDE 03 — THE PROBLEM
The curriculum spent twelve years training you to do what machines now do better.
What the curriculum trained
Pattern recognition
Fact retrieval
Syntactic correctness
Replicating established reasoning
Producing artifacts on demand
What AI now does — superhumanly
Pattern recognition
Fact retrieval
Syntactic correctness
Replicating established reasoning
Producing artifacts on demand
This is not an accusation. The curriculum was rational. It optimized for an economy that no longer exists.
// SLIDE 04 — THE FORKLIFT ARGUMENT
The intelligent response to a forklift is not to practice lifting heavier objects.
The intelligent response to a forklift is not to practice lifting heavier objects. It is to learn to operate the machine, understand what it can and cannot lift, and develop the judgment to know what needs lifting in the first place.

01
Operate it

Learn the tools. Use them well. This is Tier 1 — and it is necessary.

02
Understand its limits

Know what the machine cannot lift. That knowledge is irreducibly human.

03
Develop the judgment

Decide what needs lifting in the first place. No algorithm formulates that question for you.

// SLIDE 05 — THE TIER MAP
Seven tiers of human intelligence. AI is superhuman at one. The rest are yours.
# · Name · Tag · AI Status
1 · Pattern & Association · USE THE TOOL · Superhuman
2 · Embodied & Sensorimotor · WHAT THE BODY KNOWS · Weak / Emerging
3 · Social & Personal · NO STAKES, NO SKIN · Simulates, doesn't feel
4 · Metacognitive & Supervisory · THE SUPERVISOR'S GAP · ← THIS COURSE
5 · Causal & Counterfactual · WHAT WOULD HAPPEN IF · ← THIS COURSE
6 · Collective & Distributed · CANNOT BE POSSESSED · Absent by definition
7 · Existential & Wisdom · NO LIVED TIME · Absent
// SLIDE 06 — CHECK FOR UNDERSTANDING
PAUSE.

Which tier describes most of what you were asked to demonstrate in your last degree program?

Which tier describes most of what your current job actually requires?

// CHECK FOR UNDERSTANDING
// SLIDE 07 — NECESSARY BUT NOT SUFFICIENT
Tier 1 fluency is the entry ticket. It is not the game.
Tier 1 AI fluency
NECESSARY

Every graduate in this room can use AI tools. So can everyone else. Tier 1 fluency is rapidly becoming table stakes, not differentiation.

Irreducibly human capacity
NOT
SUFFICIENT

The economy will pay a premium for judgment AI cannot replicate. This course develops the specific capacities that remain scarce even as AI scales.

When the AI's output is wrong — who catches it? When the problem is badly formulated — who sees that before the build begins? That person is worth more. That person is who this course builds.
// SLIDE 08 — TIER 4: THE FIVE SUPERVISORY CAPACITIES
Five things no AI can do. You will develop all five.
PA
Plausibility Auditing

Hearing the wrong note before you can prove it's wrong. Evaluating AI output for domain-grounded implausibility before any verification step.

PF
Problem Formulation

Deciding what question is worth asking before you delegate it. The judgment prior to delegation — no algorithm formulates this for you.

TO
Tool Orchestration

Which AI task, in what order, with what context, at what level of trust. Choosing the right tool — and knowing when not to trust its output.

IJ
Interpretive Judgment

Supplying meaning, moral legitimacy, or accountability to AI output that the AI cannot supply itself.

EI
Executive Integration

Holding multiple concurrent AI outputs toward a unified goal. The capacity that operates across all others.

// SLIDE 09 — TIER 5: PEARL'S LADDER
AI is superhuman at Rung 1. Rungs 2 and 3 are yours.
Rung 1
AI: SUPERHUMAN
Observe — What correlates with what?

Pattern recognition in data. Most dashboards, metrics, and model outputs live here. AI is superhuman. So is your Excel pivot table.

Rung 2
AI: WEAK
Intervene — What would change if we acted?

Requires a causal model, not just a correlation: if I do X, what happens to Y? Introduced in Ch 3, deepened in Ch 6.

Rung 3
AI: ABSENT
Imagine — What would have happened if things were different?

Counterfactual reasoning about a world that did not occur. Introduced in Ch 8. The arc closes at Ch 13.

Most organizational decision-making is Rung 1 dressed as Rung 2. This course corrects that.
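The Rung 1 / Rung 2 gap can be made concrete with a small simulation (an illustrative sketch only; the variable names and the confounder setup are invented for this example). A hidden factor z drives both x and y, so observation shows a strong x–y correlation, yet intervening on x reveals no effect at all:

```python
import random

random.seed(0)

def observe(n=10000):
    """Rung 1: observational data. A hidden confounder z drives both x and y."""
    rows = []
    for _ in range(n):
        z = random.gauss(0, 1)          # hidden confounder
        x = z + random.gauss(0, 0.3)    # x tracks z
        y = z + random.gauss(0, 0.3)    # y tracks z too; x has NO effect on y
        rows.append((x, y))
    return rows

def intervene(n=10000):
    """Rung 2: do(x). We set x ourselves, severing its link to z."""
    rows = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = random.gauss(0, 1)          # x set by intervention, independent of z
        y = z + random.gauss(0, 0.3)
        rows.append((x, y))
    return rows

def corr(rows):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(rows)
    mx = sum(x for x, _ in rows) / n
    my = sum(y for _, y in rows) / n
    cov = sum((x - mx) * (y - my) for x, y in rows) / n
    vx = sum((x - mx) ** 2 for x, _ in rows) / n
    vy = sum((y - my) ** 2 for _, y in rows) / n
    return cov / (vx * vy) ** 0.5

print(round(corr(observe()), 2))    # large: confounding masquerades as effect
print(round(corr(intervene()), 2))  # near zero: the "effect" vanishes under do(x)
```

Randomizing x is exactly what an experiment does: it cuts the arrow from the confounder into x, so whatever correlation survives is causal. A dashboard built on the observational data would confidently report the opposite.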
// SLIDE 10 — CHECK FOR UNDERSTANDING
PAUSE.

Name a real decision from your work where the data gave you Rung 1 — a correlation — and someone treated it as Rung 2.

What question should have been asked before acting on it?

// CHECK FOR UNDERSTANDING
// SLIDE 11 — GLIMMER EXERCISES
Fourteen exercises. None of them are prompts with a rubric.
01
Something new must exist at the end

An artifact, argument, model, or understanding the learner could not have produced before, and that could only be produced through genuine encounter with resistance.

02
A real judgment that could be wrong

Somewhere in the work, you make a call where being wrong has a consequence. Even a small one. If there's no wrong answer, it's not a glimmer exercise.

03
AI cannot complete it without you

The anti-substitution test: accept AI output uncritically → the exercise fails. Your judgment must be load-bearing. Finding where that is — is part of the learning.

Use AI as much as you want — up to the point where your judgment becomes decisive. That point is the exercise.
// SLIDE 12 — THE FRICTIONAL FRAMEWORK
The artifact is not the evidence. The process is.
/predict
BEFORE
Before the work — timestamped, locked

Commit to a specific, reasoned prediction before you begin. The tool timestamps it. It cannot be revised after the outcome is observed. Predictions written after the fact are an integrity issue.

/journal
DURING / AFTER
During or immediately after — what happened

What occurred, what was unexpected, what changed in your understanding. Written while the encounter is still alive in working memory.

/trace
END OF WEEK
End of week — GLP evidence synthesis

Synthesizes entries into a Genuine Learning Probability evidence record. The gap between /predict and /journal is where the learning lives.

AI decoupled artifact from process. Frictional restores the second evidence stream — the process traces that genuine learning leaves, independent of artifact quality.
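The /predict lock described above rests on a generic commit-before-outcome pattern. A minimal sketch of that pattern (hypothetical helpers, not the actual Frictional tool): fingerprint the prediction together with its timestamp, so any post-hoc revision breaks the digest.

```python
import hashlib
import json
import time

def lock_prediction(text: str) -> dict:
    """Timestamp a prediction and fingerprint it so later edits are detectable."""
    record = {"prediction": text, "locked_at": time.time()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

def verify(record: dict) -> bool:
    """True only if prediction text and timestamp are unchanged since locking."""
    payload = json.dumps(
        {"prediction": record["prediction"], "locked_at": record["locked_at"]},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest() == record["digest"]

r = lock_prediction("The model will underperform on out-of-domain inputs.")
assert verify(r)                    # untouched record checks out
r["prediction"] = "I knew it all along."
assert not verify(r)                # post-hoc revision is detectable
```

The same idea underlies preregistration in empirical research: the commitment is cheap to make and expensive to forge after the fact, which is what makes the /predict-to-/journal gap trustworthy evidence.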
// SLIDE 13 — THE COURSE ARC
Six parts. Fourteen chapters. One argument.
Wks 1–4
PART I
Foundations of Computational Skepticism

Why doubt AI — and what rigorous doubt requires. Pearl's Ladder Rungs 1 & 2 introduced in Ch 3.

Wks 5–8
PART II
Data, Models, and Validation Systems

Is your dataset what you think it is? Rung 2 deepened in Ch 6; Rung 3 introduced in Ch 8. Midterm milestone.

Wks 9–10
PART III
Agency, Delegation, and Human-AI Systems

When to trust the tool — and when to override it. Full five supervisory capacities deployed.

Wks 11–12
PART IV
Visualization, Communication, and Epistemics

Does a visualization make you understand — or feel like you do?

Wks 13–14
PART V
Ethics, Governance, and the Limits of the Technical

Who is responsible when it fails? Pearl's Ladder Rung 3 completes at Ch 13.

Wk 15
PART VI
Final Presentations — The Full Validation Pipeline

What a practitioner deploying this system needs to know that performance metrics alone would not tell them.

// SLIDE 14 — THE IRREDUCIBLY HUMAN SERIES
This is course one of six. Five more follow.
# · Course · Tier(s) · Core Capacity
1 · Computational Skepticism for AI · 4, 5 · ← YOU ARE HERE
2 · Causal Reasoning · 5 · Interventional & counterfactual reasoning
3 · AImagineering · 3, 4, 7 · Post-AI design thinking & judgment
4 · Ethical Play · 3 · Moral reasoning under uncertainty
5 · Conducting AI · 4 · The five supervisory capacities in full
6 · The Collective · 6 · Intelligence that cannot be possessed — only accomplished together

This course builds the validation and causal reasoning foundation every subsequent course depends on. It teaches you to audit what you produce in every course that follows.

// SLIDE 15 — CHECK FOR UNDERSTANDING
PAUSE.

Before this session: which of the five supervisory capacities did you think was most important in your work?

After this session: has that changed — and if so, what changed it?

// OPEN THE FRICTIONAL TOOL  ·  RUN /PREDICT  ·  YOUR ANSWER IS NOW LOCKED
// SLIDE 16 — RESOLUTION
Pattern recognition got you here. It is not enough for what comes next.
THE TOOLS
ARE SUPERHUMAN.
AT TIER 1.
▸ AUDIT WHAT AI PRODUCES ▸ FORMULATE WHAT AI CANNOT ▸ INTEGRATE WHAT AI CANNOT HOLD

This course is about what happens after the tool runs — who catches the failure, who formulates the real problem, who decides what the output means in this specific human context. That person is irreducibly human. That person is the goal.
