INFO 7375 — Computational Skepticism for AI · Nik Bear Brown · Irreducibly Human
| By the end of this session, you will be able to… | Bloom's Level |
|---|---|
| Distinguish between Tier 1 AI fluency and the irreducibly human capacities this course develops | Analyze |
| Identify at least one supervisory capacity you already exercise and one you need to develop | Apply |
| Explain why using AI tools is necessary but not sufficient for succeeding in the AI economy | Evaluate |
Learn the tools. Use them well. This is Tier 1 — and it is necessary.
Know what the machine cannot lift. That knowledge is irreducibly human.
Decide what needs lifting in the first place. No algorithm formulates that question for you.
| # | Name | Tag | AI Status |
|---|---|---|---|
| 1 | Pattern & Association | USE THE TOOL | Superhuman |
| 2 | Embodied & Sensorimotor | WHAT THE BODY KNOWS | Weak / Emerging |
| 3 | Social & Personal | NO STAKES, NO SKIN | Simulates, doesn't feel |
| 4 | Metacognitive & Supervisory | THE SUPERVISOR'S GAP | ← THIS COURSE |
| 5 | Causal & Counterfactual | WHAT WOULD HAPPEN IF | ← THIS COURSE |
| 6 | Collective & Distributed | CANNOT BE POSSESSED | Absent by definition |
| 7 | Existential & Wisdom | NO LIVED TIME | Absent |
Which tier describes most of what you were asked to demonstrate in your last degree program?
Which tier describes most of what your current job actually requires?
Every graduate in this room can use AI tools. So can everyone else. Tier 1 fluency is rapidly becoming table stakes, not differentiation.
The economy will pay a premium for judgment AI cannot replicate. This course develops the specific capacities that remain scarce even as AI scales.
Hearing the wrong note before you can prove it's wrong. Evaluating AI output for domain-grounded implausibility before any verification step.
Deciding what question is worth asking before you delegate it. The judgment prior to delegation — no algorithm formulates this for you.
Which AI task, in what order, with what context, at what level of trust. Choosing the right tool — and knowing when not to trust its output.
Supplying meaning, moral legitimacy, or accountability to AI output that the AI cannot supply itself.
Holding multiple concurrent AI outputs toward a unified goal. The capacity that operates across all others.
Rung 1 (Association): pattern recognition in data. Most dashboards, metrics, and model outputs live here. AI is superhuman. So is your Excel pivot table.
Rung 2 (Intervention): requires a causal model, not just a correlation. If I do X, what happens to Y? Introduced Ch 3, deepened Ch 6.
Rung 3 (Counterfactual): reasoning about a world that did not occur. Introduced Ch 8. The arc closes at Ch 13.
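The gap between the first two rungs can be made concrete in a few lines of Python. The sketch below is illustrative, not from the course materials; the variable names and probabilities are invented. A hidden confounder z makes x and y correlated in passively observed data (Rung 1), while setting x directly via do(x) reveals that x has no causal effect on y at all (Rung 2).

```python
# Toy simulation of Pearl's Rung 1 vs. Rung 2. All names and numbers
# here are illustrative assumptions, not course-specified values.
import random

random.seed(0)
N = 100_000

def observe():
    """Rung 1: passively observed data. A confounder z drives both x and y,
    so x and y are correlated even though x has no effect on y."""
    rows = []
    for _ in range(N):
        z = random.random() < 0.5                    # hidden common cause
        x = random.random() < (0.8 if z else 0.2)    # x depends on z
        y = random.random() < (0.9 if z else 0.1)    # y depends on z only
        rows.append((x, y))
    return rows

def intervene(x_value):
    """Rung 2: do(x = x_value). Forcing x severs the z -> x arrow, so any
    remaining x-y association reflects a real causal effect (here: none)."""
    rows = []
    for _ in range(N):
        z = random.random() < 0.5
        x = x_value                                  # set by fiat, independent of z
        y = random.random() < (0.9 if z else 0.1)
        rows.append((x, y))
    return rows

def p_y_given_x(rows, x_value):
    matched = [y for x, y in rows if x == x_value]
    return sum(matched) / len(matched)

obs = observe()
print("Rung 1  P(y | x=1)     =", round(p_y_given_x(obs, True), 3))                 # ~0.74
print("Rung 1  P(y | x=0)     =", round(p_y_given_x(obs, False), 3))                # ~0.26
print("Rung 2  P(y | do(x=1)) =", round(p_y_given_x(intervene(True), True), 3))     # ~0.50
print("Rung 2  P(y | do(x=0)) =", round(p_y_given_x(intervene(False), False), 3))   # ~0.50
```

The correlation is real; the causal reading of it is not. That gap is exactly what the reflection prompt below asks you to find in your own work.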
Name a real decision from your work where the data gave you Rung 1 — a correlation — and someone treated it as Rung 2.
What question should have been asked before acting on it?
An artifact, argument, model, or understanding the learner could not have produced before: one that required a genuine encounter with resistance.
Somewhere in the work, you make a call where being wrong has a consequence. Even a small one. If there's no wrong answer, it's not a glimmer exercise.
The anti-substitution test: if accepting AI output uncritically would complete the exercise, the exercise has failed. Your judgment must be load-bearing. Finding where that judgment sits is part of the learning.
Commit to a specific, reasoned prediction before you begin. The tool timestamps it. It cannot be revised after the outcome is observed. Predictions written after the fact are an integrity issue.
What occurred, what was unexpected, what changed in your understanding. Written while the encounter is still alive in working memory.
Synthesizes entries into a Genuine Learning Probability evidence record. The gap between /predict and /journal is where the learning lives.
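The /predict and /journal steps above amount to a commit-then-reflect contract: a timestamped prediction that cannot be revised, and a journal entry written against it. Below is a minimal sketch of that contract in Python; the field names, the content-hash scheme, and the example strings are my own illustrative assumptions, and the course's actual /predict and /journal tooling is not specified here.

```python
# Sketch of a /predict -> /journal contract. Fields and hashing are
# assumptions for illustration; the real course tool may differ.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)          # frozen: a prediction cannot be edited after creation
class PredictionRecord:
    author: str
    prediction: str              # the specific, reasoned prediction, committed up front
    reasoning: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def digest(self) -> str:
        """Content hash; any post-hoc revision would change this value."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

@dataclass(frozen=True)
class JournalEntry:
    prediction_digest: str       # binds the reflection to the untouched prediction
    what_occurred: str
    what_was_unexpected: str
    what_changed: str

# Usage: commit first, observe, then journal against the committed record.
pred = PredictionRecord(
    author="student",
    prediction="The model's accuracy will drop below 0.7 on the shifted test set.",
    reasoning="The training data predates the policy change in the domain.",
)
entry = JournalEntry(
    prediction_digest=pred.digest(),
    what_occurred="Accuracy held at 0.82.",
    what_was_unexpected="The shift affected a feature the model barely used.",
    what_changed="I now check feature importance before predicting failure modes.",
)
assert entry.prediction_digest == pred.digest()   # the gap between the two is the learning
```

The frozen dataclass and the digest do the same job the course tool's timestamp does: they make post-hoc revision detectable rather than merely discouraged.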
Why doubt AI — and what rigorous doubt requires. Pearl's Ladder Rungs 1 & 2 introduced at Ch 3.
Is your dataset what you think it is? Rung 2 deepened Ch 6. Rung 3 introduced Ch 8. Midterm milestone.
When to trust the tool — and when to override it. Full five supervisory capacities deployed.
Does a visualization make you understand — or feel like you do?
Who is responsible when it fails? Pearl's Ladder Rung 3 completes at Ch 13.
What a practitioner deploying this system needs to know that performance metrics alone would not tell them.
| # | Course | Tier(s) | Core Capacity |
|---|---|---|---|
| 1 | Computational Skepticism for AI | 4, 5 | Validation & causal foundations ← YOU ARE HERE |
| 2 | Causal Reasoning | 5 | Interventional & counterfactual reasoning |
| 3 | AImagineering | 3, 4, 7 | Post-AI design thinking & judgment |
| 4 | Ethical Play | 3 | Moral reasoning under uncertainty |
| 5 | Conducting AI | 4 | The five supervisory capacities in full |
| 6 | The Collective | 6 | Intelligence that cannot be possessed — only accomplished together |
This course builds the validation and causal reasoning foundation every subsequent course depends on. It teaches you to audit what you produce in every course that follows.
Before this session: which of the five supervisory capacities did you think was most important in your work?
After this session: has that changed — and if so, what changed it?
This course is about what happens after the tool runs — who catches the failure, who formulates the real problem, who decides what the output means in this specific human context. That person is irreducibly human. That person is the goal.