BOONDOGGLING.AI · NIK BEAR BROWN · IRREDUCIBLY HUMAN
| By the end of this, you will be able to… | Bloom's level |
|---|---|
| Distinguish vibe coding from conducting | Analyze |
| Identify the five supervisory capacities | Analyze |
| Paste the Gru prompt and run /help | Apply |
| Complete /v1 and produce a confirmed Problem Summary | Create |
Each outcome is a behavior you can check — not a feeling you might have. The last row is the deliverable. Everything else is the path.
Keystrokes. Prompts. Output. The visible layer. The part most tools — and most vibe coders — optimize for.
Decisions. Sequencing. Verification. Failure modes. This is invisible — until it breaks everything.
The AI executes exactly what it understood you to mean. That gap — between what you meant and what it understood — is where all the damage lives.
Not all inputs are equal. Deciding which ones govern the output is human work. Claude doesn't know which ones you care about.
Claude will not tell you what it doesn't know it doesn't know. Surfacing failure modes requires domain grounding the prompt doesn't have.
Handoff conditions. Without them, each step inherits every unverified assumption from the last.
Dependency sequencing. The AI doesn't know what breaks if you skip step 3. You do — if you've thought it through.
That's not prompting. That's thinking. And it's the part nobody talks about because it's invisible.
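One of those invisible pieces, dependency sequencing, can be made concrete. A short sketch using Python's standard `graphlib`; the task names and dependencies are illustrative, not from Gru:

```python
# Dependency sequencing as a topological sort: nothing runs before
# what it depends on. Task names here are hypothetical examples.
from graphlib import TopologicalSorter

deps = {
    "define schema": set(),
    "write migrations": {"define schema"},
    "build API": {"write migrations"},
    "wire frontend": {"build API"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # schema first; skip a step and everything after it inherits the gap
```

The AI will happily execute steps in whatever order you hand them. Knowing that `deps` exists, and what belongs in it, is the human work.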
1. Describe what you want
2. AI writes the code
3. You check it
4. You ship it
5. Something breaks
6. Prompt harder
7. Repeat until frustrated
1. Decide what the mission is
2. Sequence tasks by dependency
3. Assign each task to the right labor
4. Set explicit handoff conditions
5. Verify before the next step runs
6. Integrate across threads
7. Ship something that holds
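The conducting loop's core mechanic, that no step runs until the previous step's handoff condition is verified, can be sketched directly. A minimal sketch in Python; the step names, context keys, and checks are illustrative, not part of Gru:

```python
# Minimal sketch of a gated pipeline: step N+1 never runs until
# step N's handoff condition is actually verified.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]          # produces output into the shared context
    handoff_met: Callable[[dict], bool]  # explicit, checkable handoff condition

def conduct(steps: list[Step]) -> dict:
    ctx: dict = {}
    for step in steps:
        ctx.update(step.run(ctx))
        if not step.handoff_met(ctx):
            # Stop the mission here: the next step would inherit
            # an unverified assumption.
            raise RuntimeError(f"Handoff condition failed at: {step.name}")
    return ctx

# Hypothetical two-step mission: intake, then architecture.
steps = [
    Step("intake",
         run=lambda ctx: {"problem_summary": "One confirmed paragraph."},
         handoff_met=lambda ctx: bool(ctx.get("problem_summary"))),
    Step("architecture",
         run=lambda ctx: {"sdd_draft": ""},  # produced nothing usable
         handoff_met=lambda ctx: bool(ctx.get("sdd_draft"))),
]

try:
    conduct(steps)
except RuntimeError as e:
    print(e)  # the architecture gate stops the run before anything ships
```

The gate is the point: "what felt right" never passes `handoff_met`; only a check you can actually run does.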
20+ years shipping systems. Has been in the incident review when a missing decision caused a production outage.
Your job: design the mission, assign the minions, check their work, decide what the mission IS, and take responsibility for the outcome.
Claude tasks: exact prompts, expected output, handoff conditions. Human tasks: named supervisory capacities — not "review it."
Claude does what it's superhuman at. You do what is irreducibly human. The Boondoggle Report makes that distinction impossible to miss.
No step N+1 runs until step N's handoff condition is met. Not what felt right. Not what seemed done. What is actually verified.
What's one decision you made on your last AI project that you couldn't have delegated to Claude — even if you wanted to? Name it specifically.
Available at boondoggling.ai · irreducibly.xyz · GitHub. The full system prompt — not a summary, not a shortened version. The whole thing.
Claude.ai, any tier. Claude Code works too. Better: create a Project in Claude and paste Gru as the system prompt — persistent across sessions, no re-pasting.
Gru responds with the full welcome menu automatically. No configuration. No API key. No setup script. The prompt is the tool.
The interface is the channel. The prompt is the tool. These are not the same thing.
| Command | What it does |
|---|---|
| /v1 /intake | Problem intake — start here, always |
| /v2 → /v4 | Architecture principles, user flows, needs |
| /s1 → /s4 | Components, integrations, data, edge cases |
| /g1 /fulldoc | Compile the full SDD draft |
| /claude /boondoggle | Generate the Boondoggle Score — Claude tasks + human tasks, sequenced by dependency |
Two modes. Default: interactive — Gru asks before building, pushes back on weak input, holds phase gates. Append /silent for immediate output, no friction.
For your first session: stay interactive. The pushback is the value.
"In one sentence — not a paragraph." Gru rejects anything that could describe ten different systems.
One specific user. Their current workflow. What breaks for them today. Not a persona — a person.
The constraints that govern every architecture decision downstream. Gru will not skip them even if you try.
A locked format. One paragraph. The single biggest unresolved question named. Gru waits for your confirmation before /v2.
Gru asks one question at a time. The friction is intentional. Pasting a wall of context is vibe coding.
Open a real project — or one you abandoned. Can you fill in the Problem Summary template right now? Where does it break down?
- Hearing the wrong note before verification. Sensing domain-grounded implausibility before running a single test.
- Deciding what the mission is before Claude sees it. Not how to prompt — what to hand the minion and why.
- Choosing which Claude task, in what order, with what trust level, at this specific step in the build.
- Supplying meaning, moral legitimacy, or accountability to Claude's output that Claude cannot supply itself.
- Holding all four simultaneously toward a unified goal. This is what conductors do. This is the whole job.
The tools are excellent. They will execute exactly what they understood you to mean. Your job is to be precise about what you mean — in the right sequence — and to verify before the next step begins.
That's not a consolation prize. That's the whole job.