BOONDOGGLE SCORE

System: The Cobra Effect — Consequentialism template for Ethical Play
SDD Completion: Engine built and validated. Generator validated. Two hidden feedbacks implemented. Canvas package needed for PNGs.
Score generated: 2026-04-01
Team Claude fluency: Level III — Claude Code
Deployment target: Vercel (Zebonastic) → public/games/cobra-effect/
Context: The deploy pipeline is now proven. Evolution of Trust and Veil of Ignorance are live. This score is tighter than previous ones — the infrastructure works, the pattern is known, the risks are specific to this engine.
Step 1 · Phase F · Claude Task

LABOR: Claude Code
CONTEXT REQUIRED: glimmer-consequentialism/generate.js, glimmer-consequentialism/index.html, glimmer-consequentialism/example-game.json, Zebonastic project root

PROMPT:

Deploy the Cobra Effect game to the Zebonastic site.

1. Install the canvas package if not already present:
   npm list canvas || npm install canvas

2. Copy the three game files into the Zebonastic project:
   - glimmer-consequentialism/index.html → keep as source engine
   - glimmer-consequentialism/generate.js → the generator
   - glimmer-consequentialism/example-game.json → use as game.json

3. Run the generator with --validate first:
   node glimmer-consequentialism/generate.js \
     glimmer-consequentialism \
     glimmer-consequentialism/index.html \
     --validate

4. If validation passes, deploy:
   node glimmer-consequentialism/generate.js \
     glimmer-consequentialism \
     glimmer-consequentialism/index.html \
     --nextjs .

5. Verify output:
   ls -la public/games/cobra-effect/
   ls -la public/games/cobra-effect/assets/

   Expected:
   - index.html (with title "The Cobra Effect", description, keywords meta tags)
   - user-config.js (GAME_CONFIG from game.json, no _mechanic_argument)
   - assets/ containing a.png, b.png, c.png, d.png (placeholder PNGs)

6. Confirm _mechanic_argument survives only as a comment in user-config.js, not as a GAME_CONFIG key:
   grep "_mechanic_argument" public/games/cobra-effect/user-config.js

   Expected: only the comment line, not an object key.

7. Commit and push:
   git add public/games/cobra-effect/
   git commit -m "feat: add cobra-effect starter template (consequentialism, Goodhart's Law)"
   git push origin main

Report: validation output, directory listing, grep result, commit hash.

EXPECTED OUTPUT: public/games/cobra-effect/ with index.html, user-config.js, and 4 placeholder PNGs. Committed to main.

HANDOFF CONDITION: Directory exists. All four PNGs generated. _mechanic_argument appears only in the comment, not as a GAME_CONFIG key. Commit pushed.

DEPENDENCY: None — deploy infrastructure is proven.
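
The handoff condition above can be expressed as a small check. This is an illustrative sketch, not part of the generator: the function name, the file list, and the key-detection regex are assumptions about what "present as a GAME_CONFIG key" means in user-config.js.

```javascript
// Sketch of the Step 1 handoff check: all four placeholder PNGs exist,
// and _mechanic_argument appears only in a comment, never as an object key.
function checkHandoff(assetFiles, userConfigSource) {
  const problems = [];
  for (const name of ["a.png", "b.png", "c.png", "d.png"]) {
    if (!assetFiles.includes(name)) problems.push(`missing ${name}`);
  }
  // A key appears (optionally quoted) before a colon; a comment line starts with //.
  const keyPattern = /^\s*["']?_mechanic_argument["']?\s*:/m;
  if (keyPattern.test(userConfigSource)) {
    problems.push("_mechanic_argument present as a GAME_CONFIG key");
  }
  return problems; // empty array means the handoff condition holds
}
```

Feed it the `ls` output and the file contents from steps 5 and 6; an empty result clears the handoff.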

Step 2 · Phase F · Human Task

LABOR: Human
SUPERVISORY CAPACITY: [PA] — Plausibility Auditing: after Vercel deploys (1–2 minutes), verify the card and game load correctly.

ACTION:

Check 1 — Card on /games:
The Cobra Effect card appears alongside Evolution of Trust and Veil of Ignorance, tagged consequentialism and ethical play.

Check 2 — Game loads at /games/cobra-effect:
The page renders with no console errors, and the assignment phase shows the four node boxes.

Check 3 — One full play-through:
Play pure-D (all five tokens in Satisfaction Score every turn) and confirm the high_metric_depleted reveal state appears, with the mechanic argument rendered as engine text.

FAIL TRIAGE:
- Card missing or untagged → confirm the commit is on main and the Vercel build finished.
- Game fails to load → check the browser console; a malformed user-config.js is the likely cause.
- Text labels instead of image boxes → canvas failed on the deploy machine; see "The One Thing That Will Fail and How to Fix It".
- Wrong reveal state → the hidden feedbacks need review; do not proceed to Step 3.

HANDOFF CONDITION: Card visible with correct tags. Game loads. Pure-D strategy produces high_metric_depleted reveal state. Mechanic argument is engine text.

DEPENDENCY: Step 1.

Step 3 · Phase H · Human Task

LABOR: Human
SUPERVISORY CAPACITY: [IJ] — Interpretive Judgment: does the mechanic produce the glimmer, or does it feel like a resource management puzzle with a surprise ending?

ACTION:

Play the game three ways and assess:

Run 1 — Pure D optimization (all 5 tokens in Satisfaction Score every turn):
Does the collapse feel discovered or telegraphed? The bar color change on Staff Morale is intentional — a visual signal without an explanation. Does the player feel implicated when Patient Throughput crashes, or does it feel like the game "punished" them?

Run 2 — Balanced (distribute tokens across all four nodes):
Does this produce high_metric_intact? Does the reveal text for that state feel honest — "you found a balance most administrators do not find" — or does it feel like a prize?

Run 3 — Protect A heavily (3–4 tokens in Staff Morale every turn):
Does this produce low_metric_intact? Does the reveal — "you protected the system at the cost of the metric. Whether that was the right trade is not for this screen to say." — actually feel like it's not saying?

The test: after reading the reveal, does the player feel like they received a mirror or a verdict? If it feels like a verdict, the state text needs revision.
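
The dynamic the three runs exercise can be sketched as a toy model. The node names follow this score (A = Staff Morale, D = Satisfaction Score), but the numbers, thresholds, and update rule are invented for illustration; the engine's two actual hidden feedbacks are not reproduced here.

```javascript
// Toy Goodhart dynamic: tokens poured into the measured node (D) raise
// the metric while quietly draining the supporting node (A). If morale
// hits zero, the system collapses mid-run.
function playRun(turns) {
  let morale = 10; // hidden resource behind the metric (node A)
  let metric = 0;  // the optimized Satisfaction Score (node D)
  for (const t of turns) {
    metric += t.D;        // investing in D raises the metric...
    morale += t.A - t.D;  // ...while silently costing morale
    if (morale <= 0) return "high_metric_depleted"; // the collapse
  }
  return metric >= 15 ? "high_metric_intact" : "low_metric_intact";
}
```

Even this crude version reproduces the three reveal states: pure-D collapses, balanced play keeps both metric and morale up, and protecting A preserves the system at the metric's expense.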

HANDOFF CONDITION: At least two of the three reveal states produce the glimmer. If any state text reads as a verdict, note the specific text that needs revision before student use.

DEPENDENCY: Step 2.

Step 4 · Phase R · Human Task

LABOR: Human
SUPERVISORY CAPACITY: [EI] — Executive Integration: confirm coherence across the full games system.

ACTION:

  1. /games — Evolution of Trust, Veil of Ignorance, and The Cobra Effect all appear. Tag filter "ethical play" returns all three. Tag filter "consequentialism" returns only Cobra Effect.
  2. /games/cobra-effect — game plays correctly on fresh browser (cleared cache).
  3. All four placeholder PNGs (nodes A–D) are visible in the assignment phase as neutral gray squares with node key labels.
  4. Course taxonomy check: Week 2 of the Ethical Play syllabus covers "Consequentialism as Mechanic — the VE causal chain display." The keywords in the deployed game (consequentialism, goodharts law, metrics, measurement, ethical play) match what a student in Week 2 would search for.
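
The tag-filter expectation in item 1 can be written as a tiny check. The game list and tag sets below are assumptions drawn from this score, not the site's actual data:

```javascript
// Assumed /games catalog after this deploy.
const games = [
  { title: "Evolution of Trust", tags: ["ethical play", "game theory"] },
  { title: "Veil of Ignorance", tags: ["ethical play", "justice"] },
  { title: "The Cobra Effect", tags: ["ethical play", "consequentialism"] },
];

// How the /games tag filter is expected to behave.
const byTag = (tag) =>
  games.filter((g) => g.tags.includes(tag)).map((g) => g.title);
```

"ethical play" should return all three titles; "consequentialism" only The Cobra Effect.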

HANDOFF CONDITION: All three games visible and tagged correctly. Cobra Effect keywords match Week 2 course vocabulary. Released.

DEPENDENCY: Step 3.

Score Summary

               Count   %
Total steps    4       100%
Claude tasks   1       25%
Human tasks    3       75%

The score is this short because the pattern is proven. The generator is validated, the pipeline works, the risks are known. Step 1 is mechanical. Steps 2–4 are irreducibly human: browser verification, ethical judgment, and integration check.

Critical Path

Step 1 → 2 → 3 → 4

Linear. Step 3 is the only gate that could send work backward: if the reveal text reads as a verdict, the state strings in game.json need revision before the game reaches students. That revision is five minutes of editing; catching it at Step 3 rather than after student use is the value of the gate.

The One Thing That Will Fail and How to Fix It

Canvas package on the Zebonastic deploy machine.

If npm install canvas fails (native dependency compilation issues are common), the placeholder PNGs won't generate. The game will still work — placeholder text appears inline. But the assignment phase will show text labels instead of image boxes.

Fix: generate the PNGs locally first, commit them to the repo alongside the game files, and add "placeholder": false to each node in game.json to tell the generator to skip generation. The PNGs travel with the game.
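
The game.json edit in that fix can be scripted. A hedged sketch: the `{ nodes: { a: {...}, ... } }` layout is an assumption about game.json's schema, not taken from the real file.

```javascript
// Mark every node so the generator skips placeholder generation;
// the locally generated, committed PNGs then travel with the game.
function markPlaceholdersDone(config) {
  for (const node of Object.values(config.nodes)) {
    node.placeholder = false; // PNG already exists in the repo
  }
  return config;
}

// Usage (paths as in this score):
// const fs = require("fs");
// const path = "glimmer-consequentialism/game.json";
// const config = JSON.parse(fs.readFileSync(path, "utf8"));
// fs.writeFileSync(path, JSON.stringify(markPlaceholdersDone(config), null, 2));
```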