TAB 4 — THE WORKFLOW

Claude as your copilot.

How a non-technical operator used Claude Code to build production infrastructure.
This tab isn't about the agent board. It's about the workflow that built it: how a non-technical person used AI tools to build production infrastructure that would normally require a team of engineers. The patterns here apply to any technical build, not just this one.

The setup: three Claudes, three jobs

This build used three separate Claude instances, each with a different role:

| Instance | Where it runs | What it does | How it's authenticated |
|---|---|---|---|
| Claude (chat) | claude.ai in a browser or the Claude app | Strategic planning, content creation, decision-making, long-form writing | Your Claude subscription |
| Claude Code (server) | Terminal on your cloud server | Executes commands, reads files, troubleshoots errors, writes config | Claude subscription (OAuth) or API key |
| Claude Code (local) | Terminal on your Mac/PC | Local tasks like file transfers, testing configs, driving setup | Same authentication |

Which model to use for each

Model choice matters more than you'd think. Use the best available model for strategic work and the most capable execution model for Claude Code:

  • Claude (chat): Claude Opus (or the newest, most capable model). Strategic planning, architecture decisions, and writing benefit from the deepest reasoning.
  • Claude Code (server and local): Claude Opus if available, otherwise the best Sonnet. Claude Code is doing real infrastructure work — you want the model most likely to catch its own mistakes.

The critical insight

These three instances don't share context automatically. Claude in the browser doesn't know what Claude Code on the VPS just did. You are the bridge. You carry context between them by copying output into the chat for review, copying decisions into instructions for Claude Code, and maintaining a CLAUDE.md file on the VPS for persistent context.

The CLAUDE.md pattern

Every project that uses Claude Code should have a CLAUDE.md file at the project root. Claude Code reads this automatically on startup. Think of it as the brief you'd give a new contractor before they start work.

What goes in a good CLAUDE.md

CLAUDE.md

```markdown
# CLAUDE.md — [Project Name]

## What this project is
One paragraph. What are we building, why, who's it for.

## Current state
What's already done. What's next. Where we left off last session.
Update this after every significant session.

## Hard rules — never violate
The things that would cause real damage if broken.
- Never touch SSH config without explicit approval
- Never spend money
- Never push to git without showing diffs
- [Project-specific rules]

## Working style
How you want Claude Code to behave.
- Show commands before running them
- Group related changes into single approvals
- When something fails, try the obvious fix once before asking
- Be direct, no preamble

## Context
Background that helps Claude Code make better decisions.

## Don't do
Explicit list of things Claude Code should refuse even if asked.
```

Why this matters

Without CLAUDE.md, every Claude Code session starts cold. You spend the first 10 minutes re-explaining the project. With CLAUDE.md, Claude Code reads the brief automatically and picks up where you left off.

Update the “Current state” section after every session. This is the most important maintenance task. If you don't update it, the next session's Claude Code will have stale context.

The approval-gate pattern

This is the core workflow that makes Claude Code safe for non-technical users:

  1. You describe what you want in plain English
  2. Claude Code proposes a plan — specific commands, config changes, files to create
  3. You review the plan — does it make sense? Does anything look dangerous?
  4. You approve or push back — “yes, run it” or “wait, explain that third command”
  5. Claude Code executes and shows you the output
  6. You verify — did it work? Is the output what you expected?

When to approve immediately

  • Read-only commands (ls, cat, grep, systemctl status)
  • Package installations from official repositories
  • Creating new files in project directories
  • Config changes Claude Code has explained and you understand

When to pause and verify

  • Anything that touches SSH, firewall, or network config
  • Anything that restarts a critical service
  • Anything that modifies permissions on system files
  • Anything involving secrets, tokens, or API keys
  • Any command with rm, delete, or drop

When to refuse and do it yourself

  • SSH lockdown (sshd_config changes)
  • Pasting secrets into .env files
  • Financial transactions or account creation
  • Anything you don't understand after asking for an explanation

Prompt examples: the approval flow in practice

A well-framed prompt that invites approvals naturally:

▶ Claude Code Prompt
I need to set up automatic security updates on this Ubuntu server. Please install and configure unattended-upgrades with sensible defaults. Show me the config before applying it, and walk me through what each setting does in one sentence so I can approve each one.
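On Ubuntu, the session that follows a prompt like this typically involves the standard unattended-upgrades package. A hedged sketch of the commands Claude Code is likely to propose (config paths are the Ubuntu defaults and may vary by release):

```shell
# Install the standard Ubuntu auto-update package
sudo apt-get update
sudo apt-get install -y unattended-upgrades

# Enable the periodic update/upgrade jobs non-interactively
sudo dpkg-reconfigure -f noninteractive unattended-upgrades

# These are the configs Claude Code should show you before you approve:
cat /etc/apt/apt.conf.d/20auto-upgrades        # how often updates run
cat /etc/apt/apt.conf.d/50unattended-upgrades  # which package origins auto-upgrade
```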

A prompt that pushes back on something risky:

▶ Claude Code Prompt
Wait — before you run that `ufw enable` command, confirm you've already added the SSH allow rule. The order matters: if UFW enables before port 22 is allowed, I lose my SSH session. Show me both commands in the order they'll run, then I'll approve.
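The safe ordering that prompt is insisting on looks like this, assuming SSH on the default port 22:

```shell
# 1. Allow SSH FIRST, while the firewall is still inactive
sudo ufw allow 22/tcp        # or equivalently: sudo ufw allow OpenSSH

# 2. Only then enable the firewall
sudo ufw enable

# 3. Verify the allow rule actually made it in
sudo ufw status verbose
```

If step 2 ran before step 1, the firewall would come up with no SSH rule and drop your session — exactly the failure the prompt is guarding against.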

A prompt that asks for explanation rather than blind approval:

▶ Claude Code Prompt
You're proposing to modify `/etc/fstab`. Before I approve, explain in one sentence what this change does, what happens if it's wrong, and what the recovery path is if it breaks.
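For `/etc/fstab` specifically, a good follow-up is to have Claude Code validate the file before any reboot. A sketch using standard util-linux tools:

```shell
# Back up before editing
sudo cp /etc/fstab /etc/fstab.bak

# Validate every fstab entry without rebooting
sudo findmnt --verify

# Attempt all mounts now, so errors surface here rather than at boot
sudo mount -a
```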

The two-terminal pattern

For security-sensitive operations (especially SSH and firewall changes), always have two terminals open:

Terminal 1 (safety net): An existing SSH session that's already connected. Don't touch it. It's your escape route.

Terminal 2 (working session): Where you make changes. If you accidentally lock yourself out, Terminal 1 is still connected and can fix it.

The moment you verify the operation worked from a third fresh connection, you can close the safety net.
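The "third fresh connection" check can be a single command from your local machine (user and hostname here are placeholders):

```shell
# Fails fast with an error instead of hanging if the change locked you out
ssh -o ConnectTimeout=5 you@your-server 'echo "SSH still works"'
```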

The “trust but verify” discipline

The single most important lesson from this build: the playbook said X, but reality said Y. This happened repeatedly:

| What the playbook said | What reality said |
|---|---|
| Both frameworks use SOUL.md | OpenClaw uses workspace/ with multiple template files |
| OGP federation protocol exists | It doesn't exist |
| Node.js 20 is sufficient | OpenClaw requires Node 24 |
| Hermes uses Anthropic API by default | Hermes defaults to OpenRouter base_url |
| Location: Falkenstein | Falkenstein unavailable, Helsinki instead |
| Cost: ~€6.50/month | Actually ~€11.10/month |

The verification sequence:

  1. Check the actual filesystem (ls, cat, find)
  2. Check the actual configuration (read the config file, don't assume)
  3. Check the actual documentation (fetch the current docs, not cached versions)
  4. Test the actual behaviour (don't assume it works because it should work)
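As concrete commands, the four checks might look like the following — the paths, URL, and service name are illustrative placeholders, not values from this build:

```shell
# 1. Check the actual filesystem
ls -la ~/project/
find ~/project -maxdepth 2 -name "*.md"

# 2. Check the actual configuration — read the file, don't trust memory of it
cat ~/project/config.yaml

# 3. Check the actual documentation — fetch fresh, don't rely on cached knowledge
curl -fsSL https://example.com/docs/install

# 4. Test the actual behaviour
systemctl status myservice --no-pager
curl -fsS http://localhost:8080/health
```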

How to recover when Claude Code gets something wrong

It will happen. Here's how to handle it:

Step 1: Don't panic, read the error

Most errors tell you exactly what's wrong. Copy the full error message and paste it back to Claude Code — it can usually diagnose from the error alone.

Step 2: Check if the damage is reversible

  • Config file change? Claude Code can revert it.
  • Package installed? Can be uninstalled.
  • Service restarted? Check systemctl status.
  • File deleted? This is why you don't let Claude Code rm -rf things.

Step 3: If Claude Code loops on the same error

The most common failure mode: Claude Code tries a fix, it fails, tries the same fix again. When you see this pattern:

  1. Stop it. Say “Stop. That approach isn't working. Let's think about this differently.”
  2. Give it more context. Paste the relevant documentation.
  3. Escalate to Claude (chat). Copy the error into your browser Claude session for strategic guidance.

Step 4: When to start fresh

If a component is thoroughly broken after 15+ minutes of debugging: stop the service, back up the config, delete the component's directory, reinstall from scratch. Reinstallation is deterministic; debugging is not.
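The stop → back up → delete → reinstall sequence, sketched with a placeholder service and directory name:

```shell
# Stop the broken component
sudo systemctl stop myagent

# Preserve anything you might need later: config, env files, data
mkdir -p ~/backups
cp -r ~/myagent/config ~/backups/myagent-config-$(date +%F)

# Remove the component cleanly...
rm -rf ~/myagent

# ...then re-run the install steps from the playbook from the top
```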

Recovery prompts

When Claude Code is looping:

▶ Claude Code Prompt
Stop. You've tried this approach three times and it's not working. Let's reset. Read the actual error message again and tell me what it says literally. Then tell me what you think the root cause is, and propose a completely different approach before making any more changes.

When you need to roll back:

▶ Claude Code Prompt
The last change you made broke the service. Please: (1) show me the last 3 file modifications you made in this session; (2) revert them one at a time, verifying the service after each revert; (3) once the service is healthy again, stop and wait for me before trying a different fix.

When you need to escalate context:

▶ Claude Code Prompt
I'm going to paste the official documentation for this component. Before you try anything else, read it carefully and tell me what the docs say about my specific error. Don't propose a fix until you've confirmed the docs match my situation.

The section-by-section execution pattern

For any playbook-style project, this is the rhythm that works:

  1. Read the section. Tell Claude Code: “Read section X. Confirm you understand what it involves.”
  2. Review Claude Code's summary. Does it match your understanding?
  3. Execute step by step. “Start executing. Show me each command before running.”
  4. Verify after each sub-step. Did the command produce the expected output?
  5. Checkpoint at section end. Update CLAUDE.md's “Current state.” Commit to git.
  6. Take a break. Seriously. The quality of your approvals degrades after 90 minutes.
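The checkpoint in step 5 is two small actions. As shell, something like this (directory name and commit message are illustrative):

```shell
# After editing the "Current state" section of CLAUDE.md:
cd ~/project
git add CLAUDE.md
git commit -m "Checkpoint: section X complete — one-line summary of state"
```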

How long things actually take

Honest timings from the real build:

| Phase | Original estimate | Actual time | What ate the time |
|---|---|---|---|
| Phase 1: Server + SSH lockdown | 15 min | 45 min | Explaining every concept; safety-net terminal pattern |
| Phase 2: Hardening + runtime deps | 30 min | 90 min | openssh-server debconf stall; Node 20→24; Docker APT repo |
| Phase 3: Cloudflare Tunnel | 20 min | 40 min | Browser OAuth flow; DNS propagation wait |
| Phase 4: Agent install | 60 min | 150 min | Config troubleshooting; shared.env loading; framework quirks |
| Phase 5: Identity deployment | 60 min | 180 min | Template system discovery; SOUL.md content writing |
| Phase 6: Channels | 15 min | 60 min | Bot-to-bot visibility research; thread disabling |
| Phase 7: Backup + cleanup | 10 min | 20 min | Mostly straightforward |

Total original estimate: ~3.5 hours. Actual time: ~10 hours across two days. The guide you're reading now has all the corrections baked in.

The copilot conversation model

The most powerful pattern from this build was the three-way conversation model between the human, the strategic Claude (chat), and the tactical Claude Code (terminal).

When to use Claude (chat)

  • Making architectural decisions
  • Writing content (SOUL.md, identity files, this guide)
  • Strategic planning
  • Reviewing what Claude Code produced before deploying
  • Troubleshooting that requires the full build context

When to use Claude Code

  • Executing commands on the server
  • Reading and writing config files
  • Installing packages and frameworks
  • Debugging specific errors with access to logs and filesystem

The handoff pattern

Chat → Code: “I've decided we should [X]. Here's what Claude Code needs to do: [specific instructions].”

Code → Chat: “Claude Code produced this output: [paste]. Does this look right?”

The human is the bridge, the context carrier, the decision maker. Claude (chat) is the strategist. Claude Code is the executor. The build quality comes from the quality of the handoffs.

Honest lessons from a non-technical builder

1. You don't need to understand every command. You need to understand every decision.

When Claude Code says “run sudo DEBIAN_FRONTEND=noninteractive apt upgrade -y”, you don't need to parse the dpkg options. You need to understand: “this updates the server and keeps our current config files.” The command is Claude Code's job. The decision is yours.
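For the curious: the "keeps our current config files" behaviour usually comes from a dpkg option in the fuller form of that command. A hedged example of what such an invocation commonly looks like — not necessarily this build's exact command:

```shell
# Non-interactive upgrade; --force-confold keeps your existing config
# files when a package ships a new default version of one
sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y \
  -o Dpkg::Options::="--force-confold"
```

Understanding that one option *is* the decision — everything else is mechanics.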

2. The approval gate is your most powerful tool.

Every time Claude Code waits for approval, that's your moment to think. The 5 seconds you spend reading the command is worth more than any post-hoc debugging.

3. Screenshots and copy-paste are your bridge between worlds.

When something goes wrong, copy the exact error message between Claude instances. The quality of the bridge determines the quality of the build.

4. “I don't understand” is the most productive thing you can say.

Never approve something you don't understand. Say “explain what that command does in one sentence.” The build that takes an extra hour because you asked questions is infinitely better than the one that takes three hours to recover from a mistake.

5. Take breaks. Your approval quality degrades.

After 90 minutes of continuous approvals, you start glazing over. Set a timer. Walk away. Come back with fresh eyes.

6. The playbook is a starting point, not scripture.

The playbook went through four revisions during the build. Every revision was triggered by reality contradicting the plan. This is normal. The value of the playbook is that it gives you a structure to deviate from intentionally.

The meta-lesson

A non-technical person built production AI infrastructure — hardened VPS, Cloudflare Zero Trust, personalised AI agents, multi-channel communication — in two days using Claude as a copilot.

Not because the tools made it easy. It wasn't easy. There were 12 incidents, multiple wrong turns, and at least three moments where the right answer was “the playbook is wrong, let's figure out what the system actually needs.”

It worked because the copilot pattern — strategic Claude for decisions, Claude Code for execution, human for approval and bridging — creates a workflow where you don't need to know how to do everything. You need to know what you want, why you want it, and when something doesn't look right.

That last part — the instinct for “this doesn't look right” — is something you develop faster than you'd think. By the end of day two, you're catching things before Claude does. That's the real skill you're building. Not Linux commands. Not YAML syntax. The ability to read a situation and ask the right question.

That's what a copilot enables. Not automation. Collaboration.