
# Claude Code — Shared Infrastructure Context

Created 2026-03-11
Updated 2026-03-11
Tags: infrastructure, claude-code, shared-context

This file is read by all Claude Code instances (Patrick’s Mac, VPS patrick, VPS asia, Asia’s machines) at the start of infrastructure-related sessions. It provides a single source of truth for the current state of server infrastructure, recent changes, and known issues.

Protocol: After making infrastructure changes from any machine, update this file, commit, and push. Other instances will pick it up on their next git pull.


| Alias | IP (public) | Tailscale IP | Role | SSH User |
| --- | --- | --- | --- | --- |
| baseworks-agents | 46.224.129.16 | 100.104.239.92 | Automation VPS (Claude Code, backups, scripts) | patrick / asia |
| baseworks-n8n | 167.235.236.99 | 100.109.87.70 | n8n + PostgreSQL | patrick |
| baseworks-web | 5.180.253.171 | | WordPress hosting (baseworks.com, practice, staging, crm) | bwsite_primo_82 (passwordless sudo) |
| synology (NAS) | | 100.106.46.99 | Synology DS920+ NAS (4x 10TB IronWolf Pro, RAID 5) | “Patrick Oancia” |
Services on baseworks-agents:

| Service | Type | Port | Restart Policy | Notes |
| --- | --- | --- | --- | --- |
| FileBrowser | standalone docker run | 127.0.0.1:8090 | unless-stopped | Volumes: /srv/baseworks/uploads:/srv, filebrowser-db:/database, branding at /srv/baseworks/filebrowser-branding:/branding:ro |
| OpenClaw | docker-compose | | | Separate project, may be stopped |
Services on baseworks-n8n:

| Service | Type | Config Location | Notes |
| --- | --- | --- | --- |
| n8n | docker-compose | /opt/baseworks-config/docker-compose.yml | .env file at same path (chmod 600) |
| PostgreSQL 16 | docker-compose | same as above | Used by n8n |

All backups must go to Backblaze B2 off-site storage. Do not leave backups only on the server’s local disk (e.g. /tmp/).

Each backup script uses its own per-bucket B2 application key via environment variables. The global b2 CLI auth on baseworks-agents is only valid for cbbaseworksite. Do not assume one key works for all buckets.
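As a sketch, the per-bucket auth pattern inside a backup script presumably looks like this. `B2_APPLICATION_KEY_ID` / `B2_APPLICATION_KEY` are the b2 CLI's standard auth environment variables; the key values below are fake placeholders and the bucket is used only as an example.

```sh
# Illustrative pattern for per-bucket B2 auth inside a backup script.
# The key values are placeholders, not real credentials.
export B2_APPLICATION_KEY_ID="005PLACEHOLDERKEYID"
export B2_APPLICATION_KEY="K005PLACEHOLDERSECRET"
# With those set, uploads run under this bucket's own key, e.g.:
#   b2 file upload cbbpracticesite /tmp/practice-db.sql weekly-backups/practice-db.sql
```

This keeps each script's blast radius limited to its own bucket even if one key leaks.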

```sh
# Ad-hoc backup before a risky DB operation on baseworks.com.
# Capture one timestamp up front so the export, copy, and upload all
# reference the same filename:
TS=$(date +%Y%m%d-%H%M%S)
ssh baseworks-web "cd /var/www/baseworks.com && wp db export /tmp/adhoc-$TS.sql 2>/dev/null"
scp "baseworks-web:/tmp/adhoc-$TS.sql" /tmp/
b2 file upload cbbaseworksite "/tmp/adhoc-$TS.sql" "baseworks-web/adhoc-backups/adhoc-$TS.sql"

# Restore from B2:
b2 ls b2://cbbaseworksite/baseworks-web/weekly-backups/
b2 file download "b2://cbbaseworksite/baseworks-web/weekly-backups/<filename>.sql" /tmp/restore.sql
```
| Site | Bucket |
| --- | --- |
| baseworks.com | cbbaseworksite |
| staging.baseworks.com | xCloud1 |
| practice.baseworks.com | cbbpracticesite |
| crm.baseworks.com | cbcrmbaseworks |
| n8n + agents VPS configs | baseworks-n8n-server |

Automated backups (weekly for sites, daily for n8n) already run via cron on the agents VPS from ~/scripts/. All buckets have 30-day retention.

| Site | Script | Bucket | Schedule | Typical Size |
| --- | --- | --- | --- | --- |
| baseworks.com | weekly-backup-baseworks.sh | cbbaseworksite | Sun 4:00 AM | DB ~375MB, Files ~141MB |
| staging | weekly-backup-staging.sh | xCloud1 | Sun 4:15 AM | DB ~254MB, Files ~141MB |
| practice | weekly-backup-practice.sh | cbbpracticesite | Sun 4:30 AM | DB ~490MB, Files ~181MB |
| crm | weekly-backup-crm.sh | cbcrmbaseworks | Sun 4:45 AM | DB ~17MB, Files ~16MB |
| n8n | daily-backup-n8n.sh | baseworks-n8n-server | Daily 3:30 AM | ~14MB (6 files) |
| agents config | weekly-backup-agents-config.sh | (in script) | Sun 5:00 AM | ~30KB |

Logs: ~/scripts/weekly-backup.log (all weekly sites share one log), ~/scripts/daily-backup-n8n.log

Script: ~/scripts/daily-infra-updates.sh — runs daily at 3:45 AM ET.

Updates: system packages (both VPSes), Docker images (n8n, PostgreSQL, FileBrowser), Claude Code CLI, and the b2 CLI. Checks backup health and tests automation pipeline health (Claude CLI auth, GitHub token, n8n SSH bridge, vault-capture.sh). Posts a report to Patrick’s inbox AND sends a summary to #agent-alerts in Slack. Any failures in the automation checks appear in the “Needs your attention” section of both the inbox report and the Slack alert.

Log: ~/scripts/daily-infra-updates.log

Script: ~/scripts/vault-audit-slack.sh — runs daily at 4:15 AM ET (after infra updates).

Runs vault-audit.py --json on the VPS vault clone and sends a formatted summary to #agent-alerts in Slack. Reports: vault-index.db health, broken links, orphan files, qmd embedding status, sync status, CLAUDE.md health, and growth trends. Green/yellow/red status based on severity.

Log: ~/scripts/vault-audit-slack.log

Pulls BuddyBoss forums, topics, replies, groups, activity, and direct messages from practice.baseworks.com into the vault as markdown. Then runs translate-community-content.py to append English translations of any non-English content.

Scripts run from the git-tracked repo, NOT from ~/scripts/. This is intentional and different from other cron scripts — it prevents drift between the running copy and the committed copy.

| Path | Purpose |
| --- | --- |
| /srv/baseworks/knowledge-base/scripts/forum-content-sync.sh | Cron entry point — called by `*/15 * * * *` |
| /srv/baseworks/knowledge-base/scripts/forum-content-sync.py | Main sync script (SSH → wp eval via bwsite_primo_82@5.180.253.171) |
| /srv/baseworks/knowledge-base/scripts/translate-community-content.py | Post-sync translation of non-English messages |

Log: ~/scripts/forum-content-sync.log (log stays in ~/scripts/ to keep it out of the git-tracked vault)
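Putting the schedule, entry point, and log path together, the crontab entry presumably looks something like this (the log redirection is an assumption based on the log path noted above):

```
*/15 * * * * /srv/baseworks/knowledge-base/scripts/forum-content-sync.sh >> /home/patrick/scripts/forum-content-sync.log 2>&1
```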

Why repo-hosted, not ~/scripts/: On 2026-04-09 the ~/scripts/ running copies drifted from the git-tracked copies (DM sync, SQL-based activity sync, and translation were added to ~/scripts/ but never committed). The drift went unnoticed until 2026-04-13. The fix (committed 2026-04-13) moves the canonical location to the git repo so git pull via vault-sync.sh automatically keeps the running script fresh. Edits to these scripts must go through git. Do not create copies in ~/scripts/ — if you see them there, they’re stale backups (.bak-*) and should not be edited.

Helper: ~/scripts/slack-notify.sh — sends a message to #agent-alerts via Slack Incoming Webhook. Webhook URL stored at: ~/.config/baseworks/slack-webhook-agent-alerts (chmod 600). Used by: daily-infra-updates.sh, vault-audit-slack.sh, and available for future scripts.

Usage: `~/scripts/slack-notify.sh "message text"` or `~/scripts/slack-notify.sh --file /path/to/file`
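The helper's internals aren't documented here; the sketch below shows the general shape of a Slack Incoming Webhook call. The `build_payload` function and its naive escaping are illustrative, not the real script.

```sh
# Hypothetical sketch only, NOT the real slack-notify.sh. Builds a minimal
# Slack Incoming Webhook payload with naive JSON escaping of \ and ".
build_payload() {
  esc=$(printf '%s' "$1" | sed 's/\\/\\\\/g; s/"/\\"/g')
  printf '{"text": "%s"}' "$esc"
}
# The send step would be roughly:
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d "$(build_payload "$1")" "$(cat ~/.config/baseworks/slack-webhook-agent-alerts)"
build_payload 'backup finished "OK"'
```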

SiteConfigUpstream
files.baseworks.com/etc/nginx/sites-enabled/files.baseworks.com127.0.0.1:8090 (FileBrowser)
claw.baseworks.com/etc/nginx/sites-enabled/claw.baseworks.com(OpenClaw)

SSL: Cloudflare origin certs at /etc/ssl/cloudflare/baseworks.com.{pem,key}

Cloudflare blocks the VPS from reaching claude.ai’s OAuth endpoint (bot protection on Anthropic’s side — we cannot fix this). claude login and claude setup-token always time out on the VPS.

Solution: A long-lived token (1 year, generated via claude setup-token on Patrick’s Mac) lives in a single source-of-truth file on baseworks-agents:

`~/.config/baseworks/claude-token` (chmod 600, patrick user), containing: `CLAUDE_CODE_OAUTH_TOKEN="sk-ant-oat01-..."`

Both ~/.bashrc AND ~/scripts/daily-infra-updates.sh source this file, so:

  • SSH-invoked paths (n8n container → patrick@agents → vault-capture.sh) see the token via .bashrc sourcing before the interactive guard.
  • Cron-invoked paths (daily-infra-updates.sh) source the file explicitly at the top of the script.
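The sourcing pattern both consumers use can be demonstrated like this. The demo filename and token value are stand-ins; the real file is `~/.config/baseworks/claude-token`.

```sh
# Demo of the single-file token pattern. `set -a` marks sourced assignments
# for export, so child processes (the claude CLI) inherit the variable.
TOKEN_FILE=./claude-token.demo   # stand-in for ~/.config/baseworks/claude-token
printf 'CLAUDE_CODE_OAUTH_TOKEN="sk-ant-oat01-EXAMPLE"\n' > "$TOKEN_FILE"
set -a; . "$TOKEN_FILE"; set +a
echo "$CLAUDE_CODE_OAUTH_TOKEN"
```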

Rotation: edit ~/.config/baseworks/claude-token — that’s the only place the token lives now. No need to touch bashrc. Token expires ~2027-03-11.

  • Setup/renewal guide: 00-inbox/claude-vps-credential-sync.md
  • Do NOT attempt claude login or claude setup-token on the VPS — it will always time out
  • If the daily inbox report says “Claude CLI authentication failed”: generate a new token on any Mac with claude setup-token and overwrite ~/.config/baseworks/claude-token on baseworks-agents. The daily health check sources this same file, so a failing check means the token is genuinely bad.

## SSH Key Canonical Lists (authorized_keys integrity)

Both VPSes’ ~patrick/.ssh/authorized_keys are monitored daily by daily-infra-updates.sh (Test 5, added 2026-04-14 after the 2026-04-05 wipe incident). If any expected key goes missing, the daily inbox report flags it within 24h.

baseworks-agents expected keys:

| Fingerprint | Label | Purpose |
| --- | --- | --- |
| SHA256:EBHZWtVBBIDuLidufPGmGVFTp/WTjhOnLl/KOBst1FM | Patrick Mac (shared RSA on Mini + MBP) | Patrick SSH from either laptop/desktop |
| SHA256:BnJ5fYDCa9Khkh8VqMKztnGKEWFAkAekANq74/T+d8A | asia-mac-book-air-32721 (server-access) | Asia SSH from MacBook Air |
| SHA256:z90oq4ZHqDlokXmCnMCL+J7HcS3pkoY665qO76c5ROY | asia-mac-mini-34941 (as@baseworks.com) | Asia SSH from Mac Mini |
| SHA256:zkdMFooEvQq4cMSxzfTV2GDO/SqcdhjpkG6StSQ8yWI | n8n-baseworks-n8n | n8n container SSH for vault capture pipeline |

baseworks-n8n expected keys:

| Fingerprint | Label | Purpose |
| --- | --- | --- |
| SHA256:EBHZWtVBBIDuLidufPGmGVFTp/WTjhOnLl/KOBst1FM | Patrick Mac (shared RSA on Mini + MBP) | Patrick SSH from either laptop/desktop |
| SHA256:BnJ5fYDCa9Khkh8VqMKztnGKEWFAkAekANq74/T+d8A | asia-mac-book-air-32721 (server-access) | Asia SSH from MacBook Air |
| SHA256:z90oq4ZHqDlokXmCnMCL+J7HcS3pkoY665qO76c5ROY | asia-mac-mini-34941 (as@baseworks.com) | Asia SSH from Mac Mini |
| SHA256:VtJOj/S2Uzr3d5tjKIs63VmXijZMj8Xyx9jUGiwoAOQ | baseworks-agents-vps | Agents → n8n SSH for daily n8n backup |

To add a new machine:

  1. On the new machine: ssh-keygen -lf ~/.ssh/id_ed25519.pub — note the SHA256 fingerprint.
  2. Append the public key to ~patrick/.ssh/authorized_keys on the target VPS (always >>, never > — use ssh-copy-id or manual append).
  3. Add the fingerprint to the appropriate table above.
  4. Add the fingerprint to the corresponding array in ~/scripts/daily-infra-updates.sh on baseworks-agents (arrays AGENTS_EXPECTED_KEYS and N8N_EXPECTED_KEYS).
  5. Commit this file.

To rotate an existing key: replace the entry in both the table above and the script array, replace the key in authorized_keys on the target VPS.
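The comparison at the heart of Test 5 can be sketched like this. This is illustrative only (the real implementation lives in daily-infra-updates.sh and may differ); the demo generates a throwaway key so the sketch is self-contained, and uses one deliberately fake fingerprint to show the failure path.

```sh
# Illustrative sketch of the authorized_keys integrity check idea.
check_keys() {  # usage: check_keys <authorized_keys-file> <expected-fp>...
  file=$1; shift
  # ssh-keygen -lf prints one "bits fingerprint comment (type)" line per key
  actual=$(ssh-keygen -lf "$file" | awk '{print $2}')
  rc=0
  for fp in "$@"; do
    if printf '%s\n' "$actual" | grep -qxF "$fp"; then
      echo "OK: $fp"
    else
      echo "MISSING: $fp"; rc=1
    fi
  done
  return $rc
}

# Self-contained demo: one real throwaway key, plus a fake fingerprint
# to exercise the negative case.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id_demo"
cat "$tmp/id_demo.pub" > "$tmp/authorized_keys"
real_fp=$(ssh-keygen -lf "$tmp/id_demo.pub" | awk '{print $2}')
check_keys "$tmp/authorized_keys" "$real_fp" "SHA256:FAKEFPFORNEGATIVETEST" || true
rm -rf "$tmp"
```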

All shared repos use main as the ONLY branch. This was standardized on 2026-03-19 after discovering that the baseworks-changelog repo had silently diverged into two branches (master on VPS, main on Macs) for months.

| Repo | VPS path | GitHub default | Branch |
| --- | --- | --- | --- |
| baseworks-kb-shared-brain | /srv/baseworks/knowledge-base/ | main | main |
| baseworks-changelog | /srv/baseworks/changelog/ | main | main |
| yogajaya-changelog | /srv/baseworks/yogajaya-changelog/ | main | main |

Every repo has a CLAUDE.md in its root that enforces this rule automatically. Both VPS users’ ~/.claude/CLAUDE.md files also list all repos with the branch rule.

When setting up a new shared repo:

  1. Create on GitHub with default branch main
  2. Clone on VPS: git clone URL /srv/baseworks/REPO_NAME
  3. Set git config core.fileMode false
  4. Add CLAUDE.md to repo root with branch rule
  5. Add repo to both VPS users’ ~/.claude/CLAUDE.md
  6. Clone on Macs as needed
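Steps 2-4 above can be sketched as a runnable sequence. This uses a throwaway local bare repo in place of GitHub, and the REPO_NAME and CLAUDE.md wording are placeholders.

```sh
# Sketch of steps 2-4 with a local bare repo standing in for GitHub.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"         # stand-in for the GitHub repo
git clone -q "$tmp/origin.git" "$tmp/REPO_NAME"
cd "$tmp/REPO_NAME"
git config core.fileMode false               # step 3
printf '# CLAUDE.md\n\nAll work in this repo happens on main, the ONLY branch.\n' > CLAUDE.md
git config core.fileMode                     # prints: false
```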

All servers and the NAS are connected via Tailscale (WireGuard-encrypted mesh VPN). SSH alias synology is configured on all machines (Patrick’s Mac, baseworks-agents, baseworks-n8n) — just run ssh synology to connect to the NAS.

| Machine | Tailscale IP | Can SSH to NAS |
| --- | --- | --- |
| hutch-mini (Patrick’s Mac) | 100.121.145.38 | Yes (key: ~/.ssh/id_ed25519) |
| baseworks-agents (VPS) | 100.104.239.92 | Yes (key: ~/.ssh/id_ed25519) |
| baseworks-n8n (n8n VPS) | 100.109.87.70 | Yes (key: ~/.ssh/id_ed25519) |
| shasposerver (Synology NAS) | 100.106.46.99 | |

The NAS user "Patrick Oancia" has passwordless sudo configured (/etc/sudoers.d/patrick_nopasswd).
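The `ssh synology` alias presumably resolves to an `~/.ssh/config` entry along these lines (an assumption; the actual entry may vary per machine):

```
Host synology
  HostName 100.106.46.99
  User "Patrick Oancia"
  IdentityFile ~/.ssh/id_ed25519
```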

## 2026-04-14 — Vault capture pipeline recovery + durability fixes (by: Patrick’s Mac)

Incident: Vault inbox email capture pipeline silently broken since 2026-04-05 03:23 UTC. Patrick’s 2026-04-14 morning Slack post to #vault-inbox did not land in the vault. Investigation found three issues.

Root cause 1 (SSH plumbing): At 2026-04-05 03:23:52 UTC an unidentified provisioning-style operation simultaneously rewrote /etc/passwd, /etc/shadow, ~patrick/.bashrc, and ~patrick/.ssh/authorized_keys on BOTH VPSes. It reset the password, rewrote bashrc from a template, and rewrote authorized_keys to a “Mac devs only” list — dropping the machine-to-machine keys (baseworks-agents-vps key from the n8n VPS, and n8n-baseworks-n8n container key from the agents VPS). Only patrick was affected; Asia’s account untouched. Cause never identified — no entry in auth.log (rotated) or systemd journal (volatile), no bash history match, no cron match. Last Patrick activity before the wipe was commit ba4e75d at 2026-04-04 22:52 ET, 31 min before the event. Most likely a provisioning/Ansible-style script run from a control machine.

Fix 1 (SSH keys): Restored all four missing keys:

  • Appended n8n-baseworks-n8n pubkey to ~patrick/.ssh/authorized_keys on baseworks-agents → vault capture pipeline unblocked.
  • Appended baseworks-agents-vps pubkey to ~patrick/.ssh/authorized_keys on baseworks-n8n → n8n daily backup unblocked.
  • Also restored Asia’s two keys (asia-mac-book-air-32721 and asia-mac-mini-34941) to baseworks-n8n, which had also been dropped.
  • Backup copies preserved at ~/.ssh/authorized_keys.bak-20260414 on both VPSes.
  • Ran daily-backup-n8n.sh manually — first successful n8n backup in 10 days (previous: 2026-04-04).

Root cause 2 (Slack edits): The n8n “Vault Capture via Slack” workflow’s Parse Event node read event.text and event.user directly, which work for new message events but return undefined for message_changed (edit) events — for edits, the real content is nested at event.message.text and event.message.user. When Patrick re-tested by editing his morning message, the workflow saw empty text and sender = literal 'unknown', passed an empty prompt to Claude, Claude refused to write a file, and the Slack confirmation came back with empty File/Tags fields.

Fix 2 (workflow patch): Updated the Parse Event node jsCode in the n8n Postgres DB (workflow_entity.nodes where id = 'A0hTmPJN38HRe3Ch') to skip subtype === 'message_changed' and subtype === 'message_deleted'. Restarted baseworks-n8n container for hot-reload. Implication: edits to previously-captured messages no longer create duplicate captures. If you need to re-capture a message after fixing a typo, post it as a NEW message in #vault-inbox.
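The shape of the fix can be sketched outside n8n like this. The subtype values are the real Slack Events API markers for edits and deletions; the handler itself is illustrative, since the actual logic is jsCode inside the workflow's Parse Event node.

```sh
# Illustrative skip logic mirroring the Parse Event fix: events carrying a
# message_changed or message_deleted subtype are dropped, everything else
# is captured.
handle_event() {  # usage: handle_event <subtype> <text>
  case "$1" in
    message_changed|message_deleted) echo "skipped ($1)" ;;
    *) echo "captured: $2" ;;
  esac
}
handle_event "" "morning notes for the vault"
handle_event "message_changed" "morning notes for the vault (edited)"
```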

Root cause 3 (false-positive health check): The daily inbox report’s “Claude CLI authentication failed” warning had been firing every single day since 2026-03-17 — 4 weeks of false positives. The 2026-03-16 fix (move export CLAUDE_CODE_OAUTH_TOKEN= above the interactive guard in ~/.bashrc) works for SSH-invoked sessions (non-interactive SSH sources bashrc), but cron does NOT source .bashrc at all. So when daily-infra-updates.sh ran the auth test from cron, the env var was unset, claude fell back to the stale ~/.claude/.credentials.json, and the 401 was reported every day even though the actual vault capture path (when SSH was working) was fine.

Fix 3 (token refactor): Moved the OAuth token to a single-source-of-truth file at ~/.config/baseworks/claude-token (chmod 600, CLAUDE_CODE_OAUTH_TOKEN="..." format). Updated ~/.bashrc to source that file via set -a; . ~/.config/baseworks/claude-token; set +a instead of hardcoding the token. Updated ~/scripts/daily-infra-updates.sh to source the same file near the top (after the PATH export) so cron-invoked runs have the env var. Verified by running the script’s auth-test block in a stripped cron-like environment — all three checks now pass: Claude CLI auth: OK, n8n SSH bridge: OK, vault-capture.sh: OK. Also rewrote the “Claude CLI authentication failed” warning text to point to the new token file and the shared-context doc. Bashrc backup at ~/.bashrc.bak-20260414; script backup at ~/scripts/daily-infra-updates.sh.bak-20260414.
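The stripped-environment verification described above can be reproduced like this. `env -i` empties the environment much as cron does; the explicit source then makes the token visible anyway. The file name and token value are demo stand-ins.

```sh
# Demo: a cron-like stripped environment still sees the token after an
# explicit source (the pattern daily-infra-updates.sh now uses).
printf 'CLAUDE_CODE_OAUTH_TOKEN="sk-ant-oat01-FAKE"\n' > ./claude-token.demo
env -i HOME="$HOME" /bin/sh -c '
  set -a; . ./claude-token.demo; set +a
  echo "token length in stripped env: ${#CLAUDE_CODE_OAUTH_TOKEN}"
'
```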

Fix 4 (integrity monitor, NEW — the durability piece): Added Test 5 to daily-infra-updates.sh — an authorized_keys integrity check that runs daily at 03:45 ET. It computes SHA256 fingerprints of all keys in ~patrick/.ssh/authorized_keys on each VPS and compares against a canonical list hardcoded in the script (AGENTS_EXPECTED_KEYS and N8N_EXPECTED_KEYS arrays). Any missing key shows up in the daily inbox report and #agent-alerts Slack alert the next morning. Smoke-tested all three cases: agents passes (4 expected keys present), n8n passes (4 expected keys present), negative test (fake fingerprint) correctly reports missing. The canonical key list is also documented in the SSH Key Canonical Lists section of this file above — to add a new machine, update both places.

Net effect: If the 2026-04-05 wipe happens again (or anything else removes a key), Patrick/Asia find out within 24h instead of 9 days. No more false-positive Claude auth warnings. Slack edits no longer produce empty “Vault Capture” confirmations. Token rotation is a single-file edit.

## 2026-03-21 — Tailscale mesh network + Synology NAS integration (by: Patrick’s Mac)
  • Installed Tailscale on baseworks-agents VPS and baseworks-n8n VPS, joined both to the Tailscale network
  • Configured SSH access from all three machines (Mac, baseworks-agents, baseworks-n8n) to Synology NAS via ssh synology alias
  • Set up passwordless sudo for “Patrick Oancia” on the NAS (/etc/sudoers.d/patrick_nopasswd)
  • NAS: Synology DS920+, 4x Seagate IronWolf Pro 10TB in RAID 5, ~27TB usable (22TB free), all drives SMART PASSED
  • Tailscale CLI added to PATH on Patrick’s Mac (/usr/local/bin/tailscale)

## 2026-03-19 — Branch standardization: all repos renamed to main (by: Patrick’s Mac)
  • Merged diverged master/main branches on baseworks-changelog (31 + 116 commits reunified)
  • Renamed master → main on baseworks-kb-shared-brain and yogajaya-changelog
  • Updated KB deploy workflow (.github/workflows/deploy.yml) to trigger on main
  • Set GitHub default branch to main on all three repos, deleted master on all three
  • Switched all VPS repos to main, set core.fileMode=false on changelog + yogajaya
  • Rewrote both VPS users’ ~/.claude/CLAUDE.md with repo table, branch rules, and new-repo setup guide
  • Added CLAUDE.md to baseworks-changelog and yogajaya-changelog repos (KB already had one)
  • Backup tags preserved in baseworks-changelog: backup/pre-merge/main, backup/pre-merge/master

## 2026-03-16 — Vault capture fix + automation health checks + GitHub PAT renewal (by: Patrick’s Mac)
  • GitHub PAT renewed. Token “baseworks-agents-vps” was expiring March 21. Regenerated with no-expiration (actually set to 2027-03-16). Updated remote URLs for all three repos on baseworks-agents: knowledge-base, changelog, yogajaya-changelog.
  • Added automation pipeline health checks to daily-infra-updates.sh (section 10). Four tests run daily: (1) Claude CLI auth, (2) GitHub push token, (3) n8n SSH bridge to baseworks-agents, (4) vault-capture.sh executable. Failures appear in “Needs your attention” in the daily inbox report.

## 2026-03-16 — Vault capture fix: OAuth token placement in .bashrc (by: Patrick’s Mac)
  • Vault capture pipeline was broken since ~March 10. Root cause: CLAUDE_CODE_OAUTH_TOKEN was set at line 125 of ~/.bashrc, after the interactive guard (case $- in *i*)...return). When n8n SSHed in non-interactively, bash sourced .bashrc but hit the guard and returned before reaching the export. Claude CLI fell back to .credentials.json which had an expired short-lived OAuth token → 401 errors.
  • Fix: moved the export CLAUDE_CODE_OAUTH_TOKEN= line to before the interactive guard (line 5) in .bashrc for both patrick and asia users on baseworks-agents. Removed the duplicate at the old location.
  • Rule for future token updates: The export MUST remain before the interactive guard in .bashrc. Non-interactive SSH sessions (n8n vault-capture, cron scripts) will not see environment variables set after the guard.
  • Verified end-to-end: n8n container → SSH bridge → baseworks-agents → vault-capture.sh → Claude Haiku → git push all working.
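For reference, the interactive guard in question is the stock Debian `~/.bashrc` block, quoted here as a minimal sketch (not necessarily byte-identical to the VPS's file):

```sh
# If not running interactively, don't do anything (stock Debian ~/.bashrc).
case $- in
    *i*) ;;
      *) return;;
esac
# Any `export CLAUDE_CODE_OAUTH_TOKEN=...` placed below this point is never
# reached by non-interactive shells (n8n SSH bridge, cron).
```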

## 2026-03-11 — FileBrowser fix + script hardening + credential sync (by: Patrick’s Mac)
  • FileBrowser container was destroyed on Mar 7 by a bug in daily-infra-updates.sh. Recreated with --restart unless-stopped.
  • Fixed 5 bugs in daily-infra-updates.sh: duplicate logs, empty versions in cron (PATH fix), compose file in /tmp/ moved to /opt/baseworks-config/, inbox insertion position, git push error handling.
  • Added backup health check table to daily report.
  • Created this shared context file.
  • Set up long-lived token (1 year) for VPS authentication via CLAUDE_CODE_OAUTH_TOKEN env var for both users.

## 2026-04-13 — Forum content sync: canonical location moved from ~/scripts/ to git repo (by: Patrick’s Mac)
  • Discovered drift: ~/scripts/forum-content-sync.py was 1069 lines (with DM sync, SQL-based activity sync, translation) but /srv/baseworks/knowledge-base/scripts/forum-content-sync.py was stuck at 714 lines. The running copy had been edited directly on 2026-04-09 without being committed back to the repo.
  • Fix: Committed the 1069-line running copy + .sh wrapper + translate-community-content.py to the repo (vault auto-sync commit 9a2233c). Then repointed the crontab entry from /home/patrick/scripts/forum-content-sync.sh to /srv/baseworks/knowledge-base/scripts/forum-content-sync.sh. Renamed the ~/scripts/ copies to .bak-20260414-015531 (kept as safety backup, do not run).
  • New rule: Forum sync scripts are edited through git only. See the “Forum Content Sync (System 3)” section above for path details.
  • Crontab remains paused (# PAUSED 2026-04-13 consolidation migration: prefix) while the community-forums-groups folder consolidation migration is in progress (see 03-resources/plans/community-forums-consolidation-plan.md). Cron will be re-enabled after the migration completes.

## 2026-03-31 — OPcache enabled + passwordless sudo on baseworks-web (by: Patrick’s Mac)
  • OPcache was disabled for PHP 8.3 on the Cevabero server (baseworks-web). Enabled with optimized settings (256MB memory, 20K files). Origin response times dropped from ~2.2s to ~0.7s.
  • Passwordless sudo granted to bwsite_primo_82 via /etc/sudoers.d/bwsite_primo_82. Claude Code now has full server access for PHP config, nginx, service restarts.
  • Cloudflare cache fully purged via API. SSL mode confirmed Full (Strict).
  • Config backup: /etc/php/8.3/mods-available/opcache.ini.bak.20260331


Last updated: 2026-04-14 (Vault capture pipeline recovery + durability fixes) by Claude Code on Patrick’s Mac