# Claude Code — Shared Infrastructure Context
This file is read by all Claude Code instances (Patrick’s Mac, VPS patrick, VPS asia, Asia’s machines) at the start of infrastructure-related sessions. It provides a single source of truth for the current state of server infrastructure, recent changes, and known issues.
Protocol: After making infrastructure changes from any machine, update this file, commit, and push. Other instances will pick it up on their next git pull.
## Servers

| Alias | IP (public) | Tailscale IP | Role | SSH User |
|---|---|---|---|---|
| baseworks-agents | 46.224.129.16 | 100.104.239.92 | Automation VPS (Claude Code, backups, scripts) | patrick / asia |
| baseworks-n8n | 167.235.236.99 | 100.109.87.70 | n8n + PostgreSQL | patrick |
| baseworks-web | 5.180.253.171 | — | WordPress hosting (baseworks.com, practice, staging, crm) | bwsite_primo_82 (passwordless sudo) |
| synology (NAS) | — | 100.106.46.99 | Synology DS920+ NAS (4x 10TB IronWolf Pro, RAID 5) | “Patrick Oancia” |
## Docker Services

### baseworks-agents VPS

| Service | Type | Port | Restart Policy | Notes |
|---|---|---|---|---|
| FileBrowser | standalone docker run | 127.0.0.1:8090 | unless-stopped | Volumes: /srv/baseworks/uploads:/srv, filebrowser-db:/database, branding at /srv/baseworks/filebrowser-branding:/branding:ro |
| OpenClaw | docker-compose | — | — | Separate project, may be stopped |
### baseworks-n8n VPS

| Service | Type | Config Location | Notes |
|---|---|---|---|
| n8n | docker-compose | /opt/baseworks-config/docker-compose.yml | .env file at same path (chmod 600) |
| PostgreSQL 16 | docker-compose | same as above | Used by n8n |
## Backups — Backblaze B2

All backups must go to Backblaze B2 off-site storage. Do not leave backups only on the server’s local disk (e.g. /tmp/).
Each backup script uses its own per-bucket B2 application key via environment variables. The global b2 CLI auth on baseworks-agents is only valid for cbbaseworksite. Do not assume one key works for all buckets.
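As a sketch, a per-bucket key would be injected before the upload like this (the b2 CLI reads `B2_APPLICATION_KEY_ID`/`B2_APPLICATION_KEY` from the environment; the placeholder values and file names here are illustrative, not the actual script contents):

```shell
# Hypothetical per-bucket key pattern — values are placeholders.
export B2_APPLICATION_KEY_ID="<key ID scoped to cbbpracticesite>"
export B2_APPLICATION_KEY="<matching application key>"
b2 file upload cbbpracticesite baseworks-web/weekly-backups/practice-db.sql /tmp/practice-db.sql
```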
### Ad-hoc backup commands

```shell
# Ad-hoc backup before a risky DB operation on baseworks.com:
ssh baseworks-web "cd /var/www/baseworks.com && wp db export /tmp/adhoc-$(date +%Y%m%d-%H%M%S).sql 2>/dev/null"
scp baseworks-web:/tmp/adhoc-*.sql /tmp/
b2 file upload cbbaseworksite baseworks-web/adhoc-backups/adhoc-$(date +%Y%m%d-%H%M%S).sql /tmp/adhoc-*.sql

# Restore from B2:
b2 ls cbbaseworksite baseworks-web/weekly-backups/
b2 file download b2://cbbaseworksite/baseworks-web/weekly-backups/<filename>.sql /tmp/restore.sql
```

### B2 buckets by site

| Site | Bucket |
|---|---|
| baseworks.com | cbbaseworksite |
| staging.baseworks.com | xCloud1 |
| practice.baseworks.com | cbbpracticesite |
| crm.baseworks.com | cbcrmbaseworks |
| n8n + agents VPS configs | baseworks-n8n-server |
Automated backups (weekly for sites, daily for n8n) already run via cron on the agents VPS at ~/scripts/. 30-day retention on all buckets.
| Site | Script | Bucket | Schedule | Typical Size |
|---|---|---|---|---|
| baseworks.com | weekly-backup-baseworks.sh | cbbaseworksite | Sun 4:00 AM | DB ~375MB, Files ~141MB |
| staging | weekly-backup-staging.sh | xCloud1 | Sun 4:15 AM | DB ~254MB, Files ~141MB |
| practice | weekly-backup-practice.sh | cbbpracticesite | Sun 4:30 AM | DB ~490MB, Files ~181MB |
| crm | weekly-backup-crm.sh | cbcrmbaseworks | Sun 4:45 AM | DB ~17MB, Files ~16MB |
| n8n | daily-backup-n8n.sh | baseworks-n8n-server | Daily 3:30 AM | ~14MB (6 files) |
| agents config | weekly-backup-agents-config.sh | (in script) | Sun 5:00 AM | ~30KB |
Logs: ~/scripts/weekly-backup.log (all weekly sites share one log), ~/scripts/daily-backup-n8n.log
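The schedules in the table above would correspond to crontab entries roughly like these (a sketch — the exact paths, flags, and log redirections in the real crontab may differ):

```
# Illustrative crontab fragment (Sun = 0); not a verbatim copy of the VPS crontab.
0 4 * * 0   /home/patrick/scripts/weekly-backup-baseworks.sh >> /home/patrick/scripts/weekly-backup.log 2>&1
15 4 * * 0  /home/patrick/scripts/weekly-backup-staging.sh   >> /home/patrick/scripts/weekly-backup.log 2>&1
30 3 * * *  /home/patrick/scripts/daily-backup-n8n.sh        >> /home/patrick/scripts/daily-backup-n8n.log 2>&1
```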
## Daily Infrastructure Updates

Script: `~/scripts/daily-infra-updates.sh` — runs daily at 3:45 AM ET.
Updates: system packages (both VPSes), Docker images (n8n, PostgreSQL, FileBrowser), Claude Code CLI, b2 CLI. Checks backup health. Tests automation pipeline health (Claude CLI auth, GitHub token, n8n SSH bridge, vault-capture.sh). Posts report to Patrick’s inbox AND sends summary to #agent-alerts in Slack. Any failures in the automation checks appear in the “Needs your attention” section of both the inbox report and the Slack alert.
Log: ~/scripts/daily-infra-updates.log
## Vault Health Audit (Slack)

Script: `~/scripts/vault-audit-slack.sh` — runs daily at 4:15 AM ET (after infra updates).
Runs vault-audit.py --json on the VPS vault clone and sends a formatted summary to #agent-alerts in Slack. Reports: vault-index.db health, broken links, orphan files, qmd embedding status, sync status, CLAUDE.md health, and growth trends. Green/yellow/red status based on severity.
Log: ~/scripts/vault-audit-slack.log
## Forum Content Sync (System 3)

Pulls BuddyBoss forums, topics, replies, groups, activity, and direct messages from practice.baseworks.com into the vault as markdown. Then runs translate-community-content.py to append English translations of any non-English content.
Scripts run from the git-tracked repo, NOT from ~/scripts/. This is intentional and different from other cron scripts — it prevents drift between the running copy and the committed copy.
| Path | Purpose |
|---|---|
| /srv/baseworks/knowledge-base/scripts/forum-content-sync.sh | Cron entry point — called every 15 min (`*/15 * * * *`) |
| /srv/baseworks/knowledge-base/scripts/forum-content-sync.py | Main sync script (SSH → wp eval via bwsite_primo_82@5.180.253.171) |
| /srv/baseworks/knowledge-base/scripts/translate-community-content.py | Post-sync translation of non-English messages |
Log: ~/scripts/forum-content-sync.log (log stays in ~/scripts/ to keep it out of the git-tracked vault)
Why repo-hosted, not ~/scripts/: On 2026-04-09 the ~/scripts/ running copies drifted from the git-tracked copies (DM sync, SQL-based activity sync, and translation were added to ~/scripts/ but never committed). The drift went unnoticed until 2026-04-13. The fix (committed 2026-04-13) moves the canonical location to the git repo so git pull via vault-sync.sh automatically keeps the running script fresh. Edits to these scripts must go through git. Do not create copies in ~/scripts/ — if you see them there, they’re stale backups (.bak-*) and should not be edited.
## Slack Webhook (agent-alerts)

Helper: `~/scripts/slack-notify.sh` — sends a message to #agent-alerts via Slack Incoming Webhook.
Webhook URL stored at: ~/.config/baseworks/slack-webhook-agent-alerts (chmod 600).
Used by: daily-infra-updates.sh, vault-audit-slack.sh, and available for future scripts.
Usage: ~/scripts/slack-notify.sh "message text" or ~/scripts/slack-notify.sh --file /path/to/file
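The helper’s internals aren’t reproduced here; a minimal sketch of what a webhook notifier like this typically does (the webhook file path is from this doc; `build_payload`, its escaping, and the curl invocation are illustrative, not the actual script):

```shell
# Hypothetical sketch of a Slack incoming-webhook notifier.
WEBHOOK_FILE="$HOME/.config/baseworks/slack-webhook-agent-alerts"

build_payload() {
  # Escape backslashes and double quotes so the message is valid JSON.
  msg=$(printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g')
  printf '{"text": "%s"}' "$msg"
}

# Actual send (commented out — requires the webhook file and network access):
# curl -fsS -X POST -H 'Content-Type: application/json' \
#      -d "$(build_payload "$1")" "$(cat "$WEBHOOK_FILE")"
```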
## Nginx on baseworks-agents

| Site | Config | Upstream |
|---|---|---|
| files.baseworks.com | /etc/nginx/sites-enabled/files.baseworks.com | 127.0.0.1:8090 (FileBrowser) |
| claw.baseworks.com | /etc/nginx/sites-enabled/claw.baseworks.com | (OpenClaw) |
SSL: Cloudflare origin certs at /etc/ssl/cloudflare/baseworks.com.{pem,key}
## Claude Code VPS Authentication

Cloudflare blocks the VPS from reaching claude.ai’s OAuth endpoint (bot protection on Anthropic’s side — we cannot fix this). `claude login` and `claude setup-token` always time out on the VPS.
Solution: A long-lived token (1 year, generated via claude setup-token on Patrick’s Mac) lives in a single source-of-truth file on baseworks-agents:
```
~/.config/baseworks/claude-token   # chmod 600, patrick user, contains: CLAUDE_CODE_OAUTH_TOKEN="sk-ant-oat01-..."
```

Both `~/.bashrc` AND `~/scripts/daily-infra-updates.sh` source this file, so:

- SSH-invoked paths (n8n container → `patrick@agents` → vault-capture.sh) see the token via `.bashrc` sourcing before the interactive guard.
- Cron-invoked paths (`daily-infra-updates.sh`) source the file explicitly at the top of the script.
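The sourcing pattern itself can be demonstrated in isolation (a temp file stands in for `~/.config/baseworks/claude-token`, and the token value is fake):

```shell
# Demo of the set -a sourcing pattern. The temp file stands in for the real
# token file; the token value is a fake placeholder.
token_file=$(mktemp)
printf 'CLAUDE_CODE_OAUTH_TOKEN="sk-ant-oat01-FAKE"\n' > "$token_file"

set -a                   # auto-export every variable assigned while sourcing
. "$token_file"
set +a

echo "$CLAUDE_CODE_OAUTH_TOKEN"
rm -f "$token_file"
```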
Rotation: edit ~/.config/baseworks/claude-token — that’s the only place the token lives now. No need to touch bashrc. Token expires ~2027-03-11.
- Setup/renewal guide: `00-inbox/claude-vps-credential-sync.md`
- Do NOT attempt `claude login` or `claude setup-token` on the VPS — it will always time out.
- If the daily inbox report says “Claude CLI authentication failed”: generate a new token on any Mac with `claude setup-token` and overwrite `~/.config/baseworks/claude-token` on baseworks-agents. The daily health check sources this same file, so a failing check means the token is genuinely bad.
## SSH Key Canonical Lists (authorized_keys integrity)

Both VPSes’ `~patrick/.ssh/authorized_keys` are monitored daily by `daily-infra-updates.sh` (Test 5, added 2026-04-14 after the 2026-04-05 wipe incident). If any expected key goes missing, the daily inbox report flags it within 24h.
baseworks-agents expected keys:
| Fingerprint | Label | Purpose |
|---|---|---|
| SHA256:EBHZWtVBBIDuLidufPGmGVFTp/WTjhOnLl/KOBst1FM | Patrick Mac (shared RSA on Mini + MBP) | Patrick SSH from either laptop/desktop |
| SHA256:BnJ5fYDCa9Khkh8VqMKztnGKEWFAkAekANq74/T+d8A | asia-mac-book-air-32721 (server-access) | Asia SSH from MacBook Air |
| SHA256:z90oq4ZHqDlokXmCnMCL+J7HcS3pkoY665qO76c5ROY | asia-mac-mini-34941 (as@baseworks.com) | Asia SSH from Mac Mini |
| SHA256:zkdMFooEvQq4cMSxzfTV2GDO/SqcdhjpkG6StSQ8yWI | n8n-baseworks-n8n | n8n container SSH for vault capture pipeline |
baseworks-n8n expected keys:
| Fingerprint | Label | Purpose |
|---|---|---|
| SHA256:EBHZWtVBBIDuLidufPGmGVFTp/WTjhOnLl/KOBst1FM | Patrick Mac (shared RSA on Mini + MBP) | Patrick SSH from either laptop/desktop |
| SHA256:BnJ5fYDCa9Khkh8VqMKztnGKEWFAkAekANq74/T+d8A | asia-mac-book-air-32721 (server-access) | Asia SSH from MacBook Air |
| SHA256:z90oq4ZHqDlokXmCnMCL+J7HcS3pkoY665qO76c5ROY | asia-mac-mini-34941 (as@baseworks.com) | Asia SSH from Mac Mini |
| SHA256:VtJOj/S2Uzr3d5tjKIs63VmXijZMj8Xyx9jUGiwoAOQ | baseworks-agents-vps | Agents → n8n SSH for daily n8n backup |
To add a new machine:

- On the new machine: `ssh-keygen -lf ~/.ssh/id_ed25519.pub` — note the SHA256 fingerprint.
- Append the public key to `~patrick/.ssh/authorized_keys` on the target VPS (always `>>`, never `>` — use `ssh-copy-id` or manual append).
- Add the fingerprint to the appropriate table above.
- Add the fingerprint to the corresponding array in `~/scripts/daily-infra-updates.sh` on baseworks-agents (arrays `AGENTS_EXPECTED_KEYS` and `N8N_EXPECTED_KEYS`).
- Commit this file.
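Why the append step insists on `>>`: `>` truncates the file and wipes every existing key. A safe local demonstration (temp file and fake key strings, not the real authorized_keys):

```shell
# '>>' appends; '>' would truncate and destroy the existing keys.
ak=$(mktemp)
echo "ssh-ed25519 AAAAexisting patrick-mac" > "$ak"   # pre-existing key
echo "ssh-ed25519 AAAAnew asia-mini" >> "$ak"         # correct: append
count=$(wc -l < "$ak")                                # both keys survive
rm -f "$ak"
echo "$count"
```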
To rotate an existing key: replace the entry in both the table above and the script array, and replace the key in `authorized_keys` on the target VPS.
## Git Branch Convention

All shared repos use `main` as the ONLY branch. This was standardized on 2026-03-19 after discovering that the baseworks-changelog repo had silently diverged into two branches (`master` on VPS, `main` on Macs) for months.
| Repo | VPS path | GitHub default | Branch |
|---|---|---|---|
| baseworks-kb-shared-brain | /srv/baseworks/knowledge-base/ | main | main |
| baseworks-changelog | /srv/baseworks/changelog/ | main | main |
| yogajaya-changelog | /srv/baseworks/yogajaya-changelog/ | main | main |
Every repo has a CLAUDE.md in its root that enforces this rule automatically. Both VPS users’ ~/.claude/CLAUDE.md files also list all repos with the branch rule.
When setting up a new shared repo:

- Create on GitHub with default branch `main`
- Clone on VPS: `git clone URL /srv/baseworks/REPO_NAME`
- Set `git config core.fileMode false`
- Add `CLAUDE.md` to repo root with branch rule
- Add repo to both VPS users’ `~/.claude/CLAUDE.md`
- Clone on Macs as needed
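The clone and fileMode steps can be sketched on a throwaway local repo (`git init` stands in for the real `git clone URL /srv/baseworks/REPO_NAME`):

```shell
# Throwaway demo of the core.fileMode step; not run against a real repo.
repo=$(mktemp -d)
git init -q "$repo"                        # stand-in for the real clone
git -C "$repo" config core.fileMode false  # ignore permission-bit-only diffs
mode=$(git -C "$repo" config core.fileMode)
echo "$mode"
```

`core.fileMode false` matters on the VPS because web-server chmod churn would otherwise show up as spurious diffs on every pull.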
## Tailscale Mesh Network

All servers and the NAS are connected via Tailscale (WireGuard-encrypted mesh VPN). SSH alias `synology` is configured on all machines (Patrick’s Mac, baseworks-agents, baseworks-n8n) — just run `ssh synology` to connect to the NAS.
| Machine | Tailscale IP | Can SSH to NAS |
|---|---|---|
| hutch-mini (Patrick’s Mac) | 100.121.145.38 | Yes (key: ~/.ssh/id_ed25519) |
| baseworks-agents (VPS) | 100.104.239.92 | Yes (key: ~/.ssh/id_ed25519) |
| baseworks-n8n (n8n VPS) | 100.109.87.70 | Yes (key: ~/.ssh/id_ed25519) |
| shasposerver (Synology NAS) | 100.106.46.99 | — |
The NAS user "Patrick Oancia" has passwordless sudo configured (/etc/sudoers.d/patrick_nopasswd).
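The `ssh synology` alias implies a `~/.ssh/config` entry roughly like this (a sketch — only the IP, username, and key path come from the tables above; the exact entry on each machine may differ):

```
# Illustrative ~/.ssh/config entry; not a verbatim copy.
Host synology
    HostName 100.106.46.99
    User "Patrick Oancia"
    IdentityFile ~/.ssh/id_ed25519
```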
## Recent Infrastructure Changes

### 2026-04-14 — Vault capture pipeline recovery + durability fixes (by: Patrick’s Mac)

Incident: Vault inbox email capture pipeline silently broken since 2026-04-05 03:23 UTC. Patrick’s 2026-04-14 morning Slack post to #vault-inbox did not land in the vault. Investigation found three issues.
Root cause 1 (SSH plumbing): At 2026-04-05 03:23:52 UTC an unidentified provisioning-style operation simultaneously rewrote /etc/passwd, /etc/shadow, ~patrick/.bashrc, and ~patrick/.ssh/authorized_keys on BOTH VPSes. It reset the password, rewrote bashrc from a template, and rewrote authorized_keys to a “Mac devs only” list — dropping the machine-to-machine keys (baseworks-agents-vps key from the n8n VPS, and n8n-baseworks-n8n container key from the agents VPS). Only patrick was affected; Asia’s account untouched. Cause never identified — no entry in auth.log (rotated) or systemd journal (volatile), no bash history match, no cron match. Last Patrick activity before the wipe was commit ba4e75d at 2026-04-04 22:52 ET, 31 min before the event. Most likely a provisioning/Ansible-style script run from a control machine.
Fix 1 (SSH keys): Restored all four missing keys:
- Appended `n8n-baseworks-n8n` pubkey to `~patrick/.ssh/authorized_keys` on baseworks-agents → vault capture pipeline unblocked.
- Appended `baseworks-agents-vps` pubkey to `~patrick/.ssh/authorized_keys` on baseworks-n8n → n8n daily backup unblocked.
- Also restored Asia’s two keys (`asia-mac-book-air-32721` and `asia-mac-mini-34941`) to baseworks-n8n, which had also been dropped.
- Backup copies preserved at `~/.ssh/authorized_keys.bak-20260414` on both VPSes.
- Ran `daily-backup-n8n.sh` manually — first successful n8n backup in 10 days (previous: 2026-04-04).
Root cause 2 (Slack edits): The n8n “Vault Capture via Slack” workflow’s Parse Event node read event.text and event.user directly, which work for new message events but return undefined for message_changed (edit) events — for edits, the real content is nested at event.message.text and event.message.user. When Patrick re-tested by editing his morning message, the workflow saw empty text and sender = literal 'unknown', passed an empty prompt to Claude, Claude refused to write a file, and the Slack confirmation came back with empty File/Tags fields.
Fix 2 (workflow patch): Updated the Parse Event node jsCode in the n8n Postgres DB (workflow_entity.nodes where id = 'A0hTmPJN38HRe3Ch') to skip subtype === 'message_changed' and subtype === 'message_deleted'. Restarted baseworks-n8n container for hot-reload. Implication: edits to previously-captured messages no longer create duplicate captures. If you need to re-capture a message after fixing a typo, post it as a NEW message in #vault-inbox.
Root cause 3 (false-positive health check): The daily inbox report’s “Claude CLI authentication failed” warning had been firing every single day since 2026-03-17 — 4 weeks of false positives. The 2026-03-16 fix (move export CLAUDE_CODE_OAUTH_TOKEN= above the interactive guard in ~/.bashrc) works for SSH-invoked sessions (non-interactive SSH sources bashrc), but cron does NOT source .bashrc at all. So when daily-infra-updates.sh ran the auth test from cron, the env var was unset, claude fell back to the stale ~/.claude/.credentials.json, and the 401 was reported every day even though the actual vault capture path (when SSH was working) was fine.
Fix 3 (token refactor): Moved the OAuth token to a single-source-of-truth file at ~/.config/baseworks/claude-token (chmod 600, CLAUDE_CODE_OAUTH_TOKEN="..." format). Updated ~/.bashrc to source that file via set -a; . ~/.config/baseworks/claude-token; set +a instead of hardcoding the token. Updated ~/scripts/daily-infra-updates.sh to source the same file near the top (after the PATH export) so cron-invoked runs have the env var. Verified by running the script’s auth-test block in a stripped cron-like environment — all three checks now pass: Claude CLI auth: OK, n8n SSH bridge: OK, vault-capture.sh: OK. Also rewrote the “Claude CLI authentication failed” warning text to point to the new token file and the shared-context doc. Bashrc backup at ~/.bashrc.bak-20260414; script backup at ~/scripts/daily-infra-updates.sh.bak-20260414.
Fix 4 (integrity monitor, NEW — the durability piece): Added Test 5 to daily-infra-updates.sh — an authorized_keys integrity check that runs daily at 03:45 ET. It computes SHA256 fingerprints of all keys in ~patrick/.ssh/authorized_keys on each VPS and compares against a canonical list hardcoded in the script (AGENTS_EXPECTED_KEYS and N8N_EXPECTED_KEYS arrays). Any missing key shows up in the daily inbox report and #agent-alerts Slack alert the next morning. Smoke-tested all three cases: agents passes (4 expected keys present), n8n passes (4 expected keys present), negative test (fake fingerprint) correctly reports missing. The canonical key list is also documented in the SSH Key Canonical Lists section of this file above — to add a new machine, update both places.
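A hedged sketch of the Test 5 comparison logic (only the array name `AGENTS_EXPECTED_KEYS` is named by this doc; the fake fingerprints, the plain-string list in place of a bash array, and the output format are illustrative):

```shell
# Illustrative fingerprint-diff logic; fingerprints below are fake.
AGENTS_EXPECTED_KEYS="SHA256:fakeAAA SHA256:fakeBBB"

# On the real VPS the actual list would come from something like:
#   ssh-keygen -lf ~patrick/.ssh/authorized_keys | awk '{print $2}'
actual="SHA256:fakeAAA"

missing=""
for fp in $AGENTS_EXPECTED_KEYS; do
  case " $actual " in
    *" $fp "*) ;;                    # key present
    *) missing="$missing $fp" ;;     # key absent → flag in the daily report
  esac
done
echo "missing:${missing:- none}"
```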
Net effect: If the 2026-04-05 wipe happens again (or anything else removes a key), Patrick/Asia find out within 24h instead of 9 days. No more false-positive Claude auth warnings. Slack edits no longer produce empty “Vault Capture” confirmations. Token rotation is a single-file edit.
### 2026-03-21 — Tailscale mesh network + Synology NAS integration (by: Patrick’s Mac)

- Installed Tailscale on baseworks-agents VPS and baseworks-n8n VPS, joined both to the Tailscale network
- Configured SSH access from all three machines (Mac, baseworks-agents, baseworks-n8n) to Synology NAS via `ssh synology` alias
- Set up passwordless sudo for “Patrick Oancia” on the NAS (`/etc/sudoers.d/patrick_nopasswd`)
- NAS: Synology DS920+, 4x Seagate IronWolf Pro 10TB in RAID 5, ~27TB usable (22TB free), all drives SMART PASSED
- Tailscale CLI added to PATH on Patrick’s Mac (`/usr/local/bin/tailscale`)
### 2026-03-19 — Branch standardization: all repos renamed to main (by: Patrick’s Mac)

- Merged diverged `master`/`main` branches on baseworks-changelog (31 + 116 commits reunified)
- Renamed `master` → `main` on baseworks-kb-shared-brain and yogajaya-changelog
- Updated KB deploy workflow (`.github/workflows/deploy.yml`) to trigger on `main`
- Set GitHub default branch to `main` on all three repos, deleted `master` on all three
- Switched all VPS repos to `main`, set `core.fileMode=false` on changelog + yogajaya
- Rewrote both VPS users’ `~/.claude/CLAUDE.md` with repo table, branch rules, and new-repo setup guide
- Added `CLAUDE.md` to baseworks-changelog and yogajaya-changelog repos (KB already had one)
- Backup tags preserved in baseworks-changelog: `backup/pre-merge/main`, `backup/pre-merge/master`
### 2026-03-16 — Vault capture fix + automation health checks + GitHub PAT renewal (by: Patrick’s Mac)

- GitHub PAT renewed. Token “baseworks-agents-vps” was expiring March 21. Regenerated with a long expiry (set to 2027-03-16). Updated remote URLs for all three repos on baseworks-agents: knowledge-base, changelog, yogajaya-changelog.
- Added automation pipeline health checks to `daily-infra-updates.sh` (section 10). Four tests run daily: (1) Claude CLI auth, (2) GitHub push token, (3) n8n SSH bridge to baseworks-agents, (4) vault-capture.sh executable. Failures appear in “Needs your attention” in the daily inbox report.
### 2026-03-16 — Vault capture fix: OAuth token placement in .bashrc (by: Patrick’s Mac)

- Vault capture pipeline was broken since ~March 10. Root cause: `CLAUDE_CODE_OAUTH_TOKEN` was set at line 125 of `~/.bashrc`, after the interactive guard (`case $- in *i*) ... return`). When n8n SSHed in non-interactively, bash sourced `.bashrc` but hit the guard and returned before reaching the export. Claude CLI fell back to `.credentials.json`, which had an expired short-lived OAuth token → 401 errors.
- Fix: moved the `export CLAUDE_CODE_OAUTH_TOKEN=` line to before the interactive guard (line 5) in `.bashrc` for both `patrick` and `asia` users on baseworks-agents. Removed the duplicate at the old location.
- Rule for future token updates: the export MUST remain before the interactive guard in `.bashrc`. Non-interactive SSH sessions (n8n vault-capture, cron scripts) will not see environment variables set after the guard.
- Verified end-to-end: n8n container → SSH bridge → baseworks-agents → vault-capture.sh → Claude Haiku → git push all working.
### 2026-03-11 — FileBrowser fix + script hardening + credential sync (by: Patrick’s Mac)

- FileBrowser container was destroyed on Mar 7 by a bug in `daily-infra-updates.sh`. Recreated with `--restart unless-stopped`.
- Fixed 5 bugs in `daily-infra-updates.sh`: duplicate logs, empty versions in cron (PATH fix), compose file in `/tmp/` moved to `/opt/baseworks-config/`, inbox insertion position, git push error handling.
- Added backup health check table to daily report.
- Created this shared context file.
- Set up long-lived token (1 year) for VPS authentication via `CLAUDE_CODE_OAUTH_TOKEN` env var for both users.
### 2026-04-13 — Forum content sync: canonical location moved from ~/scripts/ to git repo (by: Patrick’s Mac)

- Discovered drift: `~/scripts/forum-content-sync.py` was 1069 lines (with DM sync, SQL-based activity sync, translation) but `/srv/baseworks/knowledge-base/scripts/forum-content-sync.py` was stuck at 714 lines. The running copy had been edited directly on 2026-04-09 without being committed back to the repo.
- Fix: committed the 1069-line running copy + `.sh` wrapper + `translate-community-content.py` to the repo (vault auto-sync commit `9a2233c`). Then repointed the crontab entry from `/home/patrick/scripts/forum-content-sync.sh` to `/srv/baseworks/knowledge-base/scripts/forum-content-sync.sh`. Renamed the `~/scripts/` copies to `.bak-20260414-015531` (kept as safety backup, do not run).
- New rule: forum sync scripts are edited through git only. See the “Forum Content Sync (System 3)” section above for path details.
- Crontab remains paused (`# PAUSED 2026-04-13 consolidation migration:` prefix) while the `community-forums-groups` folder consolidation migration is in progress (see `03-resources/plans/community-forums-consolidation-plan.md`). Cron will be re-enabled after the migration completes.
### 2026-03-31 — OPcache enabled + passwordless sudo on baseworks-web (by: Patrick’s Mac)

- OPcache was disabled for PHP 8.3 on the Cevabero server (baseworks-web). Enabled with optimized settings (256MB memory, 20K files). Origin response times dropped from ~2.2s to ~0.7s.
- Passwordless sudo granted to `bwsite_primo_82` via `/etc/sudoers.d/bwsite_primo_82`. Claude Code now has full server access for PHP config, nginx, service restarts.
- Cloudflare cache fully purged via API. SSL mode confirmed Full (Strict).
- Config backup: `/etc/php/8.3/mods-available/opcache.ini.bak.20260331`
Last updated: 2026-04-14 (Vault capture pipeline recovery + durability fixes) by Claude Code on Patrick’s Mac