Claude Code In My Pocket
A forever-running autonomous server I can talk to from anywhere, built on a Raspberry Pi 4.
Work in progress — documenting my setup journey.
Update 2026-02-06: ✅ Token expiration solved! The original 6-12 hour authentication issue has been resolved using `claude setup-token`, which generates a 1-year OAuth token. This is now automated in the Ansible playbook. For manual setup, see the Long-lived authentication section below.
Why bother?#
I found myself carrying my laptop everywhere for pretty menial tasks - research queries, ultrathinking through processes. The Ralph Wiggum plugin had me wondering if I could push more “human out of the loop” agentic work to a dedicated machine.
In terms of mobility, I have also tested the Happy mobile app when I’m on the go, both using its voice mode and text mode. To me it is a really underrated app, but it still depended on me having my laptop running Claude Code somewhere, and it could not easily spawn new sessions as I needed them.
I kept hearing that others found success with tmux and Raspberry Pi setups, so I decided to dust off my old RPi 4 and explore agentic workflows without me always being involved.
At the same time I decided that I would dive into the rabbit hole of early-days sandboxing, meaning I would be able to let my Claude Code instances run completely wild without needing any approvals from me. I could be like those startup gurus on LinkedIn who built a million-dollar company in an hour. (-:
The goal: menial work, attended or not - kick off tasks and walk away, or stay and pair, with an effortless switch between modes.
Sub-goals#
Subscription maximization#
Reuse my existing Claude Code subscription instead of paying per-token API costs. Get the most out of the flat fee.
Always-on availability#
Claude Code running 24/7 on a dedicated machine, accessible from anywhere via SSH or the Happy mobile app.
Autonomous execution#
Run with --dangerously-skip-permissions so Claude can work on tasks without waiting for my approval at every step.
Real sandboxing#
Docker-based isolation that I control, not Claude Code’s built-in sandbox. The agent shouldn’t guard its own cage.
Multi-session orchestration#
A control plane that can spawn, monitor, and kill parallel Claude sessions — each working on different repos or tasks.
Decisions#
OS choice: Ubuntu 24 LTS Server Edition. Wanted something slightly more bleeding edge than the usual Debian-based distros. (Note: Raspberry Pi OS is Debian-based, tends to lag behind on packages due to Debian’s stability-focused release cycle. Tradeoff: RPi OS has better hardware-specific optimizations and lighter resource usage — but for a headless server running Node.js, fresh packages mattered more to me. Most “Ubuntu is slow on Pi” complaints are about the desktop; headless Server Edition sidesteps that entirely.)
Pre-flight check: Verified my MCPs support ARM64 — GitHub MCP notably confirmed working.
Update (2025-01-08): The Ansible playbook fully automates deployment including long-lived OAuth tokens (via `vault_claude_oauth_token`), eliminating the 6-12 hour re-authentication issue. The article documents the manual approach for learning purposes.
Preparing the SD card#
Using Raspberry Pi Imager - the simplest and safest method.
- Download and open Raspberry Pi Imager
- Choose OS: Other general-purpose OS → Ubuntu → Ubuntu Server 24.04 LTS (64-bit)
- Select your SD card as storage
- Open the ⚙️ Advanced settings (crucial step):
- Enable SSH
- Set username/password
- Configure Wi-Fi (if not using Ethernet)
- Set hostname (e.g. `rpi-dev`)
- Set locale/timezone
- Write image and eject card
First boot#
Imaged Ubuntu Server, booted up, logged in. First task: sudo apt update && sudo apt upgrade.
It's already clear the RPi 4 is not the fastest - especially for IO operations. Patience required.
Essentials installed:
sudo apt install -y \
tmux \
git \
curl \
build-essential \
unzip \
htop \
neovim
Again, slow - took 2-3 minutes. Get used to it.
Node.js via nvm#
Installing new Node.js versions directly on the host without a version manager is a recipe for future pain; a manager makes upgrades painless. nvm has been the go-to for Node.js version management for many years now.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
nvm install --lts
pnpm#
pnpm is a faster, more disk-efficient package manager. The maintainer is also very attentive to issues and feature requests.
pnpm is unfortunately not compatible with the happy-coder app that we will be using, so we have to mix npm and pnpm a bit. But since I personally use pnpm for all my projects, I know that I’ll need to bundle it in anyway.
corepack enable
corepack prepare pnpm@latest --activate
pnpm setup
source ~/.bashrc
pnpm --version
Removing snapd#
Ubuntu ships with snapd, but on a Pi 4 it's pure overhead. The Snap Store alone can consume 250-400MB of RAM even when idle, and there are known memory-growth issues that are concerning on embedded devices. The squashfs loopback mounts also slow boot and shutdown.
We won’t be using snaps here - apt and npm cover everything we need.
sudo systemctl disable snapd --now
sudo apt purge snapd -y
sudo apt autoremove -y
Enabling zram#
SD cards are slow for swap. zram creates compressed swap space in RAM instead - typically ~3:1 compression, so 432MB of swapped data uses only ~165MB actual RAM. Users report going “from unusable and freezing to performant and stable”.
Personal experience: I’ve owned Raspberry Pi models 2, 3, and 4 over the years, and every time I’ve tried to use one for anything beyond light scripting it’s felt sluggish. I once tried setting up a Pi 3 as a simple workstation for my son - it would hang or run out of resources regularly. That history made me paranoid, so I proactively searched for community best practices before even booting this one up. No benchmarks to share, but with zram enabled the system subjectively feels snappier - fewer moments where I’m waiting on the cursor. I’m still running from microSD rather than an SSD, which I suspect would help further.
sudo apt install -y zram-tools
Works out of the box. Verify it’s running:
$ zramctl
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4 256M 4K 63B 20K 4 [SWAP]
$ swapon
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 256M 0B 100
Small CPU overhead for compression, but the Pi 4 handles it fine.
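If the default 256M device feels small for Claude-sized workloads, zram-tools reads its sizing from `/etc/default/zramswap`. A sketch of a larger allocation - the `ALGO`, `PERCENT`, and `PRIORITY` keys are from the Debian/Ubuntu zram-tools defaults file, but check the comments in your installed version before copying:

```shell
# /etc/default/zramswap - knobs read by the zramswap service
# (assumes the Debian/Ubuntu zram-tools package layout)
ALGO=lz4        # compression algorithm; lz4 favors speed over ratio
PERCENT=50      # size the zram device at 50% of RAM (2GB on a 4GB Pi 4)
PRIORITY=100    # keep zram preferred over any disk-backed swap
```

Apply with `sudo systemctl restart zramswap` and re-check `zramctl`.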
CPU governor#
Ubuntu Server defaults to ondemand, which can lag when ramping up for bursty workloads. The modern schedutil governor integrates tightly with the scheduler and is only ~1% slower than locking at max frequency - a sensible middle ground.
Same caveat as above: I haven’t benchmarked, but users on the Raspberry Pi forums report that schedutil feels more responsive than ondemand because it uses all available clock speeds rather than jumping between a few [2] and responds faster to load changes. One quirk: schedutil’s IO-wait boosting means it rarely drops below 1000 MHz even when idle — slightly higher power draw, but for a plugged-in server that’s fine. Given my track record of Pis freezing under load, any low-effort tweak that might help is worth trying. See [3] and [4] for deeper dives into these optimizations.
# Check current
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
# Switch to schedutil
echo schedutil | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# Make persistent
echo 'GOVERNOR="schedutil"' | sudo tee /etc/default/cpufrequtils
sudo apt install -y cpufrequtils
# Verify
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
tmux configuration#
Since we want our Claude Code sessions to be resilient and re-attachable, tmux is the timeless tool of choice - the go-to terminal multiplexer for headless servers. Alternatives like Zellij have reported memory issues that make them a poor fit for resource-constrained devices.
True color support#
Claude Code’s UI uses 24-bit color for syntax highlighting and status indicators. The color signal must pass through every layer: terminal → SSH → tmux → Docker → Claude. We configure tmux here (via terminal-overrides), and Docker later (via TERM and COLORTERM env vars). Skip either and you’ll get washed-out 256-color fallback.
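A quick generic probe (not from Claude Code's docs) makes it easy to test the chain: run it locally, then over SSH, then inside tmux. The word should render in the same smooth orange at every layer.

```shell
# Emit a 24-bit foreground color: SGR 38;2;<r>;<g>;<b>
printf '\x1b[38;2;255;128;0mTRUECOLOR\x1b[0m\n'
```

If you see a paletted approximation (or no color at all), a layer in between is downgrading to 256-color mode.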
cat > ~/.tmux.conf << 'EOF'
# True 24-bit color support for Claude Code
# default-terminal MUST be tmux-* or screen-*, never xterm-*
# https://github.com/tmux/tmux/wiki/FAQ
set -g default-terminal "tmux-256color"
set -ag terminal-overrides ",xterm-256color:RGB"
# Interaction
set -g mouse on
setw -g mode-keys vi
# Claude outputs are verbose - increase history (default is 2000) [6]
# https://code.claude.com/docs/en/terminal-config
set -g history-limit 50000
# Start numbering at 1 (easier to reach)
# https://builtin.com/articles/tmux-config
set -g base-index 1
setw -g pane-base-index 1
# Faster escape response (default 500ms is sluggish) [5]
# 10-20ms recommended for remote; 0 locally
set -s escape-time 15
# Required for iTerm2 control mode (-CC)
# https://gitlab.com/gnachman/iterm2/-/issues/2585
setw -g aggressive-resize off
# Clipboard integration via OSC 52 (works over SSH + Docker)
# https://github.com/tmux/tmux/wiki/Clipboard
set -g set-clipboard on
set -g allow-passthrough on
set -ag terminal-overrides ",xterm-256color:Ms=\\E]52;c;%p2%s\\7"
# Easy config reload
bind r source-file ~/.tmux.conf \; display "Reloaded!"
# Vim-style pane resize
bind-key -r h resize-pane -L 5
bind-key -r j resize-pane -D 5
bind-key -r k resize-pane -U 5
bind-key -r l resize-pane -R 5
EOF
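With `set-clipboard` and `allow-passthrough` configured above, you can exercise the clipboard path end-to-end with a raw OSC 52 sequence. This is a generic probe, not tied to Claude Code: run it inside the tmux session over SSH, and the text should land in your local clipboard (iTerm2 asks for permission the first time).

```shell
# OSC 52 copy: ESC ] 52 ; c ; <base64-encoded text> BEL
payload=$(printf 'hello from the Pi' | base64 | tr -d '\n')
printf '\033]52;c;%s\007' "$payload"
```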
tmux ergonomics#
tmux can feel awkward compared to a native terminal - scrolling, copy/paste, and mouse interactions work differently because tmux manages its own buffer and input handling. This is a historical design choice that enables session persistence across disconnections.
If you’re on macOS with iTerm2, there’s a better option: tmux control mode (-CC flag). iTerm2 communicates directly with tmux, rendering windows as native tabs with native scrolling, Cmd+F search, and normal copy/paste. We’ll configure this in the Usage section so ssh claude auto-attaches with control mode enabled.
Session architecture#
The goal: Run --dangerously-skip-permissions for autonomous work, but repos might contain secrets (.env files, credentials) we don’t want to leak.
The solution: A single Docker-sandboxed session. The container only sees mounted directories (~/Repos), can’t touch ~/.ssh, ~/.aws, or system configs. When you need to review actions before they execute, toggle off --dangerously-skip-permissions within the session.
Why Docker over Claude Code’s “native” sandboxing?#
Claude Code has built-in sandboxing using bubblewrap and seccomp on Linux. Sounds great in theory - but:
- Known bugs with deny permissions - settings are not reliably enforced, allowing access to files explicitly denied.
- Sandbox escape without prompts - a bug report shows Claude retrying with `dangerouslyDisableSandbox: true` and executing outside the sandbox with no permission prompt, even when `allowUnsandboxedCommands: false` is configured.
- Philosophical issue - letting Claude Code be its own security guard is backwards. The agent deciding when to escape its own sandbox is like letting the prisoner hold the keys.
Docker provides real isolation:
- OS-level boundaries Claude literally cannot escape
- Mount only what you want accessible
- Network restrictions at the container level
- No “escape hatch” mechanism to bypass
Performance impact on Pi 4:
- ~2.67% CPU overhead - negligible
- ~300-400MB memory for Docker daemon (varies with workload)
- IO overhead bypassed by using Docker volumes for workspace mounts
Docker setup#
Install Docker:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
Reboot now to apply the docker group membership system-wide (required for systemd services):
sudo reboot
After reboot, verify Docker works:
docker --version
docker run hello-world
Create the Claude Code image:
mkdir -p ~/claude-sandbox-image && cd ~/claude-sandbox-image
cat > Dockerfile << 'EOF'
FROM node:22-slim
RUN apt-get update && apt-get install -y \
git \
curl \
sudo \
&& rm -rf /var/lib/apt/lists/*
# Enable corepack for pnpm (available for projects)
RUN corepack enable && corepack prepare pnpm@latest --activate
# Install global packages with npm (happy-coder has issues with pnpm's isolated home)
# happy-coder (happy.engineering) wraps Claude Code for phone access via text/voice
RUN npm install -g @anthropic-ai/claude-code happy-coder
# node user already has UID 1000, matching typical host user for bind mounts
RUN echo "node ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER node
# Create tmp directory on same filesystem as .claude to avoid EXDEV errors [1]
RUN mkdir -p /home/node/.claude/tmp
# Git config for commits (customize these)
RUN git config --global user.name "Claude (Pi)" && \
git config --global user.email "claude@localhost"
WORKDIR /workspace
# Using happy-coder wrapper; replace with "claude" if not using happy
ENTRYPOINT ["happy", "--dangerously-skip-permissions"]
EOF
docker build -t claude-code .
Build takes ~6 minutes on Pi 4. (The tmp directory workaround avoids EXDEV errors during plugin installs [1].)
Create directories and configure security:
mkdir -p ~/Repos # Docker will have access to this
mkdir -p ~/.claude-docker # Credentials for sandboxed Claude
# Block Claude from reading sensitive files
cat > ~/.claude-docker/settings.json << 'EOF'
{
"permissions": {
"deny": [
"Read(**/.env)",
"Read(**/.env.*)",
"Read(**/*credentials*)",
"Read(**/*secret*)"
]
}
}
EOF
# Document the environment for project sessions
cat > ~/Repos/CLAUDE.md << 'EOF'
# Project Sessions
This is the workspace root for sandboxed Claude project sessions.
## Environment
- Running in Docker with `--dangerously-skip-permissions`
- Full access to all repos in this directory
- SSH keys available for git operations
## tmux Navigation
- `Ctrl-b w` - list all windows
- `Ctrl-b d` - detach (session keeps running)
EOF
Create launcher script:
cat > ~/claude-sandbox.sh << 'EOF'
#!/bin/bash
docker run -it --rm \
--init \
--memory 3g \
-v ~/Repos:/workspace \
-v ~/.claude-docker:/home/node/.claude \
-e TERM=xterm-256color \
-e COLORTERM=truecolor \
-e TMPDIR=/home/node/.claude/tmp \
-e CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
claude-code "$@"
EOF
chmod +x ~/claude-sandbox.sh
A quick tour of the flags:
- `--init` ensures proper signal handling and prevents zombie-process accumulation when Claude spawns child processes (git, npm, bash).
- `--memory 3g` prevents runaway processes from crashing the host - the container gets OOM-killed instead of taking down the entire Pi.
- `TERM` and `COLORTERM` enable true 24-bit color in Claude's output - without them, Docker defaults to basic colors that look washed out.
- `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC` disables telemetry, error reporting, and auto-updates - useful for privacy and for reducing unnecessary network traffic on the Pi.
Note: We intentionally avoid passing ANTHROPIC_API_KEY. Claude Code prioritizes API keys over subscription auth - if that env var leaks into the container, you’ll get unexpected pay-per-token charges instead of using your subscription.
Test and authenticate the sandbox:
~/claude-sandbox.sh
First run prompts for login (separate credentials from host) and the bypass permissions warning. Accept, then /exit to continue setup.
Long-lived authentication#
By default, Claude Code tokens expire after 6-12 hours (GitHub issue #12447). For a true “forever-running” setup, use a 1-year token:
Generate the token:
claude setup-token
Copy the token and store it securely:
read -sp "Paste 1-year token: " token && \
echo "$token" > ~/.claude-docker/.oauth_token && \
chmod 600 ~/.claude-docker/.oauth_token && \
echo -e "\nToken saved."
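Since everything under `~/.claude-docker` holds live credentials, a tiny pre-flight check can catch missing or world-readable token files before a session launches. `check_token` is a hypothetical helper, not part of Claude Code; the demo runs it on a throwaway file, but in practice you'd point it at `~/.claude-docker/.oauth_token`:

```shell
#!/bin/bash
# check_token FILE: fail unless FILE exists with owner-only permissions (600/400)
check_token() {
  local f="$1" mode
  [ -f "$f" ] || { echo "missing: $f"; return 1; }
  mode=$(stat -c '%a' "$f")   # GNU stat, fine on Ubuntu
  case "$mode" in
    600|400) echo "ok: $f (mode $mode)" ;;
    *)       echo "loose perms ($mode): $f"; return 1 ;;
  esac
}

# Demo on a throwaway file
tmp=$(mktemp)
chmod 600 "$tmp"
check_token "$tmp"
rm -f "$tmp"
```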
Update the launcher to pass the token:
cat > ~/claude-sandbox.sh << 'EOF'
#!/bin/bash
docker run -it --rm \
--init \
--memory 3g \
-v ~/Repos:/workspace \
-v ~/.claude-docker:/home/node/.claude \
-e TERM=xterm-256color \
-e COLORTERM=truecolor \
-e TMPDIR=/home/node/.claude/tmp \
-e CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
-e CLAUDE_CODE_OAUTH_TOKEN="$(cat ~/.claude-docker/.oauth_token 2>/dev/null)" \
claude-code "$@"
EOF
chmod +x ~/claude-sandbox.sh
Note: The Ansible playbook automates this via `vault_claude_oauth_token` in the encrypted vault.
Autostart with systemd#
Enable lingering so user services start at boot (not just login):
sudo loginctl enable-linger $(whoami)
Create the tmux service:
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/claude-tmux.service << 'EOF'
[Unit]
Description=Claude Code tmux session
# Rate limit: max 3 start attempts in 60 seconds, then give up
# (StartLimit* settings belong in [Unit] since systemd 230)
# Check logs with: journalctl --user -u claude-tmux
StartLimitIntervalSec=60
StartLimitBurst=3
[Service]
Type=forking
ExecStart=/usr/bin/tmux new-session -d -s main -n main %h/claude-sandbox.sh
ExecStop=/usr/bin/tmux kill-session -t main
# Always restart when the session exits (clean or crash)
# Does NOT restart when stopped via: systemctl --user stop claude-tmux
Restart=always
RestartSec=5
[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user enable claude-tmux
systemctl --user start claude-tmux
Verify:
tmux ls
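For day-to-day checks, a small status helper shows whether the autostart chain is healthy at a glance. This is a convenience sketch of my own, not part of the article's required setup (the script name is made up), and each check degrades gracefully if a tool is missing:

```shell
#!/bin/bash
# claude-status.sh - report the systemd unit and tmux session state
if systemctl --user is-active --quiet claude-tmux 2>/dev/null; then
  service_state="active"
else
  service_state="not active"
fi
if tmux has-session -t main 2>/dev/null; then
  tmux_state="present"
else
  tmux_state="missing"
fi
echo "service: $service_state"
echo "tmux session 'main': $tmux_state"
```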
Remote access with Tailscale#
So far we’ve assumed local network access. But what if you want to reach the Pi from a coffee shop, office, or while traveling?
Why Tailscale: Tailscale creates a private mesh VPN using WireGuard. No router configuration, no port forwarding, no dynamic DNS. The free tier covers personal use with up to 100 devices. It just works.
On the Pi:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
This prints an auth URL - open it in your browser and approve the device.
Headless alternative: If you can’t access a browser from the Pi, generate an auth key at login.tailscale.com/admin/settings/keys first:
sudo tailscale up --authkey=tskey-auth-xxxxx
Tailscale installs as a systemd service that auto-starts on boot. You only run tailscale up once - authentication persists across reboots.
On your laptop/phone:
Download from tailscale.com/download, install, sign in. Both devices now see each other on a private 100.x.x.x network.
Find your Pi’s Tailscale IP:
tailscale ip -4
Pro tip: Use the Tailscale IP everywhere. Tailscale is smart - when both devices are on the same LAN, traffic stays on the LAN and establishes direct connections with negligible overhead. So instead of maintaining separate configs for local vs remote, just use the Tailscale IP (100.x.x.x) as your hostname. One config that works from home, office, or anywhere.
Alternative - MagicDNS: Enable it in the admin console and SSH by hostname instead of IP:
ssh your-username@rpi-dev
Usage#
SSH config for auto-attach#
Configure your local machine to auto-attach to tmux on connect. Add to ~/.ssh/config:
# Pi Claude server (use Tailscale IP - works from anywhere)
Host claude-shell claude
HostName YOUR_TAILSCALE_IP
User YOUR_USERNAME
IdentityFile ~/.ssh/id_ed25519
# Connection multiplexing - ~10x faster subsequent connections [7]
# https://gist.github.com/rtomayko/502aefc63d26e80ab6d7c0db66f2cb69
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 10m
# Keepalive for NAT traversal
ServerAliveInterval 60
ServerAliveCountMax 3
# Direct shell access (no tmux)
Host claude-shell
# Auto-attach to tmux session (iTerm2 control mode for native scrolling)
Host claude
RequestTTY yes
RemoteCommand tmux -CC attach-session -t main
Note: We use `attach-session` (not `new-session -A`) so systemd is the sole creator of sessions. If the session doesn't exist yet, the SSH command will fail - just wait a few seconds for systemd to restart it.
Note: Avoid enabling SSH compression (`Compression yes`) on fast local networks. The Pi 4's CPU is already the bottleneck for SSH throughput (~30 MB/s), and compression overhead makes this worse. Over slow or metered connections, compression might help - but Tailscale's direct LAN mode means you're usually on a fast path anyway.
Setup: Create the control socket directory on your local machine:
mkdir -p ~/.ssh/sockets && chmod 700 ~/.ssh/sockets
Now ssh claude drops you directly into the tmux session with native iTerm2 tabs and scrolling - no extra commands needed. The -CC flag enables tmux control mode which iTerm2 renders natively.
Use ssh claude-shell when you need direct shell access (e.g., for scp or debugging).
Non-iTerm2 users: Remove the `-CC` flag for standard tmux behavior, or switch to a terminal that supports tmux control mode (iTerm2 is the main one).
iTerm2 tmux settings (macOS only)#
Skip this section if you’re not on macOS with iTerm2. The SSH config above works with any terminal - you just won’t get native tabs/scrolling.
For the best experience with tmux control mode, configure these settings in Settings → General → tmux:
| Setting | Value | Why |
|---|---|---|
| When attaching, restore windows as | Tabs in the attaching window | Pi tmux tabs appear alongside your local tabs in the same window |
| Automatically bury the tmux client session | ✓ Enabled | Hides the control session tab, keeping your tab bar clean |
| Mirror tmux paste buffer to local clipboard | ✓ Enabled | Seamless copy/paste between tmux and macOS |
With these settings, running ssh claude from any iTerm2 window adds your Pi’s tmux tabs right next to your existing local tabs - with native scrolling, Cmd+F search, and normal copy/paste.
Note: Since iTerm2 3.3, tmux sessions inherit your active profile’s settings by default. If you want a dedicated appearance for Pi sessions (e.g., different colors to distinguish remote work), enable “Use ‘tmux’ profile” and customize the “tmux” profile.
Starting Claude#
The systemd service auto-starts Claude in tmux at boot. When you ssh claude, you’ll attach directly to the running session - no manual startup needed.
If Claude isn’t running (e.g., after a crash or manual stop), start it with:
~/claude-sandbox.sh
First run will prompt for auth - both Claude Code and happy-coder need separate logins. The happy auth may prompt twice; just follow through. Credentials persist in ~/.claude-docker/.
The sandbox can only access ~/Repos - your host system is protected.
Why no separate “safe mode” window? The `--dangerously-skip-permissions` flag can be toggled off within a running Claude session. When you need to review actions before they execute, just disable it temporarily. One sandboxed session handles both autonomous and interactive workflows.
Upgrade script#
cat > ~/upgrade-claude.sh << 'EOF'
#!/bin/bash
set -e
echo "=== Upgrading Claude Code ==="
cd ~/claude-sandbox-image
docker build --pull --no-cache -t claude-code .
echo ""
echo "Version: $(docker run --rm claude-code claude --version)"
EOF
chmod +x ~/upgrade-claude.sh
Run whenever you want to update:
~/upgrade-claude.sh
Create helper scripts for shell access and tmux attachment:
cat > ~/claude-sandbox-shell.sh << 'EOF'
#!/bin/bash
docker run -it --rm \
-v ~/.claude-docker:/home/node/.claude \
--entrypoint /bin/bash \
claude-code
EOF
cat > ~/claude-attach.sh << 'EOF'
#!/bin/bash
tmux attach -t main
EOF
chmod +x ~/claude-sandbox-shell.sh ~/claude-attach.sh
GitHub MCP server#
To search code, manage issues, and interact with GitHub repos directly from Claude, we’ll add the GitHub MCP server.
Two approaches exist:
- HTTP transport - connect to GitHub's hosted MCP endpoint at `api.githubcopilot.com`
- Local Docker - run the MCP server locally in a container, which then calls GitHub's API
Both ultimately hit GitHub’s API - the difference is where the MCP server logic runs. For a resource-constrained Pi, HTTP transport makes more sense: no extra container overhead, and GitHub’s co-located servers likely respond faster than a local Node.js process on slow ARM hardware.
Potential downside: You’re dependent on GitHub maintaining this endpoint. If they deprecate it, you’d need to switch to the Docker approach.
Setup:
First, create a fine-grained Personal Access Token:
- Set expiration and repository access (recommend “All repositories”)
- Under Repository permissions, add:
- Contents → Read-only (search code, read files)
- Issues → Read and write (manage issues)
- Pull requests → Read and write (manage PRs)
- Metadata → Read-only (auto-required)
Note: Fine-grained PATs don't support organization team tools. If you need those, use a classic PAT with `repo` and `read:org` scopes instead.
Store the token securely:
read -sp "Paste GitHub token: " token && echo "$token" > ~/.claude-docker/.github_token && chmod 600 ~/.claude-docker/.github_token && echo -e "\nToken saved."
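This read-then-chmod one-liner repeats for every token, so it's worth folding into a function. `store_token` is a hypothetical helper following the same pattern; the `umask 077` subshell means the file is never world-readable even briefly (the one-liners chmod only after writing). The demo pipes input into a throwaway path - interactively you'd run `store_token ~/.claude-docker/.github_token` and paste at the prompt:

```shell
#!/bin/bash
# store_token FILE: read a secret without echoing, write it with 0600 perms
store_token() {
  local file="$1" token
  read -rsp "Paste token: " token
  ( umask 077; printf '%s\n' "$token" > "$file" )
  echo "saved $file (mode $(stat -c '%a' "$file"))"
}

# Demo with piped input and a throwaway path
tmp=$(mktemp -u)
printf 'example-secret\n' | store_token "$tmp"
grep -qx 'example-secret' "$tmp" && echo "round-trip ok"
rm -f "$tmp"
```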
Update the launcher script to pass the token as an environment variable:
cat > ~/claude-sandbox.sh << 'EOF'
#!/bin/bash
docker run -it --rm \
--init \
--memory 3g \
-v ~/Repos:/workspace \
-v ~/.claude-docker:/home/node/.claude \
-e TERM=xterm-256color \
-e COLORTERM=truecolor \
-e TMPDIR=/home/node/.claude/tmp \
-e CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
-e GITHUB_PAT="$(cat ~/.claude-docker/.github_token 2>/dev/null)" \
-e CLAUDE_CODE_OAUTH_TOKEN="$(cat ~/.claude-docker/.oauth_token 2>/dev/null)" \
-e CLAUDE_CONFIG_DIR="/home/node/.claude" \
claude-code "$@"
EOF
Add the MCP server from within a Claude session. Press ! to drop to bash, then run:
claude mcp add -t http github https://api.githubcopilot.com/mcp \
-H "Authorization: Bearer \${GITHUB_PAT}"
Type exit to return to Claude, then /mcp to verify the server connected.
Adding more MCP servers#
The same pattern works for any HTTP-based MCP server:
- Store the token in `~/.claude-docker/` with `chmod 600`
- Update `claude-sandbox.sh` to pass the token via `-e ENV_VAR="$(cat ~/.claude-docker/.token_file 2>/dev/null)"`
- From within Claude, press `!` and run `claude mcp add`
Example adding Context7 (up-to-date library docs):
# On host: store token
read -sp "Paste Context7 token: " token && echo "$token" > ~/.claude-docker/.context7_token && chmod 600 ~/.claude-docker/.context7_token && echo -e "\nToken saved."
# Add to claude-sandbox.sh: -e CONTEXT7_TOKEN="$(cat ~/.claude-docker/.context7_token 2>/dev/null)"
# From within Claude session (press ! for bash):
claude mcp add -t http context7 https://mcp.context7.com/mcp \
-H "CONTEXT7_API_KEY: \${CONTEXT7_TOKEN}"
For stdio-based MCP servers, install them in the Docker image and use claude mcp add <name> -- <command>.
Git SSH access#
The container needs SSH keys to clone private repos. We’ll generate a dedicated keypair (separate from your host keys) and mount it into the container.
Generate keys:
mkdir -p ~/.claude-docker/.ssh
ssh-keygen -t ed25519 -C "claude-pi" -f ~/.claude-docker/.ssh/id_ed25519 -N ""
Add to GitHub:
cat ~/.claude-docker/.ssh/id_ed25519.pub
Copy this output and add it at github.com/settings/keys.
Configure SSH for GitHub:
cat > ~/.claude-docker/.ssh/config << 'EOF'
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking accept-new
EOF
chmod 700 ~/.claude-docker/.ssh
chmod 600 ~/.claude-docker/.ssh/*
Update launcher script to add SSH and happy-coder mounts:
mkdir -p ~/.claude-docker/.happy
cat > ~/claude-sandbox.sh << 'EOF'
#!/bin/bash
docker run -it --rm \
--init \
--memory 3g \
-v ~/Repos:/workspace \
-v ~/.claude-docker:/home/node/.claude \
-v ~/.claude-docker/.ssh:/home/node/.ssh:ro \
-v ~/.claude-docker/.happy:/home/node/.happy \
-e TERM=xterm-256color \
-e COLORTERM=truecolor \
-e TMPDIR=/home/node/.claude/tmp \
-e CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
-e GITHUB_PAT="$(cat ~/.claude-docker/.github_token 2>/dev/null)" \
-e CONTEXT7_TOKEN="$(cat ~/.claude-docker/.context7_token 2>/dev/null)" \
-e CLAUDE_CODE_OAUTH_TOKEN="$(cat ~/.claude-docker/.oauth_token 2>/dev/null)" \
-e CLAUDE_CONFIG_DIR="/home/node/.claude" \
claude-code "$@"
EOF
A few notes on the new mounts and variables:
- The `:ro` mount makes SSH keys read-only inside the container.
- The `.happy` mount persists happy-coder authentication between container restarts.
- `CLAUDE_CONFIG_DIR` is required for credential persistence - without it, Claude Code won't recognize the mounted credentials directory.
- The `2>/dev/null` suppresses errors for optional tokens you haven't set up yet.
Test:
~/claude-sandbox.sh
# Then inside Claude:
# > "Clone git@github.com:your-username/private-repo.git"
VS Code Remote SSH#
Edit files on the Pi directly from VS Code on your local machine.
I wanted to briefly test this just to see if it was working, but realistically I will probably not be using it. It’s always nice to know that you can gain the convenience of an IDE with your server, and I’ll forever love the devs who made Remote SSH a thing with VS Code.
Prerequisites:
- Install the “Remote - SSH” extension in VS Code
- SSH key access to the Pi (password-less login)
SSH config (on your local machine):
If you followed the Usage section, you already have claude and claude-shell hosts configured. VS Code Remote SSH works with both - use claude-shell if you want VS Code’s integrated terminal to open a regular shell instead of auto-attaching to tmux.
Connect: Open the command palette (Cmd+Shift+P on Mac, Ctrl+Shift+P on Windows/Linux) → “Remote-SSH: Connect to Host” → select claude or claude-shell.
Auto-attach to tmux session:
Create VS Code config files in the Repos folder on the Pi:
mkdir -p ~/Repos/.vscode
cat > ~/Repos/.vscode/tasks.json << 'EOF'
{
"version": "2.0.0",
"tasks": [
{
"label": "Attach to Claude tmux",
"type": "shell",
"command": "tmux attach -t main",
"presentation": {
"reveal": "always",
"panel": "dedicated",
"focus": true
},
"runOptions": {
"runOn": "folderOpen"
},
"problemMatcher": []
}
]
}
EOF
cat > ~/Repos/.vscode/settings.json << 'EOF'
{
"terminal.integrated.profiles.linux": {
"tmux-main": {
"path": "tmux",
"args": ["attach", "-t", "main"]
}
},
"terminal.integrated.defaultProfile.linux": "tmux-main"
}
EOF
Now when you open ~/Repos via Remote SSH, VS Code will:
- Auto-run a task that attaches to the Claude tmux session
- Use tmux-attach as the default terminal profile
Note: VS Code will ask “This folder has tasks that run automatically. Allow?” - click Allow.
Quick open from terminal (on your local machine):
code --remote ssh-remote+claude /home/YOUR_USERNAME/Repos
Multi-session architecture#
Spawning additional sessions#
The sandboxed Docker Claude already has access to all of ~/Repos - so for most cases, you can just cd to different projects within the same session. But there are valid reasons to want separate windows:
- Parallel work - run autonomous tasks on multiple repos simultaneously
- Context isolation - keep Claude’s context fresh per-project
Helper script:
cat > ~/claude-sandbox.sh << 'EOF'
#!/bin/bash
set -e

# Project session: full access to ~/Repos
run_project() {
  local prompt="$1"
  docker run -it --rm \
    --init \
    --memory 3g \
    --add-host=host.docker.internal:host-gateway \
    -v ~/Repos:/workspace \
    -v ~/.claude-docker:/home/node/.claude \
    -v ~/.claude-docker/.ssh:/home/node/.ssh:ro \
    -v ~/.claude-docker/.happy:/home/node/.happy \
    -w /workspace \
    -e TERM=xterm-256color \
    -e COLORTERM=truecolor \
    -e TMPDIR=/home/node/.claude/tmp \
    -e CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
    -e GITHUB_PAT="$(cat ~/.claude-docker/.github_token 2>/dev/null)" \
    -e CONTEXT7_TOKEN="$(cat ~/.claude-docker/.context7_token 2>/dev/null)" \
    -e CLAUDE_CODE_OAUTH_TOKEN="$(cat ~/.claude-docker/.oauth_token 2>/dev/null)" \
    -e CLAUDE_CONFIG_DIR="/home/node/.claude" \
    claude-code ${prompt:+"$prompt"}
}

# --run "prompt": internal, called by tmux to run project session
if [[ "$1" == "--run" ]]; then
  run_project "$2"
  exit 0
fi

# No args: run control plane (isolated - no repo access, no SSH)
if [[ -z "$1" ]]; then
  docker run -it --rm \
    --init \
    --memory 3g \
    --add-host=host.docker.internal:host-gateway \
    -v ~/.claude-control-plane/workspace:/workspace \
    -v ~/.claude-docker:/home/node/.claude \
    -v ~/.claude-docker/.happy:/home/node/.happy \
    -e TERM=xterm-256color \
    -e COLORTERM=truecolor \
    -e TMPDIR=/home/node/.claude/tmp \
    -e CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
    -e GITHUB_PAT="$(cat ~/.claude-docker/.github_token 2>/dev/null)" \
    -e CONTEXT7_TOKEN="$(cat ~/.claude-docker/.context7_token 2>/dev/null)" \
    -e CLAUDE_CODE_OAUTH_TOKEN="$(cat ~/.claude-docker/.oauth_token 2>/dev/null)" \
    -e CLAUDE_CONFIG_DIR="/home/node/.claude" \
    claude-code
  exit 0
fi

# --spawn "prompt" "window_name": MCP entry point for spawning sessions
if [[ "$1" == "--spawn" ]]; then
  PROMPT="$2"
  WINDOW_NAME="${3:-session-$(date +%s)}"
  ESCAPED="${PROMPT//\'/\'\\\'\'}"
  tmux new-window -t main: -n "$WINDOW_NAME" \
    "$HOME/claude-sandbox.sh --run '$ESCAPED'"
  echo "Spawned window '$WINDOW_NAME'"
  exit 0
fi

echo "Usage:"
echo "  $0                          Run control plane (isolated)"
echo "  $0 --spawn 'prompt' 'name'  Spawn project session via MCP"
exit 1
EOF
chmod +x ~/claude-sandbox.sh
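The trickiest part of the --spawn branch is the quote escaping: the prompt gets wrapped in single quotes for tmux, so any single quotes inside it must be rewritten as '\''. A quick self-contained round-trip check of that substitution (the prompt text is made up; the syntax is bash-specific):

```shell
#!/bin/bash
# Same substitution the --spawn branch uses: rewrite each ' as '\''
PROMPT="Fix the 'login' bug"
ESCAPED="${PROMPT//\'/\'\\\'\'}"
# Feed the escaped text back through a single-quoted context, as tmux would
ROUNDTRIP=$(bash -c "printf '%s' '$ESCAPED'")
echo "$ROUNDTRIP"
```

If the escaping is right, the round-tripped text comes back identical to the original prompt, quotes and all.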
Key design: Host never touches git. SSH keys stay inside Docker containers only. When you pass a git URL, Claude receives it as an instruction and handles cloning itself.
Isolation model:
- Control plane (no args): Isolated workspace at ~/.claude-control-plane/workspace. No access to repos or SSH keys. Uses MCP tools to spawn project sessions.
- Project sessions (--spawn): Full access to ~/Repos and SSH keys for git operations. Claude handles all git commands.
The --add-host flag lets Docker containers reach the host via host.docker.internal for MCP access.
Set up the control plane workspace:
mkdir -p ~/.claude-control-plane/workspace
cat > ~/.claude-control-plane/workspace/CLAUDE.md << 'EOF'
# Control Plane
You are the orchestration layer for Claude sessions on this Raspberry Pi.
## Your Role
- Spawn project sessions via MCP tools
- Manage running sessions (list, kill)
- Help the user decide what to work on
## What You CAN'T Do
- Access code repositories directly (by design)
- Use git or SSH (no keys mounted)
- Modify project files
## Available MCP Tools
Use `list_sessions` to see running windows, `spawn_session` to start new ones:
- `spawn_session({ repo: "github.com/user/repo" })` - clone and explore
- `spawn_session({ instruction: "Research X" })` - task without repo
- `spawn_session({ repo: "...", instruction: "Fix the login bug" })` - clone then task
- `spawn_session({ instruction: "...", window_name: "research" })` - custom window name
- `kill_session("window-name")` - close a session
- `end_session("window-name")` - end a session
## Session Persistence
This workspace is intentionally minimal. Don't store files here - use project sessions for actual work.
EOF
Usage:
# Run control plane (isolated - no repo access)
~/claude-sandbox.sh
# MCP entry point: spawn project session with instructions
~/claude-sandbox.sh --spawn "Clone github.com/user/repo and fix the tests" "repo-name"
~/claude-sandbox.sh --spawn "Research WebSocket auth patterns" "research"
The control plane uses the tmux-control-plane MCP server (set up below) to spawn project sessions - with full flexibility to pass any instructions.
Quick navigation - add to ~/.tmux.conf:
# List all Claude windows
bind C-c choose-tree -s -f '#{==:#{session_name},main}'
Now Ctrl-b Ctrl-c shows all your Claude windows for quick switching.
tmux-control-plane MCP server#
Want Claude to spawn new sessions itself? This MCP server runs on the host and exposes tools for session management. Claude inside Docker connects to it via HTTP.
Architecture:
┌─────────────────────┐ ┌─────────────────────┐
│ Docker (Claude) │ │ Host (Pi) │
│ │ HTTP │ │
│ Uses spawn_session │────────▶│ MCP server :3100 │
│ tool natively │ │ runs claude-sandbox│
└─────────────────────┘ └─────────────────────┘
Tools exposed:
- list_sessions - Show active tmux windows
- spawn_session({ repo, instruction, window_name }) - Spawn a new session (1-3 min startup)
- kill_session(target) - Close a window by name or index
- end_session(window) - End a session by window name
Create the MCP server:
mkdir -p ~/tmux-control-plane
cat > ~/tmux-control-plane/package.json << 'EOF'
{
  "name": "tmux-control-plane",
  "version": "1.0.0",
  "type": "module",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.21.2"
  }
}
EOF
Create server.js:
cat > ~/tmux-control-plane/server.js << 'EOF'
import express from "express";
import { execSync, exec } from "child_process";

const app = express();
const PORT = 3100;
const BEARER_TOKEN = "tmux-local-dev";

app.use(express.json());

function run(cmd) {
  try {
    return execSync(cmd, { encoding: "utf8", timeout: 30000 }).trim();
  } catch (e) {
    throw new Error(e.stderr || e.message);
  }
}

const TOOLS = [
  {
    name: "list_sessions",
    description: "List all active tmux windows in the Claude session",
    inputSchema: { type: "object", properties: {}, required: [] }
  },
  {
    name: "spawn_session",
    description: "Spawn a new Claude project session. Can clone repos, run tasks, or start blank sessions.",
    inputSchema: {
      type: "object",
      properties: {
        repo: { type: "string", description: "Git URL to clone (e.g., github.com/user/repo)" },
        instruction: { type: "string", description: "Task or instructions for Claude" },
        window_name: { type: "string", description: "Custom window name (auto-generated if not provided)" }
      },
      required: []
    }
  },
  {
    name: "kill_session",
    description: "Kill a tmux window by name or index",
    inputSchema: {
      type: "object",
      properties: {
        target: { type: "string", description: "Window name or index to kill" }
      },
      required: ["target"]
    }
  },
  {
    name: "end_session",
    description: "End a Claude session by window name. Use list_sessions first to find window names.",
    inputSchema: {
      type: "object",
      properties: {
        window: { type: "string", description: "Window name from list_sessions" }
      },
      required: ["window"]
    }
  }
];

const toolHandlers = {
  list_sessions: async () => {
    try {
      const output = run(`tmux list-windows -t main -F "#{window_index}: #{window_name}"`);
      return { content: [{ type: "text", text: output || "No windows found" }] };
    } catch (e) {
      return { content: [{ type: "text", text: `Error: ${e.message}` }], isError: true };
    }
  },
  spawn_session: async ({ repo, instruction, window_name }) => {
    // Build prompt based on inputs
    let prompt = "";
    if (repo && instruction) {
      prompt = `Clone ${repo}, then ${instruction}`;
    } else if (repo) {
      prompt = `Clone ${repo} and explore the codebase`;
    } else if (instruction) {
      prompt = instruction;
    }
    // else blank session

    // Determine window name
    let windowName = window_name;
    if (!windowName) {
      if (repo) {
        windowName = repo.split("/").pop().replace(".git", "");
      } else {
        windowName = `session-${Date.now()}`;
      }
    }

    // Escape for shell
    const escapedPrompt = prompt.replace(/'/g, "'\\''");
    const escapedWindow = windowName.replace(/'/g, "'\\''");

    // Use bash -l for login shell (loads docker group, nvm, PATH, etc.)
    const cmd = `bash -l -c '${process.env.HOME}/claude-sandbox.sh --spawn "${escapedPrompt}" "${escapedWindow}"'`;
    try {
      const result = await new Promise((resolve, reject) => {
        exec(cmd, { timeout: 600000 }, (err, stdout, stderr) => {
          if (err) reject(new Error(stderr || err.message));
          else resolve(stdout);
        });
      });
      console.log("Spawned:", result);
      return { content: [{ type: "text", text: `Spawned "${windowName}". Use list_sessions to see all windows.` }] };
    } catch (e) {
      console.error("Spawn error:", e.message);
      return { content: [{ type: "text", text: `Failed to spawn "${windowName}": ${e.message}` }], isError: true };
    }
  },
  kill_session: async ({ target }) => {
    try {
      run(`tmux kill-window -t "main:${target}"`);
      return { content: [{ type: "text", text: `Killed window: ${target}` }] };
    } catch (e) {
      return { content: [{ type: "text", text: `Error: ${e.message}` }], isError: true };
    }
  },
  end_session: async ({ window }) => {
    try {
      run(`tmux kill-window -t "main:${window}"`);
      return { content: [{ type: "text", text: `Session "${window}" ended.` }] };
    } catch (e) {
      return { content: [{ type: "text", text: `Error: ${e.message}` }], isError: true };
    }
  }
};

function validateAuth(req, res, next) {
  const auth = req.headers.authorization;
  if (!auth || auth !== `Bearer ${BEARER_TOKEN}`) {
    return res.status(401).json({ error: "Unauthorized" });
  }
  next();
}

async function handleJsonRpc(request) {
  const { method, params, id } = request;
  switch (method) {
    case "initialize":
      return {
        jsonrpc: "2.0", id,
        result: {
          protocolVersion: "2024-11-05",
          capabilities: { tools: {} },
          serverInfo: { name: "tmux-control-plane", version: "1.0.0" }
        }
      };
    case "notifications/initialized":
      return null;
    case "tools/list":
      return { jsonrpc: "2.0", id, result: { tools: TOOLS } };
    case "tools/call": {
      const { name, arguments: args } = params;
      const handler = toolHandlers[name];
      if (!handler) {
        return { jsonrpc: "2.0", id, error: { code: -32601, message: `Unknown tool: ${name}` } };
      }
      const result = await handler(args || {});
      return { jsonrpc: "2.0", id, result };
    }
    default:
      return { jsonrpc: "2.0", id, error: { code: -32601, message: `Method not found: ${method}` } };
  }
}

app.post("/mcp", validateAuth, async (req, res) => {
  try {
    const response = await handleJsonRpc(req.body);
    if (response === null) return res.status(202).end();
    res.json(response);
  } catch (e) {
    res.status(500).json({ jsonrpc: "2.0", id: req.body?.id, error: { code: -32603, message: e.message } });
  }
});

app.get("/health", (req, res) => res.json({ status: "ok" }));

app.listen(PORT, "0.0.0.0", () => {
  console.log(`tmux-control-plane running on http://0.0.0.0:${PORT}/mcp`);
});
EOF
# Install dependencies
cd ~/tmux-control-plane && pnpm install
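One detail worth calling out from spawn_session: when no window_name is given, the default is derived from the repo URL - last path segment, minus a trailing .git. The same rule expressed in shell, with a made-up URL:

```shell
# basename strips the directory part, then the .git suffix
repo="github.com/user/my-project.git"
window_name="$(basename "$repo" .git)"
echo "$window_name"   # → my-project
```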
Create wrapper script (nvm isn’t available in systemd’s environment):
cat > ~/tmux-control-plane/start.sh << 'EOF'
#!/bin/bash
source ~/.nvm/nvm.sh
cd ~/tmux-control-plane
exec node server.js
EOF
chmod +x ~/tmux-control-plane/start.sh
Create systemd service:
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/tmux-control-plane.service << 'EOF'
[Unit]
Description=tmux-control-plane MCP server
After=network.target
[Service]
Type=simple
ExecStart=%h/tmux-control-plane/start.sh
Restart=always
RestartSec=5
Environment=NODE_ENV=production
[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user enable tmux-control-plane
systemctl --user start tmux-control-plane
Configure the control plane to connect:
Only the control plane needs this MCP - project sessions use user-level MCPs added via claude mcp add.
cat > ~/.claude-control-plane/workspace/.mcp.json << 'EOF'
{
  "mcpServers": {
    "tmux-control-plane": {
      "type": "http",
      "url": "http://host.docker.internal:3100/mcp",
      "headers": {
        "Authorization": "Bearer tmux-local-dev"
      }
    }
  }
}
EOF
The Bearer token bypasses Claude Code’s OAuth requirement for HTTP MCP servers. For a local trusted server, this simple token is sufficient.
Verify:
# Check service is running
systemctl --user status tmux-control-plane
# Test endpoint
curl -s http://localhost:3100/health
# View logs (useful for debugging spawn failures)
journalctl --user -u tmux-control-plane -f
Restart the control plane to pick up the new MCP config:
systemctl --user restart claude-tmux
Then ssh claude and run /mcp to verify tmux-control-plane appears with 4 tools.
Usage from within Claude:
- “List my tmux sessions”
- “Spawn a session to clone github.com/user/repo”
- “Spawn a session to research WebSocket authentication patterns”
- “Clone github.com/user/repo and fix the failing tests”
- “Kill the portfolio window”
Session persistence with tmux plugins#
tmux sessions survive disconnections, but not reboots. These plugins add persistence:
- tmux-resurrect - Manual save/restore of session layout, working directories, pane contents
- tmux-continuum - Auto-saves every 15 minutes, auto-restores on tmux start
Install TPM (tmux plugin manager)#
git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm
Add plugins to tmux.conf#
Append to ~/.tmux.conf:
cat >> ~/.tmux.conf << 'EOF'
# Plugin manager
set -g @plugin 'tmux-plugins/tpm'
# Session persistence
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
# Auto-restore on tmux start
set -g @continuum-restore 'on'
# Initialize TPM (keep at bottom)
run '~/.tmux/plugins/tpm/tpm'
EOF
Install plugins#
Reload config and install:
tmux source ~/.tmux.conf
Then press Ctrl-b I (capital I) to install plugins. You’ll see a confirmation message when complete.
Usage#
- Manual save: Ctrl-b Ctrl-s
- Manual restore: Ctrl-b Ctrl-r
- Auto-save: happens every 15 minutes automatically
- Auto-restore: happens when tmux starts (if @continuum-restore is on)
Caveat for Docker sessions#
Docker containers started with --rm won’t survive reboot regardless of tmux persistence. Claude Code has built-in conversation resume (claude --resume) that works within a running session, but the container itself needs to be recreated after reboot.
Host-level helper agent#
While it’s possible to SSH from my Mac and pipe commands to the Pi, I realized I wanted a permission-tied agent that could help with simple sysadmin tasks - installing plugins, managing services, updating configs. The sandboxed Docker instances can’t do this by design.
Solution: A separate Claude instance running directly on the host (not in Docker), with normal human-in-the-loop permissions.
Tip: If you’re using Claude Code locally to create scripts on the Pi via SSH (ssh claude-shell 'cat > file << EOF...'), you’ll run into escaping nightmares - nested quotes, heredocs, and special characters like ! get mangled across the SSH boundary. Two better options: (1) write the file locally and scp it over, or (2) SSH in and use the helper agent directly to create the files.
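Option (1) in practice - a sketch using throwaway paths of my own choosing: write the file with a quoted heredoc locally (so nothing expands or gets mangled), then ship it over in one piece.

```shell
# Quoted heredoc delimiter: $, !, quotes and backticks pass through untouched
mkdir -p /tmp/claude-demo
cat > /tmp/claude-demo/helper.sh << 'SCRIPT'
#!/bin/bash
echo 'quotes " and ! and $HOME survive intact'
SCRIPT
chmod +x /tmp/claude-demo/helper.sh
# Then copy it over whole (claude-shell is the SSH alias from earlier):
# scp /tmp/claude-demo/helper.sh claude-shell:~/helper.sh
```

Because the transfer is a single scp of an already-written file, there is no second layer of shell parsing to fight with.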
Create the helper’s context file:
cat > ~/CLAUDE.md << 'EOF'
# Claude Helper Instance
This is a human-in-the-loop Claude for system administration tasks. NOT sandboxed.
## What This Is
A helper instance for setting up and managing the **autonomous agents** that live in `~/Repos`. Those run sandboxed in Docker with `--dangerously-skip-permissions`.
This instance runs on the host with normal permissions - use it for:
- Installing/configuring Claude Code plugins
- Managing systemd services
- Updating Docker images
- SSH key management
- System configuration
## Architecture
```
~/ ← YOU ARE HERE (helper, human-in-loop)
├── Repos/ ← Autonomous agents live here (Docker sandboxed)
├── claude-sandbox.sh ← Spawns sandboxed sessions
├── upgrade-claude.sh ← Rebuilds Docker image
├── tmux-control-plane/ ← MCP server for session orchestration
└── .claude-docker/ ← Credentials for sandboxed instances
```
## Key Services
| Service | Purpose | Check |
|---------|---------|-------|
| `claude-tmux` | Main tmux session with Claude | `systemctl --user status claude-tmux` |
| `tmux-control-plane` | MCP server for spawning sessions | `systemctl --user status tmux-control-plane` |
## Common Tasks
### Install a Claude Code plugin
The whole point of this setup is autonomy - I want Claude working on tasks while I'm away from my desk. The [Ralph Wiggum technique](https://www.reddit.com/r/ClaudeAI/comments/1l0ysy8/ralph_wiggum_infinite_agentic_loop/) keeps Claude running in a self-reflective loop, re-evaluating its work until it decides it's done. Without it, Claude stops after each response and waits for input. With it, I can kick off a task and check back hours later.
Third-party plugins require registering the marketplace first. Example with [chief-wiggum](https://github.com/jes5199/chief-wiggum) (an implementation of Ralph Wiggum):
```bash
# On the host - clone the marketplace
git clone --depth 1 https://github.com/jes5199/chief-wiggum.git \
~/.claude-docker/plugins/marketplaces/chief-wiggum
# Register in known_marketplaces.json
# (nested heredoc uses its own delimiter so it can't clash with an enclosing EOF)
cat > ~/.claude-docker/plugins/known_marketplaces.json << 'MARKETPLACES'
{
  "chief-wiggum": {
    "source": {
      "source": "github",
      "repo": "jes5199/chief-wiggum"
    },
    "installLocation": "/home/node/.claude/plugins/marketplaces/chief-wiggum",
    "lastUpdated": "2026-01-06T00:00:00.000Z"
  }
}
MARKETPLACES
```
Then inside a Claude session:
```bash
/plugin install chief-wiggum
```
> **Note:** Host Claude (`~/.claude/`) and Docker Claude (`~/.claude-docker/`) have separate plugin directories. The `installLocation` path must match the container's mount point (`/home/node/.claude/...`).
### Install custom slash commands
Custom commands add project-agnostic shortcuts for common workflows. The [claude-instructions](https://github.com/wbern/claude-instructions) package provides 27+ commands for TDD, git, code review, and more.
The installer is interactive, so use the sandbox shell script:
```bash
~/claude-sandbox-shell.sh
# Inside the container:
npx @wbern/claude-instructions --scope=user
```
This installs commands like `/commit`, `/pr`, `/red`, `/green`, `/refactor` to `~/.claude/commands/`. Run `/help` in a Claude session to see what's available.
### Restart services
```bash
systemctl --user restart claude-tmux
systemctl --user restart tmux-control-plane
```
### Update Claude Code
```bash
~/upgrade-claude.sh
```
## Important Paths
- **Sandboxed credentials:** `~/.claude-docker/`
- **Plugin config:** `~/.claude-docker/plugins/`
- **Control plane MCP config:** `~/.claude-control-plane/workspace/.mcp.json`
- **Systemd services:** `~/.config/systemd/user/`
- **tmux config:** `~/.tmux.conf`
## DO NOT
- Run `--dangerously-skip-permissions` here (that's for sandboxed instances)
- Modify files in `~/Repos` directly (spawn a sandboxed session instead)
EOF
Install Claude Code on the host (separate from Docker):
npm install -g @anthropic-ai/claude-code
Disable telemetry (same as the Docker instances):
echo 'export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1' >> ~/.bashrc
source ~/.bashrc
Usage:
ssh claude-shell
cd ~
claude
This gives you a trusted agent for host-level tasks while keeping the autonomous agents safely sandboxed.
Login greeting#
Three months from now, I’ll have forgotten all these commands. A simple MOTD helps:
cat >> ~/.bashrc << 'EOF'
# Claude server greeting (only for interactive shells, not tmux attach)
if [[ $- == *i* ]] && [[ -z "$TMUX" ]]; then
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo " 🤖 Claude Code Server"
echo "═══════════════════════════════════════════════════════════════"
echo ""
echo " Quick commands:"
echo " claude Open helper agent (human-in-loop)"
echo " ~/claude-sandbox.sh Run control plane (isolated)"
echo " ~/claude-sandbox-shell.sh Get a bash shell inside the sandbox"
echo " ~/claude-attach.sh Attach to running tmux session"
echo " ~/upgrade-claude.sh Rebuild Docker with latest Claude"
echo ""
echo " Control plane spawns sessions via MCP:"
echo " spawn_session({ repo, instruction, window_name })"
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo ""
fi
EOF
This shows on ssh claude-shell but not ssh claude (which attaches directly to tmux).
Things I haven’t looked into yet#
Push notifications when Claude stops#
Getting notified when Claude finishes a task (or needs input) so you can inspect results remotely. Several options exist:
claudecode-pushover-integration - A daemon that hooks into Claude Code events:
- Notifies on idle prompts, permission requests, stops
- Rate limiting (max 1 notification per 30 seconds)
- Priority queuing for critical events
claude-notify - CLI tool for Claude Code hooks:
- Configurable BUSY_TIME - only notifies after prompt runs X seconds
- Avoids spam from quick tasks
DIY with hooks - Claude Code has built-in hooks that trigger shell commands. A simple script can POST to Pushover’s API. See this tutorial for a walkthrough.
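For the DIY route, the notification script itself would be trivial - the fiddly part is wiring it into Claude Code's hook configuration, which I'll leave to the linked tutorial. A hedged sketch (untested; the hook path and the PUSHOVER_TOKEN/PUSHOVER_USER variables are my own placeholders):

```shell
mkdir -p ~/.claude/hooks
cat > ~/.claude/hooks/notify-stop.sh << 'HOOK'
#!/bin/bash
# POST a notification to Pushover when Claude stops; expects
# PUSHOVER_TOKEN and PUSHOVER_USER to be set in the environment.
curl -s \
  -F "token=$PUSHOVER_TOKEN" \
  -F "user=$PUSHOVER_USER" \
  -F "message=Claude Code stopped - check the Pi" \
  https://api.pushover.net/1/messages.json > /dev/null
HOOK
chmod +x ~/.claude/hooks/notify-stop.sh
```

A script like this would then be registered as a Stop hook in Claude Code's settings so it fires when a session goes idle.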
Haven’t tested any of these yet - the main use case would be knowing when to check back on autonomous tasks running on the Pi.
Where this leaves me#
I’m not done tinkering with this setup - it’s very much a work in progress. But even in its current state, it’s given me an interesting playground for exploring autonomous agentic workflows.
The key for me is the balance: I can let Claude run wild on tasks without needing my laptop open, but I can still take back control whenever I want. Voice or text. From my computer or from my phone. The Happy app makes this surprisingly seamless - I can check in on a running task from the couch, give it a nudge, or spawn a completely new session while walking to get coffee.
Whether this turns into something genuinely useful or just a fun experiment remains to be seen. But for now, I’ve got Claude Code in my pocket, and that feels like the future.
References#
- Plugin install fails with EXDEV when /tmp is tmpfs - GitHub issue documenting the cross-device link error when installing plugins in Docker
- schedutil governor on RPi4 rarely dips below 1000 MHz - Raspberry Pi forum discussion on schedutil behavior and clock speed usage
- Raspberry Pi Performance: Add ZRAM and these Kernel Parameters - comprehensive guide covering zram, kernel parameters, and compression algorithms
- CPU frequency scaling - ArchWiki - thorough reference on Linux CPU governors, including why schedutil became the mainline default in kernel 5.9+
- Change the default value for escape-time - tmux issue explaining why the 500ms default is outdated and causes sluggish escape key response
- How to increase scrollback buffer in tmux - explains tmux’s default 2000-line history-limit
- 10x SSH connection speedup with persistent control masters - Ryan Tomayko’s guide to SSH multiplexing