4 min read

Running Parallel AI Coding Agents Without Conflicts

How I run multiple AI coding agents on one repo using worktrees and isolated runtime config, without local environment conflicts.

Gert Jansen van Rensburg

Software consultant

Comic-style illustration of runners launching from starting blocks

I hit this wall after a few days of heavy agent use. I started with bash scripts to create worktrees, copy .env.local files, and patch feature names. It worked, but it was brittle and hard to repeat. The moment I spun up a second task, everything collided: ports, database state, and terminal context.

I cared about solving this because waiting for one agent to finish before starting the next killed momentum. I wanted parallel work without hacks.

This is the setup that worked for me: multiple AI coding agents on the same repo, each with isolated runtime config, while infrastructure stays shared.

The stack

  Tool              Role
  ----------------  -------------------------------------------
  Git worktrees     One working directory per branch
  Worktrunk         Automates worktree lifecycle with hooks
  Mise              Layered per-directory env vars and tasks
  Docker Compose    Shared Postgres, RabbitMQ, Redis, and Caddy
  Microsoft Entra   OAuth provider with one redirect URI
  Claude Code       The coding agents running in each worktree

What each tool does in practice

Credit where due: this setup was inspired by DevOps Toolbox’s YouTube video, “Stop Using Git Worktrees. Do THIS Instead.”

  • Git worktrees keep branches isolated without cloning the whole repo.
  • Worktrunk creates/switches worktrees and runs setup hooks before agents start.
  • Mise merges shared defaults with per-worktree overrides automatically when you enter a directory.
  • Caddy routes requests back to the right frontend, including OAuth callbacks.

Architecture overview

  wt switch -c feature-notifications -x claude -- "Add notifications"
  wt switch -c feature-dashboard     -x claude -- "Build dashboard"
  wt switch -c feature-user-profile  -x claude -- "Add profile page"
         |                |                |
         v                v                v
  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
  │  Worktree    │ │  Worktree    │ │  Worktree    │
  │ FE :14315    │ │ FE :14926    │ │ FE :16644    │
  │ BE :14317    │ │ BE :19843    │ │ BE :10677    │
  └──────┬───────┘ └──────┬───────┘ └──────┬───────┘
         │                │                │
         ├────────────────┼────────────────┤
         │      Shared Infrastructure      │
         │                │                │
    ┌────▼────┐    ┌──────▼──────┐  ┌──────▼──────┐
    │Postgres │    │  RabbitMQ   │  │    Redis    │
    │per-DB   │    │  per-vhost  │  │ per-prefix  │
    └─────────┘    └─────────────┘  └─────────────┘
         │                │                │
         └────────────────┼────────────────┘
                          │
                     ┌────▼────┐
                     │  Caddy  │ :8080
                     └────┬────┘
                          │
                     ┌────▼────┐
                     │  Entra  │ (single redirect URI)
                     └─────────┘

Each agent gets its own worktree, frontend/backend ports, database, RabbitMQ vhost, and Redis prefix. Docker services run once.

Shared infrastructure, isolated data

I run one docker-compose.yml for all backing services:

services:
  postgres:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: demo
      POSTGRES_PASSWORD: demo
      POSTGRES_DB: demo
    volumes:
      - pgdata:/var/lib/postgresql/data

  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: demo
      RABBITMQ_DEFAULT_PASS: demo

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  caddy:
    image: caddy:2-alpine
    ports:
      - "8080:8080"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data

volumes:
  pgdata:
  caddy_data:

Isolation strategy at a glance:

  • PostgreSQL: one database per feature (for example, demo_feature_notifications).
  • RabbitMQ: one vhost per feature (for example, /feature-notifications).
  • Redis: one key prefix per feature (for example, feature-notifications:*).
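The convention is mechanical enough to express as a tiny helper. This is a sketch of the naming scheme as used in the examples above, not code from the demo repo (which derives the same values in its shell hooks):

```typescript
// Sketch of the per-feature naming convention described above.
// The function name is mine; only the naming scheme comes from the post.
function isolationFor(branch: string) {
  return {
    databaseName: `demo_${branch.replace(/-/g, "_")}`, // PostgreSQL database
    rabbitVhost: `/${branch}`,                         // RabbitMQ vhost
    redisPrefix: `${branch}:`,                         // Redis key prefix
  };
}

console.log(isolationFor("feature-notifications").databaseName);
// demo_feature_notifications
```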

How each worktree gets isolated runtime config

The backend reads env vars for identity and connections:

var feature = Environment.GetEnvironmentVariable("FEATURE") ?? "main";
var dbName = Environment.GetEnvironmentVariable("DATABASE_NAME") ?? "demo";
var rabbitVhost = Environment.GetEnvironmentVariable("RABBITMQ_VHOST") ?? "/";
var redisPrefix = Environment.GetEnvironmentVariable("REDIS_KEY_PREFIX") ?? "";

The frontend stays branch-agnostic and just proxies /api to that worktree’s backend port:

// vite.config.js
export default defineConfig({
  server: {
    port: parseInt(process.env.FRONTEND_PORT || "3000"),
    proxy: {
      "/api": {
        target: `http://localhost:${process.env.BACKEND_PORT || "5000"}`,
        rewrite: (path) => path.replace(/^\/api/, ""),
      },
    },
  },
});

That combination keeps the app code simple: isolation is handled entirely by the environment.
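For example, the Redis prefix can be applied at a single choke point. The helper below is my own sketch (the function name is an assumption), showing the shape rather than the demo repo's code:

```typescript
// Sketch: apply REDIS_KEY_PREFIX in one place so application code uses
// plain keys and never knows which worktree it is running in.
function prefixed(key: string, prefix = process.env.REDIS_KEY_PREFIX ?? ""): string {
  return `${prefix}${key}`;
}

// App code calls e.g. redis.get(prefixed("session:42")):
//   main worktree (empty prefix) -> "session:42"
//   feature worktree             -> "feature-notifications:session:42"
```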

Worktrunk and Mise automation

This is the core hook setup in .config/wt.toml:

# .config/wt.toml
post-switch = "printf '\\033]0;{{ branch }}\\007' > /dev/tty"

[post-create]
env = """
FEATURE="{{ branch }}"
FEATURE_DB=$(echo "$FEATURE" | tr '-' '_')
cat > .mise.local.toml << MISE
[env]
FEATURE = "$FEATURE"
FRONTEND_PORT = "{{ ('fe-' ~ branch) | hash_port }}"
BACKEND_PORT = "{{ ('be-' ~ branch) | hash_port }}"
DATABASE_NAME = "demo_${FEATURE_DB}"
RABBITMQ_VHOST = "/$FEATURE"
REDIS_KEY_PREFIX = "$FEATURE:"
MISE
mise trust .mise.local.toml
"""

deps = "npm install --prefix frontend"

[list]
url = "http://localhost:{{ ('fe-' ~ branch) | hash_port }}"

And this is the shared root mise.toml:

[env]
BACKEND_PORT = "5000"
FRONTEND_PORT = "3000"
FEATURE = "main"
DATABASE_NAME = "demo"
RABBITMQ_VHOST = "/"
REDIS_KEY_PREFIX = ""
POSTGRES_HOST = "localhost"
POSTGRES_PORT = "5432"
POSTGRES_USER = "demo"
POSTGRES_PASSWORD = "demo"
RABBITMQ_HOST = "localhost"
RABBITMQ_PORT = "5672"
REDIS_HOST = "localhost"
REDIS_PORT = "6379"
CLAUDE_CODE_DISABLE_TERMINAL_TITLE = "1"

[hooks]
enter = 'printf "\033]0;${FEATURE}\007" > /dev/tty 2>/dev/null || true'

What matters most:

  • post-create writes .mise.local.toml before -x claude runs.
  • post-switch and Mise enter keep terminal tab titles aligned to branch/feature.
  • hash_port gives stable port mapping per branch, so URLs stay predictable.
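I have not read Worktrunk's actual hash_port source, but the idea is easy to sketch: hash the name, fold the digest into a port range, and the same name always yields the same port. The range and hash choice here are assumptions:

```typescript
import { createHash } from "node:crypto";

// Sketch of a stable name -> port mapping (assumed range; not Worktrunk's code).
function hashPort(name: string, lo = 10000, hi = 20000): number {
  const digest = createHash("sha256").update(name).digest();
  // Fold the first four bytes of the digest into [lo, hi).
  return lo + (digest.readUInt32BE(0) % (hi - lo));
}
```

Because the result depends only on the input string, `('fe-' ~ branch) | hash_port` resolves to the same port in the post-create hook, the [list] URL, and anywhere else it is templated.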

Caddy + Entra: the single redirect URI trick

OAuth providers like Microsoft Entra require you to register redirect URIs. Registering a new URI for every feature branch is tedious and doesn’t scale. Instead, register a single redirect URI: http://localhost:8080/auth/callback.

The trick is to include the feature name in the OAuth state parameter. When Entra redirects back, Caddy inspects state and redirects the browser directly to the feature’s Vite dev server on its own port. This matters because proxying the callback serves HTML, but later asset requests (for example /@vite/client, /src/main.jsx) still hit port 8080 with no matching route.

:8080 {
    handle /feature-notifications/* {
        reverse_proxy host.docker.internal:14315
    }

    handle /feature-dashboard/* {
        reverse_proxy host.docker.internal:14926
    }

    handle /auth/callback {
        @feature-notifications query state=feature-notifications
        redir @feature-notifications http://localhost:14315{uri}

        @feature-dashboard query state=feature-dashboard
        redir @feature-dashboard http://localhost:14926{uri}
    }

    handle {
        reverse_proxy host.docker.internal:3000
    }
}

Each frontend passes state=feature-notifications (or whatever its feature name is) when starting OAuth. The callback returns to one URI, Caddy redirects the browser to the correct frontend port, and the frontend exchanges the code and stores the token for later API calls.
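Putting it together, the authorize request a frontend builds might look like this sketch. The endpoint, tenant, and scopes are placeholders; the two parts that matter for the trick are the shared redirect_uri and the feature name in state:

```typescript
// Sketch: every worktree sends the SAME registered redirect_uri; only the
// state value differs, which is what the Caddyfile matchers key on.
function authorizeUrl(feature: string, clientId: string): string {
  const url = new URL(
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
  );
  url.searchParams.set("client_id", clientId);
  url.searchParams.set("response_type", "code");
  url.searchParams.set("redirect_uri", "http://localhost:8080/auth/callback");
  url.searchParams.set("scope", "openid profile");
  url.searchParams.set("state", feature); // e.g. "feature-notifications"
  return url.toString();
}
```

In a production flow you would also fold a CSRF nonce into state alongside the feature name; a bare branch name keeps the demo readable.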

Copy/paste quick start

Clone the demo project:

git clone https://github.com/gertjvr/demo-parallel-agent-workflow
cd demo-parallel-agent-workflow

Start shared infrastructure once:

mise run infra

Launch multiple agents:

wt switch -c feature-notifications -x claude -- "Add notification bell with real-time updates"
wt switch -c feature-dashboard     -x claude -- "Build analytics dashboard with charts"
wt switch -c feature-user-profile  -x claude -- "Add user profile page with avatar upload"

In any worktree, start both backend and frontend together:

mise run dev

mise run dev uses the merged mise.toml + .mise.local.toml, so both processes start with the right feature-specific ports and connection settings.
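The task definitions themselves are not shown above. In mise they could look something like this sketch (task names and commands are my assumptions; mise runs a task's depends in parallel, which is how one command starts both processes):

```toml
# Hypothetical task layout; the demo repo's actual tasks may differ.
[tasks."dev:backend"]
run = "dotnet run --project backend"

[tasks."dev:frontend"]
run = "npm run dev --prefix frontend"

[tasks.dev]
depends = ["dev:backend", "dev:frontend"]  # depends run in parallel
```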

Check active worktrees and URLs:

wt list

Quick isolation check:

# Identity endpoint should differ by backend port
curl -s localhost:14317 | jq .
curl -s localhost:19843 | jq .

# Data should stay isolated per feature database
curl -s -X POST localhost:14317/todos \
  -H "Content-Type: application/json" \
  -d '{"title": "Implement login flow", "isComplete": false}'

curl -s localhost:14317/todos | jq length
curl -s localhost:19843/todos | jq length

Conclusion

This setup gave me a practical way to run agents in parallel without collisions, while keeping the workflow boring and repeatable.

Key takeaways:

  • Worktrees isolate code changes per feature branch.
  • Worktrunk post-create generates per-worktree config before agent execution.
  • Mise layering keeps shared defaults and feature overrides clean.
  • App-level isolation (database/vhost/key-prefix) prevents cross-feature bleed.
  • Caddy + state lets one OAuth callback serve many parallel branches.

The biggest shift for me was not speed. It was being able to keep momentum across multiple tasks without turning my local environment into chaos.

If you want to try this exact setup, use the demo repo: gertjvr/demo-parallel-agent-workflow.

Join the conversation on Bluesky.