Shallow clones for parallel agents
The setup
I run parallel Claude agents most days now. Say you have eight files with the same anti-pattern, and you'd like eight parallel agents to fix them in one shot. Same prompt, a different file for each. You don't want them stomping on each other's session log, and you don't want each one to drag along the entire memory of your day: the in-progress refactor, the ten skills you have loaded, the agent context from this morning. You want a small, focused process for one focused task. Eight times.
The two obvious shapes are both wrong.
Reuse the main profile. Now eight subprocesses are writing into the same ~/.claude/. Sessions interleaved, memory racing, settings drifting under your feet. You spend the afternoon untangling whose log is whose.
Spin up eight fresh profiles. Now each one logs in from scratch, re-onboards, has no skills, no MCP servers, none of the auth setup that took you a week to get right. The cost of cleanliness is dead time at startup, eight times over.
The shape that works
What you actually want is a shallow clone, the same idea as git clone --depth 1. Bring the auth, the keys, the model access, the billing identity. Skip the history. Skip the memory. Skip everything that's about who you've been as opposed to what you can do right now.
That's a worker profile in textaccounts:
textaccounts create sweep-1 --shallow --from personal
textaccounts create sweep-2 --shallow --from personal
# ...
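Eight of those by hand gets tedious. A loop covers it; this sketch assumes the `textaccounts create --shallow --from` syntax shown above, and prints the commands as a dry run so nothing is created by accident:

```shell
# Create eight shallow worker profiles off the personal profile.
# Dry run: drop the leading `echo` to execute for real.
for i in $(seq 1 8); do
  echo textaccounts create "sweep-$i" --shallow --from personal
done
```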
Each sweep-N shares your personal profile's auth: same Claude account, same model access, no extra login. Everything else is fresh: empty session log, empty memory file, default settings. The parent is untouched. The siblings don't see each other.
They all show up in your registry:
$ textaccounts list
Name      Path
personal  ~/.claude-personal (active)
sweep-1   ~/.textaccounts/profiles/sweep-1
sweep-2   ~/.textaccounts/profiles/sweep-2
sweep-3   ~/.textaccounts/profiles/sweep-3
...
You hand each one a single prompt:
CLAUDE_CONFIG_DIR=~/.textaccounts/profiles/sweep-1 \
claude -p "Apply the rename in src/foo/a.py, run the tests."
CLAUDE_CONFIG_DIR=~/.textaccounts/profiles/sweep-2 \
claude -p "Apply the rename in src/foo/b.py, run the tests."
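Past two or three of those, typing them by hand stops scaling. A sketch that fans out one launch per file (the file list is illustrative; printed as a dry run so nothing launches by accident):

```shell
# One agent per file, each pinned to its own worker profile.
# Dry run: remove the leading `echo`, append ` &` to each launch,
# and follow the loop with `wait` to actually run them in parallel.
i=1
for f in src/foo/a.py src/foo/b.py src/foo/c.py; do
  echo "CLAUDE_CONFIG_DIR=~/.textaccounts/profiles/sweep-$i" \
       "claude -p 'Apply the rename in $f, run the tests.'"
  i=$((i + 1))
done
```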
Eight parallel agents. Eight clean rooms. One shared identity. Your main profile keeps reading The Brothers Karamazov on the side without spoilers from the refactor sweep.
Why "shallow" is the right model
The thing I kept getting wrong before this clicked was treating identity and state as the same thing. They're not. Identity is "I am allowed to call this model, billed to this account." State is "I have been thinking about X for the last three hours and these are the skills I've loaded." Most parallel-agent work needs the first and is actively poisoned by the second.
Once you separate them, the right primitive is obvious: copy the identity layer (auth tokens, account binding), throw away the state layer (sessions, memory, agent context). A shallow clone of the profile, made cheaply, used briefly.
This is also the layer where things like profile isolation and parallel orchestration meet. The previous post in this series was about firewalling work and personal accounts on disk. That's the deep separation, the one you set up once and live with for years. Worker profiles are the shallow separation, made on demand, thrown away when the job's done.
What's coming next
Worker profiles pile up if you make a lot of them. Today the cleanup is manual: textaccounts remove sweep-1, repeat. There's a draft spec in the repo (docs/specs/shallow-clone.md) for --ephemeral and a gc command, so an orchestrator (or a weekly human sweep) can throw a whole batch away at once without forgetting any. The shape will become clearer in upcoming posts on the orchestration layer of Paperworlds. For now, worker profiles are the primitive; the cleanup is your problem.
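Until gc lands, the manual cleanup is at least loop-shaped. Same dry-run pattern as the create loop, assuming the `textaccounts remove` command shown above:

```shell
# Tear down the whole batch once the sweep is done.
# Dry run: drop the leading `echo` to actually remove the profiles.
for i in $(seq 1 8); do
  echo textaccounts remove "sweep-$i"
done
```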
If you've been running parallel agents by hand and feeling the pollution, this is the smallest thing that fixes it. Try it on your next eight-file refactor.