The Problem

I was working on a Windows target where Claude simply would not run. This was not a degraded experience or a partial failure. It would not start at all due to a dependency issue. At the same time, the code I was building had to run in that environment, so avoiding the platform was not an option.

The obvious next step was to try to fix Claude on that system. I spent some time going down that path, but it quickly became clear that this was not going to be a quick fix. Even if I managed to get it working once, there was no guarantee it would continue working across updates or configuration changes. At that point, the problem started to look different.

The Wrong Assumption

The mistake was assuming that the AI needed to run in the same place as the code. That assumption feels natural, especially when working across operating systems, but it is not actually required.

What the AI really needs is a way to execute work and a way to observe the results of that work.

Once I started looking at it that way, the solution became much simpler.

The Approach

Instead of trying to run Claude on the Windows environment, I kept Claude on macOS where it worked reliably. The Windows environment became an execution surface.

Claude would generate commands or scripts, those would be executed on Windows, and the results would be captured and sent back.

The implementation itself was intentionally simple. A shared directory was used as the communication layer. Claude would write a job into that directory, a runner process on Windows would pick it up and execute it, and the results would be written back out for Claude to read.

This avoided the need for any kind of network service or complex protocol. It also made the entire system easy to inspect and debug.
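The moving parts are small enough to sketch. Here is a minimal Python sketch of the Claude-side half of that protocol; the JSON job format, the file naming, and the write-then-rename step are my assumptions for illustration, not details of the actual implementation:

```python
import json
import time
import uuid
from pathlib import Path

def submit_job(jobs_dir: Path, command: str) -> str:
    """Write a job file into the shared jobs directory and return its id."""
    jobs_dir.mkdir(parents=True, exist_ok=True)
    job_id = uuid.uuid4().hex
    # Write to a temporary name first, then rename, so the runner on the
    # other side never picks up a half-written file.
    tmp = jobs_dir / f"{job_id}.tmp"
    tmp.write_text(json.dumps({"id": job_id, "command": command}))
    tmp.rename(jobs_dir / f"{job_id}.json")
    return job_id

def wait_for_result(results_dir: Path, job_id: str,
                    poll_seconds: float = 1.0) -> dict:
    """Poll the shared results directory until the runner writes a reply."""
    result_file = results_dir / f"{job_id}.json"
    while not result_file.exists():
        time.sleep(poll_seconds)
    return json.loads(result_file.read_text())
```

In a setup like this, the shared directory might simply be an SMB mount of a Windows folder; the runner watches the same jobs directory from the other side.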

Why This Works

A few design choices made this work reliably.

Each job runs in its own process. If something crashes, it does not affect the runner itself. That separation turned out to be important when experimenting with different commands and scripts.

All output is captured, including standard output, standard error, and exit codes. That gives Claude complete visibility into what actually happened, instead of trying to infer results from partial information.
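To make those two points concrete, here is one way the runner side could execute a single job in a child process and capture all three signals. This is a sketch under the same assumptions as before (Python, JSON job files); the actual runner may differ:

```python
import json
import subprocess
from pathlib import Path

def run_job(job_file: Path, results_dir: Path) -> None:
    """Execute one job in a child process and record everything it did."""
    job = json.loads(job_file.read_text())
    # subprocess.run starts a separate process, so a crashing or
    # misbehaving command cannot take the runner down with it.
    proc = subprocess.run(
        job["command"],
        shell=True,            # jobs are plain shell commands in this sketch
        capture_output=True,
        text=True,
    )
    result = {
        "id": job["id"],
        "exit_code": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
    }
    results_dir.mkdir(parents=True, exist_ok=True)
    (results_dir / f"{job['id']}.json").write_text(json.dumps(result))
```

Writing the exit code alongside stdout and stderr is what gives the AI full visibility: it can distinguish "the command failed" from "the command printed nothing".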

Jobs are processed in a predictable order. There is no reliance on timing or race conditions, which makes the behavior easier to reason about.

There is also a simple timeout mechanism. If a job hangs, it is terminated and the system moves on. This keeps the overall workflow from getting stuck.
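The last two properties, deterministic ordering and a per-job timeout, could be sketched as follows. The fixed timeout value and the sort-by-filename ordering are assumptions chosen for simplicity:

```python
import subprocess
from pathlib import Path

JOB_TIMEOUT_SECONDS = 60  # assumption: one fixed limit for every job

def pending_jobs(jobs_dir: Path) -> list:
    """Return job files in a deterministic order (sorted by filename),
    so the runner's behavior does not depend on filesystem timing."""
    return sorted(jobs_dir.glob("*.json"))

def execute_with_timeout(command: str, timeout: float) -> dict:
    """Run a command, killing it if it exceeds the timeout."""
    try:
        proc = subprocess.run(command, shell=True, capture_output=True,
                              text=True, timeout=timeout)
        return {"exit_code": proc.returncode, "stdout": proc.stdout,
                "stderr": proc.stderr, "timed_out": False}
    except subprocess.TimeoutExpired:
        # The child process is terminated and the runner moves on
        # to the next job instead of hanging forever.
        return {"exit_code": None, "stdout": "", "stderr": "",
                "timed_out": True}
```

Reporting the timeout explicitly in the result matters: a job that was killed should look different to the AI than a job that failed on its own.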

Where Safety Lives

One detail that is worth calling out is where safety is enforced.

The runner itself does not attempt to restrict what can be executed. It will run whatever is placed into the queue. The responsibility for safety sits with how the AI is used.

That includes scoping what it is allowed to modify, avoiding destructive operations without explicit approval, and applying the same kinds of constraints discussed in earlier posts about stop conditions.

Control does not come from the execution layer. It comes from the process around it.

What This Unlocks

Once this pattern is in place, it generalizes well beyond this specific setup.

Any environment where running an AI tool is difficult or impossible can still serve as an execution target. That includes locked-down corporate machines, different operating systems, and remote build environments.

The AI can stay in a stable environment while interacting with those targets through a controlled interface.

This Is Not New

There is nothing particularly new about the underlying idea. This is similar to how job queues and remote execution systems have worked for a long time.

The difference is applying that same pattern to AI-driven development. Instead of a developer manually issuing commands, the AI generates them, observes the results, and iterates.

Connection to Earlier Posts

This connects directly to the earlier posts about structuring AI interactions and enforcing limits.

The execution is still controlled. The workflow is still bounded. The difference is that the execution no longer has to occur in the same environment as the AI.

Closing Thoughts

If you find yourself trying to get an AI tool running in an environment where it does not fit, it is worth stepping back and questioning the assumption that it has to run there at all.

It is often easier to let the AI stay where it works well and provide a controlled way for it to interact with the environment you actually care about.