
AI Agents in Production: Who Controls the Runtime?

AI agents are rapidly becoming part of the developer stack. Tools like Claude Code, OpenAI Codex, GitHub Copilot and Gemini are already changing the way many of us write and ship software. But while the ecosystem is moving fast, one question is becoming increasingly important, especially in regulated or security-sensitive environments:

Who actually controls the runtime of these agents?

VibePod: A Local Runtime for AI Coding Agents

My colleague Harald Nezbeda recently started a very interesting open-source project tackling exactly this problem: VibePod.

VibePod is a local runtime for AI coding agents: it lets developers run and switch between agents inside a single, consistent CLI environment. It already integrates with Anthropic’s Claude Code, OpenAI Codex, GitHub Copilot, Gemini, and others.

I’ve been using it in my daily work, and two things stood out to me:

  • Transparency. Every API call the agent makes is tracked by VibePod. You can see exactly what the agent does at runtime, not just what it outputs.
  • Isolation. Each agent runs inside a container and only has access to the files explicitly mapped in. No silent access to the rest of your system.
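The transparency point is worth making concrete. The core idea is an audit layer that records every outbound API call before forwarding it. Here is a minimal sketch of that pattern; it is a generic illustration, not VibePod's actual implementation, and all names in it are hypothetical:

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class ApiCall:
    """One recorded API call made by an agent at runtime."""
    method: str
    url: str
    timestamp: float


class AuditedSession:
    """Wraps an agent's transport layer and records every call it makes."""

    def __init__(self, transport):
        # `transport` is any callable (method, url) -> response;
        # in a real runtime this would be the actual HTTP client.
        self._transport = transport
        self.log: list[ApiCall] = []

    def request(self, method: str, url: str):
        # Record first, then forward: the log reflects what the agent
        # attempted, not just what succeeded.
        self.log.append(ApiCall(method, url, time.time()))
        return self._transport(method, url)

    def export_log(self) -> str:
        """Serialize the session log for later audit."""
        return json.dumps([asdict(c) for c in self.log], indent=2)


# Example with a fake transport standing in for the real network layer:
session = AuditedSession(lambda method, url: f"{method} {url} -> 200")
session.request("POST", "https://api.example.com/v1/chat")
session.request("GET", "https://api.example.com/v1/models")
print(session.export_log())
```

The design choice matters: because the wrapper sits between the agent and the network, you see what the agent does at runtime, not just what it prints to the terminal.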

The Next Bottleneck Won’t Be Model Quality

Here is what I think is underrated in the current AI tooling debate: the next bottleneck won’t be model quality. It will be the infrastructure layer — runtime control, security boundaries, observability, governance.

Right now, most teams still treat agents as experiments. But the moment they enter production pipelines, these questions become unavoidable:

  • What data does the agent have access to?
  • Which API calls is it making, and to where?
  • Can I audit what happened during a session?
  • How do I enforce boundaries without killing productivity?
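The last question — enforcing boundaries without killing productivity — often reduces to a simple egress allowlist at the runtime layer: agents talk freely to approved API hosts and nothing else. A hedged sketch of that idea (hypothetical names, not any specific tool's API):

```python
from urllib.parse import urlparse


class BoundaryError(Exception):
    """Raised when an agent tries to reach a host outside its allowlist."""


class EgressPolicy:
    """Allows agent traffic only to explicitly approved API hosts."""

    def __init__(self, allowed_hosts: set):
        self.allowed_hosts = allowed_hosts

    def check(self, url: str) -> str:
        # Compare the hostname, not the raw URL, so path tricks
        # like https://evil.example/api.anthropic.com don't slip through.
        host = urlparse(url).hostname or ""
        if host not in self.allowed_hosts:
            raise BoundaryError(f"blocked egress to {host!r}")
        return url


policy = EgressPolicy({"api.anthropic.com", "api.openai.com"})
policy.check("https://api.anthropic.com/v1/messages")  # allowed, returns the URL
try:
    policy.check("https://internal.corp.example/secrets")
except BoundaryError as e:
    print(e)  # blocked egress to 'internal.corp.example'
```

Productivity stays intact because the allowlist is per-project configuration, set once, rather than a human approving every individual call.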

These aren’t hypothetical concerns. In regulated industries like finance, healthcare and the public sector, they are preconditions for adoption.

Open Source as a Building Block

Open-source tools like VibePod will become an important building block for a secure AI developer stack. They give teams the control and visibility that hosted, black-box solutions often can’t provide. And they align well with the broader push towards transparency and trustworthy AI that we’re seeing across the industry and in regulation like the EU AI Act.

The shift from “AI as experiment” to “AI as production tooling” is happening. The infrastructure needs to keep up.