A bit confused, all this to say you folks use standard containerization?
whinvik 1 days ago [-]
Same. I didn't really understand what the difference is compared to containerization
rvz 1 days ago [-]
Fundamentally, there is no difference. Blocking syscalls in a Docker container is nothing new; it's one of the standard ways to achieve "sandboxing" and can already be done today.
The only thing that caught people's attention was that it was applied to "AI Agents".
kjok 1 days ago [-]
What is so fundamentally different for AI agents?
Yoric 1 days ago [-]
The fact that the first thing people are going to do is punch holes in the sandbox with MCP servers?
rvz 1 days ago [-]
Other than being applied to the current popular thing, "AI agents", it changes absolutely nothing; they're programs like any other.
CuriouslyC 1 days ago [-]
Just gonna toss this out there, using an agent for code review is a little weird. You can calculate a covering set for the PR deterministically and feed that into a long context model along with the diff and any relevant metadata and get a good review in one shot without the hassle.
dakshgupta 1 days ago [-]
That used to be how we did it, but this method performed better on super large codebases. One of the reasons is that grepping is a highly effective way to trace function calls to understand the full impact of a change. It's also great for finding other examples of similar code (for example the same library being used) to ensure consistency of standards.
kketch 16 hours ago [-]
The main concern here isn’t really whether the agent needs access to the whole codebase. Personally I feel an agent might need access to all or most of the codebase to make better decisions, see how things have been done before, etc.
The real issue is that containers are being used as a security boundary while it’s well known they are not. Containers aren't a sufficient isolation mechanism for multi-tenant / untrusted workloads.
Using them to run your code review agent puts your customers' source code at risk of theft, unless you use an actual secure sandboxing mechanism to protect your customers' data, which, from reading the article, does not seem to be the case.
arjvik 1 days ago [-]
If that's the case, isn't a grep tool a lot more tractable than a Linux agent that will end up mostly calling `grep`?
lomase 1 days ago [-]
But then you can't say it's powered by AI and get that VC money.
kjok 1 days ago [-]
Ah ha.
CuriouslyC 1 days ago [-]
You shouldn't need the entire codebase, just a covering set for the modified files (you can derive this by parsing the files). If your PR is atomic, covering set + diff + business context is probably going to be less than 300k tokens, which Gemini can handle easily. Gemini is quite good even at 500k, and you can run it multiple times with KV cache for cheap to get a distribution (tell it to analyze the PR from different perspectives).
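For what it's worth, here's roughly what "derive this by parsing the files" can look like, as a sketch assuming a Python repo and taking the covering set to be the changed files, the in-repo modules they import, and the files that import them (the paths and layout are made up for illustration):

```python
import ast
from pathlib import Path

def imported_modules(path: Path) -> set[str]:
    """Top-level module names imported by a Python source file."""
    tree = ast.parse(path.read_text(), filename=str(path))
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def covering_set(repo_root: Path, changed_files: list[Path]) -> set[Path]:
    """Changed files, plus the in-repo modules they import,
    plus other files that import the changed modules (likely call sites)."""
    local_modules = {p.stem: p for p in repo_root.rglob("*.py")}
    changed_mods = {p.stem for p in changed_files}
    cover = set(changed_files)
    for path in changed_files:
        # dependencies of the change
        cover.update(local_modules[m] for m in imported_modules(path)
                     if m in local_modules)
    for path in repo_root.rglob("*.py"):
        # dependents of the change, i.e. code the diff could break
        if imported_modules(path) & changed_mods:
            cover.add(path)
    return cover

if __name__ == "__main__":
    # Hypothetical inputs; in practice changed_files comes from
    # `git diff --name-only` against the merge base.
    repo = Path(".")
    changed = [Path("app/billing.py")]
    for f in sorted(covering_set(repo, changed)):
        print(f)
```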
thundergolfer 1 days ago [-]
This is a good explanation of how standard filesystem sandboxing works, but it's hopefully not trying to be convincing to security engineers.
> At Greptile, we run our agent process in a locked-down rootless podman container so that we have kernel guarantees that it sees only things it’s supposed to.
This sounds like a runc container, because they've not said otherwise. runc has a long history of filesystem exploits based on leaked file descriptors and `openat` without O_NOFOLLOW.[1]
The agent ecosystem seems to have already settled on VMs or gVisor[2] being table-stakes. We use the latter.
1. https://github.com/opencontainers/runc/security/advisories/G...
2. https://gvisor.dev/docs/architecture_guide/security/
chroot'ing isn't sandboxing or "containers". And I don't think it's a very good explanation, actually - not that it's necessarily easy to explain.
It looks like the author just discovered the kernel and syscalls and is sharing that - but it's not exactly new or rocket science.
The author should probably use existing sandboxing libraries to sandbox their code - and that has nothing to do with AI agents, actually; any process benefits from sandboxing, whether it acts on LLM replies or not.
ujrvjhtifcvlvvi 1 days ago [-]
if you don't mind me asking: how do you deal with syscalls that gVisor has not implemented?
thundergolfer 10 hours ago [-]
gVisor has implemented a lot of them, but every few months we have an application that hits an unimplemented syscall. We tend to reach for application workarounds, and haven't yet landed a PR to add a syscall. But I'd expect we could land such a PR.
kketch 1 days ago [-]
They seem to be looking to let the agent access the source code for review. But in that case, the agent should only see the codebase and nothing else. For a code review agent, all it really needs is:
- Access to files in the repository (or repositories)
- Access to the patch/diff being reviewed
- Ability to perform text/semantic search across the codebase
That doesn't require running the agent inside a container on a system with sensitive data. Exposing an API to the agent that gives it access to exactly the above data avoids the risk altogether (a rough sketch of what that could look like is below).
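As a sketch, assuming the orchestrator exposes tools to the model as plain Python functions (the checkout path, function names, and the grep-based search are illustrative, not any particular framework's API):

```python
import subprocess
from pathlib import Path

REPO = Path("/srv/review/checkout").resolve()  # read-only copy of the repo under review

def read_file(rel_path: str) -> str:
    """Return a file from the repo; refuse anything that escapes the checkout."""
    target = (REPO / rel_path).resolve()
    if not target.is_relative_to(REPO):
        raise PermissionError(f"{rel_path} is outside the repository")
    return target.read_text()

def get_diff(base: str, head: str) -> str:
    """The patch under review, produced by the host, not by the agent."""
    return subprocess.run(
        ["git", "-C", str(REPO), "diff", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout

def search(pattern: str) -> str:
    """Fixed-string search across the codebase (no shell, no arbitrary flags)."""
    result = subprocess.run(
        ["grep", "-rnF", "--", pattern, str(REPO)],
        capture_output=True, text=True,
    )
    return result.stdout

# The only capabilities the model is told about; anything not listed simply doesn't exist.
TOOLS = {"read_file": read_file, "get_diff": get_diff, "search": search}
```

The point is the inverse of syscall filtering: the host decides what's reachable, rather than trying to fence in a general-purpose shell after the fact.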
If it's really important that the agent is able to use a shell, why not use something like codespaces and run it in there?
warkdarrior 1 days ago [-]
It would also need:
- Access to repo history
- Access to CI/CD logs
- Access to bug/issue tracking
kketch 17 hours ago [-]
I guess maybe even more things? The approach presented in the article doesn't seem like a good way of giving access to these, by the way: none of them live on a dev machine. Things like GitHub Codespaces are better suited for this job and are in fact already used to implement code reviews by LLMs.
My point is that whitelisting is better than blacklisting.
When a front end needs access to a bunch of things in a database, we usually provide exactly what's needed through an API; we don't let it run SQL queries on the database and then attempt to filter / sandbox those queries.
jt2190 1 days ago [-]
OT: I wonder if WASM is ready to fulfill the sandboxing needs expressed in this article, i.e. can we put the AI agent into a WebAssembly sandbox and have it function as required?
seanw265 12 hours ago [-]
If the agent only needs the filesystem then probably. If it needs to execute code then things get flaky. The WASM/WASI/WASIX ecosystem still has gaps (notably no nodejs).
Yoric 1 days ago [-]
You'll probably need some kind of WebGPU bindings, but I think it sounds feasible.
seanw265 12 hours ago [-]
Containers might be fine if you’re only sandboxing filesystem access, but once an agent is executing code, kernel-level escapes are a concern. You need at least a VM boundary (or something equivalent) in that case.
wmf 1 days ago [-]
"How can I sandbox a coding agent?"
"Early civilizations had no concept of zero..."
IshKebab 1 days ago [-]
If you only care about filesystem sandboxing isn't Landlock the easiest solution?
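Possibly, yes. A rough sketch of the Landlock flow via raw syscalls from Python, just to show how small it is; the syscall numbers are x86_64 and the constants are ABI v1 values as I remember them from <linux/landlock.h> (kernel 5.13+), so treat them as assumptions and check your headers:

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

# Assumed values: x86_64 syscall numbers and Landlock ABI v1 constants. Verify locally.
SYS_landlock_create_ruleset = 444
SYS_landlock_add_rule = 445
SYS_landlock_restrict_self = 446
LANDLOCK_RULE_PATH_BENEATH = 1
PR_SET_NO_NEW_PRIVS = 38

LANDLOCK_ACCESS_FS_EXECUTE = 1 << 0
LANDLOCK_ACCESS_FS_WRITE_FILE = 1 << 1
LANDLOCK_ACCESS_FS_READ_FILE = 1 << 2
LANDLOCK_ACCESS_FS_READ_DIR = 1 << 3

class RulesetAttr(ctypes.Structure):
    _fields_ = [("handled_access_fs", ctypes.c_uint64)]

class PathBeneathAttr(ctypes.Structure):
    _pack_ = 1  # the kernel struct is __attribute__((packed))
    _fields_ = [("allowed_access", ctypes.c_uint64),
                ("parent_fd", ctypes.c_int32)]

def restrict_to_readonly(repo_path: str) -> None:
    """Deny the handled filesystem accesses everywhere except read-only under repo_path."""
    attr = RulesetAttr(LANDLOCK_ACCESS_FS_EXECUTE | LANDLOCK_ACCESS_FS_WRITE_FILE |
                       LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_READ_DIR)
    ruleset_fd = libc.syscall(SYS_landlock_create_ruleset,
                              ctypes.byref(attr), ctypes.sizeof(attr), 0)
    if ruleset_fd < 0:
        raise OSError(ctypes.get_errno(), "landlock_create_ruleset")

    beneath = PathBeneathAttr(LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_READ_DIR,
                              os.open(repo_path, os.O_PATH))
    if libc.syscall(SYS_landlock_add_rule, ruleset_fd,
                    LANDLOCK_RULE_PATH_BENEATH, ctypes.byref(beneath), 0) < 0:
        raise OSError(ctypes.get_errno(), "landlock_add_rule")

    # no_new_privs is required before an unprivileged thread can restrict itself.
    libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
    if libc.syscall(SYS_landlock_restrict_self, ruleset_fd, 0) < 0:
        raise OSError(ctypes.get_errno(), "landlock_restrict_self")
```

The caveat is the one in the question: Landlock only restricts filesystem access (plus some network access in newer ABIs), so it composes with, rather than replaces, seccomp and namespaces.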