The Open-Source Security Cage for AI Agents

Kernel-hard sandbox · Real-time PII redaction · Merkle tamper-evident audit · Enforced cost & loop kills

Open source (GPL-3). Built for infra, app dev, and security teams.

[Diagram: AKIOS control plane and data plane funneling clients, CI/CD, and apps into sandboxed agents and policy-gated destinations]

Why AKIOS

Sandboxed by Default

Run agents safely in a strict sandbox (seccomp-bpf syscall filtering, user namespaces). Network and filesystem access are default-deny.

Cost & Loop Kills

Hard kill-switches for API costs and infinite loops. Enforce budgets per workflow.

PII Redaction

Real-time PII detection and redaction built-in. Protect sensitive data before it leaves the cage.

Auditable Logs

Ship auditable logs and reproducible builds. Every action is signed and traceable.

Explicit Policies

Connect tools with explicit policies, not magic. You define exactly what an agent can access (see the policy sketch after these cards).

Minimal & Native

Single binary, no heavy dependencies. Native Unix design for clean CI/CD integration.
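
As a concrete illustration of these defaults, a per-workflow policy could declare allowed paths, hosts, and budgets up front. The sketch below is hypothetical: the file name and every key are assumptions, not the documented AKIOS schema; the templates created by akios init are the authoritative reference.

# policy.yml (hypothetical schema, for illustration only)
sandbox:
  network: deny            # default-deny; open hosts explicitly below
  filesystem: deny
allow:
  read:
    - ./data/**            # allowlisted read paths
  hosts:
    - api.example.com      # the only endpoint this agent may call
budgets:
  max_cost_usd: 5.00       # hard kill once API spend crosses this
  max_iterations: 100      # loop kill after this many agent steps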

Quickstart (2 minutes)

Install the runtime on a Unix system and run your first sandboxed agent. The kernel-hard sandbox requires Linux; elsewhere AKIOS falls back to a policy container (see the expected output below).

# Install
pip install akios

# Initialize a project
akios init my-project
cd my-project

# Run the sample workflow (kernel-hard on Linux)
akios run templates/hello-workflow.yml

Expected output:

[akios] sandbox: seccomp-bpf, cgroups v2 (Linux) / policy container (Docker) 
[akios] pii: redaction enabled 
[akios] audit: Merkle trail initialized
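
For orientation, a workflow file has roughly the shape sketched below. The real templates/hello-workflow.yml will differ; every key here is an assumption rather than documented AKIOS syntax.

# Illustrative workflow shape only; keys are assumptions
name: hello
steps:
  - agent: llm
    prompt: "Say hello."      # prompts and responses are PII-redacted
  - agent: filesystem
    write: ./out/hello.txt    # path must be allowlisted by policy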

Architecture

The runtime is layered: a policy engine, the kernel sandbox (seccomp/cgroups), PII redaction, budget and loop kills, and the Merkle audit trail. Agents sit on top and reach tools and APIs only through explicit policies.

[Diagram: client or CLI flows through AKIOS policies, sandbox, and audit to agents, then to policy-gated tools/APIs]
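
In text form, the layering described above looks roughly like this:

client / CLI / CI
        |
        v
+-------------------------------------+
| AKIOS runtime                       |
|   policy engine                     |
|   kernel sandbox (seccomp/cgroups)  |
|   PII redaction                     |
|   budget & loop kills               |
|   Merkle audit trail                |
+-------------------------------------+
        |
        v
agents --> policy-gated tools / APIs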

Security guarantees (every run)

The runtime enforces these controls for every agent:

Kernel sandbox: seccomp-bpf syscall filtering and cgroups v2 isolation, with default-deny network and filesystem access.
PII redaction: real-time detection and masking of sensitive data in every payload.
Budget and loop kills: hard limits on API spend and iteration count, enforced per workflow.
Merkle audit: a tamper-evident, signed trail of every action.
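
To make the Merkle trail concrete: each audit record carries the hash of the record before it, so altering any entry breaks every hash that follows. The sketch below is illustrative only; the field names are assumptions, not the on-disk AKIOS format.

# One audit entry (hypothetical fields, YAML for readability)
- seq: 42
  action: http.request
  payload_hash: <sha256 of the redacted payload>
  prev_hash: <hash of entry 41>
  hash: <sha256 over prev_hash + seq + action + payload_hash>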

Core agents

Four built-ins cover typical workflows while staying inside the cage; a configuration sketch follows the list:

Filesystem

Allowlisted reads, optional writes; path and mode constrained.

HTTP

Rate-limited requests with PII-redacted payloads and headers.

LLM

Token and cost tracking with budget kills; prompts/responses redacted.

Tool Executor

Allowlisted commands in a sandboxed subprocess with syscall filtering.
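
For illustration, wiring the built-ins into a workflow might look like the sketch below. Every key and value is hypothetical; the shipped templates are the authoritative reference for the real schema.

# Hypothetical agent configuration; not the documented schema
agents:
  filesystem:
    read: [./data/**]          # allowlisted reads
    write: []                  # writes stay off unless listed
  http:
    hosts: [api.example.com]
    rate_limit: 10/min         # requests are rate-limited and redacted
  llm:
    max_cost_usd: 2.00         # budget kill threshold
  tool_executor:
    allow: [grep, jq]          # allowlisted commands, sandboxed subprocess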

Ready to Test the Cage?

Star on GitHub if you value secure AI.
Join the community, contribute templates, discuss use cases.

Have a question, need help, or want to report an issue? Visit the Community page, or use GitHub Discussions and Issues. There is no contact form, which keeps support open and transparent.