Hull Runtime

Hull is a daemonless Linux container runtime. A single ~3 MB static-musl Zig binary orchestrates namespaces, cgroups v2, seccomp-bpf, Landlock, and pivot_root — no daemon, no containerd, no shim. Each hull run forks the workload, writes state to disk, and exits. Commands like ps, stop, and inspect read that state directory.

CLI Reference

hull run [--rootless] <manifest>   Start a container from a JSON manifest; --rootless forces NEWUSER even as root
hull ps                            List running containers: name, PID, uptime, argv[0]
hull stop <name>                   Graceful stop (SIGTERM)
hull kill <name>                   Immediate stop (SIGKILL)
hull logs <name>                   Print captured stdout/stderr
hull inspect <name>                Show cgroup numbers, namespace inums, mount points
hull version                       Print version
hull help                          Show usage

Exit codes: 0 success, 1 usage error, 2 runtime error, 3 manifest error, 127 child execve failed.
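
A typical lifecycle uses only the commands above; the manifest path here is illustrative:

Typical lifecycle
hull run /etc/hull/titan.json   # forks the workload, writes state, exits 0
hull ps                         # titan appears with its PID and uptime
hull logs titan                 # captured stdout/stderr so far
hull stop titan                 # SIGTERM; escalate with `hull kill titan` if needed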

Manifest Specification

A hull manifest is a JSON file describing the container. Three fields are required; everything else has sane defaults.

Required Fields

name     string     Container name (1-64 chars: alphanumeric, dash, underscore)
rootfs   string     Path to a rootfs directory or .tar.gz archive. Archives are extracted and cached under /var/lib/hull/rootfs/<name>/
argv     string[]   Command to execute inside the container
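
A minimal manifest is just these three fields; everything in the next table falls back to its default. The name and paths below are illustrative:

minimal.json
{
  "name": "hello",
  "rootfs": "/var/lib/hull/rootfs/hello",
  "argv": ["/bin/sh", "-c", "echo hello from hull"]
}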

Optional Fields

Field                Type       Default          Description
env                  string[]   []               Environment variables (e.g. ["PORT=4500", "NODE_ENV=production"])
profile              string     "default"        Seccomp profile: default, beam, dotnet, or node
network              string     "none"           Network mode: none (loopback), host (shared), or bridge (veth)
bridge.name          string     "hull0"          Linux bridge device name (only if network=bridge)
bridge.subnet        string     "10.88.0.0/24"   Bridge subnet CIDR (only /24 supported)
bridge.ip            string     "" (auto)        Container IP; empty = auto-allocate next free via lease dir
bridge.mtu           number     0                MTU for veth pair; 0 = kernel default (1500)
hostname             string     (container name) Container hostname (UTS namespace)
cwd                  string     "/"              Working directory inside the container before execve
limits.memory_mb     number     0                Memory limit in MB; 0 = no limit
limits.cpu           number     0                CPU fraction (1.0 = one core); 0 = no limit
limits.pids          number     0                Max processes; 0 = no limit
mounts[].host        string                      Host source path for bind mount
mounts[].container   string                      Container destination path
mounts[].readonly    boolean    false            Mount as read-only

Example: Titan ESB (Elixir/Phoenix)

/etc/hull/titan.json
{
  "name": "titan",
  "rootfs": "/var/lib/hull/rootfs/titan",
  "argv": ["/opt/titan/bin/titan", "start"],
  "env": [
    "PHX_SERVER=true",
    "PORT=4500",
    "PHX_HOST=www.titan-bus.com",
    "LANG=en_US.UTF-8",
    "HOME=/tmp",
    "RELEASE_TMP=/tmp"
  ],
  "profile": "beam",
  "network": "host",
  "hostname": "titan",
  "cwd": "/opt/titan",
  "mounts": [
    { "host": "/dev/null", "container": "/dev/null" },
    { "host": "/dev/urandom", "container": "/dev/urandom" },
    { "host": "/etc/resolv.conf", "container": "/etc/resolv.conf", "readonly": true },
    { "host": "/etc/ssl/certs", "container": "/etc/ssl/certs", "readonly": true }
  ],
  "limits": { "memory_mb": 1024, "cpu": 2.0, "pids": 4096 }
}

Example: Bridge-Networked Container

bridge-test.json
{
  "name": "webapp",
  "rootfs": "/var/lib/hull/rootfs/webapp",
  "argv": ["/usr/local/bin/node", "server.js"],
  "env": ["PORT=3000", "NODE_ENV=production"],
  "profile": "node",
  "network": "bridge",
  "bridge": { "subnet": "10.88.0.0/24" },
  "hostname": "webapp",
  "cwd": "/app",
  "limits": { "memory_mb": 256, "cpu": 1.0, "pids": 128 }
}

Seccomp Profiles

Hull ships four curated syscall allowlists. The profile is selected via the profile manifest field and installed just before execve, so even the workload's first syscall is filtered. Any syscall not on the list triggers KILL_PROCESS (not EPERM, as with Docker's default profile).

Profile   Syscalls   Use Case                                            Notable Extras
default   122        Rust musl, Zig, Go static binaries, shell scripts   execve/clone/clone3 for shell pipelines, copy_file_range for coreutils
beam      177        Elixir, Erlang, Phoenix (the BEAM VM)               +55 extras: timerfd, signalfd, inotify, memfd_create, legacy mkdir/rmdir/unlink/rename/chmod/chown
node      32         Node.js, Deno, Bun (libuv event loop)               epoll_create1, epoll_wait, eventfd, signalfd, timerfd
dotnet    36         .NET 8/9 (CoreCLR, NativeAOT)                       select, pselect6, signalfd4, memfd_create (JIT staging), tgkill (pthreads)

Taken together, the four profiles allow roughly 400 distinct syscalls on x86_64; each individual profile blocks the remaining ~280-370. Notably blocked in all profiles: ptrace, process_vm_readv/writev, bpf, add_key/keyctl, userfaultfd, kexec_load, init_module.
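
Because violations raise KILL_PROCESS, a profile that is too narrow shows up as an abrupt SIGSYS death rather than a failing syscall. You can confirm the filter is active from the host; take <pid> from hull ps:

Checking seccomp mode from the host
grep Seccomp /proc/<pid>/status
# Seccomp:        2        (mode 2 = a seccomp BPF filter is installed)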

Security Layers

Hull applies seven isolation layers, outermost to innermost. Each layer is independent — failure of one does not disable the others.

#   Layer               What it does
1   User namespace      Process thinks it's root; the host sees an unprivileged uid. Enabled by the --rootless flag
2   PID namespace       Isolated PID tree; container PID 1 is your workload. Cannot see or signal host processes
3   Network namespace   Isolated network stack. Three modes: none (loopback only), host (shared), bridge (veth pair + NAT)
4   Mount namespace     pivot_root into a dedicated rootfs. Host filesystem completely invisible (kernel enforcement, not chroot)
5   cgroups v2          Hard kernel-enforced limits on CPU, memory, and PIDs. The container cannot fork-bomb or exhaust host RAM
6   Landlock LSM        Filesystem allowlist. Default: rootfs read+exec, /tmp read+write. Even uid 0 inside the container cannot bypass it. Gracefully skipped on kernels < 5.13
7   seccomp-bpf         Per-profile syscall allowlist; KILL_PROCESS on violation. Installed just before execve, so even the first syscall is filtered
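
Layers 1-4 leave fingerprints in /proc that you can check from the host. The inode numbers below are illustrative; take <pid> from hull ps:

Comparing namespace inodes against the host
sudo readlink /proc/<pid>/ns/pid /proc/$$/ns/pid
# pid:[4026532871]        container
# pid:[4026531836]        host
sudo ls -l /proc/<pid>/ns/   # the full set: mnt, uts, ipc, net, pid, user, cgroup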

Bridge Networking

Set "network": "bridge" and hull creates a full veth-bridge networking stack per container:

  1. Creates bridge device hull0 (idempotent) with gateway IP 10.88.0.1/24
  2. Enables ip_forward and installs nftables masquerade rule for the subnet
  3. Inserts iptables -I FORWARD 1 -i hull0 -j ACCEPT to bypass Docker’s policy DROP
  4. Allocates next free IP via atomic O_EXCL lock files in ~/.hull/leases/
  5. Creates veth pair; attaches host end to bridge, moves container end to child’s NEWNET
  6. Configures container side via nsenter -t <pid> -n ip addr add ...
  7. On exit: lease released, veth auto-cleaned by kernel when netns terminates
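
The list above is roughly equivalent to the following manual commands. This is an illustrative sketch only: hull issues netlink calls directly, and the veth names and nft table name here are assumptions.

Manual equivalent of the bridge setup
# Steps 1-3: bridge, gateway IP, forwarding, NAT, Docker bypass
ip link add hull0 type bridge 2>/dev/null || true
ip addr replace 10.88.0.1/24 dev hull0
ip link set hull0 up
sysctl -w net.ipv4.ip_forward=1
nft add table ip hull
nft add chain ip hull postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip hull postrouting ip saddr 10.88.0.0/24 masquerade
iptables -I FORWARD 1 -i hull0 -j ACCEPT
# Steps 5-6: veth pair; host end on the bridge, container end into the child netns
ip link add veth-c0 type veth peer name veth-c0-peer
ip link set veth-c0 master hull0 up
ip link set veth-c0-peer netns <pid>
nsenter -t <pid> -n ip link set veth-c0-peer name eth0
nsenter -t <pid> -n ip addr add 10.88.0.2/24 dev eth0
nsenter -t <pid> -n ip link set lo up
nsenter -t <pid> -n ip link set eth0 up
nsenter -t <pid> -n ip route add default via 10.88.0.1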
Verified output from bridge container
$ sudo nsenter -t <pid> -n ip -br addr
lo               UNKNOWN        127.0.0.1/8
eth0@if36548     UP             10.88.0.2/24

$ sudo nsenter -t <pid> -n ip route
default via 10.88.0.1 dev eth0
10.88.0.0/24 dev eth0 proto kernel scope link src 10.88.0.2

$ sudo nsenter -t <pid> -n ping -c 3 8.8.8.8
3 packets transmitted, 3 received, 0% packet loss
rtt min/avg/max = 0.256/0.389/0.523 ms

Rootless Mode

Pass --rootless (or run as a non-root user) and hull uses a three-process fork-pipe dance to set up the NEWUSER namespace:

Rootless process tree
orig_parent (host uid)
    │ fork
    ▼
userns_setup
    │ unshare(NEWUSER) → signal parent "ready"
    │ block on pipe → parent writes uid_map/gid_map
    │ unshare(NEWPID|NEWNET|NEWNS|NEWUTS|NEWIPC)
    │ fork
    ▼
workload (PID 1 in NEWPID, uid 0 in container)
    dup2 log_fd → pivot_root → landlock → seccomp → execve

The workload sees itself as uid 0 with full capabilities inside the user namespace, but the host sees an unprivileged uid with no write access to any host path. cgroups are best-effort in rootless mode (most hosts don’t delegate cgroups to unprivileged users).
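
The uid_map/gid_map step is an ordinary pair of /proc writes from the parent's side of the pipe. A sketch of the common single-entry mapping follows; whether hull maps exactly one uid is an assumption here:

The parent's side of the handshake (sketch)
# child uid 0 maps to the invoking host uid (one-entry mapping)
echo "0 $(id -u) 1" > /proc/<child_pid>/uid_map
# unprivileged gid_map writes require setgroups to be denied first
echo deny > /proc/<child_pid>/setgroups
echo "0 $(id -g) 1" > /proc/<child_pid>/gid_map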

State & Logs

Hull has no daemon — state lives on disk. Location precedence:

State          $HULL_STATE_DIR → $HOME/.hull/state → /var/run/hull/state → /tmp/.hull/state
Logs           $HULL_LOGS_DIR → $HOME/.hull/logs → /var/run/hull/logs → /tmp/.hull/logs
Leases         $HULL_LEASE_DIR → $HOME/.hull/leases → /var/run/hull/leases
Rootfs cache   /var/lib/hull/rootfs/<name>/ (extracted from .tar.gz archives)
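
Resolution is "first usable candidate wins". A shell sketch of the state-dir lookup (illustrative; hull implements this internally):

State directory resolution (sketch)
for d in "$HULL_STATE_DIR" "$HOME/.hull/state" /var/run/hull/state /tmp/.hull/state; do
  [ -n "$d" ] && mkdir -p "$d" 2>/dev/null && [ -w "$d" ] && { echo "state dir: $d"; break; }
done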

Logs are captured by dup2’ing the workload’s stdout/stderr to an inheritable fd opened before pivot_root. The fd survives all namespace transitions, so the workload’s output lands on the host filesystem even though the container cannot see the log file.

Mentat Integration

When used as a Mentat driver, hull is invoked by mentat-agent via its HullDriver trait implementation. Setting config.driver: hull in the service YAML makes the agent spawn hull run <manifest>, register the endpoint in the service registry, and auto-generate the Caddy reverse-proxy block.

Mentat service YAML for hull
services:
  - name: titan
    replicas: 1
    config:
      driver: hull
      manifest: /etc/hull/titan.json
      binary: /usr/local/bin/hull-sudo
      port: 4500
    ingress:
      host: www.titan-bus.com
      aliases:
        - titan.getmentat.run
      path: /
      tls: true
    security:
      profile: hull

Known Limitations

  • x86_64 and aarch64 only (seccomp tables are architecture-specific)
  • Kernel ≥ 5.13 required for Landlock (graceful fallback on older kernels)
  • cgroups v2 unified hierarchy only (no v1 fallback)
  • Bridge mode only supports /24 subnets (254 usable IPs)
  • Bridge mode skips NEWPID — the workload shares the host PID namespace
  • No container registry — rootfs must be a local path or archive
  • No layered filesystem — the rootfs is a full copy, not overlayfs layers
  • Custom Landlock rules via manifest not yet implemented (default policy only)
  • Rootless mode: cgroups are best-effort (most hosts don’t delegate to unprivileged users)

Build & Deploy

Cross-compile from macOS to Linux
cd /path/to/hull
zig build -Dtarget=x86_64-linux-musl -Doptimize=ReleaseFast
# Binary: zig-out/bin/hull (~3.1 MB)

scp zig-out/bin/hull user@host:/usr/local/bin/hull
ssh user@host "hull version"
# hull 0.2.0 (zig 0.15.2)
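
aarch64 is the other supported architecture; the same invocation should work with the target swapped (assuming no arch-specific build flags):

Cross-compile for aarch64
zig build -Dtarget=aarch64-linux-musl -Doptimize=ReleaseFast
# Binary: zig-out/bin/hull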