Hull Runtime
Hull is a daemonless Linux container runtime. A single ~3 MB static-musl Zig binary orchestrates namespaces, cgroups v2, seccomp-bpf, Landlock, and pivot_root — no daemon, no containerd, no shim. Each hull run forks the workload, writes state to disk, and exits. Commands like ps, stop, and inspect read that state directory.
CLI Reference
| Command | Description |
| --- | --- |
| hull run [--rootless] <manifest> | Start a container from a JSON manifest. --rootless forces NEWUSER even as root |
| hull ps | List running containers: name, PID, uptime, argv[0] |
| hull stop <name> | Graceful stop (SIGTERM) |
| hull kill <name> | Immediate stop (SIGKILL) |
| hull logs <name> | Print captured stdout/stderr |
| hull inspect <name> | Show cgroup numbers, namespace inums, mount points |
| hull version | Print version |
| hull help | Show usage |
Exit codes: 0 success, 1 usage error, 2 runtime error, 3 manifest error, 127 child execve failed.
Manifest Specification
A hull manifest is a JSON file describing the container. Three fields are required; everything else has sane defaults.
Required Fields
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | — | Container name (1-64 chars, alphanumeric + dash + underscore) |
| rootfs | string | — | Path to rootfs directory or .tar.gz archive. Archives are extracted and cached under /var/lib/hull/rootfs/<name>/ |
| argv | string[] | — | Command to execute inside the container |
Optional Fields
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| env | string[] | [] | Environment variables (e.g. ["PORT=4500", "NODE_ENV=production"]) |
| profile | string | "default" | Seccomp profile: default, beam, dotnet, or node |
| network | string | "none" | Network mode: none (loopback), host (shared), or bridge (veth) |
| bridge.name | string | "hull0" | Linux bridge device name (only if network=bridge) |
| bridge.subnet | string | "10.88.0.0/24" | Bridge subnet CIDR (only /24 supported) |
| bridge.ip | string | "" (auto) | Container IP. Empty = auto-allocate next free via lease dir |
| bridge.mtu | number | 0 | MTU for veth pair. 0 = kernel default (1500) |
| hostname | string | name | Container hostname (UTS namespace) |
| cwd | string | "/" | Working directory inside the container before execve |
| limits.memory_mb | number | 0 | Memory limit in MB. 0 = no limit |
| limits.cpu | number | 0 | CPU fraction (1.0 = one core). 0 = no limit |
| limits.pids | number | 0 | Max processes. 0 = no limit |
| mounts[].host | string | — | Host source path for bind mount |
| mounts[].container | string | — | Container destination path |
| mounts[].readonly | boolean | false | Mount as read-only |
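The required-field and name rules above are the sort of checks behind a manifest-error exit (code 3). A minimal Python sketch of that validation — validate_manifest and NAME_RE are illustrative names, not hull's internals:

```python
import json
import re

# Documented name rule: 1-64 chars, alphanumeric + dash + underscore.
NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def validate_manifest(text: str) -> list[str]:
    """Return a list of problems; an empty list means the manifest looks valid."""
    errors = []
    try:
        m = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    # The three required fields from the spec above.
    for field in ("name", "rootfs", "argv"):
        if field not in m:
            errors.append(f"missing required field: {field}")
    if "name" in m and not NAME_RE.match(str(m["name"])):
        errors.append("name must be 1-64 chars of alphanumeric, dash, underscore")
    if "argv" in m and (not isinstance(m["argv"], list) or not m["argv"]):
        errors.append("argv must be a non-empty array of strings")
    return errors
```
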
Example: Titan ESB (Elixir/Phoenix)
{
"name": "titan",
"rootfs": "/var/lib/hull/rootfs/titan",
"argv": ["/opt/titan/bin/titan", "start"],
"env": [
"PHX_SERVER=true",
"PORT=4500",
"PHX_HOST=www.titan-bus.com",
"LANG=en_US.UTF-8",
"HOME=/tmp",
"RELEASE_TMP=/tmp"
],
"profile": "beam",
"network": "host",
"hostname": "titan",
"cwd": "/opt/titan",
"mounts": [
{ "host": "/dev/null", "container": "/dev/null" },
{ "host": "/dev/urandom", "container": "/dev/urandom" },
{ "host": "/etc/resolv.conf", "container": "/etc/resolv.conf", "readonly": true },
{ "host": "/etc/ssl/certs", "container": "/etc/ssl/certs", "readonly": true }
],
"limits": { "memory_mb": 1024, "cpu": 2.0, "pids": 4096 }
}

Example: Bridge-Networked Container
{
"name": "webapp",
"rootfs": "/var/lib/hull/rootfs/webapp",
"argv": ["/usr/local/bin/node", "server.js"],
"env": ["PORT=3000", "NODE_ENV=production"],
"profile": "node",
"network": "bridge",
"bridge": { "subnet": "10.88.0.0/24" },
"hostname": "webapp",
"cwd": "/app",
"limits": { "memory_mb": 256, "cpu": 1.0, "pids": 128 }
}

Seccomp Profiles
Hull ships four curated syscall allowlists. The profile is selected via the profile manifest field and installed just before execve — even the workload’s first syscall is filtered. Any syscall not in the list triggers KILL_PROCESS (not EPERM, Docker’s default action).
| Profile | Syscalls | Use Case | Notable Extras |
| --- | --- | --- | --- |
| default | 122 | Rust musl, Zig, Go static binaries, shell scripts | execve/clone/clone3 for shell pipelines, copy_file_range for coreutils |
| beam | 177 | Elixir, Erlang, Phoenix — the BEAM VM | +55 extras: timerfd, signalfd, inotify, memfd_create, legacy mkdir/rmdir/unlink/rename/chmod/chown |
| node | 32 | Node.js, Deno, Bun — libuv event loop | epoll_create1, epoll_wait, eventfd, signalfd, timerfd |
| dotnet | 36 | .NET 8/9 (CoreCLR, NativeAOT) | select, pselect6, signalfd4, memfd_create (JIT staging), tgkill (pthreads) |
Linux on x86_64 defines roughly 400 syscalls in total; hull blocks the ~280-370 that are absent from each profile's allowlist. Notably blocked in all profiles: ptrace, process_vm_readv/writev, bpf, add_key/keyctl, userfaultfd, kexec_load, init_module.
Security Layers
Hull applies seven isolation layers, outermost to innermost. Each layer is independent — failure of one does not disable the others.
| # | Layer | Effect |
| --- | --- | --- |
| 1 | User namespace | Process thinks it's root; host sees unprivileged uid. Enabled by --rootless flag |
| 2 | PID namespace | Isolated PID tree. Container PID 1 = your workload. Cannot see or signal host processes |
| 3 | Network namespace | Isolated network stack. Three modes: none (loopback only), host (shared), bridge (veth pair + NAT) |
| 4 | Mount namespace | pivot_root into dedicated rootfs. Host filesystem completely invisible (not chroot — kernel enforcement) |
| 5 | cgroups v2 | Hard limits on CPU, memory, and PIDs. Enforced by kernel. Container cannot fork-bomb or exhaust host RAM |
| 6 | Landlock LSM | Filesystem allowlist. Default: rootfs read+exec, /tmp read+write. Even uid 0 inside the container cannot bypass it. Graceful skip on kernels < 5.13 |
| 7 | seccomp-bpf | Syscall allowlist per workload profile. KILL_PROCESS on violation. Installed just before execve — even the first syscall is filtered |
Bridge Networking
Set "network": "bridge" and hull creates a full veth-bridge networking stack per container:
- Creates bridge device hull0 (idempotent) with gateway IP 10.88.0.1/24
- Enables ip_forward and installs an nftables masquerade rule for the subnet
- Inserts iptables -I FORWARD 1 -i hull0 -j ACCEPT to bypass Docker’s policy DROP
- Allocates the next free IP via atomic O_EXCL lock files in ~/.hull/leases/
- Creates a veth pair; attaches the host end to the bridge, moves the container end into the child’s NEWNET
- Configures the container side via nsenter -t <pid> -n ip addr add ...
- On exit: lease released; veth auto-cleaned by the kernel when the netns terminates
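The O_EXCL lease step can be sketched in a few lines of Python — one lock file per address, and the first open with O_CREAT | O_EXCL to succeed owns it atomically, even across concurrent hull runs. Function names here are assumptions for illustration, not hull's actual code:

```python
import errno
import os

def allocate_ip(lease_dir: str, subnet_prefix: str = "10.88.0") -> str:
    """Claim the next free address in the /24 via an O_EXCL lock file."""
    os.makedirs(lease_dir, exist_ok=True)
    # .1 is the gateway, so allocation starts at .2 and ends at .254.
    for host in range(2, 255):
        ip = f"{subnet_prefix}.{host}"
        try:
            # O_EXCL makes creation atomic: exactly one caller can win each file.
            fd = os.open(os.path.join(lease_dir, ip),
                         os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
            os.close(fd)
            return ip
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
    raise RuntimeError("subnet exhausted")

def release_ip(lease_dir: str, ip: str) -> None:
    """Drop the lease so the address can be reallocated."""
    os.unlink(os.path.join(lease_dir, ip))
```
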
$ sudo nsenter -t <pid> -n ip -br addr
lo UNKNOWN 127.0.0.1/8
eth0@if36548 UP 10.88.0.2/24
$ sudo nsenter -t <pid> -n ip route
default via 10.88.0.1 dev eth0
10.88.0.0/24 dev eth0 proto kernel scope link src 10.88.0.2
$ sudo nsenter -t <pid> -n ping -c 3 8.8.8.8
3 packets transmitted, 3 received, 0% packet loss
rtt min/avg/max = 0.256/0.389/0.523 ms

Rootless Mode
Pass --rootless (or run as a non-root user) and hull uses a three-process fork-pipe dance to set up the NEWUSER namespace:
orig_parent (host uid)
│ fork
▼
userns_setup
│ unshare(NEWUSER) → signal parent "ready"
│ block on pipe → parent writes uid_map/gid_map
│ unshare(NEWPID|NEWNET|NEWNS|NEWUTS|NEWIPC)
│ fork
▼
workload (PID 1 in NEWPID, uid 0 in container)
dup2 log_fd → pivot_root → landlock → seccomp → execve

The workload sees itself as uid 0 with full capabilities inside the user namespace, but the host sees an unprivileged uid with no write access to any host path. cgroups are best-effort in rootless mode (most hosts don’t delegate cgroups to unprivileged users).
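The ready/map handshake in the diagram can be sketched with plain pipes — no real unshare(NEWUSER), so it runs unprivileged, but the ordering is the same one rootless setup requires, because uid_map must be written from outside the new user namespace:

```python
import os

def handshake() -> str:
    ready_r, ready_w = os.pipe()   # child -> parent: "I have unshared"
    map_r, map_w = os.pipe()       # parent -> child: "uid_map is written"
    pid = os.fork()
    if pid == 0:
        # child: (a real runtime would call unshare(CLONE_NEWUSER) here)
        os.write(ready_w, b"R")
        os.read(map_r, 1)          # block until the parent finishes uid_map
        # (would now unshare PID/NET/NS/UTS/IPC and fork the workload)
        os._exit(0)
    os.read(ready_r, 1)            # wait for the child's unshare
    # parent: (would write /proc/<pid>/uid_map and gid_map here)
    os.write(map_w, b"M")          # release the child
    _, status = os.waitpid(pid, 0)
    return "ok" if os.waitstatus_to_exitcode(status) == 0 else "fail"
```

Without the second pipe the child could race ahead and unshare its remaining namespaces before the parent has written the maps, at which point the mapping write would fail.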
State & Logs
Hull has no daemon — state lives on disk. Location precedence:
| Kind | Location precedence |
| --- | --- |
| State | $HULL_STATE_DIR → $HOME/.hull/state → /var/run/hull/state → /tmp/.hull/state |
| Logs | $HULL_LOGS_DIR → $HOME/.hull/logs → /var/run/hull/logs → /tmp/.hull/logs |
| Leases | $HULL_LEASE_DIR → $HOME/.hull/leases → /var/run/hull/leases |
| Rootfs cache | /var/lib/hull/rootfs/<name>/ (extracted from tar.gz archives) |
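The precedence in the table can be sketched as a small resolver — resolve_dir is a hypothetical name, and the fallback logic past $HOME is simplified (a real implementation would presumably probe writability before settling on /var/run or /tmp):

```python
import os

def resolve_dir(kind: str, env: dict) -> str:
    """Resolve the state/logs/leases directory per the documented precedence."""
    # 1. Explicit env override, e.g. $HULL_STATE_DIR.
    override = env.get(f"HULL_{kind.upper()}_DIR")
    if override:
        return override
    # 2. Per-user directory under $HOME.
    home = env.get("HOME")
    if home:
        return os.path.join(home, ".hull", kind)
    # 3. System-wide fallback (then /tmp/.hull/<kind> for state and logs
    #    when /var/run is not writable -- elided in this sketch).
    return f"/var/run/hull/{kind}"
```
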
Logs are captured by dup2’ing the workload’s stdout/stderr to a heritable fd opened before pivot_root. The fd survives all namespace transitions, so the workload’s output lands on the host filesystem even though the container cannot see it.
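The mechanism is the classic open-fork-dup2-exec sequence; a Python sketch (run_logged is an illustrative name, not hull's API). Because the fd refers to an already-open file description, it keeps working regardless of which mount namespace the child later pivots into:

```python
import os

def run_logged(argv: list[str], log_path: str) -> int:
    """Run argv with stdout/stderr redirected to log_path; return its exit code."""
    # Open the log on the host side, before any namespace transitions.
    log_fd = os.open(log_path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    pid = os.fork()
    if pid == 0:
        os.dup2(log_fd, 1)   # stdout -> log file
        os.dup2(log_fd, 2)   # stderr -> log file
        os.close(log_fd)
        # (hull would pivot_root / apply Landlock / seccomp here)
        os.execv(argv[0], argv)   # argv[0] must be an absolute path
    os.close(log_fd)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```
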
Mentat Integration
When used as a Mentat driver, hull is invoked by mentat-agent via the HullDriver trait implementation. The YAML config.driver: hull triggers the agent to spawn hull run <manifest>, register the endpoint in the service registry, and auto-generate the Caddy reverse proxy block.
services:
- name: titan
replicas: 1
config:
driver: hull
manifest: /etc/hull/titan.json
binary: /usr/local/bin/hull-sudo
port: 4500
ingress:
host: www.titan-bus.com
aliases:
- titan.getmentat.run
path: /
tls: true
security:
profile: hull

Known Limitations
- x86_64 and aarch64 only (seccomp tables are architecture-specific)
- Kernel ≥ 5.13 required for Landlock (graceful fallback on older kernels)
- cgroups v2 unified hierarchy only (no v1 fallback)
- Bridge mode only supports /24 subnets (254 usable IPs)
- Bridge mode skips NEWPID — the workload shares the host PID namespace
- No container registry — rootfs must be a local path or archive
- No layered filesystem — the rootfs is a full copy, not overlayfs layers
- Custom Landlock rules via manifest not yet implemented (default policy only)
- Rootless mode: cgroups are best-effort (most hosts don’t delegate to unprivileged users)
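The /24 arithmetic behind the bridge limitation above, checked with the standard library: 254 usable host addresses, minus the 10.88.0.1 gateway, leaves 253 allocatable leases per bridge.

```python
import ipaddress

subnet = ipaddress.ip_network("10.88.0.0/24")
usable = list(subnet.hosts())      # .1 through .254
allocatable = len(usable) - 1      # minus the 10.88.0.1 gateway
```
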
Build & Deploy
cd /path/to/hull
zig build -Dtarget=x86_64-linux-musl -Doptimize=ReleaseFast
# Binary: zig-out/bin/hull (~3.1 MB)
scp zig-out/bin/hull user@host:/usr/local/bin/hull
ssh user@host "hull version"
# hull 0.2.0 (zig 0.15.2)