Compare commits


4 Commits

Author SHA1 Message Date
d0da012a82 feat: show golden VM tag in clone list, add console logging, fix ubuntu boot
- Persist golden VM tag to clones/{id}/tag at spawn time
- GET /clones now returns [{id, tag}] objects instead of plain IDs
- Web UI renders tag as a dim label next to each clone entry (clone 3 · default)
- Pre-existing fixes included in this commit:
  - console: tee all PTY output to clones/{id}/console.log for boot capture
  - network: destroy stale tap before recreating to avoid EBUSY errors
  - orchestrator: fix ubuntu systemd boot (custom fc-console.service, fstab,
    mask serial-getty udev dep, longer settle time, correct package list)
  - config: remove quiet/loglevel=0 from default boot args

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 22:04:11 +00:00
fb1db7c9ea feat: multi-distro support and tagged golden snapshots
Add Alpine, Debian, and Ubuntu rootfs support to `init [distro]`.
Golden snapshots are now namespaced under `golden/<tag>/` so multiple
baselines can coexist. `spawn [tag] [N]` selects which snapshot to
clone from. Systemd-based distros (Debian, Ubuntu) get a fc-net-init
systemd unit; Alpine keeps its inittab-based init.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 20:48:43 +00:00
bfc1f47287 fix: pause VM before MMDS injection, resume after to ensure config is applied
- Load snapshot with ResumeVM: false so MMDS data can be written while VM is paused
- Call ResumeVM explicitly after configureMmds succeeds
- Skip PUT /mmds/config on restored VMs (Firecracker rejects it with 400)
- Strip JSON quotes from MMDS values with tr -d '"' in net-init script
- Add 169.254.169.2/32 link-local addr and flush eth0 before applying new IP

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 15:11:14 +00:00
5e23e0ab4e feat: add guest network autoconfiguration via Firecracker MMDS
Introduces optional per-clone IP assignment using the Firecracker Microvm
Metadata Service (MMDS). A background daemon (fc-net-init) is baked into
the rootfs during init and captured in the golden snapshot — on clone
resume it polls 169.254.169.254 and applies the IP/GW/DNS config injected
by the orchestrator immediately after snapshot restore.

- config.go: add AutoNetConfig bool (FC_AUTO_NET_CONFIG=1)
- orchestrator.go: embed fc-net-init daemon + MMDS link-local route in
  init script; set AllowMMDS: true on golden NIC; spawnOne/SpawnSingle
  accept net bool and propagate it via FC_AUTO_NET_CONFIG in proxy env
- console.go: set AllowMMDS: true on clone NIC; call configureMmds()
  after m.Start() when AutoNetConfig is enabled
- network.go: add configureMmds() — PUT /mmds with ip/gw/dns over the
  clone's Firecracker Unix socket
- serve.go: POST /clones accepts optional {"net": bool} body to override
  the global AutoNetConfig default per-request
- web/terminal.html: spawn button always sends {"net": true}
- docs/commands.md: document manual config + MMDS autoconfiguration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 11:58:59 +00:00
12 changed files with 858 additions and 129 deletions


@@ -35,8 +35,9 @@ All tunables are set via environment variables. Every variable has a default; no
| `FC_MEM_MIB` | `128` | Memory per VM in MiB |
| `FC_BRIDGE` | `fcbr0` | Host bridge name. Set to `none` to disable all networking |
| `FC_BRIDGE_CIDR` | `172.30.0.1/24` | IP address and prefix assigned to the host bridge |
| `FC_GUEST_PREFIX` | `172.30.0` | IP prefix for guest address allocation |
| `FC_GUEST_GW` | `172.30.0.1` | Default gateway advertised to guests |
| `FC_GUEST_PREFIX` | `172.30.0` | IP prefix for guest address allocation (used with `FC_AUTO_NET_CONFIG`) |
| `FC_GUEST_GW` | `172.30.0.1` | Default gateway advertised to guests (used with `FC_AUTO_NET_CONFIG`) |
| `FC_AUTO_NET_CONFIG` | _(unset)_ | Set to `1` to automatically assign guest IPs via MMDS on clone start |
Kernel boot arguments are hardcoded and not user-configurable:
@@ -55,10 +56,12 @@ After running all commands, `$FC_BASE_DIR` (`/tmp/fc-orch` by default) contains:
├── vmlinux # kernel image (shared, immutable)
├── rootfs.ext4 # base Alpine rootfs (shared, immutable)
├── golden/
│ ├── api.sock # Firecracker API socket (golden VM, transient)
│ ├── rootfs.ext4 # COW copy of base rootfs used by golden VM
│ ├── mem # memory snapshot (read by all clones, never written)
│ └── vmstate # VM state snapshot (golden reference)
│ ├── default/ # "default" tag directory
│ │ ├── api.sock # Firecracker API socket (golden VM, transient)
│ │ ├── rootfs.ext4 # COW copy of base rootfs used by golden VM
│ │ ├── mem # memory snapshot (read by all clones, never written)
│ │ └── vmstate # VM state snapshot (golden reference)
│ └── <tag>/ # other tagged snapshots
├── clones/
│ ├── 1/
│ │ ├── api.sock # Firecracker API socket (clone 1)
@@ -95,23 +98,63 @@ When `FC_BRIDGE` is not `none` (the default), a Linux bridge and per-VM TAP devi
└── fctapN (clone N)
```
Each clone receives a unique TAP device and MAC address (`AA:FC:00:00:XX:XX`). IP assignment inside the guest is the guest OS's responsibility (the rootfs init script only brings `eth0` up; no DHCP server is included).
Each clone receives a unique TAP device and MAC address (`AA:FC:00:00:XX:XX`). The host-side bridge has NAT masquerading enabled so guests can reach the internet through the host's default route.
Set `FC_BRIDGE=none` to skip all network configuration. VMs will boot without a network interface.
### Guest IP assignment
The rootfs init script brings `eth0` up at the link layer only. Guests have no IP address by default. There are two ways to configure networking inside a VM:
#### Manual configuration (inside the VM console)
```sh
# Pick an unused IP in the bridge subnet — e.g. .11 for clone 1, .12 for clone 2
ip addr add 172.30.0.11/24 dev eth0
ip route add default via 172.30.0.1
echo "nameserver 1.1.1.1" > /etc/resolv.conf
ping 1.1.1.1 # verify
```
Manual config is ephemeral — it is lost when the clone is stopped. Use the automatic option below for persistent configuration.
#### Automatic configuration via MMDS (`FC_AUTO_NET_CONFIG=1`)
When `FC_AUTO_NET_CONFIG=1` is set, the orchestrator uses the Firecracker **Microvm Metadata Service (MMDS)** to inject per-clone network config immediately after the VM starts. A small background daemon embedded in the rootfs (`/sbin/fc-net-init`) polls `169.254.169.254` and applies the config automatically — no manual steps needed.
IPs are assigned deterministically from `FC_GUEST_PREFIX`:
```
clone 1 → 172.30.0.11/24
clone 2 → 172.30.0.12/24
clone N → 172.30.0.(10+N)/24
```
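The mapping above is the same arithmetic the orchestrator performs on the host side; a quick shell sketch (the prefix value is the documented default, `ID` is an example clone number):

```shell
# Compute the address the orchestrator will assign to clone $ID.
ID=3
echo "${FC_GUEST_PREFIX:-172.30.0}.$((10 + ID))/24"   # → 172.30.0.13/24
```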
Usage:
```sh
sudo FC_AUTO_NET_CONFIG=1 ./fc-orch start
```
Within ~1–2 seconds of clone start, `eth0` inside the VM will have the assigned IP, default route, and DNS (`1.1.1.1`) configured.
> **Note:** `FC_AUTO_NET_CONFIG` requires `fc-orch init` and `fc-orch golden` to have been run (or re-run) after this feature was added, so that the `fc-net-init` daemon is present in the golden snapshot.
---
## `init`
### Purpose
Downloads the Linux kernel image and builds a minimal Alpine Linux ext4 rootfs. This command only needs to run once; both artifacts are reused by all subsequent `golden` invocations. `init` is idempotent — it skips any artifact that already exists on disk.
Downloads the Linux kernel image and builds a minimal filesystem (Alpine, Debian, or Ubuntu). This command only needs to run once per distro; both artifacts are reused by `golden` invocations. `init` is idempotent — it skips any artifact that already exists on disk.
### Usage
```sh
sudo ./fc-orch init
sudo ./fc-orch init [distro]
```
Where `[distro]` can be `alpine` (default), `debian`, or `ubuntu`.
Optional overrides:
@@ -225,13 +268,15 @@ This command always recreates the golden directory from scratch, discarding any
### Usage
```sh
sudo ./fc-orch golden
sudo ./fc-orch golden [tag] [distro]
```
Where `[tag]` identifies the snapshot baseline name (default `default`), and `[distro]` dictates the source `.ext4` image to use (default: `alpine`).
Optional overrides:
```sh
sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden ubuntu ubuntu
```
### Prerequisites
@@ -249,14 +294,14 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
2. **Recreate golden directory**
```sh
rm -rf /tmp/fc-orch/golden
mkdir -p /tmp/fc-orch/golden /tmp/fc-orch/pids
rm -rf /tmp/fc-orch/golden/<tag>
mkdir -p /tmp/fc-orch/golden/<tag> /tmp/fc-orch/pids
```
3. **COW copy of base rootfs**
```sh
cp --reflink=always /tmp/fc-orch/rootfs.ext4 /tmp/fc-orch/golden/rootfs.ext4
cp --reflink=always /tmp/fc-orch/rootfs.ext4 /tmp/fc-orch/golden/<tag>/rootfs.ext4
```
On filesystems that do not support reflinks (e.g. ext4), this falls back to a regular byte-for-byte copy via `io.Copy`. On btrfs or xfs, the reflink is instant and consumes no additional space until the VM writes to the disk.
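As an aside, GNU `cp` can express the same copy-with-fallback behavior in a single flag. This sketch is an equivalent alternative, not the actual implementation (the orchestrator uses `--reflink=always` plus a Go `io.Copy` fallback):

```shell
# --reflink=auto clones instantly on btrfs/xfs and silently falls back
# to a full byte-for-byte copy on filesystems without reflink support.
src=$(mktemp) && echo hello > "$src"
dst=$(mktemp -u)
cp --reflink=auto "$src" "$dst"
cat "$dst"   # identical contents either way
rm -f "$src" "$dst"
```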
@@ -290,7 +335,7 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
5. **Build Firecracker machine configuration** (passed to the SDK in memory):
```
SocketPath: /tmp/fc-orch/golden/api.sock
SocketPath: /tmp/fc-orch/golden/<tag>/api.sock
KernelImagePath: /tmp/fc-orch/vmlinux
KernelArgs: console=ttyS0 reboot=k panic=1 pci=off i8042.noaux quiet loglevel=0
MachineCfg:
@@ -299,7 +344,7 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
TrackDirtyPages: true ← required for snapshot support
Drives:
- DriveID: rootfs
PathOnHost: /tmp/fc-orch/golden/rootfs.ext4
PathOnHost: /tmp/fc-orch/golden/<tag>/rootfs.ext4
IsRootDevice: true
IsReadOnly: false
NetworkInterfaces:
@@ -312,7 +357,7 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
The Firecracker Go SDK spawns:
```sh
firecracker --api-sock /tmp/fc-orch/golden/api.sock
firecracker --api-sock /tmp/fc-orch/golden/<tag>/api.sock
```
The SDK then applies the machine configuration via HTTP calls to the Firecracker API socket.
@@ -345,13 +390,13 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
```go
m.CreateSnapshot(ctx,
"/tmp/fc-orch/golden/mem",
"/tmp/fc-orch/golden/vmstate",
"/tmp/fc-orch/golden/<tag>/mem",
"/tmp/fc-orch/golden/<tag>/vmstate",
)
// SDK call — PUT /snapshot/create
// {
// "mem_file_path": "/tmp/fc-orch/golden/mem",
// "snapshot_path": "/tmp/fc-orch/golden/vmstate",
// "mem_file_path": "/tmp/fc-orch/golden/<tag>/mem",
// "snapshot_path": "/tmp/fc-orch/golden/<tag>/vmstate",
// "snapshot_type": "Full"
// }
```
@@ -377,16 +422,16 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
| Path | Description |
|---|---|
| `/tmp/fc-orch/golden/mem` | Full memory snapshot (~`FC_MEM_MIB` MiB) |
| `/tmp/fc-orch/golden/vmstate` | VM state snapshot (vCPU registers, device state) |
| `/tmp/fc-orch/golden/rootfs.ext4` | COW copy of base rootfs (not needed after snapshotting, kept for reference) |
| `/tmp/fc-orch/golden/<tag>/mem` | Full memory snapshot (~`FC_MEM_MIB` MiB) |
| `/tmp/fc-orch/golden/<tag>/vmstate` | VM state snapshot (vCPU registers, device state) |
| `/tmp/fc-orch/golden/<tag>/rootfs.ext4` | COW copy of base rootfs (not needed after snapshotting, kept for reference) |
### Error conditions
| Error | Cause | Resolution |
|---|---|---|
| `kernel not found — run init first` | `FC_KERNEL` path does not exist | Run `init` first |
| `rootfs not found — run init first` | `FC_ROOTFS` path does not exist | Run `init` first |
| `rootfs not found — run init first` | Ext4 file does not exist | Run `init [distro]` first |
| `firecracker binary not found` | `FC_BIN` not in `$PATH` | Install Firecracker or set `FC_BIN` |
| `create bridge: ...` | `ip link add` failed | Check if another bridge with the same name exists with incompatible config |
| `start golden VM: ...` | Firecracker failed to boot | Check Firecracker logs; verify kernel and rootfs are valid |
@@ -406,8 +451,8 @@ Clone IDs are auto-incremented: if clones 1–3 already exist, the next `spawn 2
### Usage
```sh
sudo ./fc-orch spawn # spawn 1 clone (default)
sudo ./fc-orch spawn 10 # spawn 10 clones
sudo ./fc-orch spawn # spawn 1 clone from the "default" golden snapshot
sudo ./fc-orch spawn ubuntu 10 # spawn 10 clones from the "ubuntu" golden snapshot
```
### Prerequisites
@@ -422,7 +467,7 @@ The following steps are performed once for each requested clone. Let `{id}` be t
1. **Verify golden artifacts exist**
Checks for both `/tmp/fc-orch/golden/vmstate` and `/tmp/fc-orch/golden/mem`. Exits with an error if either is missing.
Checks for both `/tmp/fc-orch/golden/<tag>/vmstate` and `/tmp/fc-orch/golden/<tag>/mem`. Exits with an error if either is missing.
2. **Create directories**
@@ -438,20 +483,20 @@ The following steps are performed once for each requested clone. Let `{id}` be t
4. **COW copy of golden rootfs**
```sh
cp --reflink=always /tmp/fc-orch/golden/rootfs.ext4 /tmp/fc-orch/clones/{id}/rootfs.ext4
cp --reflink=always /tmp/fc-orch/golden/<tag>/rootfs.ext4 /tmp/fc-orch/clones/{id}/rootfs.ext4
```
Falls back to a full copy if reflinks are unsupported.
5. **Shared memory reference** (no copy)
The clone's Firecracker config will point directly at `/tmp/fc-orch/golden/mem`. No file operation is needed here — the kernel's MAP_PRIVATE ensures each clone's writes are private.
The clone's Firecracker config will point directly at `/tmp/fc-orch/golden/<tag>/mem`. No file operation is needed here — the kernel's MAP_PRIVATE ensures each clone's writes are private.
6. **Copy vmstate**
```sh
# implemented as io.Copy in Go
cp /tmp/fc-orch/golden/vmstate /tmp/fc-orch/clones/{id}/vmstate
cp /tmp/fc-orch/golden/<tag>/vmstate /tmp/fc-orch/clones/{id}/vmstate
```
The vmstate file is small (typically < 1 MiB), so a full copy is cheap.
@@ -484,7 +529,7 @@ The following steps are performed once for each requested clone. Let `{id}` be t
- MacAddress: AA:FC:00:00:00:{id:02X}
HostDevName: fctap{id}
Snapshot:
MemFilePath: /tmp/fc-orch/golden/mem ← shared, read-only mapping
MemFilePath: /tmp/fc-orch/golden/<tag>/mem ← shared, read-only mapping
SnapshotPath: /tmp/fc-orch/clones/{id}/vmstate
ResumeVM: true ← restore instead of fresh boot
```
@@ -503,7 +548,7 @@ The following steps are performed once for each requested clone. Let `{id}` be t
m.Start(ctx)
// SDK call — POST /snapshot/load
// {
// "mem_file_path": "/tmp/fc-orch/golden/mem",
// "mem_file_path": "/tmp/fc-orch/golden/<tag>/mem",
// "snapshot_path": "/tmp/fc-orch/clones/{id}/vmstate",
// "resume_vm": true
// }
@@ -511,13 +556,27 @@ The following steps are performed once for each requested clone. Let `{id}` be t
Restoration time (from `m.Start` call to return) is measured and logged.
11. **Record PID**
11. **Inject network config via MMDS** (only when `FC_AUTO_NET_CONFIG=1` and networking is enabled)
Immediately after the snapshot is restored, the orchestrator configures the MMDS for this clone via two API calls to the clone's Firecracker socket:
```
PUT /mmds/config
{"version": "V1", "network_interfaces": ["1"]}
PUT /mmds
{"ip": "172.30.0.{10+id}/24", "gw": "172.30.0.1", "dns": "1.1.1.1"}
```
The `fc-net-init` daemon already running inside the guest (started during golden VM boot, captured in the snapshot) polls `169.254.169.254` via a link-local route and applies the config to `eth0` within ~1 second of clone resume.
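For debugging, the injection can be reproduced by hand with `curl` against the clone's API socket. This is a hedged sketch: the socket path assumes clone 1 under the default base dir, the values are examples, and per the later fix in this series the `PUT /mmds/config` call is skipped on restored clones (Firecracker rejects it with 400), so only the data write is shown:

```shell
# Write per-clone network config into a running clone's MMDS store.
SOCK=/tmp/fc-orch/clones/1/api.sock
if [ -S "$SOCK" ]; then
    curl --unix-socket "$SOCK" -X PUT http://localhost/mmds \
         -H 'Content-Type: application/json' \
         --data '{"ip": "172.30.0.11/24", "gw": "172.30.0.1", "dns": "1.1.1.1"}'
fi
```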
12. **Record PID**
```sh
echo {pid} > /tmp/fc-orch/pids/clone-{id}.pid
```
12. **Register clone in memory**
13. **Register clone in memory**
The running clone is tracked in an in-process map keyed by clone ID, holding the Firecracker SDK handle, context cancel function, and TAP device name. This allows `kill` to cleanly terminate clones started in the same process invocation.

docs/create_golden_image.md (new file, 166 lines)

@@ -0,0 +1,166 @@
# Guide: Creating Custom Golden Images
This guide outlines exactly how to create new, customized golden images (e.g. pivoting from Alpine to an Ubuntu or Node.js environment) and seamlessly integrate them into the `fc-orch` tagging system.
By default, executing `./fc-orch init` gives you a basic Alpine Linux image, but you can also generate built-in Ubuntu and Debian environments trivially via `./fc-orch init ubuntu` or `./fc-orch init debian`. The real power of `fc-orch` lies in maintaining multiple customized snapshot bases (golden tags).
---
## 1. Acquiring Custom Assets
To build a fresh golden image, at minimum you must provide a new filesystem:
- **Custom Root Filesystem**: An uncompressed `ext4` filesystem image that contains your system and libraries.
- **Custom Kernel** *(Optional)*: An uncompressed Linux kernel binary (`vmlinux`). If not provided, the default Firecracker CI kernel continues to be used.
### Recommendations for Custom Distros:
If you are generating a completely custom/unsupported base, you can build the filesystem with tools like `docker export` combined with `mkfs.ext4`, or provision it with `debootstrap`.
Ensure that your custom root filesystem contains an appropriate bootstrap sequence in `/etc/init.d/rcS` (or systemd, if configured) that mounts the essential pseudo-filesystems (`/proc`, `/sys`, `/dev`) and brings up the `eth0` link, since Firecracker expects the guest OS to prepare these primitives itself. The built-in `init` command handles this automatically for the `alpine`, `ubuntu`, and `debian` distributions.
## 2. Using Environment Overrides
Rather than replacing the default `/tmp/fc-orch/rootfs.ext4`, you can set environment variables before capturing the golden snapshot.
The essential variables to override are:
- `FC_ROOTFS`: Path to your custom `.ext4` image (e.g., `/home/user/ubuntu.ext4`).
- `FC_MEM_MIB`: Amount of memory the golden VM receives. Heavier distros like Ubuntu typically need more than the 128 MiB default (e.g., `512`).
- `FC_VCPUS`: Number of vCPUs for the golden VM. Default is `1`.
## 3. Capturing the Custom Golden Snapshot
Let's assume we want to provision a standard Ubuntu environment. First, create the rootfs (this automatically downloads and sets up the rootfs on your host):
```bash
sudo ./fc-orch init ubuntu
```
Then capture the baseline using the `ubuntu` tag and the `ubuntu` distro target. Note the increased `FC_MEM_MIB`, allocating 512 MiB of RAM for the heavier OS.
```bash
sudo FC_MEM_MIB=512 FC_VCPUS=2 ./fc-orch golden ubuntu ubuntu
```
### What happens in the background?
1. The orchestrator creates a fresh directory for this baseline at `/tmp/fc-orch/golden/ubuntu/`.
2. It uses the `rootfs-ubuntu.ext4` image produced by `init ubuntu` as the root block device for the golden VM.
3. Firecracker boots the VM and waits briefly for the OS initialization to settle.
4. The VM is paused, and a vCPU/device state checkpoint (`vmstate`) plus a full memory snapshot (`mem`) are written to `/tmp/fc-orch/golden/ubuntu/`.
> **Note**: The Firecracker process exits after the artifacts are finalized; the snapshot baseline persists on disk.
## 4. Spawning Scalable Clones
Now that your image is stored under the `ubuntu` tag, clones can be spawned from it independently using copy-on-write (COW).
Pass the target tag and the desired replica count to `spawn`:
```bash
sudo ./fc-orch spawn ubuntu 10
```
This restores 10 concurrent Firecracker VMs from the `ubuntu` snapshot without disturbing any existing Alpine-based clones. Multiple base images can run side by side this way.
---
## How Ubuntu VM Configuration Works
### Build-time: chroot package installation
`ubuntu-base` is a deliberately bare tarball — it ships no shell beyond `/bin/sh` (dash), no network tools, and no package cache. When `fc-orch init ubuntu` runs, after extracting the tarball the orchestrator performs a chroot install step:
1. **Virtual filesystems are bind-mounted** into the image (`/proc`, `/sys`, `/dev`, `/dev/pts`) so that `apt-get` can function correctly inside the chroot.
2. **`/etc/resolv.conf` is copied** from the host so DNS works during the install.
3. **`apt-get` installs the following packages** with `--no-install-recommends` to keep the image lean:
| Package | Purpose |
|---|---|
| `bash` | Interactive shell |
| `curl` | General-purpose HTTP client |
| `iproute2` | Provides the `ip` command (required by `fc-net-init`) |
| `wget` | Used by `fc-net-init` to poll the MMDS metadata endpoint |
| `ca-certificates` | Trusted CA bundle so HTTPS works out of the box |
4. **`apt` cache is purged** (`apt-get clean` + `rm -rf /var/lib/apt/lists/*`) before unmounting, keeping the final image around 200 MB on disk rather than 2 GB.
5. All bind mounts are removed before the function returns, whether or not the install succeeded.
The resulting ext4 image is **512 MB** (vs. 2 GB for a stock Ubuntu cloud image), comfortably fitting the installed packages with room for runtime state.
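The five steps can be sketched as a shell sequence. This is illustrative only: the real implementation is Go code inside `fc-orch init`, and the `$MNT` mount point is an assumption:

```shell
MNT=/mnt/ubuntu-rootfs   # assumed mount point of the extracted ubuntu-base image

provision() {
    # 1. Bind-mount virtual filesystems so apt works inside the chroot.
    for fs in proc sys dev dev/pts; do mount --bind "/$fs" "$MNT/$fs"; done
    # 2. Host DNS for the duration of the install.
    cp /etc/resolv.conf "$MNT/etc/resolv.conf"
    # 3. Lean package install.
    chroot "$MNT" apt-get update
    chroot "$MNT" apt-get install -y --no-install-recommends \
        bash curl iproute2 wget ca-certificates
    # 4. Purge the apt cache to keep the image small.
    chroot "$MNT" apt-get clean
    rm -rf "$MNT"/var/lib/apt/lists/*
    # 5. Always unmount, in reverse order.
    for fs in dev/pts dev sys proc; do umount "$MNT/$fs"; done
}

# Only meaningful with a mounted image and root privileges.
if [ -d "$MNT/etc" ]; then provision; fi
```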
### Boot-time: guest network autoconfiguration via MMDS
Every Ubuntu image gets `/sbin/fc-net-init` embedded at build time. On Ubuntu this script is wired into systemd as `fc-net-init.service` (enabled in `multi-user.target`).
When a clone VM resumes from its golden snapshot the service runs the following sequence:
```
1. ip addr add 169.254.169.2/32 dev eth0
— Adds a link-local address so the guest can reach the Firecracker MMDS
gateway at 169.254.169.254 without any prior routing state.
2. Poll GET http://169.254.169.254/ip (1-second timeout, retry every 1 s)
— Loops until the host has injected the per-clone IP config via
PUT /mmds on the Firecracker API socket.
3. Once /ip responds, fetch /gw and /dns from the same endpoint.
4. ip addr flush dev eth0
ip addr add <ip> dev eth0
ip route add default via <gw> dev eth0
echo "nameserver <dns>" > /etc/resolv.conf
— Applies the config atomically and exits.
```
The host side (see `orchestrator/network.go`) injects the three keys (`ip`, `gw`, `dns`) via the Firecracker MMDS API **after** the snapshot is loaded but **before** the VM is resumed, so the guest sees the data on its very first poll.
This design means the golden snapshot captures the polling loop already running. Clones that are spawned without `FC_AUTO_NET_CONFIG=1` will still run the loop — it simply never exits, which is harmless and consumes negligible CPU.
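One detail worth calling out from step 3: under MMDS V1 each value comes back as a JSON-quoted string, which is why the commit log mentions stripping quotes with `tr -d '"'`. A minimal sketch with a sample value (not a live MMDS query):

```shell
resp='"172.30.0.11/24"'                 # what GET /ip returns under MMDS V1
ip=$(printf '%s' "$resp" | tr -d '"')   # strip the JSON quotes
echo "$ip"                              # → 172.30.0.11/24
```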
### Serial console
`serial-getty@ttyS0.service` is enabled at build time via a symlink in `getty.target.wants`. The root password is cleared so the console auto-logs-in without a password prompt. Connect with:
```bash
sudo ./fc-orch console <clone-id>
```
---
## Appendix: Practical Examples
### Creating Multiple Golden Images with Different Specs
You can maintain multiple tagged images, each provisioned with different specifications.
**1. Standard Alpine (Default, 128 MiB RAM, 1 vCPU)**
```bash
sudo ./fc-orch golden alpine alpine
```
**2. Ubuntu Web Server (1024 MiB RAM, 2 vCPUs)**
```bash
# assuming init ubuntu was already run
sudo FC_MEM_MIB=1024 FC_VCPUS=2 ./fc-orch golden my-ubuntu-server ubuntu
```
**3. Debian Database Node (4096 MiB RAM, 4 vCPUs)**
```bash
# assuming init debian was already run
sudo FC_MEM_MIB=4096 FC_VCPUS=4 ./fc-orch golden my-debian-db debian
```
**4. External Custom Image (E.g. CentOS via Manual Provision)**
```bash
sudo FC_ROOTFS=/images/centos.ext4 FC_MEM_MIB=4096 FC_VCPUS=4 ./fc-orch golden tag-centos
```
### Inspecting Your Hypervisor State
To see what the orchestrator has stored and where, run:
**View the structured layout of all golden image namespaces:**
```bash
tree -a /tmp/fc-orch/golden
```
*(If `tree` is not installed, you can use `ls -R /tmp/fc-orch/golden`)*
**View the exact disk usage and file sizes for a specific image artifact (like ubuntu):**
```bash
ls -lh /tmp/fc-orch/golden/ubuntu/
```
The output will show that `mem` is as large as the VM's allocated RAM (e.g., 1024M), while `vmstate` is comparatively tiny.

main.go (38 lines changed)

@@ -21,6 +21,7 @@ import (
"os"
"path/filepath"
"runtime"
"strconv"
log "github.com/sirupsen/logrus"
@@ -65,15 +66,32 @@ func main() {
switch os.Args[1] {
case "init":
fatal(orch.Init())
distro := "alpine"
if len(os.Args) > 2 {
distro = os.Args[2]
}
fatal(orch.Init(distro))
case "golden":
fatal(orch.Golden())
tag := "default"
distro := "alpine"
if len(os.Args) > 2 {
tag = os.Args[2]
}
if len(os.Args) > 3 {
distro = os.Args[3]
}
fatal(orch.Golden(tag, distro))
case "spawn":
n := 1
if len(os.Args) > 2 {
fmt.Sscanf(os.Args[2], "%d", &n)
tag := "default"
for _, arg := range os.Args[2:] {
if parsed, err := strconv.Atoi(arg); err == nil {
n = parsed
} else {
tag = arg
}
}
fatal(orch.Spawn(n))
fatal(orch.Spawn(n, tag))
case "status":
orch.Status()
case "kill":
@@ -98,14 +116,16 @@ func main() {
// Internal subcommand: started by spawnOne, runs as a background daemon.
fs := flag.NewFlagSet("_console-proxy", flag.ContinueOnError)
var id int
var tag string
var tap string
fs.IntVar(&id, "id", 0, "clone ID")
fs.StringVar(&tag, "tag", "default", "Golden VM tag")
fs.StringVar(&tap, "tap", "", "TAP device name")
if err := fs.Parse(os.Args[2:]); err != nil {
fmt.Fprintf(os.Stderr, "console-proxy: %v\n", err)
os.Exit(1)
}
fatal(orchestrator.RunConsoleProxy(orchestrator.DefaultConfig(), id, tap))
fatal(orchestrator.RunConsoleProxy(orchestrator.DefaultConfig(), id, tap, tag))
default:
usage()
os.Exit(1)
@@ -119,9 +139,9 @@ Flags:
--dev log format with source file:line (e.g. file="orchestrator.go:123")
Commands:
init Download kernel + create Alpine rootfs
golden Boot golden VM → pause → snapshot
spawn [N] Restore N clones from golden snapshot (default: 1)
init [distro] Download kernel + create distro rootfs (default: alpine, options: alpine, debian, ubuntu)
golden [tag] [distro] Boot golden VM → pause → snapshot (default tag: default, default distro: alpine)
spawn [tag] [N] Restore N clones from golden snapshot (default tag: default, default N: 1)
serve [addr] Start terminal web UI (default: :8080)
console <id> Attach to the serial console of a running clone (Ctrl+] to detach)
status Show running clones


@@ -2,6 +2,7 @@ package orchestrator
import (
"os"
"path/filepath"
"strconv"
)
@@ -11,13 +12,14 @@ type Config struct {
BaseDir string // working directory for all state
Kernel string // path to vmlinux
KernelURL string // URL to download vmlinux if Kernel file is missing
Rootfs string // path to base rootfs.ext4
CustomRootfs string // Custom path to rootfs if FC_ROOTFS is set
VCPUs int64
MemMiB int64
Bridge string // host bridge name, or "none" to skip networking
BridgeCIDR string // e.g. "172.30.0.1/24"
GuestPrefix string // e.g. "172.30.0" — clones get .10, .11, ...
GuestGW string
GuestPrefix string // e.g. "172.30.0" — clones get .11, .12, ...
GuestGW string // default gateway for guest VMs
AutoNetConfig bool // inject guest IP/GW/DNS via MMDS on clone start
BootArgs string
}
@@ -31,15 +33,24 @@ func DefaultConfig() Config {
BridgeCIDR: envOr("FC_BRIDGE_CIDR", "172.30.0.1/24"),
GuestPrefix: envOr("FC_GUEST_PREFIX", "172.30.0"),
GuestGW: envOr("FC_GUEST_GW", "172.30.0.1"),
BootArgs: "console=ttyS0 reboot=k panic=1 pci=off i8042.noaux quiet loglevel=0",
AutoNetConfig: envOr("FC_AUTO_NET_CONFIG", "") == "1",
BootArgs: "console=ttyS0 reboot=k panic=1 pci=off i8042.noaux",
}
c.Kernel = envOr("FC_KERNEL", c.BaseDir+"/vmlinux")
c.KernelURL = envOr("FC_KERNEL_URL",
"https://s3.amazonaws.com/spec.ccfc.min/firecracker-ci/20260408-ce2a467895c1-0/x86_64/vmlinux-6.1.166")
c.Rootfs = envOr("FC_ROOTFS", c.BaseDir+"/rootfs.ext4")
c.CustomRootfs = os.Getenv("FC_ROOTFS")
return c
}
// RootfsPath returns the path to the root filesystem depending on the requested distribution.
func (c Config) RootfsPath(distro string) string {
if c.CustomRootfs != "" {
return c.CustomRootfs
}
return filepath.Join(c.BaseDir, "rootfs-"+distro+".ext4")
}
func envOr(key, fallback string) string {
if v := os.Getenv(key); v != "" {
return v


@@ -25,11 +25,11 @@ import (
// It restores a Firecracker clone from the golden snapshot, connecting its serial
// console (ttyS0) to a PTY, then serves the PTY master on a Unix socket at
// {cloneDir}/console.sock for the lifetime of the VM.
func RunConsoleProxy(cfg Config, id int, tapName string) error {
func RunConsoleProxy(cfg Config, id int, tapName, tag string) error {
logger := log.WithField("component", fmt.Sprintf("console-proxy[%d]", id))
cloneDir := filepath.Join(cfg.BaseDir, "clones", strconv.Itoa(id))
goldenDir := filepath.Join(cfg.BaseDir, "golden")
goldenDir := filepath.Join(cfg.BaseDir, "golden", tag)
sockPath := filepath.Join(cloneDir, "api.sock")
consoleSockPath := filepath.Join(cloneDir, "console.sock")
sharedMem := filepath.Join(goldenDir, "mem")
@@ -88,6 +88,7 @@ func RunConsoleProxy(cfg Config, id int, tapName string) error {
MacAddress: mac,
HostDevName: tapName,
},
AllowMMDS: true,
},
}
}
@@ -117,7 +118,7 @@ func RunConsoleProxy(cfg Config, id int, tapName string) error {
})
}
// --- Start VM (blocks until snapshot is loaded and VM is running) ---
// --- Start VM (blocks until snapshot is loaded and VM is PAUSED) ---
start := time.Now()
logger.Infof("restoring clone %d from snapshot ...", id)
if err := m.Start(ctx); err != nil {
@@ -125,6 +126,25 @@ func RunConsoleProxy(cfg Config, id int, tapName string) error {
ptm.Close()
return fmt.Errorf("restore clone %d: %w", id, err)
}
// Inject per-clone IP config via MMDS so the fc-net-init guest daemon
// can configure eth0 without any manual steps inside the VM.
// This must happen while the VM is PAUSED (ResumeVM: false in snapshot load).
if cfg.AutoNetConfig && cfg.Bridge != "none" {
guestIP := fmt.Sprintf("%s.%d/24", cfg.GuestPrefix, 10+id)
if err := configureMmds(ctx, sockPath, guestIP, cfg.GuestGW, "1.1.1.1"); err != nil {
logger.Warnf("MMDS config failed (guest network will be unconfigured): %v", err)
} else {
logger.Infof("MMDS: assigned %s gw %s to clone %d", guestIP, cfg.GuestGW, id)
}
}
// Now RESUME the VM to start execution!
if err := m.ResumeVM(ctx); err != nil {
pts.Close()
ptm.Close()
return fmt.Errorf("resume clone %d: %w", id, err)
}
elapsed := time.Since(start)
// Release our copy of the slave — firecracker holds its own fd now.
@@ -144,6 +164,17 @@ func RunConsoleProxy(cfg Config, id int, tapName string) error {
logger.Infof("clone %d: restored in %s (pid=%d, tap=%s)",
id, elapsed.Round(time.Millisecond), cmd.Process.Pid, tapName)
// --- Open console log (captures all serial output from boot) ---
consoleLogPath := filepath.Join(cloneDir, "console.log")
consoleLog, err := os.OpenFile(consoleLogPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
if err != nil {
logger.Warnf("could not open console log: %v", err)
consoleLog = nil
}
if consoleLog != nil {
defer consoleLog.Close()
}
// --- Create console socket ---
os.Remove(consoleSockPath) //nolint:errcheck
listener, err := net.Listen("unix", consoleSockPath)
@@ -171,7 +202,7 @@ func RunConsoleProxy(cfg Config, id int, tapName string) error {
if resizeListener != nil {
go serveResize(resizeListener, ptm, vmDone, logger)
}
serveConsole(listener, ptm, vmDone, logger)
serveConsole(listener, ptm, consoleLog, vmDone, logger)
listener.Close()
if resizeListener != nil {
@@ -254,16 +285,20 @@ func (a *atomicWriter) Write(p []byte) (int, error) {
// A background goroutine reads from the PTY master continuously (discarding
// output when no client is connected so the VM never blocks on a full buffer).
// Only one client is served at a time; sessions are serialised.
func serveConsole(listener net.Listener, ptm *os.File, vmDone <-chan struct{}, logger *log.Entry) {
func serveConsole(listener net.Listener, ptm *os.File, logFile *os.File, vmDone <-chan struct{}, logger *log.Entry) {
aw := &atomicWriter{w: io.Discard}
// Background PTY reader — runs for the full VM lifetime.
// All output is tee'd to logFile (if set) so boot messages are never lost.
go func() {
buf := make([]byte, 4096)
for {
n, err := ptm.Read(buf)
if n > 0 {
aw.Write(buf[:n]) //nolint:errcheck
if logFile != nil {
logFile.Write(buf[:n]) //nolint:errcheck
}
}
if err != nil {
return // PTY closed (VM exited)


@@ -1,7 +1,13 @@
package orchestrator
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"os/exec"
"strings"
)
@@ -48,6 +54,8 @@ func (o *Orchestrator) setupBridge() error {
// createTap creates a tap device and attaches it to the bridge.
func (o *Orchestrator) createTap(name string) error {
// Destroy any stale tap with this name before (re)creating it.
_ = run("ip", "link", "del", name)
if err := run("ip", "tuntap", "add", "dev", name, "mode", "tap"); err != nil {
return fmt.Errorf("create tap %s: %w", name, err)
}
@@ -66,6 +74,57 @@ func destroyTap(name string) {
_ = run("ip", "link", "del", name)
}
// configureMmds writes per-clone IP config to the Firecracker MMDS so that
// the fc-net-init daemon running inside the guest can read and apply it.
// It makes a single API call to the Firecracker Unix socket:
//
//	PUT /mmds — stores ip/gw/dns values the guest daemon will read
//
// PUT /mmds/config is deliberately skipped: the MMDS/NIC binding is already
// baked into the golden snapshot (see the comment in the body).
func configureMmds(ctx context.Context, sockPath, ip, gw, dns string) error {
httpClient := &http.Client{
Transport: &http.Transport{
DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
return net.Dial("unix", sockPath)
},
},
}
doJSON := func(method, path string, body any) error {
data, err := json.Marshal(body)
if err != nil {
return fmt.Errorf("marshal %s: %w", path, err)
}
req, err := http.NewRequestWithContext(ctx, method,
"http://localhost"+path, bytes.NewReader(data))
if err != nil {
return fmt.Errorf("build request %s: %w", path, err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := httpClient.Do(req)
if err != nil {
return fmt.Errorf("%s %s: %w", method, path, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusNoContent {
b, _ := io.ReadAll(resp.Body)
return fmt.Errorf("%s %s failed (%d): %s", method, path, resp.StatusCode, b)
}
return nil
}
// 1. MMDS configuration (version, network_interfaces binding, etc.) is
// persisted in the golden snapshot, so we don't need to configure it here.
// In fact, Firecracker will reject PUT /mmds/config with a 400 error
// on a restored VM, which previously caused this function to abort early.
// 2. Store the network config the guest daemon will poll for.
return doJSON(http.MethodPut, "/mmds", map[string]string{
"ip": ip,
"gw": gw,
"dns": dns,
})
}
// run executes a command, returning an error if it fails.
func run(name string, args ...string) error {
return exec.Command(name, args...).Run()


@@ -43,13 +43,13 @@ func New(cfg Config) *Orchestrator {
}
}
func (o *Orchestrator) goldenDir() string { return filepath.Join(o.cfg.BaseDir, "golden") }
func (o *Orchestrator) goldenDir(tag string) string { return filepath.Join(o.cfg.BaseDir, "golden", tag) }
func (o *Orchestrator) clonesDir() string { return filepath.Join(o.cfg.BaseDir, "clones") }
func (o *Orchestrator) pidsDir() string { return filepath.Join(o.cfg.BaseDir, "pids") }
// ——— Init ————————————————————————————————————————————————————————————————
func (o *Orchestrator) Init() error {
func (o *Orchestrator) Init(distro string) error {
if err := os.MkdirAll(o.cfg.BaseDir, 0o755); err != nil {
return err
}
@@ -65,48 +65,57 @@ func (o *Orchestrator) Init() error {
}
// Build rootfs if missing
if _, err := os.Stat(o.cfg.Rootfs); os.IsNotExist(err) {
o.log.Info("building minimal Alpine rootfs ...")
if err := o.buildRootfs(); err != nil {
rootfsPath := o.cfg.RootfsPath(distro)
if _, err := os.Stat(rootfsPath); os.IsNotExist(err) {
o.log.Infof("building minimal %s rootfs ...", distro)
if err := o.buildRootfs(distro, rootfsPath); err != nil {
return fmt.Errorf("build rootfs: %w", err)
}
o.log.Infof("rootfs saved to %s", o.cfg.Rootfs)
o.log.Infof("rootfs saved to %s", rootfsPath)
}
o.log.Info("init complete")
return nil
}
func (o *Orchestrator) buildRootfs() error {
func (o *Orchestrator) buildRootfs(distro, rootfsPath string) error {
sizeMB := 512
if distro == "debian" || distro == "ubuntu" {
sizeMB = 2048
}
mnt := filepath.Join(o.cfg.BaseDir, "mnt")
// create empty ext4 image
o.log.Infof("running: dd if=/dev/zero of=%s bs=1M count=%d status=none", o.cfg.Rootfs, sizeMB)
if err := run("dd", "if=/dev/zero", "of="+o.cfg.Rootfs,
o.log.Infof("running: dd if=/dev/zero of=%s bs=1M count=%d status=none", rootfsPath, sizeMB)
if err := run("dd", "if=/dev/zero", "of="+rootfsPath,
"bs=1M", fmt.Sprintf("count=%d", sizeMB), "status=none"); err != nil {
return err
}
o.log.Infof("running: mkfs.ext4 -qF %s", o.cfg.Rootfs)
if err := run("mkfs.ext4", "-qF", o.cfg.Rootfs); err != nil {
o.log.Infof("running: mkfs.ext4 -qF %s", rootfsPath)
if err := run("mkfs.ext4", "-qF", rootfsPath); err != nil {
return err
}
os.MkdirAll(mnt, 0o755)
o.log.Infof("running: mount -o loop %s %s", o.cfg.Rootfs, mnt)
if err := run("mount", "-o", "loop", o.cfg.Rootfs, mnt); err != nil {
o.log.Infof("running: mount -o loop %s %s", rootfsPath, mnt)
if err := run("mount", "-o", "loop", rootfsPath, mnt); err != nil {
return err
}
defer run("umount", mnt)
defer func() {
o.log.Infof("running: umount %s", mnt)
run("umount", mnt)
}()
// download and extract Alpine minirootfs
// download and extract minirootfs
switch distro {
case "alpine":
alpineVer := "3.20"
arch := "x86_64"
tarball := fmt.Sprintf("alpine-minirootfs-%s.0-%s.tar.gz", alpineVer, arch)
url := fmt.Sprintf("https://dl-cdn.alpinelinux.org/alpine/v%s/releases/%s/%s",
alpineVer, arch, tarball)
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
o.log.Infof("downloading %s to %s", url, tarPath)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download alpine: %w", err)
}
@@ -114,13 +123,73 @@ func (o *Orchestrator) buildRootfs() error {
if err := run("tar", "xzf", tarPath, "-C", mnt); err != nil {
return err
}
case "debian":
tarball := "debian-12-nocloud-amd64.tar.xz"
url := "https://cloud.debian.org/images/cloud/bookworm/latest/" + tarball
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
o.log.Infof("downloading %s to %s", url, tarPath)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download debian: %w", err)
}
o.log.Infof("running: tar xJf %s -C %s", tarPath, mnt)
if err := run("tar", "xJf", tarPath, "-C", mnt); err != nil {
return err
}
case "ubuntu":
tarball := "ubuntu-base-24.04.4-base-amd64.tar.gz"
url := "https://cdimage.ubuntu.com/ubuntu-base/releases/24.04/release/" + tarball
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
o.log.Infof("downloading %s to %s", url, tarPath)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download ubuntu: %w", err)
}
o.log.Infof("running: tar xzf %s -C %s", tarPath, mnt)
if err := run("tar", "xzf", tarPath, "-C", mnt); err != nil {
return err
}
o.log.Info("installing essential packages in ubuntu chroot ...")
if err := installUbuntuPackages(mnt, o.log); err != nil {
return fmt.Errorf("install ubuntu packages: %w", err)
}
default:
return fmt.Errorf("unsupported distro: %s", distro)
}
// write fc-net-init daemon: polls MMDS for IP config and applies it.
// Always embedded — harmless if MMDS is never populated (sleeps 1 s/loop).
// Captured in the golden snapshot so it runs on every clone resume too.
netInitScript := `#!/bin/sh
# Poll Firecracker MMDS for network config, apply it, then exit.
# Runs in background; loops until MMDS responds (survives snapshot resume).
ip link set eth0 up 2>/dev/null
ip route add 169.254.169.254 dev eth0 2>/dev/null
ip addr add 169.254.169.2/32 dev eth0 2>/dev/null
while true; do
ip=$(wget -q -T1 -O- http://169.254.169.254/ip 2>/dev/null | tr -d '"')
[ -n "$ip" ] || { sleep 1; continue; }
gw=$(wget -q -T1 -O- http://169.254.169.254/gw 2>/dev/null | tr -d '"')
dns=$(wget -q -T1 -O- http://169.254.169.254/dns 2>/dev/null | tr -d '"')
ip addr flush dev eth0 2>/dev/null
ip addr add "$ip" dev eth0 2>/dev/null
ip route add default via "$gw" dev eth0 2>/dev/null
printf "nameserver %s\n" "$dns" > /etc/resolv.conf
break
done
`
os.MkdirAll(filepath.Join(mnt, "sbin"), 0o755)
if err := os.WriteFile(filepath.Join(mnt, "sbin", "fc-net-init"), []byte(netInitScript), 0o755); err != nil {
return err
}
if distro == "alpine" {
// write init script
initScript := `#!/bin/sh
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs devtmpfs /dev
ip link set eth0 up 2>/dev/null
ip route add 169.254.169.254 dev eth0 2>/dev/null
/sbin/fc-net-init &
`
initPath := filepath.Join(mnt, "etc", "init.d", "rcS")
os.MkdirAll(filepath.Dir(initPath), 0o755)
@@ -131,26 +200,102 @@ ip link set eth0 up 2>/dev/null
// write inittab
inittab := "::sysinit:/etc/init.d/rcS\nttyS0::respawn:/bin/sh\n"
return os.WriteFile(filepath.Join(mnt, "etc", "inittab"), []byte(inittab), 0o644)
} else {
// systemd-based distributions (Debian, Ubuntu)
svc := `[Unit]
Description=Firecracker Network Init
After=basic.target
[Service]
Type=simple
ExecStart=/sbin/fc-net-init
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
`
svcPath := filepath.Join(mnt, "etc", "systemd", "system", "fc-net-init.service")
os.MkdirAll(filepath.Dir(svcPath), 0o755)
if err := os.WriteFile(svcPath, []byte(svc), 0o644); err != nil {
return err
}
// Enable the service the way "systemctl enable" would: symlink into the target's wants dir.
wantsDir := filepath.Join(mnt, "etc", "systemd", "system", "multi-user.target.wants")
os.MkdirAll(wantsDir, 0o755)
os.Symlink("/etc/systemd/system/fc-net-init.service", filepath.Join(wantsDir, "fc-net-init.service")) //nolint:errcheck
// Mask serial-getty@ttyS0.service and the udev device unit it depends on.
// In Firecracker, udev never runs so dev-ttyS0.device never activates,
// causing a 90-second systemd timeout. We replace it entirely with a
// custom service that uses ConditionPathExists (filesystem check) instead.
systemdDir := filepath.Join(mnt, "etc", "systemd", "system")
os.Symlink("/dev/null", filepath.Join(systemdDir, "serial-getty@ttyS0.service")) //nolint:errcheck
os.Symlink("/dev/null", filepath.Join(systemdDir, "dev-ttyS0.device")) //nolint:errcheck
// Custom console service: no udev dependency, autologin as root.
consoleSvc := `[Unit]
Description=Serial Console (ttyS0)
After=basic.target
ConditionPathExists=/dev/ttyS0
[Service]
ExecStart=/sbin/agetty --autologin root --noclear ttyS0 vt220
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target
`
consoleSvcPath := filepath.Join(systemdDir, "fc-console.service")
os.WriteFile(consoleSvcPath, []byte(consoleSvc), 0o644) //nolint:errcheck
wantsDir2 := filepath.Join(systemdDir, "multi-user.target.wants")
os.MkdirAll(wantsDir2, 0o755)
os.Symlink("/etc/systemd/system/fc-console.service", filepath.Join(wantsDir2, "fc-console.service")) //nolint:errcheck
// Clear root password for auto-login on console
shadowPath := filepath.Join(mnt, "etc", "shadow")
if shadowBytes, err := os.ReadFile(shadowPath); err == nil {
lines := strings.Split(string(shadowBytes), "\n")
for i, line := range lines {
if strings.HasPrefix(line, "root:") {
parts := strings.Split(line, ":")
if len(parts) > 1 {
parts[1] = ""
lines[i] = strings.Join(parts, ":")
}
}
}
os.WriteFile(shadowPath, []byte(strings.Join(lines, "\n")), 0o640) //nolint:errcheck
}
// Write fstab so systemd mounts virtual filesystems at boot.
// Minimal tarball rootfs has no fstab; without it /proc, /sys, /dev are not mounted.
fstab := "proc\t/proc\tproc\tdefaults\t0 0\nsysfs\t/sys\tsysfs\tdefaults\t0 0\ndevtmpfs\t/dev\tdevtmpfs\tdefaults\t0 0\n"
os.WriteFile(filepath.Join(mnt, "etc", "fstab"), []byte(fstab), 0o644) //nolint:errcheck
}
return nil
}
// ——— Golden VM ——————————————————————————————————————————————————————————
func (o *Orchestrator) Golden() error {
func (o *Orchestrator) Golden(tag string, distro string) error {
if _, err := os.Stat(o.cfg.Kernel); err != nil {
return fmt.Errorf("kernel not found — run init first: %w", err)
}
if _, err := os.Stat(o.cfg.Rootfs); err != nil {
rootfsPath := o.cfg.RootfsPath(distro)
if _, err := os.Stat(rootfsPath); err != nil {
return fmt.Errorf("rootfs not found — run init first: %w", err)
}
goldenDir := o.goldenDir()
goldenDir := o.goldenDir(tag)
os.RemoveAll(goldenDir)
os.MkdirAll(goldenDir, 0o755)
os.MkdirAll(o.pidsDir(), 0o755)
// COW copy of rootfs for golden VM
goldenRootfs := filepath.Join(goldenDir, "rootfs.ext4")
if err := reflinkCopy(o.cfg.Rootfs, goldenRootfs); err != nil {
if err := reflinkCopy(rootfsPath, goldenRootfs); err != nil {
return fmt.Errorf("copy rootfs: %w", err)
}
@@ -175,6 +320,7 @@ func (o *Orchestrator) Golden() error {
MacAddress: "AA:FC:00:00:00:01",
HostDevName: tap,
},
AllowMMDS: true,
},
}
}
@@ -241,8 +387,15 @@ func (o *Orchestrator) Golden() error {
[]byte(fmt.Sprintf("%d", cmd.Process.Pid)), 0o644)
}
o.log.Info("golden VM booted, letting it settle ...")
time.Sleep(3 * time.Second)
settleTime := 3 * time.Second
if distro == "debian" || distro == "ubuntu" {
// systemd takes significantly longer to reach multi-user.target than
// Alpine's busybox init. Snapshot too early and serial-getty@ttyS0
// won't have started yet, leaving the console unresponsive on resume.
settleTime = 20 * time.Second
}
o.log.Infof("golden VM booted, letting it settle (%s) ...", settleTime)
time.Sleep(settleTime)
// pause
o.log.Info("pausing golden VM ...")
@@ -274,13 +427,29 @@ func (o *Orchestrator) Golden() error {
return nil
}
// GoldenTags returns a list of all existing golden VM tags.
func (o *Orchestrator) GoldenTags() []string {
goldenDir := filepath.Join(o.cfg.BaseDir, "golden")
entries, err := os.ReadDir(goldenDir)
if err != nil {
return nil
}
var tags []string
for _, e := range entries {
if e.IsDir() {
tags = append(tags, e.Name())
}
}
return tags
}
// ——— Spawn clones ——————————————————————————————————————————————————————
func (o *Orchestrator) Spawn(count int) error {
goldenDir := o.goldenDir()
func (o *Orchestrator) Spawn(count int, tag string) error {
goldenDir := o.goldenDir(tag)
for _, f := range []string{"vmstate", "mem"} {
if _, err := os.Stat(filepath.Join(goldenDir, f)); err != nil {
return fmt.Errorf("golden %s not found — run golden first", f)
return fmt.Errorf("golden %s not found for tag %s — run golden first", f, tag)
}
}
@@ -294,7 +463,7 @@ func (o *Orchestrator) Spawn(count int) error {
for i := 0; i < count; i++ {
id := o.nextCloneID()
if err := o.spawnOne(id); err != nil {
if err := o.spawnOne(id, o.cfg.AutoNetConfig, tag); err != nil {
o.log.Errorf("clone %d failed: %v", id, err)
continue
}
@@ -308,11 +477,14 @@ func (o *Orchestrator) Spawn(count int) error {
// SpawnSingle spawns exactly one new clone and returns its ID.
// It is safe to call from multiple goroutines (nextCloneID is serialised by the
// filesystem scan, and each clone gets its own directory/tap).
func (o *Orchestrator) SpawnSingle() (int, error) {
goldenDir := o.goldenDir()
// SpawnSingle spawns one clone. net controls whether the guest receives
// automatic IP configuration via MMDS (overrides FC_AUTO_NET_CONFIG for this
// clone). Pass cfg.AutoNetConfig to preserve the global default.
func (o *Orchestrator) SpawnSingle(net bool, tag string) (int, error) {
goldenDir := o.goldenDir(tag)
for _, f := range []string{"vmstate", "mem"} {
if _, err := os.Stat(filepath.Join(goldenDir, f)); err != nil {
return 0, fmt.Errorf("golden %s not found — run golden first", f)
return 0, fmt.Errorf("golden %s not found for tag %s — run golden first", f, tag)
}
}
os.MkdirAll(o.clonesDir(), 0o755)
@@ -323,7 +495,7 @@ func (o *Orchestrator) SpawnSingle() (int, error) {
}
}
id := o.nextCloneID()
if err := o.spawnOne(id); err != nil {
if err := o.spawnOne(id, net, tag); err != nil {
return 0, err
}
return id, nil
@@ -349,10 +521,11 @@ func (o *Orchestrator) KillClone(id int) error {
return nil
}
func (o *Orchestrator) spawnOne(id int) error {
goldenDir := o.goldenDir()
func (o *Orchestrator) spawnOne(id int, net bool, tag string) error {
goldenDir := o.goldenDir(tag)
cloneDir := filepath.Join(o.clonesDir(), strconv.Itoa(id))
os.MkdirAll(cloneDir, 0o755)
os.WriteFile(filepath.Join(cloneDir, "tag"), []byte(tag), 0o644) //nolint:errcheck
sockPath := filepath.Join(cloneDir, "api.sock")
os.Remove(sockPath)
@@ -389,7 +562,7 @@ func (o *Orchestrator) spawnOne(id int) error {
return fmt.Errorf("resolve self path: %w", err)
}
proxyArgs := []string{"_console-proxy", "--id", strconv.Itoa(id)}
proxyArgs := []string{"_console-proxy", "--id", strconv.Itoa(id), "--tag", tag}
if o.cfg.Bridge != "none" {
proxyArgs = append(proxyArgs, "--tap", tapName)
}
@@ -399,6 +572,18 @@ func (o *Orchestrator) spawnOne(id int) error {
proxyCmd.Stdin = nil
proxyCmd.Stdout = nil
proxyCmd.Stderr = nil
// Build proxy env: inherit parent env, then force FC_AUTO_NET_CONFIG to
// match the per-clone net flag so the proxy picks it up via DefaultConfig().
proxyEnv := make([]string, 0, len(os.Environ())+1)
for _, kv := range os.Environ() {
if !strings.HasPrefix(kv, "FC_AUTO_NET_CONFIG=") {
proxyEnv = append(proxyEnv, kv)
}
}
if net {
proxyEnv = append(proxyEnv, "FC_AUTO_NET_CONFIG=1")
}
proxyCmd.Env = proxyEnv
if err := proxyCmd.Start(); err != nil {
return fmt.Errorf("start console proxy: %w", err)
@@ -502,7 +687,7 @@ func (o *Orchestrator) Kill() error {
func (o *Orchestrator) Cleanup() error {
o.Kill()
os.RemoveAll(o.clonesDir())
os.RemoveAll(o.goldenDir())
os.RemoveAll(filepath.Join(o.cfg.BaseDir, "golden"))
os.RemoveAll(o.pidsDir())
if o.cfg.Bridge != "none" {
@@ -516,6 +701,56 @@ func (o *Orchestrator) Cleanup() error {
// ——— Helpers ——————————————————————————————————————————————————————————
// installUbuntuPackages bind-mounts the virtual filesystems into mnt, then
// runs apt-get inside the chroot to install the minimal toolset required for
// network operation and general use. Bind mounts are always cleaned up on
// return regardless of whether apt-get succeeds.
func installUbuntuPackages(mnt string, logger *log.Entry) error {
type bm struct{ fstype, src, dst string }
mounts := []bm{
{"proc", "proc", "proc"},
{"sysfs", "sysfs", "sys"},
{"devtmpfs", "devtmpfs", "dev"},
{"devpts", "devpts", "dev/pts"},
}
// mount in order; on any failure unmount whatever succeeded and return.
for i, m := range mounts {
dst := filepath.Join(mnt, m.dst)
os.MkdirAll(dst, 0o755)
logger.Infof("running: mount -t %s %s %s", m.fstype, m.src, dst)
if err := run("mount", "-t", m.fstype, m.src, dst); err != nil {
for j := i - 1; j >= 0; j-- {
logger.Infof("running: umount %s", filepath.Join(mnt, mounts[j].dst))
run("umount", filepath.Join(mnt, mounts[j].dst)) //nolint:errcheck
}
return fmt.Errorf("mount %s: %w", m.dst, err)
}
}
defer func() {
for i := len(mounts) - 1; i >= 0; i-- {
logger.Infof("running: umount %s", filepath.Join(mnt, mounts[i].dst))
run("umount", filepath.Join(mnt, mounts[i].dst)) //nolint:errcheck
}
}()
// Provide DNS resolution inside the chroot so apt-get can reach the network.
if data, err := os.ReadFile("/etc/resolv.conf"); err == nil {
os.WriteFile(filepath.Join(mnt, "etc/resolv.conf"), data, 0o644) //nolint:errcheck
}
pkgs := "bash curl iproute2 wget ca-certificates systemd systemd-sysv util-linux"
script := "DEBIAN_FRONTEND=noninteractive apt-get update -q && " +
"DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends " + pkgs + " && " +
"apt-get clean && rm -rf /var/lib/apt/lists/*"
logger.Infof("running: chroot %s /bin/sh -c %q", mnt, script)
cmd := exec.Command("chroot", mnt, "/bin/sh", "-c", script)
cmd.Stdout = logger.Writer()
cmd.Stderr = logger.Writer()
return cmd.Run()
}
func (o *Orchestrator) nextCloneID() int {
max := 0
entries, _ := os.ReadDir(o.clonesDir())


@@ -49,11 +49,28 @@ func Serve(orch *Orchestrator, addr string) error {
mux.HandleFunc("/clones", func(w http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet, "":
ids := runningCloneIDs(orch.cfg)
clones := runningClones(orch.cfg)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(ids) //nolint:errcheck
json.NewEncoder(w).Encode(clones) //nolint:errcheck
case http.MethodPost:
id, err := orch.SpawnSingle()
// Optional JSON body: {"net": bool, "tag": string}
// Defaults to the server's FC_AUTO_NET_CONFIG setting.
var req struct {
Net *bool `json:"net"`
Tag *string `json:"tag"`
}
if r.ContentLength > 0 {
json.NewDecoder(r.Body).Decode(&req) //nolint:errcheck
}
net := orch.cfg.AutoNetConfig
if req.Net != nil {
net = *req.Net
}
tag := "default"
if req.Tag != nil && *req.Tag != "" {
tag = *req.Tag
}
id, err := orch.SpawnSingle(net, tag)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
@@ -65,6 +82,16 @@ func Serve(orch *Orchestrator, addr string) error {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
}
})
// /tags — list all available golden VM tags
mux.HandleFunc("/tags", func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet && r.Method != "" {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
tags := orch.GoldenTags()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(tags) //nolint:errcheck
})
// /clones/{id} — destroy (DELETE)
mux.HandleFunc("/clones/", func(w http.ResponseWriter, r *http.Request) {
@@ -188,14 +215,19 @@ func bridgeWS(ws *websocket.Conn, consoleConn net.Conn, resizeConn net.Conn) {
<-sockDone
}
// runningCloneIDs returns clone IDs that have a live console socket.
func runningCloneIDs(cfg Config) []int {
type cloneEntry struct {
ID int `json:"id"`
Tag string `json:"tag"`
}
// runningClones returns entries for clones that have a live console socket.
func runningClones(cfg Config) []cloneEntry {
clonesDir := filepath.Join(cfg.BaseDir, "clones")
entries, err := os.ReadDir(clonesDir)
if err != nil {
return nil
}
var ids []int
var clones []cloneEntry
for _, e := range entries {
if !e.IsDir() {
continue
@@ -205,11 +237,16 @@ func runningCloneIDs(cfg Config) []int {
continue
}
sock := filepath.Join(clonesDir, e.Name(), "console.sock")
if _, err := os.Stat(sock); err == nil {
ids = append(ids, id)
if _, err := os.Stat(sock); err != nil {
continue
}
tag := "unknown"
if raw, err := os.ReadFile(filepath.Join(clonesDir, e.Name(), "tag")); err == nil {
tag = strings.TrimSpace(string(raw))
}
return ids
clones = append(clones, cloneEntry{ID: id, Tag: tag})
}
return clones
}
func writeWSError(ws *websocket.Conn, msg string) {


@@ -30,7 +30,7 @@ func loadSnapshotWithNetworkOverride(ctx context.Context, sockPath, memPath, vms
payload := snapshotLoadRequest{
MemFilePath: memPath,
SnapshotPath: vmstatePath,
ResumeVM: true,
ResumeVM: false, // stay paused so MMDS can be populated before an explicit ResumeVM
NetworkOverrides: []networkOverride{
{IfaceID: "1", HostDevName: tapName},
},


@@ -62,6 +62,11 @@
}
.clone-entry button.destroy:hover { background: #2a1a1a; }
.clone-entry button.destroy:disabled { color: #555; cursor: default; }
.clone-tag {
font-size: .72rem;
color: #666;
margin-left: .4rem;
}
#index .none { color: #666; font-size: .9rem; }
@@ -79,6 +84,28 @@
#spawn-btn:hover:not(:disabled) { background: #243e24; }
#spawn-btn:disabled { opacity: .5; cursor: default; }
#spawn-controls {
display: flex;
gap: 0.5rem;
align-items: center;
}
#tag-select {
background: #1a1a1a;
border: 1px solid #444;
border-radius: 4px;
color: #8be;
padding: 0.4rem 0.8rem;
font-family: monospace;
font-size: 0.9rem;
outline: none;
transition: border-color .15s, background .15s;
cursor: pointer;
}
#tag-select:hover { background: #222; }
#tag-select:focus { border-color: #8be; }
#tag-select:disabled { opacity: .5; cursor: default; }
#error-msg {
color: #c44;
font-size: .85rem;
@@ -119,7 +146,10 @@
<h1>fc-orch console</h1>
<ul id="clone-list"></ul>
<p class="none" id="no-clones" style="display:none">No running clones.</p>
<div id="spawn-controls">
<select id="tag-select"></select>
<button id="spawn-btn">+ Spawn clone</button>
</div>
<p id="error-msg"></p>
</div>
@@ -143,6 +173,7 @@
const ul = document.getElementById('clone-list');
const noneEl = document.getElementById('no-clones');
const spawnBtn = document.getElementById('spawn-btn');
const tagSelect = document.getElementById('tag-select');
const errEl = document.getElementById('error-msg');
function showError(msg) {
@@ -157,11 +188,11 @@
noneEl.style.display = 'none';
const li = document.createElement('li');
li.className = 'clone-entry';
li.dataset.id = c;
li.dataset.id = c.id;
li.innerHTML =
`<a href="/?id=${c}">clone ${c}</a>` +
`<button class="destroy" title="Destroy clone ${c}">✕</button>`;
li.querySelector('.destroy').addEventListener('click', () => destroyClone(c, li));
`<a href="/?id=${c.id}">clone ${c.id}<span class="clone-tag">${c.tag}</span></a>` +
`<button class="destroy" title="Destroy clone ${c.id}">✕</button>`;
li.querySelector('.destroy').addEventListener('click', () => destroyClone(c.id, li));
ul.appendChild(li);
}
@@ -200,7 +231,11 @@
spawnBtn.disabled = true;
spawnBtn.textContent = 'Spawning…';
clearError();
fetch('/clones', { method: 'POST' })
fetch('/clones', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ net: true, tag: tagSelect.value }),
})
.then(r => {
if (!r.ok) return r.text().then(t => { throw new Error(t); });
return r.json();
@@ -215,7 +250,37 @@
});
});
function refreshTags() {
fetch('/tags')
.then(r => r.json())
.then(tags => {
tagSelect.innerHTML = '';
if (!tags || tags.length === 0) {
const opt = document.createElement('option');
opt.value = '';
opt.textContent = 'No golden VMs';
tagSelect.appendChild(opt);
tagSelect.disabled = true;
spawnBtn.disabled = true;
return;
}
tagSelect.disabled = false;
spawnBtn.disabled = false;
tags.forEach(t => {
const opt = document.createElement('option');
opt.value = t;
opt.textContent = t;
if (t === 'default' || t === 'alpine') opt.selected = true;
tagSelect.appendChild(opt);
});
})
.catch(e => {
console.error("fetch tags failed:", e);
});
}
refreshList();
refreshTags();
return;
}

test_mmds.sh Normal file

@@ -0,0 +1,21 @@
#!/bin/bash
sock="/tmp/fctest.sock"
rm -f "$sock"
firecracker --api-sock "$sock" &
FCPID=$!
sleep 1
# Configure MMDS backend
curl --unix-socket "$sock" -i -X PUT "http://localhost/mmds/config" \
-H "Accept: application/json" -H "Content-Type: application/json" \
-d '{"version": "V1", "network_interfaces": ["1"], "ipv4_address": "169.254.169.254"}'
# Put data
curl --unix-socket "$sock" -i -X PUT "http://localhost/mmds" \
-H "Accept: application/json" -H "Content-Type: application/json" \
-d '{"ip": "10.0.0.2", "gw": "10.0.0.1", "dns": "1.1.1.1"}'
# Read data
curl --unix-socket "$sock" -i -X GET "http://localhost/mmds"
kill $FCPID

test_mmds_restore.sh Normal file

@@ -0,0 +1,21 @@
#!/bin/bash
sock="/tmp/fctest2.sock"
rm -f "$sock"
firecracker --api-sock "$sock" >/dev/null 2>&1 &
FCPID=$!
sleep 1
# Boot a minimal VM
curl --unix-socket "$sock" -s -X PUT "http://localhost/machine-config" -d '{"vcpu_count": 1, "mem_size_mib": 128}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/network-interfaces/1" -d '{"iface_id": "1", "guest_mac": "AA:FC:00:00:00:01", "host_dev_name": "lo"}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/mmds/config" -d '{"version": "V1", "network_interfaces": ["1"], "ipv4_address": "169.254.169.254"}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/boot-source" -d '{"kernel_image_path": "/tmp/fc-orch/vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/actions" -d '{"action_type": "InstanceStart"}'
# Pause
curl --unix-socket "$sock" -s -X PATCH "http://localhost/vm" -d '{"state": "Paused"}'
# Try to reconfigure MMDS while the VM is paused (exercises the 400 rejection)
curl --unix-socket "$sock" -i -X PUT "http://localhost/mmds/config" -d '{"version": "V1", "network_interfaces": ["1"], "ipv4_address": "169.254.169.254"}'
kill $FCPID