feat: multi-distro support and tagged golden snapshots

Add Alpine, Debian, and Ubuntu rootfs support to `init [distro]`.
Golden snapshots are now namespaced under `golden/<tag>/` so multiple
baselines can coexist. `spawn [tag] [N]` selects which snapshot to
clone from. Systemd-based distros (Debian, Ubuntu) get a fc-net-init
systemd unit; Alpine keeps its inittab-based init.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 20:48:43 +00:00
parent bfc1f47287
commit fb1db7c9ea
10 changed files with 576 additions and 102 deletions


@@ -56,10 +56,12 @@ After running all commands, `$FC_BASE_DIR` (`/tmp/fc-orch` by default) contains:
├── vmlinux # kernel image (shared, immutable)
├── rootfs.ext4 # base Alpine rootfs (shared, immutable)
├── golden/
│ ├── api.sock # Firecracker API socket (golden VM, transient)
│ ├── rootfs.ext4 # COW copy of base rootfs used by golden VM
│ ├── mem # memory snapshot (read by all clones, never written)
│ └── vmstate # VM state snapshot (golden reference)
│   ├── default/ # "default" tag directory
│   │   ├── api.sock # Firecracker API socket (golden VM, transient)
│   │   ├── rootfs.ext4 # COW copy of base rootfs used by golden VM
│   │   ├── mem # memory snapshot (read by all clones, never written)
│   │   └── vmstate # VM state snapshot (golden reference)
│   └── <tag>/ # other tagged snapshots
├── clones/
│ ├── 1/
│ │ ├── api.sock # Firecracker API socket (clone 1)
@@ -145,13 +147,14 @@ Within ~12 seconds of clone start, `eth0` inside the VM will have the assigned
### Purpose
Downloads the Linux kernel image and builds a minimal Alpine Linux ext4 rootfs. This command only needs to run once; both artifacts are reused by all subsequent `golden` invocations. `init` is idempotent — it skips any artifact that already exists on disk.
Downloads the Linux kernel image and builds a minimal filesystem (Alpine, Debian, or Ubuntu). This command only needs to run once per distro; both artifacts are reused by `golden` invocations. `init` is idempotent — it skips any artifact that already exists on disk.
### Usage
```sh
sudo ./fc-orch init
sudo ./fc-orch init [distro]
```
Where `[distro]` can be `alpine` (default), `debian`, or `ubuntu`.
Optional overrides:
@@ -265,13 +268,15 @@ This command always recreates the golden directory from scratch, discarding any
### Usage
```sh
sudo ./fc-orch golden
sudo ./fc-orch golden [tag] [distro]
```
Where `[tag]` identifies the snapshot baseline name (default `default`), and `[distro]` dictates the source `.ext4` image to use (default: `alpine`).
Optional overrides:
```sh
sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden ubuntu ubuntu
```
### Prerequisites
@@ -289,14 +294,14 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
2. **Recreate golden directory**
```sh
rm -rf /tmp/fc-orch/golden
mkdir -p /tmp/fc-orch/golden /tmp/fc-orch/pids
rm -rf /tmp/fc-orch/golden/<tag>
mkdir -p /tmp/fc-orch/golden/<tag> /tmp/fc-orch/pids
```
3. **COW copy of base rootfs**
```sh
cp --reflink=always /tmp/fc-orch/rootfs.ext4 /tmp/fc-orch/golden/rootfs.ext4
cp --reflink=always /tmp/fc-orch/rootfs.ext4 /tmp/fc-orch/golden/<tag>/rootfs.ext4
```
On filesystems that do not support reflinks (e.g. ext4), this falls back to a regular byte-for-byte copy via `io.Copy`. On btrfs or xfs, the reflink is instant and consumes no additional space until the VM writes to the disk.
@@ -330,7 +335,7 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
5. **Build Firecracker machine configuration** (passed to the SDK in memory):
```
SocketPath: /tmp/fc-orch/golden/api.sock
SocketPath: /tmp/fc-orch/golden/<tag>/api.sock
KernelImagePath: /tmp/fc-orch/vmlinux
KernelArgs: console=ttyS0 reboot=k panic=1 pci=off i8042.noaux quiet loglevel=0
MachineCfg:
@@ -339,7 +344,7 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
TrackDirtyPages: true ← required for snapshot support
Drives:
- DriveID: rootfs
PathOnHost: /tmp/fc-orch/golden/rootfs.ext4
PathOnHost: /tmp/fc-orch/golden/<tag>/rootfs.ext4
IsRootDevice: true
IsReadOnly: false
NetworkInterfaces:
@@ -352,7 +357,7 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
The Firecracker Go SDK spawns:
```sh
firecracker --api-sock /tmp/fc-orch/golden/api.sock
firecracker --api-sock /tmp/fc-orch/golden/<tag>/api.sock
```
The SDK then applies the machine configuration via HTTP calls to the Firecracker API socket.
@@ -385,13 +390,13 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
```go
m.CreateSnapshot(ctx,
"/tmp/fc-orch/golden/mem",
"/tmp/fc-orch/golden/vmstate",
"/tmp/fc-orch/golden/<tag>/mem",
"/tmp/fc-orch/golden/<tag>/vmstate",
)
// SDK call — PUT /snapshot/create
// {
// "mem_file_path": "/tmp/fc-orch/golden/mem",
// "snapshot_path": "/tmp/fc-orch/golden/vmstate",
// "mem_file_path": "/tmp/fc-orch/golden/<tag>/mem",
// "snapshot_path": "/tmp/fc-orch/golden/<tag>/vmstate",
// "snapshot_type": "Full"
// }
```
@@ -417,16 +422,16 @@ sudo FC_MEM_MIB=256 FC_VCPUS=2 ./fc-orch golden
| Path | Description |
|---|---|
| `/tmp/fc-orch/golden/mem` | Full memory snapshot (~`FC_MEM_MIB` MiB) |
| `/tmp/fc-orch/golden/vmstate` | VM state snapshot (vCPU registers, device state) |
| `/tmp/fc-orch/golden/rootfs.ext4` | COW copy of base rootfs (not needed after snapshotting, kept for reference) |
| `/tmp/fc-orch/golden/<tag>/mem` | Full memory snapshot (~`FC_MEM_MIB` MiB) |
| `/tmp/fc-orch/golden/<tag>/vmstate` | VM state snapshot (vCPU registers, device state) |
| `/tmp/fc-orch/golden/<tag>/rootfs.ext4` | COW copy of base rootfs (not needed after snapshotting, kept for reference) |
### Error conditions
| Error | Cause | Resolution |
|---|---|---|
| `kernel not found — run init first` | `FC_KERNEL` path does not exist | Run `init` first |
| `rootfs not found — run init first` | `FC_ROOTFS` path does not exist | Run `init` first |
| `rootfs not found — run init first` | Ext4 file does not exist | Run `init [distro]` first |
| `firecracker binary not found` | `FC_BIN` not in `$PATH` | Install Firecracker or set `FC_BIN` |
| `create bridge: ...` | `ip link add` failed | Check if another bridge with the same name exists with incompatible config |
| `start golden VM: ...` | Firecracker failed to boot | Check Firecracker logs; verify kernel and rootfs are valid |
@@ -446,8 +451,8 @@ Clone IDs are auto-incremented: if clones 1–3 already exist, the next `spawn 2
### Usage
```sh
sudo ./fc-orch spawn # spawn 1 clone (default)
sudo ./fc-orch spawn 10 # spawn 10 clones
sudo ./fc-orch spawn # spawn 1 clone from the "default" golden snapshot
sudo ./fc-orch spawn ubuntu 10 # spawn 10 clones from the "ubuntu" golden snapshot
```
### Prerequisites
@@ -462,7 +467,7 @@ The following steps are performed once for each requested clone. Let `{id}` be t
1. **Verify golden artifacts exist**
Checks for both `/tmp/fc-orch/golden/vmstate` and `/tmp/fc-orch/golden/mem`. Exits with an error if either is missing.
Checks for both `/tmp/fc-orch/golden/<tag>/vmstate` and `/tmp/fc-orch/golden/<tag>/mem`. Exits with an error if either is missing.
2. **Create directories**
@@ -478,20 +483,20 @@ The following steps are performed once for each requested clone. Let `{id}` be t
4. **COW copy of golden rootfs**
```sh
cp --reflink=always /tmp/fc-orch/golden/rootfs.ext4 /tmp/fc-orch/clones/{id}/rootfs.ext4
cp --reflink=always /tmp/fc-orch/golden/<tag>/rootfs.ext4 /tmp/fc-orch/clones/{id}/rootfs.ext4
```
Falls back to a full copy if reflinks are unsupported.
5. **Shared memory reference** (no copy)
The clone's Firecracker config will point directly at `/tmp/fc-orch/golden/mem`. No file operation is needed here — the kernel's MAP_PRIVATE ensures each clone's writes are private.
The clone's Firecracker config will point directly at `/tmp/fc-orch/golden/<tag>/mem`. No file operation is needed here — the kernel's MAP_PRIVATE ensures each clone's writes are private.
6. **Copy vmstate**
```sh
# implemented as io.Copy in Go
cp /tmp/fc-orch/golden/vmstate /tmp/fc-orch/clones/{id}/vmstate
cp /tmp/fc-orch/golden/<tag>/vmstate /tmp/fc-orch/clones/{id}/vmstate
```
The vmstate file is small (typically < 1 MiB), so a full copy is cheap.
@@ -524,7 +529,7 @@ The following steps are performed once for each requested clone. Let `{id}` be t
- MacAddress: AA:FC:00:00:00:{id:02X}
HostDevName: fctap{id}
Snapshot:
MemFilePath: /tmp/fc-orch/golden/mem ← shared, read-only mapping
MemFilePath: /tmp/fc-orch/golden/<tag>/mem ← shared, read-only mapping
SnapshotPath: /tmp/fc-orch/clones/{id}/vmstate
ResumeVM: true ← restore instead of fresh boot
```
@@ -543,7 +548,7 @@ The following steps are performed once for each requested clone. Let `{id}` be t
m.Start(ctx)
// SDK call — POST /snapshot/load
// {
// "mem_file_path": "/tmp/fc-orch/golden/mem",
// "mem_file_path": "/tmp/fc-orch/golden/<tag>/mem",
// "snapshot_path": "/tmp/fc-orch/clones/{id}/vmstate",
// "resume_vm": true
// }

docs/create_golden_image.md Normal file

@@ -0,0 +1,166 @@
# Guide: Creating Custom Golden Images
This guide explains how to create new, customized golden images (e.g. pivoting from Alpine to an Ubuntu or Node.js environment) and integrate them into the `fc-orch` tagging system.
By default, `./fc-orch init` builds a basic Alpine Linux image; `./fc-orch init ubuntu` and `./fc-orch init debian` build the other supported distros. The real power of `fc-orch` lies in maintaining multiple customized snapshot bases (golden tags).
---
## 1. Acquiring Custom Assets
To build a fresh golden image, at minimum you must provide a new filesystem:
- **Custom Root Filesystem**: An uncompressed `ext4` filesystem image that contains your system and libraries.
- **Custom Kernel** *(Optional)*: An uncompressed Linux kernel binary (`vmlinux`). If not provided, the default Firecracker CI kernel is used.
### Recommendations for Custom Distros:
If you are building a completely custom/unsupported base, you can combine `docker export` with `mkfs.ext4`, or use `debootstrap` to provision your image.
Ensure that your custom root filesystem contains an appropriate bootstrap sequence in `/etc/init.d/rcS` (or a systemd unit, if configured) that mounts the essential pseudo-filesystems (`/proc`, `/sys`, `/dev`) and brings up the `eth0` link, since Firecracker expects the guest OS to set these up itself. The built-in `init` command handles this automatically for `alpine`, `ubuntu`, and `debian`.
## 2. Using Environment Overrides
Rather than replacing the default `/tmp/fc-orch/rootfs.ext4`, `fc-orch` provides environment variables you can set before capturing the golden snapshot.
The essential variables to override are:
- `FC_ROOTFS`: Path to your custom `.ext4` image (e.g., `/home/user/ubuntu.ext4`).
- `FC_MEM_MIB`: Initial memory for the golden VM. Heavier distributions like Ubuntu typically need more than the 128 MiB default (e.g., `512`).
- `FC_VCPUS`: Processing allocation to start the VM. Default is `1`.
## 3. Capturing the Custom Golden Snapshot
Let's assume we want to provision a standard Ubuntu environment. First, create the rootfs (this automatically downloads and sets up the rootfs on your host):
```bash
sudo ./fc-orch init ubuntu
```
Then capture the baseline using the `ubuntu` tag and the `ubuntu` distro target. Note the increased `FC_MEM_MIB`, allocating 512 MiB of RAM for the heavier distribution.
```bash
sudo FC_MEM_MIB=512 FC_VCPUS=2 ./fc-orch golden ubuntu ubuntu
```
### What happens in the background?
1. The orchestrator creates a fresh directory for this baseline at `/tmp/fc-orch/golden/ubuntu/`.
2. It uses the `ubuntu` rootfs built by `init` (or the image pointed to by `FC_ROOTFS`, if set) as the root block device for the golden VM.
3. Firecracker boots the VM and waits 3 seconds for the OS to finish initializing.
4. The VM is paused, and a VM state checkpoint (`vmstate`) plus a full memory snapshot (`mem`) are written to `/tmp/fc-orch/golden/ubuntu/`.
> **Note**: The Firecracker process exits after the artifacts are written; the snapshot baseline persists on disk.
## 4. Spawning Scalable Clones
Since your image is now stored under the `ubuntu` tag, it can be cloned using copy-on-write (COW).
Pass the target tag and the desired clone count to `spawn`:
```bash
sudo ./fc-orch spawn ubuntu 10
```
This starts 10 concurrent Firecracker VMs from your custom snapshot without disturbing any existing Alpine baselines. Clones from different tags can run side by side.
---
## How Ubuntu VM Configuration Works
### Build-time: chroot package installation
`ubuntu-base` is a deliberately bare tarball — it ships no shell beyond `/bin/sh` (dash), no network tools, and no package cache. When `fc-orch init ubuntu` runs, after extracting the tarball the orchestrator performs a chroot install step:
1. **Virtual filesystems are bind-mounted** into the image (`/proc`, `/sys`, `/dev`, `/dev/pts`) so that `apt-get` can function correctly inside the chroot.
2. **`/etc/resolv.conf` is copied** from the host so DNS works during the install.
3. **`apt-get` installs the following packages** with `--no-install-recommends` to keep the image lean:
| Package | Purpose |
|---|---|
| `bash` | Interactive shell |
| `curl` | General-purpose HTTP client |
| `iproute2` | Provides the `ip` command (required by `fc-net-init`) |
| `wget` | Used by `fc-net-init` to poll the MMDS metadata endpoint |
| `ca-certificates` | Trusted CA bundle so HTTPS works out of the box |
4. **`apt` cache is purged** (`apt-get clean` + `rm -rf /var/lib/apt/lists/*`) before unmounting, keeping the final image around 200 MB on disk rather than 2 GB.
5. All bind mounts are removed before the function returns, whether or not the install succeeded.
The resulting ext4 image is **512 MB** (vs. 2 GB for a stock Ubuntu cloud image), comfortably fitting the installed packages with room for runtime state.
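The package-install step can be sketched as follows. This is a simplified illustration: the real install also bind-mounts `/proc`, `/sys`, and `/dev` and runs `apt-get update` first, and `aptInChroot` is a hypothetical helper, not code from this commit:

```go
package main

import (
	"os"
	"os/exec"
)

// aptInChroot builds the apt-get invocation run inside the extracted
// ubuntu-base image. DEBIAN_FRONTEND=noninteractive suppresses prompts;
// --no-install-recommends keeps the image lean.
func aptInChroot(mnt string, pkgs []string) *exec.Cmd {
	args := append([]string{mnt, "apt-get", "install", "-y",
		"--no-install-recommends"}, pkgs...)
	cmd := exec.Command("chroot", args...)
	cmd.Env = append(os.Environ(), "DEBIAN_FRONTEND=noninteractive")
	return cmd
}
```

Running the returned command requires root and the bind mounts described above; the sketch only shows how the chroot invocation is assembled.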
### Boot-time: guest network autoconfiguration via MMDS
Every Ubuntu image gets `/sbin/fc-net-init` embedded at build time. On Ubuntu this script is wired into systemd as `fc-net-init.service` (enabled in `multi-user.target`).
When a clone VM resumes from its golden snapshot the service runs the following sequence:
```
1. ip addr add 169.254.169.2/32 dev eth0
— Adds a link-local address so the guest can reach the Firecracker MMDS
gateway at 169.254.169.254 without any prior routing state.
2. Poll GET http://169.254.169.254/ip (1-second timeout, retry every 1 s)
— Loops until the host has injected the per-clone IP config via
PUT /mmds on the Firecracker API socket.
3. Once /ip responds, fetch /gw and /dns from the same endpoint.
4. ip addr flush dev eth0
ip addr add <ip> dev eth0
ip route add default via <gw> dev eth0
echo "nameserver <dns>" > /etc/resolv.conf
— Applies the config atomically and exits.
```
The host side (see `orchestrator/network.go`) injects the three keys (`ip`, `gw`, `dns`) via the Firecracker MMDS API **after** the snapshot is loaded but **before** the VM is resumed, so the guest sees the data on its very first poll.
This design means the golden snapshot captures the polling loop already running. Clones that are spawned without `FC_AUTO_NET_CONFIG=1` will still run the loop — it simply never exits, which is harmless and consumes negligible CPU.
### Serial console
`serial-getty@ttyS0.service` is enabled at build time via a symlink in `getty.target.wants`. The root password is cleared so the console auto-logs-in without a password prompt. Connect with:
```bash
sudo ./fc-orch console <clone-id>
```
---
## Appendix: Practical Examples
### Creating Multiple Golden Images with Different Specs
You can maintain multiple tagged images, each provisioned with different specifications.
**1. Standard Alpine (Default, 128 MiB RAM, 1 vCPU)**
```bash
sudo ./fc-orch golden alpine alpine
```
**2. Ubuntu Web Server (1024 MiB RAM, 2 vCPUs)**
```bash
# assuming init ubuntu was already run
sudo FC_MEM_MIB=1024 FC_VCPUS=2 ./fc-orch golden my-ubuntu-server ubuntu
```
**3. Debian Database Node (4096 MiB RAM, 4 vCPUs)**
```bash
# assuming init debian was already run
sudo FC_MEM_MIB=4096 FC_VCPUS=4 ./fc-orch golden my-debian-db debian
```
**4. External Custom Image (E.g. CentOS via Manual Provision)**
```bash
sudo FC_ROOTFS=/images/centos.ext4 FC_MEM_MIB=4096 FC_VCPUS=4 ./fc-orch golden tag-centos
```
### Inspecting Orchestrator State
To see what the orchestrator has stored and where, run the following on the host:
**View the structured layout of all golden image namespaces:**
```bash
tree -a /tmp/fc-orch/golden
```
*(If `tree` is not installed, you can use `ls -R /tmp/fc-orch/golden`)*
**View the exact disk usage and file sizes for a specific image artifact (like ubuntu):**
```bash
ls -lh /tmp/fc-orch/golden/ubuntu/
```
The output shows that `mem` is the size of the allocated RAM (e.g., 1024M), while `vmstate` is comparatively tiny.

main.go

@@ -21,6 +21,7 @@ import (
"os"
"path/filepath"
"runtime"
"strconv"
log "github.com/sirupsen/logrus"
@@ -65,15 +66,32 @@ func main() {
switch os.Args[1] {
case "init":
fatal(orch.Init())
distro := "alpine"
if len(os.Args) > 2 {
distro = os.Args[2]
}
fatal(orch.Init(distro))
case "golden":
fatal(orch.Golden())
tag := "default"
distro := "alpine"
if len(os.Args) > 2 {
tag = os.Args[2]
}
if len(os.Args) > 3 {
distro = os.Args[3]
}
fatal(orch.Golden(tag, distro))
case "spawn":
n := 1
if len(os.Args) > 2 {
fmt.Sscanf(os.Args[2], "%d", &n)
tag := "default"
for _, arg := range os.Args[2:] {
if parsed, err := strconv.Atoi(arg); err == nil {
n = parsed
} else {
tag = arg
}
}
fatal(orch.Spawn(n))
fatal(orch.Spawn(n, tag))
case "status":
orch.Status()
case "kill":
@@ -98,14 +116,16 @@ func main() {
// Internal subcommand: started by spawnOne, runs as a background daemon.
fs := flag.NewFlagSet("_console-proxy", flag.ContinueOnError)
var id int
var tag string
var tap string
fs.IntVar(&id, "id", 0, "clone ID")
fs.StringVar(&tag, "tag", "default", "Golden VM tag")
fs.StringVar(&tap, "tap", "", "TAP device name")
if err := fs.Parse(os.Args[2:]); err != nil {
fmt.Fprintf(os.Stderr, "console-proxy: %v\n", err)
os.Exit(1)
}
fatal(orchestrator.RunConsoleProxy(orchestrator.DefaultConfig(), id, tap))
fatal(orchestrator.RunConsoleProxy(orchestrator.DefaultConfig(), id, tap, tag))
default:
usage()
os.Exit(1)
@@ -119,9 +139,9 @@ Flags:
--dev log format with source file:line (e.g. file="orchestrator.go:123")
Commands:
init Download kernel + create Alpine rootfs
golden Boot golden VM → pause → snapshot
spawn [N] Restore N clones from golden snapshot (default: 1)
init [distro] Download kernel + create distro rootfs (default: alpine, options: alpine, debian, ubuntu)
golden [tag] [distro] Boot golden VM → pause → snapshot (default tag: default, default distro: alpine)
spawn [tag] [N] Restore N clones from golden snapshot (default tag: default, default N: 1)
serve [addr] Start terminal web UI (default: :8080)
console <id> Attach to the serial console of a running clone (Ctrl+] to detach)
status Show running clones


@@ -2,6 +2,7 @@ package orchestrator
import (
"os"
"path/filepath"
"strconv"
)
@@ -11,7 +12,7 @@ type Config struct {
BaseDir string // working directory for all state
Kernel string // path to vmlinux
KernelURL string // URL to download vmlinux if Kernel file is missing
Rootfs string // path to base rootfs.ext4
CustomRootfs string // Custom path to rootfs if FC_ROOTFS is set
VCPUs int64
MemMiB int64
Bridge string // host bridge name, or "none" to skip networking
@@ -38,10 +39,18 @@ func DefaultConfig() Config {
c.Kernel = envOr("FC_KERNEL", c.BaseDir+"/vmlinux")
c.KernelURL = envOr("FC_KERNEL_URL",
"https://s3.amazonaws.com/spec.ccfc.min/firecracker-ci/20260408-ce2a467895c1-0/x86_64/vmlinux-6.1.166")
c.Rootfs = envOr("FC_ROOTFS", c.BaseDir+"/rootfs.ext4")
c.CustomRootfs = os.Getenv("FC_ROOTFS")
return c
}
// RootfsPath returns the path to the root filesystem depending on the requested distribution.
func (c Config) RootfsPath(distro string) string {
if c.CustomRootfs != "" {
return c.CustomRootfs
}
return filepath.Join(c.BaseDir, "rootfs-"+distro+".ext4")
}
func envOr(key, fallback string) string {
if v := os.Getenv(key); v != "" {
return v


@@ -25,11 +25,11 @@ import (
// It restores a Firecracker clone from the golden snapshot, connecting its serial
// console (ttyS0) to a PTY, then serves the PTY master on a Unix socket at
// {cloneDir}/console.sock for the lifetime of the VM.
func RunConsoleProxy(cfg Config, id int, tapName string) error {
func RunConsoleProxy(cfg Config, id int, tapName, tag string) error {
logger := log.WithField("component", fmt.Sprintf("console-proxy[%d]", id))
cloneDir := filepath.Join(cfg.BaseDir, "clones", strconv.Itoa(id))
goldenDir := filepath.Join(cfg.BaseDir, "golden")
goldenDir := filepath.Join(cfg.BaseDir, "golden", tag)
sockPath := filepath.Join(cloneDir, "api.sock")
consoleSockPath := filepath.Join(cloneDir, "console.sock")
sharedMem := filepath.Join(goldenDir, "mem")


@@ -43,13 +43,13 @@ func New(cfg Config) *Orchestrator {
}
}
func (o *Orchestrator) goldenDir() string { return filepath.Join(o.cfg.BaseDir, "golden") }
func (o *Orchestrator) goldenDir(tag string) string { return filepath.Join(o.cfg.BaseDir, "golden", tag) }
func (o *Orchestrator) clonesDir() string { return filepath.Join(o.cfg.BaseDir, "clones") }
func (o *Orchestrator) pidsDir() string { return filepath.Join(o.cfg.BaseDir, "pids") }
// ——— Init ————————————————————————————————————————————————————————————————
func (o *Orchestrator) Init() error {
func (o *Orchestrator) Init(distro string) error {
if err := os.MkdirAll(o.cfg.BaseDir, 0o755); err != nil {
return err
}
@@ -65,54 +65,94 @@ func (o *Orchestrator) Init() error {
}
// Build rootfs if missing
if _, err := os.Stat(o.cfg.Rootfs); os.IsNotExist(err) {
o.log.Info("building minimal Alpine rootfs ...")
if err := o.buildRootfs(); err != nil {
rootfsPath := o.cfg.RootfsPath(distro)
if _, err := os.Stat(rootfsPath); os.IsNotExist(err) {
o.log.Infof("building minimal %s rootfs ...", distro)
if err := o.buildRootfs(distro, rootfsPath); err != nil {
return fmt.Errorf("build rootfs: %w", err)
}
o.log.Infof("rootfs saved to %s", o.cfg.Rootfs)
o.log.Infof("rootfs saved to %s", rootfsPath)
}
o.log.Info("init complete")
return nil
}
func (o *Orchestrator) buildRootfs() error {
func (o *Orchestrator) buildRootfs(distro, rootfsPath string) error {
sizeMB := 512
if distro == "debian" {
sizeMB = 2048
}
mnt := filepath.Join(o.cfg.BaseDir, "mnt")
// create empty ext4 image
o.log.Infof("running: dd if=/dev/zero of=%s bs=1M count=%d status=none", o.cfg.Rootfs, sizeMB)
if err := run("dd", "if=/dev/zero", "of="+o.cfg.Rootfs,
o.log.Infof("running: dd if=/dev/zero of=%s bs=1M count=%d status=none", rootfsPath, sizeMB)
if err := run("dd", "if=/dev/zero", "of="+rootfsPath,
"bs=1M", fmt.Sprintf("count=%d", sizeMB), "status=none"); err != nil {
return err
}
o.log.Infof("running: mkfs.ext4 -qF %s", o.cfg.Rootfs)
if err := run("mkfs.ext4", "-qF", o.cfg.Rootfs); err != nil {
o.log.Infof("running: mkfs.ext4 -qF %s", rootfsPath)
if err := run("mkfs.ext4", "-qF", rootfsPath); err != nil {
return err
}
os.MkdirAll(mnt, 0o755)
o.log.Infof("running: mount -o loop %s %s", o.cfg.Rootfs, mnt)
if err := run("mount", "-o", "loop", o.cfg.Rootfs, mnt); err != nil {
o.log.Infof("running: mount -o loop %s %s", rootfsPath, mnt)
if err := run("mount", "-o", "loop", rootfsPath, mnt); err != nil {
return err
}
defer run("umount", mnt)
defer func() {
o.log.Infof("running: umount %s", mnt)
run("umount", mnt)
}()
// download and extract Alpine minirootfs
alpineVer := "3.20"
arch := "x86_64"
tarball := fmt.Sprintf("alpine-minirootfs-%s.0-%s.tar.gz", alpineVer, arch)
url := fmt.Sprintf("https://dl-cdn.alpinelinux.org/alpine/v%s/releases/%s/%s",
alpineVer, arch, tarball)
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download alpine: %w", err)
}
o.log.Infof("running: tar xzf %s -C %s", tarPath, mnt)
if err := run("tar", "xzf", tarPath, "-C", mnt); err != nil {
return err
// download and extract minirootfs
switch distro {
case "alpine":
alpineVer := "3.20"
arch := "x86_64"
tarball := fmt.Sprintf("alpine-minirootfs-%s.0-%s.tar.gz", alpineVer, arch)
url := fmt.Sprintf("https://dl-cdn.alpinelinux.org/alpine/v%s/releases/%s/%s",
alpineVer, arch, tarball)
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
o.log.Infof("downloading: GET %s -> %s", url, tarPath)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download alpine: %w", err)
}
o.log.Infof("running: tar xzf %s -C %s", tarPath, mnt)
if err := run("tar", "xzf", tarPath, "-C", mnt); err != nil {
return err
}
case "debian":
tarball := "debian-12-nocloud-amd64.tar.xz"
url := "https://cloud.debian.org/images/cloud/bookworm/latest/" + tarball
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
o.log.Infof("downloading: GET %s -> %s", url, tarPath)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download debian: %w", err)
}
o.log.Infof("running: tar xJf %s -C %s", tarPath, mnt)
if err := run("tar", "xJf", tarPath, "-C", mnt); err != nil {
return err
}
case "ubuntu":
tarball := "ubuntu-base-24.04.4-base-amd64.tar.gz"
url := "https://cdimage.ubuntu.com/ubuntu-base/releases/24.04/release/" + tarball
tarPath := filepath.Join(o.cfg.BaseDir, tarball)
o.log.Infof("downloading: GET %s -> %s", url, tarPath)
if err := downloadFile(url, tarPath); err != nil {
return fmt.Errorf("download ubuntu: %w", err)
}
o.log.Infof("running: tar xzf %s -C %s", tarPath, mnt)
if err := run("tar", "xzf", tarPath, "-C", mnt); err != nil {
return err
}
o.log.Info("installing essential packages in ubuntu chroot ...")
if err := installUbuntuPackages(mnt, o.log); err != nil {
return fmt.Errorf("install ubuntu packages: %w", err)
}
default:
return fmt.Errorf("unsupported distro: %s", distro)
}
// write fc-net-init daemon: polls MMDS for IP config and applies it.
@@ -139,8 +179,15 @@ done
return err
}
// write init script
initScript := `#!/bin/sh
// write fc-net-init
os.MkdirAll(filepath.Join(mnt, "sbin"), 0o755)
if err := os.WriteFile(filepath.Join(mnt, "sbin", "fc-net-init"), []byte(netInitScript), 0o755); err != nil {
return err
}
if distro == "alpine" {
// write init script
initScript := `#!/bin/sh
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs devtmpfs /dev
@@ -148,35 +195,83 @@ ip link set eth0 up 2>/dev/null
ip route add 169.254.169.254 dev eth0 2>/dev/null
/sbin/fc-net-init &
`
initPath := filepath.Join(mnt, "etc", "init.d", "rcS")
os.MkdirAll(filepath.Dir(initPath), 0o755)
if err := os.WriteFile(initPath, []byte(initScript), 0o755); err != nil {
return err
}
initPath := filepath.Join(mnt, "etc", "init.d", "rcS")
os.MkdirAll(filepath.Dir(initPath), 0o755)
if err := os.WriteFile(initPath, []byte(initScript), 0o755); err != nil {
return err
}
// write inittab
inittab := "::sysinit:/etc/init.d/rcS\nttyS0::respawn:/bin/sh\n"
return os.WriteFile(filepath.Join(mnt, "etc", "inittab"), []byte(inittab), 0o644)
// write inittab
inittab := "::sysinit:/etc/init.d/rcS\nttyS0::respawn:/bin/sh\n"
return os.WriteFile(filepath.Join(mnt, "etc", "inittab"), []byte(inittab), 0o644)
} else {
// systemd-based distributions (Debian, Ubuntu)
svc := `[Unit]
Description=Firecracker Network Init
After=network.target
[Service]
Type=oneshot
ExecStart=/sbin/fc-net-init
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
`
svcPath := filepath.Join(mnt, "etc", "systemd", "system", "fc-net-init.service")
os.MkdirAll(filepath.Dir(svcPath), 0o755)
if err := os.WriteFile(svcPath, []byte(svc), 0o644); err != nil {
return err
}
// Enable service dynamically
wantsDir := filepath.Join(mnt, "etc", "systemd", "system", "multi-user.target.wants")
os.MkdirAll(wantsDir, 0o755)
os.Symlink("/etc/systemd/system/fc-net-init.service", filepath.Join(wantsDir, "fc-net-init.service")) //nolint:errcheck
// Ensure serial console is active
gettyWantsDir := filepath.Join(mnt, "etc", "systemd", "system", "getty.target.wants")
os.MkdirAll(gettyWantsDir, 0o755)
os.Symlink("/lib/systemd/system/serial-getty@.service", filepath.Join(gettyWantsDir, "serial-getty@ttyS0.service")) //nolint:errcheck
// Clear root password for auto-login on console
shadowPath := filepath.Join(mnt, "etc", "shadow")
if shadowBytes, err := os.ReadFile(shadowPath); err == nil {
lines := strings.Split(string(shadowBytes), "\n")
for i, line := range lines {
if strings.HasPrefix(line, "root:") {
parts := strings.Split(line, ":")
if len(parts) > 1 {
parts[1] = ""
lines[i] = strings.Join(parts, ":")
}
}
}
os.WriteFile(shadowPath, []byte(strings.Join(lines, "\n")), 0o640) //nolint:errcheck
}
}
return nil
}
// ——— Golden VM ——————————————————————————————————————————————————————————
func (o *Orchestrator) Golden() error {
func (o *Orchestrator) Golden(tag string, distro string) error {
if _, err := os.Stat(o.cfg.Kernel); err != nil {
return fmt.Errorf("kernel not found — run init first: %w", err)
}
if _, err := os.Stat(o.cfg.Rootfs); err != nil {
rootfsPath := o.cfg.RootfsPath(distro)
if _, err := os.Stat(rootfsPath); err != nil {
return fmt.Errorf("rootfs not found — run init first: %w", err)
}
goldenDir := o.goldenDir()
goldenDir := o.goldenDir(tag)
os.RemoveAll(goldenDir)
os.MkdirAll(goldenDir, 0o755)
os.MkdirAll(o.pidsDir(), 0o755)
// COW copy of rootfs for golden VM
goldenRootfs := filepath.Join(goldenDir, "rootfs.ext4")
if err := reflinkCopy(o.cfg.Rootfs, goldenRootfs); err != nil {
if err := reflinkCopy(rootfsPath, goldenRootfs); err != nil {
return fmt.Errorf("copy rootfs: %w", err)
}
@@ -301,13 +396,29 @@ func (o *Orchestrator) Golden() error {
return nil
}
// GoldenTags returns a list of all existing golden VM tags.
func (o *Orchestrator) GoldenTags() []string {
goldenDir := filepath.Join(o.cfg.BaseDir, "golden")
entries, err := os.ReadDir(goldenDir)
if err != nil {
return nil
}
var tags []string
for _, e := range entries {
if e.IsDir() {
tags = append(tags, e.Name())
}
}
return tags
}
// ——— Spawn clones ——————————————————————————————————————————————————————
func (o *Orchestrator) Spawn(count int) error {
goldenDir := o.goldenDir()
func (o *Orchestrator) Spawn(count int, tag string) error {
goldenDir := o.goldenDir(tag)
for _, f := range []string{"vmstate", "mem"} {
if _, err := os.Stat(filepath.Join(goldenDir, f)); err != nil {
return fmt.Errorf("golden %s not found — run golden first", f)
return fmt.Errorf("golden %s not found for tag %s — run golden first", f, tag)
}
}
@@ -321,7 +432,7 @@ func (o *Orchestrator) Spawn(count int) error {
for i := 0; i < count; i++ {
id := o.nextCloneID()
if err := o.spawnOne(id, o.cfg.AutoNetConfig); err != nil {
if err := o.spawnOne(id, o.cfg.AutoNetConfig, tag); err != nil {
o.log.Errorf("clone %d failed: %v", id, err)
continue
}
@@ -338,11 +449,11 @@ func (o *Orchestrator) Spawn(count int) error {
// SpawnSingle spawns one clone. net controls whether the guest receives
// automatic IP configuration via MMDS (overrides FC_AUTO_NET_CONFIG for this
// clone). Pass cfg.AutoNetConfig to preserve the global default.
func (o *Orchestrator) SpawnSingle(net bool) (int, error) {
goldenDir := o.goldenDir()
func (o *Orchestrator) SpawnSingle(net bool, tag string) (int, error) {
goldenDir := o.goldenDir(tag)
for _, f := range []string{"vmstate", "mem"} {
if _, err := os.Stat(filepath.Join(goldenDir, f)); err != nil {
return 0, fmt.Errorf("golden %s not found — run golden first", f)
return 0, fmt.Errorf("golden %s not found for tag %s — run golden first", f, tag)
}
}
os.MkdirAll(o.clonesDir(), 0o755)
@@ -353,7 +464,7 @@ func (o *Orchestrator) SpawnSingle(net bool) (int, error) {
}
}
id := o.nextCloneID()
if err := o.spawnOne(id, net); err != nil {
if err := o.spawnOne(id, net, tag); err != nil {
return 0, err
}
return id, nil
@@ -379,8 +490,8 @@ func (o *Orchestrator) KillClone(id int) error {
return nil
}
func (o *Orchestrator) spawnOne(id int, net bool) error {
goldenDir := o.goldenDir()
func (o *Orchestrator) spawnOne(id int, net bool, tag string) error {
goldenDir := o.goldenDir(tag)
cloneDir := filepath.Join(o.clonesDir(), strconv.Itoa(id))
os.MkdirAll(cloneDir, 0o755)
@@ -419,7 +530,7 @@ func (o *Orchestrator) spawnOne(id int, net bool) error {
return fmt.Errorf("resolve self path: %w", err)
}
proxyArgs := []string{"_console-proxy", "--id", strconv.Itoa(id)}
proxyArgs := []string{"_console-proxy", "--id", strconv.Itoa(id), "--tag", tag}
if o.cfg.Bridge != "none" {
proxyArgs = append(proxyArgs, "--tap", tapName)
}
@@ -544,7 +655,7 @@ func (o *Orchestrator) Kill() error {
func (o *Orchestrator) Cleanup() error {
o.Kill()
os.RemoveAll(o.clonesDir())
os.RemoveAll(o.goldenDir())
os.RemoveAll(filepath.Join(o.cfg.BaseDir, "golden"))
os.RemoveAll(o.pidsDir())
if o.cfg.Bridge != "none" {
@@ -558,6 +669,56 @@ func (o *Orchestrator) Cleanup() error {
// ——— Helpers ——————————————————————————————————————————————————————————
// installUbuntuPackages bind-mounts the virtual filesystems into mnt, then
// runs apt-get inside the chroot to install the minimal toolset required for
// network operation and general use. Bind mounts are always cleaned up on
// return regardless of whether apt-get succeeds.
func installUbuntuPackages(mnt string, logger *log.Entry) error {
type bm struct{ fstype, src, dst string }
mounts := []bm{
{"proc", "proc", "proc"},
{"sysfs", "sysfs", "sys"},
{"devtmpfs", "devtmpfs", "dev"},
{"devpts", "devpts", "dev/pts"},
}
// mount in order; on any failure unmount whatever succeeded and return.
for i, m := range mounts {
dst := filepath.Join(mnt, m.dst)
os.MkdirAll(dst, 0o755)
logger.Infof("running: mount -t %s %s %s", m.fstype, m.src, dst)
if err := run("mount", "-t", m.fstype, m.src, dst); err != nil {
for j := i - 1; j >= 0; j-- {
logger.Infof("running: umount %s", filepath.Join(mnt, mounts[j].dst))
run("umount", filepath.Join(mnt, mounts[j].dst)) //nolint:errcheck
}
return fmt.Errorf("mount %s: %w", m.dst, err)
}
}
defer func() {
for i := len(mounts) - 1; i >= 0; i-- {
logger.Infof("running: umount %s", filepath.Join(mnt, mounts[i].dst))
run("umount", filepath.Join(mnt, mounts[i].dst)) //nolint:errcheck
}
}()
// Provide DNS resolution inside the chroot so apt-get can reach the network.
if data, err := os.ReadFile("/etc/resolv.conf"); err == nil {
os.WriteFile(filepath.Join(mnt, "etc/resolv.conf"), data, 0o644) //nolint:errcheck
}
pkgs := "bash curl iproute2 wget ca-certificates"
script := "DEBIAN_FRONTEND=noninteractive apt-get update -q && " +
"DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends " + pkgs + " && " +
"apt-get clean && rm -rf /var/lib/apt/lists/*"
logger.Infof("running: chroot %s /bin/sh -c %q", mnt, script)
cmd := exec.Command("chroot", mnt, "/bin/sh", "-c", script)
cmd.Stdout = logger.Writer()
cmd.Stderr = logger.Writer()
return cmd.Run()
}
func (o *Orchestrator) nextCloneID() int {
max := 0
entries, _ := os.ReadDir(o.clonesDir())


@@ -53,10 +53,11 @@ func Serve(orch *Orchestrator, addr string) error {
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(ids) //nolint:errcheck
case http.MethodPost:
// Optional JSON body: {"net": bool}
// Optional JSON body: {"net": bool, "tag": string}
// Defaults to the server's FC_AUTO_NET_CONFIG setting.
var req struct {
Net *bool `json:"net"`
Net *bool `json:"net"`
Tag *string `json:"tag"`
}
if r.ContentLength > 0 {
json.NewDecoder(r.Body).Decode(&req) //nolint:errcheck
@@ -65,7 +66,11 @@ func Serve(orch *Orchestrator, addr string) error {
if req.Net != nil {
net = *req.Net
}
id, err := orch.SpawnSingle(net)
tag := "default"
if req.Tag != nil && *req.Tag != "" {
tag = *req.Tag
}
id, err := orch.SpawnSingle(net, tag)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
@@ -77,6 +82,16 @@ func Serve(orch *Orchestrator, addr string) error {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
}
})
// /tags — list all available golden VM tags
mux.HandleFunc("/tags", func(w http.ResponseWriter, r *http.Request) {
if r.Method != http.MethodGet {
http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
return
}
tags := orch.GoldenTags()
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(tags) //nolint:errcheck
})
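Together the two endpoints give clients a discover-then-spawn flow. A hypothetical exchange (tag names illustrative; the POST response body carries the new clone's id and is elided here):

```
GET /tags
← 200  ["default", "ubuntu"]

POST /clones
{"net": true, "tag": "ubuntu"}
← 200  (id of the freshly spawned clone)
```

Omitting `tag` (or sending an empty string) falls back to `"default"`, so existing clients that only send `{"net": ...}` keep working unchanged.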
// /clones/{id} — destroy (DELETE)
mux.HandleFunc("/clones/", func(w http.ResponseWriter, r *http.Request) {


@@ -79,6 +79,28 @@
#spawn-btn:hover:not(:disabled) { background: #243e24; }
#spawn-btn:disabled { opacity: .5; cursor: default; }
#spawn-controls {
display: flex;
gap: 0.5rem;
align-items: center;
}
#tag-select {
background: #1a1a1a;
border: 1px solid #444;
border-radius: 4px;
color: #8be;
padding: 0.4rem 0.8rem;
font-family: monospace;
font-size: 0.9rem;
outline: none;
transition: border-color .15s, background .15s;
cursor: pointer;
}
#tag-select:hover { background: #222; }
#tag-select:focus { border-color: #8be; }
#tag-select:disabled { opacity: .5; cursor: default; }
#error-msg {
color: #c44;
font-size: .85rem;
@@ -119,7 +141,10 @@
<h1>fc-orch console</h1>
<ul id="clone-list"></ul>
<p class="none" id="no-clones" style="display:none">No running clones.</p>
<button id="spawn-btn">+ Spawn clone</button>
<div id="spawn-controls">
<select id="tag-select"></select>
<button id="spawn-btn">+ Spawn clone</button>
</div>
<p id="error-msg"></p>
</div>
@@ -143,6 +168,7 @@
const ul = document.getElementById('clone-list');
const noneEl = document.getElementById('no-clones');
const spawnBtn = document.getElementById('spawn-btn');
const tagSelect = document.getElementById('tag-select');
const errEl = document.getElementById('error-msg');
function showError(msg) {
@@ -203,7 +229,7 @@
fetch('/clones', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ net: true }),
body: JSON.stringify({ net: true, tag: tagSelect.value }),
})
.then(r => {
if (!r.ok) return r.text().then(t => { throw new Error(t); });
@@ -219,7 +245,37 @@
});
});
function refreshTags() {
fetch('/tags')
.then(r => r.json())
.then(tags => {
tagSelect.innerHTML = '';
if (!tags || tags.length === 0) {
const opt = document.createElement('option');
opt.value = '';
opt.textContent = 'No golden VMs';
tagSelect.appendChild(opt);
tagSelect.disabled = true;
spawnBtn.disabled = true;
return;
}
tagSelect.disabled = false;
spawnBtn.disabled = false;
tags.forEach(t => {
const opt = document.createElement('option');
opt.value = t;
opt.textContent = t;
if (t === 'default' || t === 'alpine') opt.selected = true;
tagSelect.appendChild(opt);
});
})
.catch(e => {
console.error("fetch tags failed:", e);
});
}
refreshList();
refreshTags();
return;
}

test_mmds.sh Normal file

@@ -0,0 +1,21 @@
#!/bin/bash
sock="/tmp/fctest.sock"
rm -f "$sock"
firecracker --api-sock "$sock" &
FCPID=$!
sleep 1
# Configure MMDS backend
curl --unix-socket "$sock" -i -X PUT "http://localhost/mmds/config" \
-H "Accept: application/json" -H "Content-Type: application/json" \
-d '{"version": "V1", "network_interfaces": ["1"], "ipv4_address": "169.254.169.254"}'
# Put data
curl --unix-socket "$sock" -i -X PUT "http://localhost/mmds" \
-H "Accept: application/json" -H "Content-Type: application/json" \
-d '{"ip": "10.0.0.2", "gw": "10.0.0.1", "dns": "1.1.1.1"}'
# Read data
curl --unix-socket "$sock" -i -X GET "http://localhost/mmds"
kill $FCPID

test_mmds_restore.sh Normal file

@@ -0,0 +1,21 @@
#!/bin/bash
sock="/tmp/fctest2.sock"
rm -f "$sock"
firecracker --api-sock "$sock" >/dev/null 2>&1 &
FCPID=$!
sleep 1
# Configure and boot a minimal VM
curl --unix-socket "$sock" -s -X PUT "http://localhost/machine-config" -d '{"vcpu_count": 1, "mem_size_mib": 128}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/network-interfaces/1" -d '{"iface_id": "1", "guest_mac": "AA:FC:00:00:00:01", "host_dev_name": "lo"}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/mmds/config" -d '{"version": "V1", "network_interfaces": ["1"], "ipv4_address": "169.254.169.254"}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/boot-source" -d '{"kernel_image_path": "/tmp/fc-orch/vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}'
curl --unix-socket "$sock" -s -X PUT "http://localhost/actions" -d '{"action_type": "InstanceStart"}'
# Pause
curl --unix-socket "$sock" -s -X PATCH "http://localhost/vm" -d '{"state": "Paused"}'
# Attempt to reconfigure MMDS while the VM is paused
curl --unix-socket "$sock" -i -X PUT "http://localhost/mmds/config" -d '{"version": "V1", "network_interfaces": ["1"], "ipv4_address": "169.254.169.254"}'
kill $FCPID