Guide: Creating Custom Golden Images
This guide outlines how to create new, customized golden images (e.g. pivoting from Alpine to an Ubuntu or Node.js environment) and integrate them into the fc-orch tagging system.
By default, executing ./fc-orch init gives you a basic Alpine Linux image, but you can also generate built-in Ubuntu and Debian environments trivially via ./fc-orch init ubuntu or ./fc-orch init debian. The real power of fc-orch lies in maintaining multiple customized snapshot bases (golden tags).
1. Acquiring Custom Assets
To build a fresh golden image, at minimum you must provide a new filesystem:
- Custom Root Filesystem: an uncompressed ext4 filesystem image that contains your system and libraries.
- Custom Kernel (optional): an uncompressed Linux kernel binary (vmlinux). If not provided, the default Firecracker CI kernel is used.
Recommendations for Custom Distros:
If you are building a completely custom/unsupported base, you can dump a container filesystem with docker export and write it into an image created with mkfs.ext4, or use debootstrap to provision your image.
Ensure that your custom root filesystem contains an appropriate bootstrap sequence in /etc/init.d/rcS (or a systemd unit, if configured) that mounts the essential pseudo-filesystems (/proc, /sys, /dev) and brings up the eth0 interface, as Firecracker expects the guest OS to set these up itself. The built-in init command handles this automatically for the alpine, ubuntu, and debian distributions.
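For a fully custom base, a minimal rcS along these lines covers the essentials. This is a sketch assuming a busybox-style userland, not the script fc-orch installs for its built-in distros:

```shell
#!/bin/sh
# Minimal /etc/init.d/rcS sketch for a custom Firecracker guest (illustrative).

# Mount the pseudo-filesystems the kernel and userspace expect.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev 2>/dev/null || true

# Bring up loopback and the virtio NIC; IP addressing is applied later
# (e.g. by an fc-net-init-style script polling MMDS).
ip link set lo up
ip link set eth0 up
```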
2. Using Environment Overrides
Rather than replacing the default /tmp/fc-orch/rootfs.ext4, fc-orch reads several environment variables that you can set before capturing the golden snapshot.
The essential variables to override are:
- FC_ROOTFS: path to your custom .ext4 image (e.g., /home/user/ubuntu.ext4).
- FC_MEM_MIB: amount of initial memory the golden VM receives. Heavier OSes like Ubuntu typically require more than the 128 MiB default (e.g., 512).
- FC_VCPUS: number of vCPUs the VM starts with. Default is 1.
3. Capturing the Custom Golden Snapshot
Let's assume we want to provision a standard Ubuntu environment. First, create the rootfs (this automatically downloads and sets up the rootfs on your host):
sudo ./fc-orch init ubuntu
Then capture the baseline using the ubuntu tag and the ubuntu distro target. Note the increased resources: FC_MEM_MIB=512 allocates 512 MiB of RAM and FC_VCPUS=2 provides two vCPUs.
sudo FC_MEM_MIB=512 FC_VCPUS=2 ./fc-orch golden ubuntu ubuntu
What happens in the background?
- The orchestrator prepares a new directory for this baseline at /tmp/fc-orch/golden/ubuntu/.
- It uses the configured rootfs (the image created by init ubuntu, or your FC_ROOTFS override) as the root block device for the VM.
- Firecracker boots the VM and waits 3 seconds for the OS initialization logic to settle.
- The VM is paused. A serialized register-state checkpoint (vmstate) and a raw memory file (mem) are written into the /tmp/fc-orch/golden/ubuntu/ directory.
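Under the hood, the pause-and-capture step maps onto two Firecracker API calls. A hedged sketch follows; the socket path is illustrative (fc-orch manages the real one), and the guard keeps the block inert without a live Firecracker process:

```shell
# Illustrative Firecracker API socket path (assumption, not fc-orch's real path).
API_SOCK=/tmp/fc-orch/fc.sock
SNAP_PAYLOAD='{"snapshot_type": "Full", "snapshot_path": "/tmp/fc-orch/golden/ubuntu/vmstate", "mem_file_path": "/tmp/fc-orch/golden/ubuntu/mem"}'

# Only talk to Firecracker if the API socket actually exists.
if [ -S "$API_SOCK" ]; then
  # Pause the running VM...
  curl --unix-socket "$API_SOCK" -X PATCH http://localhost/vm \
    -H 'Content-Type: application/json' -d '{"state": "Paused"}'
  # ...then ask Firecracker to write the vmstate + mem artifacts.
  curl --unix-socket "$API_SOCK" -X PUT http://localhost/snapshot/create \
    -H 'Content-Type: application/json' -d "$SNAP_PAYLOAD"
fi
```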
Note: fc-orch terminates the Firecracker process after finalizing the artifacts. The custom snapshot baseline persists on disk.
4. Spawning Scalable Clones
Since your image is now stored under the ubuntu tag, it can be cloned independently using copy-on-write (COW).
Simply address the target tag along with the desired replica count during the spawn command:
sudo ./fc-orch spawn ubuntu 10
This spawns 10 concurrent Firecracker VMs running your custom OS, without disturbing your existing generic Alpine baseline. Multiple base OSes can run side by side this way.
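The disk side of a clone can be sketched with a reflink copy. This assumes fc-orch uses reflink-style copies for the COW behavior; on filesystems without reflink support, cp silently falls back to a full copy, and the paths here are demo stand-ins, not fc-orch's internal layout:

```shell
# Sketch: cloning a golden disk image with copy-on-write where available.
DEMO=/tmp/fc-orch-demo
mkdir -p "$DEMO"
echo "demo rootfs" > "$DEMO/golden.ext4"      # stand-in for the real ext4 image

# --reflink=auto shares data blocks on btrfs/XFS and falls back to a
# regular copy elsewhere; either way the clone is fully independent.
cp --reflink=auto "$DEMO/golden.ext4" "$DEMO/clone-1.ext4"
```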
How Ubuntu VM Configuration Works
Build-time: chroot package installation
ubuntu-base is a deliberately bare tarball — it ships no shell beyond /bin/sh (dash), no network tools, and no package cache. When fc-orch init ubuntu runs, after extracting the tarball the orchestrator performs a chroot install step:
- Virtual filesystems are bind-mounted into the image (/proc, /sys, /dev, /dev/pts) so that apt-get can function correctly inside the chroot.
- /etc/resolv.conf is copied from the host so DNS works during the install.
- apt-get installs the following packages with --no-install-recommends to keep the image lean:

  | Package | Purpose |
  |---|---|
  | bash | Interactive shell |
  | curl | General-purpose HTTP client |
  | iproute2 | Provides the ip command (required by fc-net-init) |
  | wget | Used by fc-net-init to poll the MMDS metadata endpoint |
  | ca-certificates | Trusted CA bundle so HTTPS works out of the box |

- The apt cache is purged (apt-get clean + rm -rf /var/lib/apt/lists/*) before unmounting, keeping the final image around 200 MB on disk rather than 2 GB.
- All bind mounts are removed before the function returns, whether or not the install succeeded.
The resulting ext4 image is 512 MB (vs. 2 GB for a stock Ubuntu cloud image), comfortably fitting the installed packages with room for runtime state.
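The bind-mount-and-install step can be sketched roughly as follows. The ROOTFS path is hypothetical, and the guard makes the block a no-op unless an extracted image tree is actually present:

```shell
# Sketch of the chroot install step. ROOTFS is an illustrative path;
# the guard keeps this inert when no extracted rootfs exists.
ROOTFS=/tmp/fc-orch/build/ubuntu-rootfs
PKGS='bash curl iproute2 wget ca-certificates'

if [ -d "$ROOTFS/etc" ]; then
  # Bind-mount virtual filesystems so apt-get works inside the chroot.
  for fs in proc sys dev dev/pts; do
    mount --bind "/$fs" "$ROOTFS/$fs"
  done
  cp /etc/resolv.conf "$ROOTFS/etc/resolv.conf"   # host DNS for the install

  chroot "$ROOTFS" apt-get update
  chroot "$ROOTFS" apt-get install -y --no-install-recommends $PKGS

  # Purge the apt cache, then always unmount, even if the install failed.
  chroot "$ROOTFS" sh -c 'apt-get clean && rm -rf /var/lib/apt/lists/*'
  for fs in dev/pts dev sys proc; do
    umount "$ROOTFS/$fs"
  done
fi
```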
Boot-time: guest network autoconfiguration via MMDS
Every Ubuntu image gets /sbin/fc-net-init embedded at build time. On Ubuntu this script is wired into systemd as fc-net-init.service (enabled in multi-user.target).
When a clone VM resumes from its golden snapshot the service runs the following sequence:
1. ip addr add 169.254.169.2/32 dev eth0 — adds a link-local address so the guest can reach the Firecracker MMDS gateway at 169.254.169.254 without any prior routing state.
2. Poll GET http://169.254.169.254/ip (1-second timeout, retry every 1 s) — loops until the host has injected the per-clone IP config via PUT /mmds on the Firecracker API socket.
3. Once /ip responds, fetch /gw and /dns from the same endpoint.
4. Apply the config atomically and exit:
   ip addr flush dev eth0
   ip addr add <ip> dev eth0
   ip route add default via <gw> dev eth0
   echo "nameserver <dns>" > /etc/resolv.conf
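Put together, the sequence above corresponds to a script along these lines. This is a reconstruction from the steps listed here, not the literal shipped fc-net-init:

```shell
#!/bin/sh
# fc-net-init sketch: guest-side MMDS polling loop (reconstruction).
MMDS=http://169.254.169.254

# 1. Link-local address so MMDS is reachable with no routes configured.
ip addr add 169.254.169.2/32 dev eth0

# 2. Poll until the host has injected the per-clone config (1 s timeout).
while ! ip=$(wget -qO- -T 1 "$MMDS/ip"); do
  sleep 1
done

# 3. Fetch the rest of the config from the same endpoint.
gw=$(wget -qO- -T 1 "$MMDS/gw")
dns=$(wget -qO- -T 1 "$MMDS/dns")

# 4. Apply the config atomically and exit.
ip addr flush dev eth0
ip addr add "$ip" dev eth0
ip route add default via "$gw" dev eth0
echo "nameserver $dns" > /etc/resolv.conf
```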
The host side (see orchestrator/network.go) injects the three keys (ip, gw, dns) via the Firecracker MMDS API after the snapshot is loaded but before the VM is resumed, so the guest sees the data on its very first poll.
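On the host side, that injection is a single PUT to the MMDS endpoint of the Firecracker API socket. A hedged sketch follows; the socket path and addresses are illustrative, and the guard keeps the block inert without a live Firecracker process:

```shell
API_SOCK=/tmp/fc-orch/fc.sock            # illustrative socket path
MMDS_PAYLOAD='{"ip": "10.0.0.2", "gw": "10.0.0.1", "dns": "1.1.1.1"}'

# Inject the per-clone keys the guest's polling loop is waiting for.
if [ -S "$API_SOCK" ]; then
  curl --unix-socket "$API_SOCK" -X PUT http://localhost/mmds \
    -H 'Content-Type: application/json' -d "$MMDS_PAYLOAD"
fi
```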
This design means the golden snapshot captures the polling loop already running. Clones that are spawned without FC_AUTO_NET_CONFIG=1 will still run the loop — it simply never exits, which is harmless and consumes negligible CPU.
Serial console
serial-getty@ttyS0.service is enabled at build time via a symlink in getty.target.wants. The root password is cleared so the console auto-logs-in without a password prompt. Connect with:
sudo ./fc-orch console <clone-id>
Appendix: Practical Examples
Creating Multiple Golden Images with Different Specs
You can maintain multiple tagged images, each provisioned with different specifications.
1. Standard Alpine (Default, 128 MiB RAM, 1 vCPU)
sudo ./fc-orch golden alpine alpine
2. Ubuntu Web Server (1024 MiB RAM, 2 vCPUs)
# assuming init ubuntu was already run
sudo FC_MEM_MIB=1024 FC_VCPUS=2 ./fc-orch golden my-ubuntu-server ubuntu
3. Debian Database Node (4096 MiB RAM, 4 vCPUs)
# assuming init debian was already run
sudo FC_MEM_MIB=4096 FC_VCPUS=4 ./fc-orch golden my-debian-db debian
4. External Custom Image (E.g. CentOS via Manual Provision)
sudo FC_ROOTFS=/images/centos.ext4 FC_MEM_MIB=4096 FC_VCPUS=4 ./fc-orch golden tag-centos
Inspecting Your Hypervisor State
To see what your orchestrator has stored and where, run the following on the host:
View the structured layout of all golden image namespaces:
tree -a /tmp/fc-orch/golden
(If tree is not installed, you can use ls -R /tmp/fc-orch/golden)
View the exact disk usage and file sizes for a specific image artifact (like ubuntu):
ls -lh /tmp/fc-orch/golden/ubuntu/
The output will show that mem is the size of the VM's full allocated RAM (e.g., 1024M), while vmstate is comparatively tiny.