Fifty VMs. Two of them broken. How do you know?
Fifty Debian VMs on a Proxmox cluster, each with OVMF firmware. Yesterday two of them rebooted into a kernel that panics. In GRUB-land, you’d SSH in, see nothing, eventually open the console and discover the initrd is wrong.
With LamBoot, lamboot-monitor.py on the Proxmox host shows the two VMs in CrashLoop state with crash count 3, flagged red. You know which VMs are broken before you’ve opened a single console.
GRUB-land vs. LamBoot-land
Without LamBoot
- VM is “up” from the hypervisor’s perspective (the process is running)
- SSH in — no response; kernel panicked before network came up
- Open the noVNC console manually per VM
- Discover the panic message, guess at the cause
- Repeat for every VM that might be broken
With LamBoot
- `lamboot-monitor.py` reads every VM’s OVMF_VARS file
- Decodes `LamBootState`, `LamBootCrashCount`, `LamBootLastEntry` from NVRAM
- One table shows every VM’s current boot-health state
- Flags CrashLoop VMs red, BootedOK green, Fresh gray
- Pipes to a webhook for Slack / email alerts
NVRAM variables, read from the host
```
Proxmox host                               Guest VM (OVMF firmware)
┌───────────────────────┐                  ┌──────────────────────────┐
│ lamboot-monitor.py    │                  │ LamBoot (early boot)     │
│                       │                  │ writes NVRAM vars:       │
│ reads OVMF_VARS_*.fd  │ ◀──────────────▶ │   LamBootState           │
│ decodes LamBoot vars  │   NVRAM block    │   LamBootCrashCount      │
│ outputs table / JSON  │   in .fd file    │   LamBootLastEntry       │
│ fires webhooks        │                  │   LamBootTimestamp       │
└───────────────────────┘                  └──────────────────────────┘
```
The NVRAM variables are stored in the per-VM OVMF_VARS_<vmid>.fd file on the Proxmox host. Reading them requires no VM agent, no SSH into the guest, no guest cooperation whatsoever.
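To make the host-side read concrete, here is a minimal sketch of locating a LamBoot variable inside a variable-store file by scanning for its UCS-2 (UTF-16LE) name. This is illustrative only: a production reader (like lamboot-monitor.py presumably does) should parse the EDK II variable-store headers rather than scan, and the assumption that the payload immediately follows the NUL-terminated name is a simplification of the real on-disk layout.

```python
STATES = {0: "Fresh", 1: "Booting", 2: "BootedOK", 3: "CrashLoop"}

def find_var_payload(fd_bytes: bytes, name: str, size: int):
    """Naive scan: locate the UCS-2 variable name in the blob and
    return `size` bytes after its NUL terminator, or None if absent.
    A real implementation walks the EDK II variable headers instead."""
    needle = name.encode("utf-16-le") + b"\x00\x00"
    i = fd_bytes.find(needle)
    if i < 0:
        return None
    start = i + len(needle)
    return fd_bytes[start:start + size]

# Demo against a synthetic OVMF_VARS-style blob (not a real .fd file):
blob = b"\xff" * 64 + "LamBootState".encode("utf-16-le") + b"\x00\x00" + bytes([3])
payload = find_var_payload(blob, "LamBootState", 1)
print(STATES[payload[0]])   # -> CrashLoop
```

In a real deployment you would read the blob with `open("OVMF_VARS_<vmid>.fd", "rb")` on the Proxmox host; no guest involvement is needed, which is the whole point.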
What the host reads
LamBoot vendor GUID: 4C414D42-4F4F-5400-0000-000000000001
| Variable | Type | Purpose |
|---|---|---|
| LamBootState | u8 | 0=Fresh, 1=Booting, 2=BootedOK, 3=CrashLoop |
| LamBootCrashCount | u8 | Crash counter (resets on successful boot) |
| LamBootLastEntry | UTF-8 | ID of last booted entry |
| LamBootTimestamp | 8 bytes | Packed UTC y/m/d/h/m/s |
| LamBootVersion | u32 | Packed major.minor.patch |
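The packed fields decode with a few lines of stdlib Python. The exact byte layout below — a little-endian u32 version packed as `major<<16 | minor<<8 | patch`, and a timestamp packed as a u16 year followed by month/day/hour/minute/second bytes — is an assumption for illustration; consult LamBoot’s spec for the authoritative encoding.

```python
import struct
from datetime import datetime, timezone

def decode_version(raw: bytes) -> str:
    # Assumed packing: u32 little-endian, major/minor/patch in the low 3 bytes.
    v = struct.unpack("<I", raw)[0]
    return f"{(v >> 16) & 0xFF}.{(v >> 8) & 0xFF}.{v & 0xFF}"

def decode_timestamp(raw: bytes) -> datetime:
    # Assumed packing: u16 LE year, then month/day/hour/minute/second bytes
    # (one trailing pad byte fills the 8-byte field).
    year, mon, day, hh, mm, ss = struct.unpack("<H5B", raw[:7])
    return datetime(year, mon, day, hh, mm, ss, tzinfo=timezone.utc)

print(decode_version(struct.pack("<I", (1 << 16) | (4 << 8) | 2)))        # -> 1.4.2
print(decode_timestamp(struct.pack("<H5Bx", 2025, 6, 1, 12, 30, 0)).isoformat())
# -> 2025-06-01T12:30:00+00:00
```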
Running the monitor
```sh
# Install the monitor (one file, pure Python 3, stdlib only)
sudo cp lamboot-monitor.py /usr/local/bin/
sudo chmod +x /usr/local/bin/lamboot-monitor.py

# One-shot: show current state of every LamBoot VM
sudo lamboot-monitor.py

# JSON output for pipelines
sudo lamboot-monitor.py --json

# Filter to a specific VM
sudo lamboot-monitor.py --vmid 100

# Daemon mode with webhook alerts on CrashLoop
sudo lamboot-monitor.py --watch --webhook https://hooks.slack.com/services/…
```
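The `--json` mode is what makes the monitor composable. As a sketch of consuming it downstream, the record shape below (`vmid`, `state`, `crash_count` fields) is a guess at the output schema for illustration, not the documented format:

```python
import json

# Hypothetical sample of `lamboot-monitor.py --json` output;
# the field names here are assumptions, not the documented schema.
sample = '''[
  {"vmid": 100, "state": "BootedOK",  "crash_count": 0},
  {"vmid": 101, "state": "CrashLoop", "crash_count": 3},
  {"vmid": 102, "state": "CrashLoop", "crash_count": 4}
]'''

vms = json.loads(sample)
broken = [vm["vmid"] for vm in vms if vm["state"] == "CrashLoop"]
print(f"CrashLoop VMs: {broken}")   # -> CrashLoop VMs: [101, 102]
```

In a pipeline you would read `sys.stdin` instead of the inline sample and forward `broken` to whatever alerting you already run.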
Pair with zero-touch Secure Boot
The monitoring story is strongest when VMs also boot under Secure Boot via LamBoot’s pre-enrolled OVMF_VARS_lamboot.fd template. Every boot writes both NVRAM health state and a trust-evidence log inside the guest.
- Fleet rollout: apply `OVMF_VARS_lamboot.fd` to one VM, install LamBoot with `--signed --no-mok`, convert to template. Every clone inherits both the signing chain and the monitoring surface.
- Per-VM health rollup: `lamboot-monitor.py` can also pull the latest `boot-trust.log` via SSH or a sidecar if guest access is available; optional, not required for the crash-state visibility.
See the Install page for the zero-touch deployment flow.
How LamBoot fits the Proxmox focus area
LamBoot is the host boot-observability layer in the Proxmox integration story.
- Native RDP console in the PVE web UI: companion product, different access problem.
- Per-VM host-side RDP server: for guest-side access issues; complementary to LamBoot’s host-side view.