This is the fourth part of the Pi netboot series. You can find the previous parts via the introductory article here.
In this part of the series, I will go over using a combination of HashiCorp Packer and Ansible to create an Ubuntu Linux image for a netbootable Raspberry Pi, using NFS for the boot partition and a Ceph RBD as the root device.
The goal here is to show you how to set up Packer and what configuration is needed to run an Ansible playbook in an aarch64 image on an x86_64 machine. While writing the article, I realized that it had already gotten pretty long, so I will move the description of the Ansible playbook as well as the final image deployment to future articles.
What is Packer?
HashiCorp’s Packer is a tool to conveniently and repeatably create OS images. It is mostly intended to create VM images for the big cloud providers’ setups, but as I have found, it serves very well for creating Raspberry Pi images too.
The idea is to provide a base image, in this case Ubuntu’s Raspberry Pi image, and then allow the user to define provisioning steps. These steps can be executed for example as shell scripts or Ansible playbooks. Packer’s task is to actually launch the image and execute the provisioners in it.
This does not just work for VM images; it also works, for example, as a build tool for Docker images.
An example Packer file looks like this:
packer {
  required_plugins {
    docker = {
      version = ">= 0.0.7"
      source  = "github.com/hashicorp/docker"
    }
  }
}

source "docker" "ubuntu" {
  image  = "ubuntu:xenial"
  commit = true
}

build {
  name = "learn-packer"
  sources = [
    "source.docker.ubuntu"
  ]

  provisioner "shell" {
    environment_vars = [
      "FOO=hello world",
    ]
    inline = [
      "echo Adding file to Docker Container",
      "echo \"FOO is $FOO\" > example.txt",
    ]
  }
}
Here, the source block describes the virtualization to use (Docker in this case) and the source image. The build block then describes what to do once the image (in this case a Docker container) has been launched. In this example, a file is created in the Docker image.
To actually build the image, the workspace first has to be initialized by running packer init . in your working directory. Then execute the build, assuming the above example was pasted into a file docker-ubuntu.pkr.hcl:

packer build docker-ubuntu.pkr.hcl
This is a pretty simple example, requiring only Docker to be set up. So what hoops do you have to jump through to run an aarch64 image and execute things (like the shell commands in the previous example) in it?
Running aarch64 images with Qemu and binfmt_misc
Our goal is to provision an image for a Raspberry Pi, i.e. an aarch64 machine, not an x86_64 host. So how do we do that on our (mostly, these days?) amd64 daily drivers?
The answer: Qemu. In combination with the extremely cool binfmt_misc Linux kernel feature.
Learning about this feature was another one of those Whoa, Linux is so cool! moments for me.
To begin with, let’s quote the kernel docs again:
This Kernel feature allows you to invoke almost (for restrictions see below) every program by simply typing its name in the shell. This includes for example compiled Java(TM), Python or Emacs programs.
(I only now realized that this doesn’t just work for Qemu stuff, but seemingly also for e.g. Java progs…)
So what it allows you to do is this:
wget https://releases.hashicorp.com/packer/1.8.3/packer_1.8.3_linux_arm64.zip
unzip packer_1.8.3_linux_arm64.zip
./packer version
And it will work. You will be able to execute aarch64 binaries without having to prefix them with anything, let alone fire up an entire VM.
To configure binfmt_misc, first install qemu on the host. It is important to ensure that static user binaries are enabled. In Gentoo, this is done with the static-user USE flag. In Debian, install the qemu-user-static package.
This is needed because the qemu binary will later be copied into the chroot of whatever image you choose to use, and that image’s shared libraries might be wildly different from those installed on the system where you’re running Packer. A static binary just makes life simpler here.
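As a quick check, file should report the interpreter as statically linked; the path is the one from my Gentoo system, adjust it for your distro:

file /usr/bin/qemu-aarch64
# the output should contain "statically linked"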
Next, the binary formats need to be configured, so the kernel knows how to run a binary with a specific format. This is done through files in /proc/sys/fs/binfmt_misc/. On my Gentoo host, the file /proc/sys/fs/binfmt_misc/qemu-aarch64 looks like this:
enabled
interpreter /usr/bin/qemu-aarch64
flags: OC
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
These formats are normally registered automatically by the init system, depending on the distribution.
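In case your distribution does not do this for you, the format can also be registered by hand through the register file. Here is a minimal sketch using the magic and mask from above; the interpreter path is the one from my system, adjust it to yours:

# as root; the fields are :name:type:offset:magic:mask:interpreter:flags
# (binfmt_misc itself parses the \x escapes, so plain echo is fine)
echo ':qemu-aarch64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64:OC' > /proc/sys/fs/binfmt_misc/register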
You can test whether you configured everything correctly by downloading any aarch64 binary and trying to execute it on your host.
With this prerequisite fulfilled, we can continue to set up Packer itself.
Packer setup
Setting up Packer itself is pretty simple, as it is only a single Go binary. You can either install it from your distro’s package manager or download the binary directly from here and move it to someplace in your $PATH.
While Packer supports an init command, it is not required for our purposes. Instead, we will manually install the packer-builder-arm plugin. This plugin allows us to run image creation for aarch64 images via chroot. The plugin can be found at mkaczanowski/packer-builder-arm.
As there are no binaries provided, the repository needs to be checked out and built. This can be done with the following sequence of commands:
git clone https://github.com/mkaczanowski/packer-builder-arm packer-builder-arm-src
cd packer-builder-arm-src
go mod download
go build
This will create a packer-builder-arm executable. Copy that executable to your future Packer workspace. If you would like to store the plugin in a more central place, you can follow the instructions in the Packer config on the plugin directory.
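For example, something like the following should work; I am assuming ~/.packer.d/plugins as the plugin directory here, so check the linked documentation for the correct path on your system:

# copy the freshly built plugin into Packer's user plugin directory
mkdir -p ~/.packer.d/plugins
cp packer-builder-arm ~/.packer.d/plugins/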
And with that, the setup is finally complete and we can get to the image creation itself.
Finally: The image creation
To create images, Packer uses image template files. These template files are described here, but in most cases, the options are dictated by the builder and provisioners you are using.
Without further ado, here is the template file for my Raspberry Pi images. The template supports both images for netboot and images for hosts with a local disk.
Don’t be overwhelmed, I will go through the different sections piece by piece. 😉
variable "hn_hostname" {
type = string
description = "Hostname for the machine which uses this image."
}
variable "hn_netboot" {
type = bool
description = "Should the host netboot or should it boot from a local disk?"
}
variable "hn_host_id" {
type = string
description = "ID of the raspberry pi"
}
local "mountpath" {
expression = "/tmp/packer-${uuidv4()}"
}
local "foobar-pw" {
expression = vault("secret/foobar", "pw")
sensitive = true
}
local "hn_ceph_key" {
expression = vault("secret/cephbar", "key")
sensitive = true
}
source "arm" "ubuntu" {
file_urls = ["https://cdimage.ubuntu.com/ubuntu/releases/22.04/release/ubuntu-22.04.1-preinstalled-server-arm64+raspi.img.xz"]
file_checksum_url = "https://cdimage.ubuntu.com/ubuntu/releases/22.04/release/SHA256SUMS"
file_checksum_type = "sha256"
file_target_extension = "xz"
file_unarchive_cmd = ["xz", "-T0", "--decompress", "$ARCHIVE_PATH"]
image_mount_path = "${local.mountpath}"
image_build_method = "reuse"
image_path = "${var.hn_hostname}.img"
image_size = "2.3G"
image_type = "dos"
image_partitions {
name = "boot"
type = "c"
start_sector = "2048"
filesystem = "fat"
size = "256M"
mountpoint = "/boot/firmware"
}
image_partitions {
name = "root"
type = "83"
start_sector = "526336"
filesystem = "ext4"
size = "3.4G"
mountpoint = "/"
}
image_chroot_env = ["PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"]
qemu_binary_source_path = "/usr/bin/qemu-aarch64"
qemu_binary_destination_path = "/usr/bin/qemu-aarch64"
}
build {
sources = ["source.arm.ubuntu"]
provisioner "ansible" {
extra_arguments = [
"--connection=chroot",
"--inventory-file=${local.mountpath},",
"--limit=${local.mountpath}",
"--extra-vars", "imhotep_pw=${local.foobar-pw}",
"--extra-vars", "hn_hostname=${var.hn_hostname}",
"--extra-vars", "hn_netboot=${var.hn_netboot}",
"--extra-vars", "hn_host_id=${var.hn_host_id}",
"--extra-vars", "hn_ceph_key=${local.hn_ceph_key}",
"--extra-vars", "hn_pi=true",
"--user", "ubuntu",
]
playbook_file = "${path.root}/../bootstrap-ubuntu-image.yml"
}
}
Variables
The first part consists of input and local variables:
variable "hn_hostname" {
type = string
description = "Hostname for the machine which uses this image."
}
variable "hn_netboot" {
type = bool
description = "Should the host netboot or should it boot from a local disk?"
}
variable "hn_host_id" {
type = string
description = "ID of the raspberry pi"
}
local "mountpath" {
expression = "/tmp/packer-${uuidv4()}"
}
local "foobar-pw" {
expression = vault("secret/foobar", "pw")
sensitive = true
}
local "hn_ceph_key" {
expression = vault("secret/cephbar", "key")
sensitive = true
}
The Packer guide on variables can be found here.
The difference between variable and local is that the variable definition expects the variable to be set from the outside, while the local definition is for purely local variables.
To decide which one to use, just ask yourself: Is this value going to change
between different Packer invocations or not?
My input variables are the following:
- hn_hostname This is the name of the host I’m currently creating
- hn_netboot Boolean determining whether the host is going to netboot or use a local disk
- hn_host_id This field contains the host ID. In the case of the Raspberry Pi, this is the serial number from /proc/cpuinfo (see the sketch after this list).
As you can see, all the input variables are things which will change from host to host.
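For reference, the serial number for hn_host_id can be read on a running Pi like this (a quick sketch; on the Pi images I have seen, the field is called Serial in /proc/cpuinfo):

# prints the Pi's serial number, e.g. for use as hn_host_id
awk '/^Serial/ {print $3}' /proc/cpuinfo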
The local variables are mostly things which can be automatically gathered:
- mountpath Just a random path in /tmp as a mount directory for chroot
- foobar-pw This variable uses Packer’s Vault function to collect a password from HashiCorp Vault. In this case, it is the password for my Ansible user.
- hn_ceph_key Again Vault access, this time to get the Ceph auth key for access to the root volume pool on the Ceph cluster
You can of course use other functionality to get at your passwords in a convenient way. The sensitive = true parameter tells Packer that it should not print this value into any logs or stdout.
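Note that for the vault() calls to work, the usual Vault client environment variables have to be set in the shell running Packer, something like this (address and token are placeholders for your setup):

# point Packer's vault() function at your Vault instance
export VAULT_ADDR='https://vault.example.com:8200'
export VAULT_TOKEN='...'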
The input variables can be handed into the Packer invocation in several different ways. The first one is via the command line like this:
packer build -var "foo=bar" ...
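With the variables from this template, such an invocation could look like this (using the template file name that appears further below):

packer build \
  -var "hn_hostname=mypi" \
  -var "hn_netboot=true" \
  -var "hn_host_id=abcdef123" \
  ./pi.pkr.hcl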
The one I chose is via variable files. These files should have the .pkrvars.hcl file ending. They look like this:
hn_hostname = "mypi"
hn_netboot = true
hn_host_id = "abcdef123"
They can be handed to a Packer invocation with the -var-file flag:
packer build -var-file=./foobar.pkrvars.hcl
So a full run of Packer with the above variable file in ./mypi.pkrvars.hcl and the Packer template in ./pi.pkr.hcl would look like this:
packer build -var-file=./mypi.pkrvars.hcl ./pi.pkr.hcl
The source definition
Next comes the definition of the image source. This definition largely depends on the builder being used, in this case packer-builder-arm:
source "arm" "ubuntu" {
file_urls = ["https://cdimage.ubuntu.com/ubuntu/releases/22.04/release/ubuntu-22.04-preinstalled-server-arm64+raspi.img.xz"]
file_checksum_url = "https://cdimage.ubuntu.com/ubuntu/releases/22.04/release/SHA256SUMS"
file_checksum_type = "sha256"
file_target_extension = "xz"
file_unarchive_cmd = ["xz", "-T0", "--decompress", "$ARCHIVE_PATH"]
image_mount_path = "${local.mountpath}"
image_build_method = "reuse"
image_path = "${var.hn_hostname}.img"
image_size = "2.3G"
image_type = "dos"
image_partitions {
name = "boot"
type = "c"
start_sector = "2048"
filesystem = "fat"
size = "256M"
mountpoint = "/boot/firmware"
}
image_partitions {
name = "root"
type = "83"
start_sector = "526336"
filesystem = "ext4"
size = "3.4G"
mountpoint = "/"
}
image_chroot_env = ["PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"]
qemu_binary_source_path = "/usr/bin/qemu-aarch64"
qemu_binary_destination_path = "/usr/bin/qemu-aarch64"
}
Here, most things are self-explanatory. I’m using Ubuntu’s most recent 22.04 Raspberry Pi image as the base for my images. One notable option is image_mount_path. This option can be left undefined and is then set to a random subdirectory in /tmp. But this is not an option here, as we need to know the precise directory for later use in the Ansible provisioner.
Also important are the image_partitions definitions. These describe what partitions Packer can expect to find. To figure out the exact values, the following command can be used on the source image:
fdisk -l <IMAGE>
Disk 9e458e3cbbfebfb2e8c0d717665bc43e1c29f286: 3.79 GiB, 4068480000 bytes, 7946250 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xdeca7dfc
Device Boot Start End Sectors Size Id Type
9e458e3cbbfebfb2e8c0d717665bc43e1c29f286p1 * 2048 526335 524288 256M c W95 FAT32 (LBA)
9e458e3cbbfebfb2e8c0d717665bc43e1c29f286p2 526336 7946215 7419880 3.5G 83 Linux
That command shows all necessary information to correctly fill out the image_partitions entries in the template.
Finally, there are the Qemu/chroot config options. The most important one is qemu_binary_source_path. This path indicates where the static loader binary used by binfmt_misc is located. As the packer-builder-arm plugin uses a chroot for provisioning, the binary binfmt_misc would use to run aarch64 binaries needs to be available inside the chroot.
Here comes one of the few problems with this image creation setup: this config option makes the Packer template file non-portable. This is due to the fact that the qemu-aarch64 binary not only resides in different places on different distros, but might also have different names. For example, on my main Gentoo desktop, the binary is located at the path indicated in the config: /usr/bin/qemu-aarch64. But on a Debian based machine, the file resides at /usr/libexec/qemu-binfmt/aarch64-binfmt-P.
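One way to soften this would be to look the path up from the binfmt_misc registration at build time and hand it to Packer as a variable. A sketch, assuming the registration is named qemu-aarch64 and that you turn qemu_binary_source_path into an input variable (named qemu_binary here; it does not exist in the template above):

# read the interpreter path the kernel itself would use for aarch64 binaries
QEMU_BIN="$(awk '/^interpreter/ {print $2}' /proc/sys/fs/binfmt_misc/qemu-aarch64)"
# qemu_binary is a hypothetical input variable wired to qemu_binary_source_path
packer build -var "qemu_binary=${QEMU_BIN}" -var-file=./mypi.pkrvars.hcl ./pi.pkr.hcl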
Provisioning
Finally, we come to the build/provisioning part. Here, I’m using the previously defined source and running Ansible on the mounted image, via packer-builder-arm’s chroot mechanism.
build {
  sources = ["source.arm.ubuntu"]

  provisioner "ansible" {
    extra_arguments = [
      "--connection=chroot",
      "--inventory-file=${local.mountpath},",
      "--limit=${local.mountpath}",
      "--extra-vars", "imhotep_pw=${local.foobar-pw}",
      "--extra-vars", "hn_hostname=${var.hn_hostname}",
      "--extra-vars", "hn_netboot=${var.hn_netboot}",
      "--extra-vars", "hn_host_id=${var.hn_host_id}",
      "--extra-vars", "hn_ceph_key=${local.hn_ceph_key}",
      "--extra-vars", "hn_pi=true",
      "--user", "ubuntu",
    ]
    playbook_file = "${path.root}/../bootstrap-ubuntu-image.yml"
  }
}
The documentation of the Ansible provisioner can be found here.
The provisioner part is the part where Packer can be used to make changes to a source image. In this case, Ansible will be run against the image, using the playbook indicated in the playbook_file entry. The ${path.root} variable always contains the directory where the template file lives.
The next part is the --connection Ansible parameter. This needs to be set to chroot, because we’re only doing a chroot for the image, not launching an entire VM which could run SSH.
Also connected to the connection is the inventory-file parameter for Ansible. Ansible always needs to operate against an inventory of hosts. Here, the inventory is the image’s mount path; the trailing comma after ${local.mountpath} makes Ansible treat the value as a literal host list rather than as a path to an inventory file. The same goes for the limit option, which also uses the provided inventory.
Next, all of the input and local variables are forwarded to the Ansible playbook. The only interesting part here is the hn_pi=true variable. This is necessary in my setup because the bootstrap-ubuntu-image.yml file is used not only for Pi hosts but also for other Ubuntu hosts (more on doing a full Ubuntu install with Packer and Qemu in a later post), while still having to do a couple of Raspberry Pi specific things.
The Ansible playbook
Alright, I have just realized that this article is already pretty long. I think the Ansible playbook deserves its own article, coming soon.