
PXE Booting And Autobuilding Ubuntu 22.04

At the end of this blogpost you will be able to network-boot a fresh server and wind up with an auto-built Ubuntu Server box.

I use Ubuntu Server on a lot of systems. It is my go-to, primarily because I use Xubuntu on the desktop and having the same stuff everywhere is very comfy and reduces the overhead of things that I have to remember: I only need to keep one package manager in my head, among other things that are inconsistent across distributions. This does, however, subject me to Canonical's occasional insanity. Canonical has a long history of NIH, reinventing things that were already perfectly fine for absolutely no reason. With Ubuntu 20.04, they started to move away from the Debian Installer and towards their own homemade Subiquity. Subiquity has an interactive mode which you can use to manually provision a server, but doing that hundreds of times sucks. This post will cover the whole chain of automating it. The basic flow is:

Boot server -> BIOS/UEFI -> DHCP -> PXE (tftp) -> GRUB -> Subiquity -> cloud-init (http) -> Your server gets built

I'm assuming that you already have a functional DHCP server on the network that you want to autobuild servers on. If you don't, then go somewhere else to find instructions on that.

HTTP / TFTP Server

Shared Root

If you do not already have a server on your network to serve files over HTTP and TFTP for the PXE-boot process, then you will need to create it. With any luck, this will be the last machine that you need to build manually. Build out a server in your normal manner; I am using Ubuntu 22.04 as my base for obvious reasons. Once you have the server built out with your initial configuration and can log in to it, you will need to make a directory to serve the files from. It is not the default behavior, but I like to make one directory to serve the same files over both TFTP and HTTP since that is easier for me to remember.

mkdir /var/pxe


Then you will need to install some packages.

apt install nginx tftpd-hpa grub-pc-bin grub-efi-amd64-bin

The nginx package will be your webserver. If you prefer Apache or lighttpd then install one of those instead and adjust the configuration later to accommodate. The tftpd-hpa package will be your TFTP daemon. There are a plethora of TFTP daemons available in the default repositories, but I chose this one because it is in the kernel git. The "hpa" in the package name represents the initials of the author, H. Peter Anvin. The grub-pc-bin and grub-efi-amd64-bin packages provide the GRUB modules under /usr/lib/grub/ that we will use to build the PXE images that future servers will pull over the network to boot from bare metal; unlike grub-pc and grub-efi-amd64, they do not try to install a bootloader on the build server itself.

TFTP Config

Once these packages are installed, some configuration for nginx and tftpd needs to be put into place. In order to have the TFTP daemon look in the /var/pxe/ directory for files, just modify /etc/default/tftpd-hpa to look something like this:

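For reference, a minimal /etc/default/tftpd-hpa pointing at that directory might look like this (the stock Ubuntu file is identical except that TFTP_DIRECTORY defaults to /srv/tftp):

```
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/pxe"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"
```

The --secure option chroots the daemon into TFTP_DIRECTORY, so all paths requested by clients are relative to /var/pxe.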

Then restart the service with systemctl restart tftpd-hpa. You will also need to open firewall port 69/UDP. With firewalld, you would run:

firewall-cmd --add-port=69/udp
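Note that firewall-cmd changes made without --permanent only affect the running configuration and disappear on the next reload or reboot. To make the rule stick:

```
firewall-cmd --permanent --add-port=69/udp
firewall-cmd --reload
```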

A quick note on TFTP: If you want to test that everything is working correctly, you will need to install a TFTP client on your local machine. I suggest the one provided by the package tftp-hpa. Since this is developed by the same person as the daemon, I am pretty certain that they will work well together. Once that is installed, you can place a file at /var/pxe/test.txt on your server and try to get it on your commandline by running

tftp server-hostname -c get /test.txt

This should print no output and return exit code 0. It should also create a file in your CWD named test.txt with the same contents as the one on the server. If, instead of this expected behavior, you get the error message Transfer timed out, then something is wrong with a firewall somewhere. It may be the firewall on your local machine. Generally, if your local firewall is running at all, you won't be able to use the tftp client. This is because TFTP works over UDP, and the portion of the response packet that says "hey, I'm related to an earlier transaction" is not set, so your local firewall rejects it. You have two options:

  1. Turn your local firewall off with sudo systemctl disable --now firewalld
  2. Open a specific port on your local firewall (sudo firewall-cmd --add-port=6969/udp) and use that specific port as the "return" port for TFTP (tftp server-hostname -R 6969:6969 -c get /test.txt)

Nginx Config

Adapt this section if you are using a different HTTPD. Create a new file /etc/nginx/sites-enabled/pxe-booting and fill it with the content below (the certificate paths are placeholders; point them at your real certificate and key):

server {
        listen 443 ssl;
        server_name server-hostname;
        ssl_certificate /etc/ssl/your-cert.pem;
        ssl_certificate_key /etc/ssl/your-cert.key;
        location / {
                root /var/pxe/;
        }
}

Then reload the webserver with systemctl reload nginx.
In the above snippet we start serving the /var/pxe/ directory over HTTPS. If you do not have a valid certificate for your server's DNS name, you can serve the files over HTTP, but they will travel across the wire unencrypted. As far as I know, there is no method in the Subiquity installer (which is what will be accessing these files over HTTP/S) to ignore the validity of SSL certificates. It is pretty easy to get a LetsEncrypt certificate for a wildcard these days so that's what I do in this case, but this is not a tutorial for that.

You can test that this Nginx config is working by going to https://server-hostname/test.txt in your web browser. You should be served the same test file that you created earlier for testing TFTP.


PXE Images

On the same build server that you have been on so far, you will need to create the GRUB files which will actually be booted by the bare metal servers.

Create and enter the directory where GRUB's files will live and create the PXE images:

mkdir /var/pxe/grub
cd /var/pxe/grub
grub-mkimage -d /usr/lib/grub/i386-pc/ -O i386-pc-pxe -o grub_i386.pxe -p '/grub' pxe tftp
grub-mkimage -d /usr/lib/grub/x86_64-efi/ -O x86_64-efi -o grub_x64.pxe -p '/grub' efinet tftp

The -p '/grub' option is critical. If you want to store your GRUB files somewhere else relative to your TFTP root, then you need to change this option. If you are serving your files from /var/pxe/grub/here/are/some/more/subdirs/, then '/grub/here/are/some/more/subdirs' (the path relative to the TFTP root) is what you should put after the -p instead. Now you have the GRUB PXE images, but these executables will try to find more modules when they boot, and you need to put those modules in place.

cp -r /usr/lib/grub/i386-pc .
cp -r /usr/lib/grub/x86_64-efi .

There is a reason that we create two .pxe images instead of just one. The i386 image is for legacy BIOS machines. The x64 image is for modern UEFI machines. On some old hardware and some VMs, you cannot boot these modern UEFI images and so you will need the old BIOS images to fall back to.

The cp commands above copy the modules from /usr/lib/grub/i386-pc/ and /usr/lib/grub/x86_64-efi/ into place. At this point your /var/pxe/grub/ should look like this:

root@server-hostname:/var/pxe/grub# ls
grub_i386.pxe  grub_x64.pxe  i386-pc  x86_64-efi


GRUB will now happily start up, which you can test by network booting a machine. However, GRUB will not be very happy that it doesn't have a config file and it will dump you into a rescue shell. You need to write a grub.cfg file in the same grub root where the PXE files live, at /var/pxe/grub/grub.cfg in this example. You can write your GRUB configuration to your heart's desire, picking whatever options you want. This is my config, and I have found it very useful:

set default=installed_os
set timeout=10

menuentry "Ubuntu 22.04 Manual Install" --id=22_manual {
  echo "Loading Kernel..."
  linux /isos/ubuntu/22.04/casper/vmlinuz ip=dhcp url=https://server-hostname/isos/ubuntu/22.04/ubuntu-22.04.3-live-server-amd64.iso root=/dev/ram0 cloud-config-url=/dev/null net.ifnames=0 biosdevname=0
  echo "Loading Ram Disk..."
  initrd /isos/ubuntu/22.04/casper/initrd
}
menuentry "Ubuntu 22.04 No SWAP" --id=22_noswap {
  echo "Loading Kernel..."
  linux /isos/ubuntu/22.04/casper/vmlinuz ip=dhcp url=https://server-hostname/isos/ubuntu/22.04/ubuntu-22.04.3-live-server-amd64.iso root=/dev/ram0 cloud-config-url=/dev/null net.ifnames=0 biosdevname=0 autoinstall ds="nocloud-net;s=https://server-hostname/isos/ubuntu/22.04/cloud-init/noswap/"
  echo "Loading Ram Disk..."
  initrd /isos/ubuntu/22.04/casper/initrd
}
menuentry 'Installed OS (on disk)' --id=installed_os {
        exit 1
}
The cool part about this grub.cfg is that it defaults to booting the installed OS: the installed_os entry simply exits GRUB with an error, which causes the firmware to fall through to the next boot device, the local disk. Why does that matter? Once the Subiquity installer successfully completes, it automatically reboots. If your default boot device is set to PXE (which it probably is if UEFI is enabled) and GRUB defaulted to the installer, your server would keep bootlooping and rebuilding itself forever, and you would have to sit there and babysit it to intercept a reboot and change the boot order. You could run a command that returns a non-zero exit code as the last thing that Subiquity does, but that is janky and also requires that you manually intervene before you can SSH into the server. If instead we just default to booting from the disk, then you don't have anything to worry about, and you can even leave your default boot option as PXE.

You will need to change the server-hostname in the above example to your server hostname. If you are serving files over HTTP instead of HTTPS then you will need to change that too.

If you modify your DHCP to point to one of these GRUB files, then they should boot and give you the menu that you configured.

Network Prerequisites

As mentioned earlier this is not a tutorial on how to set up DHCPD. However, you do need to modify your configuration to send the options to boot an image over the network. In ISC DHCPD that will look something like:

host hostnamehere {
  hardware ethernet de:ad:be:ef:12:34;
  filename "/grub/grub_x64.pxe";
  next-server tftpservername;
  option host-name "hostnamehere";
}
In OPNsense, you set the filename in Services -> DHCPv4 -> [Network] -> Network booting. There are various fields to fill in here; fill them in with the same information as the above DHCPD config, except that you use the server's IP address instead of its DNS name.

Your DHCPD might be different. RTFM.
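Since you built both a BIOS and a UEFI image, ISC DHCPD can also pick the right one per client by inspecting DHCP option 93 (client system architecture). A sketch, assuming the /grub paths from the grub-mkimage step:

```
# Somewhere global in dhcpd.conf:
option architecture-type code 93 = unsigned integer 16;

# Then wherever you set the boot filename:
if option architecture-type = 00:07 {
  filename "/grub/grub_x64.pxe";   # x86-64 UEFI client
} else {
  filename "/grub/grub_i386.pxe";  # legacy BIOS client
}
```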


In order to network boot the Ubuntu ISO, you first need the Ubuntu ISO, which you can get from any Ubuntu release mirror. Download that file to your server:

mkdir -p /var/pxe/isos/ubuntu/22.04/casper/
cd /var/pxe/isos/ubuntu/22.04/
wget https://mirror.math.princeton.edu/pub/ubuntu-iso/22.04.3/ubuntu-22.04.3-live-server-amd64.iso

Once the download finishes, you will need to extract two critical files from the ISO: the initrd and vmlinuz. These are the initial ramdisk (the boot environment) and the kernel. They will be pulled in by GRUB and will proceed to download the ISO itself and boot it so that the server can be built. Mount the ISO and extract the files:

mount -o loop,ro ubuntu-22.04.3-live-server-amd64.iso /mnt
cp /mnt/casper/vmlinuz casper/vmlinuz
cp /mnt/casper/initrd casper/initrd
umount /mnt

Those are all of the files that you actually need to network boot Ubuntu. You should now be able to PXE boot GRUB and, if you are using the example grub.cfg that I provided, you should be able to use the entry titled Ubuntu 22.04 Manual Install. This will, if everything is working, drop you into the manual Subiquity server installer, and now all you need to do is write the Subiquity autoinstall config to get the client servers autobuilding.


Subiquity will look for some files over HTTP(S) if you choose the "No SWAP" option above. The location that it looks in is determined by the ds=nocloud-net;s=URLHERE setting on the kernel commandline. To create the files that it needs:

mkdir -p /var/pxe/isos/ubuntu/22.04/cloud-init/noswap/
cd /var/pxe/isos/ubuntu/22.04/cloud-init/noswap/
touch meta-data user-data vendor-data

The file vendor-data is optional. I have this file on my system but it is completely empty. Subiquity will try several times to find this file if it does not exist, but if it exists and is empty, then Subiquity will only look for it the one time. As a result, Subiquity is faster to start up if this file exists and is empty.

The file meta-data should have the content instance-id: jammy-autoinstall. Update this when 24.04 comes out and you need to change the codename.

The file user-data is the real meat and potatoes of the autoinstall and it is where you will put your configuration. The documentation on some of the pieces of Subiquity is difficult to find; the official autoinstall reference and the curtin documentation are the most helpful resources when writing a configuration.

Here is an example configuration that will do what most people probably want:

#cloud-config
autoinstall:
  version: 1
  apt:
    preserve_sources_list: true
  identity:
    hostname: localhost
    username: YOUR-USERNAME-HERE
    # Crypted password; generate one with: mkpasswd -m sha-512
    password: "YOUR-PASSWORD-HASH-HERE"
  keyboard: {layout: us, variant: ''}
  locale: en_US.UTF-8
  network:
    version: 2
    ethernets:
      # interface name will probably be different
      eth0:
        critical: true
        dhcp-identifier: mac
        dhcp4: true
  refresh-installer:
    update: yes
  packages:
    - packages
    - you
    - want
    - installed
  ssh:
    allow-pw: yes
    authorized-keys:
      - "OUTPUT OF 'cat ~/.ssh/id_rsa.pub' GOES HERE"
    install-server: true
  storage:
    layout:
      name: lvm
      sizing-policy: all
  late-commands:
    - sed -ie 's/GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="net.ifnames=0 biosdevname=0"/' /target/etc/default/grub
    - sed -ie 's/ro  $/ro net.ifnames=0 biosdevname=0  /' /target/boot/grub/grub.cfg
    - rm /target/etc/hostname
    - getent hosts $(ip -o -4 address show scope global | head -n 1 | awk '{print $4}' | awk -F '/' '{print $1}') | awk '{print $2}' | awk -F '.' '{print $1}' > /target/etc/hostname # set my hostname
    - rm /target/etc/apt/apt.conf.d/99needrestart # This prompt will prevent the apt command from completing
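That getent late-command packs a lot into one line. The IP-extraction half of the pipeline can be sanity-checked locally on a canned line of ip -o -4 address show output (the sample address here is made up):

```shell
# A made-up sample line in `ip -o -4 address show scope global` format
sample='2: eth0    inet 192.0.2.10/24 brd 192.0.2.255 scope global dynamic eth0'
# Field 4 is the CIDR address; strip the prefix length to get the bare IP
printf '%s\n' "$sample" | head -n 1 | awk '{print $4}' | awk -F '/' '{print $1}'
# → 192.0.2.10
```

The remaining getent hosts half does a reverse lookup of that IP and keeps only the short hostname, which only works if your DHCP and DNS records are in order.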

You should fill out the identity section above and put in your SSH key. This configuration gives you old-school interface names, sets your final hostname from DNS, and sets you up with one giant root volume. If you want a more granular setup for storage, you can configure it however you like; this is just a preset. The best way to figure out what the hell you need to have in your storage section for the configuration that you want is actually to run through an install once by hand and then look at the /var/log/installer/autoinstall-user-data file. The storage section in there will be a bit jumbled up and will be specific to the exact server that you built, but with a little bit of modification you can make it more general-purpose.
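For reference, a hand-written storage section for a single-disk UEFI box might look roughly like this. This is a sketch based on the documented curtin actions; the IDs are arbitrary names and the sizes are assumptions you should adjust:

```yaml
  storage:
    config:
      - {type: disk, id: disk0, match: {size: largest}, ptable: gpt, wipe: superblock-recursive}
      - {type: partition, id: part-esp, device: disk0, size: 512M, flag: boot, grub_device: true}
      - {type: format, id: fmt-esp, volume: part-esp, fstype: fat32}
      - {type: partition, id: part-root, device: disk0, size: -1}
      - {type: format, id: fmt-root, volume: part-root, fstype: ext4}
      - {type: mount, id: mount-esp, device: fmt-esp, path: /boot/efi}
      - {type: mount, id: mount-root, device: fmt-root, path: /}
```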

If you want to set up some custom apt source to install packages from during the install, then you can do that with this apt section (example is SaltStack repo):

  apt:
    preserve_sources_list: true
    geoip: true
    sources:
      # The key name becomes the filename under /etc/apt/sources.list.d/
      saltstack.list:
        source: deb [arch=amd64] https://repo.saltproject.io/salt/py3/ubuntu/22.04/amd64/latest/ jammy main
        key: |
          -----BEGIN PGP PUBLIC KEY BLOCK-----
          (the repository's armored public key goes here)
          -----END PGP PUBLIC KEY BLOCK-----

Please note that you cannot both set an apt source and configure LUKS encryption. If you try, it will fail. I have reported this bug to the Canonical developers, but as of this writing they have still not acknowledged my issue. Remedy steps can be found in the linked ticket.


Hopefully the information in this blogpost can be useful to someone someday. I did a lot of searching when I first had the opportunity to implement Subiquity autobuilding on my own infrastructure and I found official documentation and comprehensive writeups difficult to locate. If you follow the instructions in this post you should be able to:

PXE -> GRUB -> initrd/linux -> ISO -> Subiquity -> Reboot -> PXE -> GRUB -> Disk OS

All automatically. If you are rebuilding hundreds or thousands of servers then this should hopefully save you a lot of work.
