The default CBL process uses an actual computer as the host system and a QEMU virtual machine as the target system: you source a shell script that configures the parameters for the build, run lb cbl to generate scripts, and then run the cbl.sh script that was produced. Provided you’re not bridging manually from the host build to the target build (and nothing crashes during the build process), you eventually wind up with a complete LB Linux system, ready to use.

That works fine when the host system is a real computer and the target system will be emulated via QEMU. But sometimes that’s not what you want! For example, after completing a build like that, an obvious next step is to boot the resulting LB Linux system and use it to build an LB Linux system that will run on the original computer. The default process doesn’t really make sense for that situation — it might work, but when QEMU is emulating a CPU, it’s pretty slow: for each CPU instruction executed in the virtual machine, QEMU must generate new instructions that perform the same function, and execute those on the actual physical CPU. If the program being run in the virtual machine is another QEMU emulating a different CPU, that translation into new instructions has to happen twice. I can’t imagine that process being efficient or reliable — and, indeed, I never run the QEMU test suite on an emulated virtual machine, because a lot of those tests run QEMU as an emulator, and I’ve found that the two levels of emulation cause the test suite to run seemingly forever.

A more straightforward alternative is to run the host-side build in a QEMU-emulated system, and then run the target-side build in a QEMU virtual machine — that is, not in an emulator, but in a native-architecture VM.

That requires some additional setup and execution work. Obviously, we don’t want to have to do that work manually! Doing things manually is gross. Hence this blueprint, which describes and automates a process for doing all of that.

You do have to set things up in a fairly specific way, but then you can just generate a script from this blueprint and run it, and the entire build will run to completion without any further interaction. (At least, in an ideal world, that’s what will happen. It’s entirely possible that something will go wrong and the build will crash. Fingers crossed!)

One really nice aspect of this approach is that it works the same way regardless of whether the QEMU machines are emulated or native. You can run this process with an emulator — rather than simply a native-architecture virtual machine — for the host system, the target system, or both. The process is the same, the blueprint is the same; the only thing that differs is the configuration being used. This scheme also eliminates a lot of dependencies on the build computer: it makes heavy use of QEMU and virtual networking, but all the real work is done inside virtual machines. If you have a powerful-enough computer, you can do multiple builds at the same time! In fact, that’s what I do to test changes while I am working on CBL: I run an x86_64-to-AArch64 build and an AArch64-to-x86_64 build at the same time, both using this process.

Note
The terminology I’m using here is a little confusing, because "host" means two things: the actual host is the physical computer where the build is running, but the first part of the CBL process is called the "host-system" build — even though, here, it runs on a virtual computer hosted by that physical machine. To try to make this less ambiguous, I’m referring to the first (host-side) QEMU process run here as the "host QEMU" or something similar. Hopefully, whenever I refer to a "host system" here, the context will make it clear which one I’m talking about.

1. How To Use This

My goal here is to make it really easy and straightforward to automate the whole process of building many different types of Little Blue Linux systems. The eventual plan is for this to run as part of a continuous integration (CI) pipeline, every time the CBL repository changes.[1]

Wiring up the CI pipeline is outside the scope of what I’m trying to accomplish here! But this is the script that I run to do the full build, after setting things up as described in [Preparing For The Build]:

#!/bin/bash
source cbl-configuration.sh
export SCRIPT_DIR="$(pwd)"
export LOGFILE_DIR="$(pwd)/log"
mkdir -p "${LOGFILE_DIR}" # make sure the log directory exists before tee writes to it
pushd cbl
echo "Built from: $(git rev-parse HEAD)" > ${SCRIPT_DIR}/BUILD-INFO
lb qemu-to-qemu-build
popd
echo "On $(hostname), using the qemu-to-qemu build process" >> BUILD-INFO
echo "and cbl-configuration.sh:" >> BUILD-INFO
echo "-----" >> BUILD-INFO
cat cbl-configuration.sh >> BUILD-INFO
echo "-----" >> BUILD-INFO
echo "Start timestamp: $(date +%s)" >> BUILD-INFO
time ./qemu-to-qemu-build.sh 2>&1 | tee log/AUTO-BUILD.LOG
echo "Finish timestamp: $(date +%s)" >> BUILD-INFO
mv BUILD-INFO package
pushd package
sha256sum * > SHA256SUMS
popd
chmod a-w package/*

As with everything else in CBL, of course, you should modify that script to suit your own personal preferences. The BUILD-INFO file produced by that script also has timestamps added to it throughout this process, so the final result lets you calculate how long every part of the build (and post-build process) takes.

The timestamps recorded by this script use the %s date format, which converts to the number of seconds since the UNIX epoch (midnight UTC on January 1, 1970). That’s not very readable, but it’s very convenient for doing time arithmetic — you can get the number of seconds taken for the full build just by subtracting the start timestamp from the end timestamp. If you want to convert these timestamps to a more readable date and time string, you can use the date program: to convert, for example, 1630600291 to a date and time, you can use date -d @1630600291. (You can also use a more verbose but also more straightforward syntax: date -d "1970-01-01 UTC 1630600291 seconds" means just what it looks like: the UNIX epoch plus some number of seconds.)
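
For example, here is a small sketch of how you could compute the elapsed time from the two timestamps that the wrapper script appends to BUILD-INFO. The grep patterns and awk field numbers assume the exact echo lines shown in the script above; adjust them if you change that script.

#!/bin/bash
# Sketch: compute the overall build duration from the Start/Finish timestamp
# lines that the wrapper script writes into BUILD-INFO.
start=$(grep '^Start timestamp:' BUILD-INFO | awk '{print $3}')
finish=$(grep '^Finish timestamp:' BUILD-INFO | awk '{print $3}')
elapsed=$((finish - start))
printf 'Total build time: %d seconds (%02d:%02d:%02d)\n' \
  "$elapsed" $((elapsed / 3600)) $((elapsed % 3600 / 60)) $((elapsed % 60))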

2. Preparing For The Build

This process assumes that you have a computer with:

  1. a build directory, writable by the current user, containing the following things:

    1. a LB Linux base system image that will be used as the host system, at the location host-system/lbl.qcow2, along with a kernel file at host-system/kernel (QEMU will be serving as the boot loader for all systems involved in the automated build, so the kernel needs to exist outside of the system image);

    2. a script cbl-configuration.sh that sets most of the parameters needed for CBL, as well as the additional configuration parameters described below (The ones that are overridden in override-cfg do not need to be specified, obviously);

    3. the CBL blueprints in a cbl subdirectory;

    4. all source tarfiles and patches for the basic CBL process in a materials subdirectory;

  2. the qemu and ruby packages available, and the litbuild gem installed; and

  3. about a hundred gigabytes of free storage space.

You don’t have to use LB Linux for the base system image, but this process makes some assumptions about the host system and will need to be adjusted if those assumptions are not met. For example, it expects to do the build in /home/lbl, so if there is no such directory you’ll have to change the definition of HDIR here. Also, there must be a getty process running on the serial device, configured to set TERM to a simple terminal type (such as tty); and the root account must not have a password set.
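
For reference, here is a sketch of one way the build directory might be laid out before starting. The source paths are purely illustrative placeholders — use whatever locations suit you, as long as the names inside the build directory match the list above.

# Illustrative layout only; the /path/to locations are placeholders.
mkdir -p ~/builds/qemu-to-qemu && cd ~/builds/qemu-to-qemu
mkdir -p host-system
cp /path/to/previous-build/lbl-complete.qcow2 host-system/lbl.qcow2
cp /path/to/previous-build/kernel host-system/kernel
ln -s /path/to/cbl-blueprints cbl        # a checkout of the CBL blueprints
ln -s /path/to/cbl-materials materials   # source tarfiles and patches
cp /path/to/my-configs/cbl-configuration.sh .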

3. Configuration

This blueprint needs additional configuration parameters beyond those needed for the CBL process; they are used primarily to construct the command lines for the host and target QEMU processes. It’s most convenient to define these in the same cbl-configuration.sh file that will be used for the automated CBL process, since some of the configuration parameters defined and documented in CBL per se are also needed for this automated build process.

The default parameters are suitable for an x86_64 (that is, 64-bit Intel or AMD) host system and an AArch64 (64-bit ARM) target system — the same as the default CBL configuration parameters.

Most of these parameters are similar to parameters defined in CBL, and are used to construct an appropriate QEMU command line for the host system.

Parameter: HOST_QEMU_RAM_MB

Value: 32768 (default: 24576)

Parameter: HOST_QEMU_CPUCOUNT

Value: 8 (default: 12)

The QEMU for the host build should be given as much memory and CPU as possible, considering the hardware it’s running on and any other processing that will be going on during the build. My main CBL development machine has 128 GiB of RAM and 20 CPU cores; I give the host QEMU 24 GiB of RAM and 12 cores, which is where the default values here come from.

Parameter: HOST_QEMU_ARCH

Value: aarch64 (default: x86_64)

Parameter: HOST_QEMU_MACHINE

Value: virt (default: pc-i440fx-2.8,accel=kvm,usb=off)

Parameter: HOST_QEMU_CPU

Value: cortex-a57 (default: host)

The HOST_QEMU parameters are similar to the CBL parameters that define how the target QEMU process will be constructed. The HOST_QEMU_MACHINE should specify the full -machine argument; if that argument needs to include directives to use KVM acceleration or something similar (as with the default value), include those.

By default, QEMU will be told to use the host CPU type. At least on Intel-architecture machines, this tells QEMU to present the virtual machine with a CPU of the same type as the actual physical CPU in the host computer; that avoids any kind of instruction emulation or translation, so it’s the most efficient way to run QEMU.
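
If you’re not sure whether KVM acceleration is usable on the machine where a QEMU process will run, a quick check like the following (not part of the automated build) can save some head-scratching:

# Not part of the build: just a sanity check for KVM availability.
if [ -r /dev/kvm ] && [ -w /dev/kvm ]
then
  echo "KVM is available; accel=kvm and -cpu host should work here"
else
  echo "No usable /dev/kvm; QEMU will fall back to full emulation (much slower)"
fi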

Parameter: HOST_NET_DRIVER

Value: virtio-net-device (default: e1000)

Parameter: HOST_QEMU_DRIVE_PREFIX

Value: vd (default: sd)

Parameter: HOST_SERIAL_DEV

Value: ttyAMA0 (default: ttyS0)

And, similarly to the other parameters needed here, the virtual hardware to be simulated by the QEMU process and the device naming conventions for the host system must be specified.

Parameter: TARGET_NET_DRIVER

Value: e1000 (default: virtio-net-device)

Since we will eventually want to run the resulting Little Blue Linux system in QEMU and with networking enabled, we will need to know what network driver to use for it.
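
Put together, the additions to cbl-configuration.sh corresponding to the example values shown above might look something like this. This is just a sketch — adjust every value to match your own host and target machines.

# Example additions to cbl-configuration.sh, using the example values above.
export HOST_QEMU_RAM_MB=32768
export HOST_QEMU_CPUCOUNT=8
export HOST_QEMU_ARCH=aarch64
export HOST_QEMU_MACHINE=virt
export HOST_QEMU_CPU=cortex-a57
export HOST_NET_DRIVER=virtio-net-device
export HOST_QEMU_DRIVE_PREFIX=vd
export HOST_SERIAL_DEV=ttyAMA0
export TARGET_NET_DRIVER=e1000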

4. Overview Of This Process

This process will be done in a number of phases — the first few will set up and execute the CBL process itself, and then a few additional things will be done to get the resulting artifacts all tidied up and organized.

Throughout this process, we’re going to make extensive use of a trick for automating interactions with virtual machines: we will run QEMU processes from ruby scripts. All of these will be run using the -nographic directive, which (as mentioned in the launch-qemu blueprint) will cause the standard input and output streams from the QEMU process to be mapped to a serial device in the virtual machine. Then we’ll use the ruby expect library to interact with the QEMU virtual machine through that serial console — everything sent to the standard input of the child process will effectively be typed into the serial console of the virtual machine, and all of the output produced by those commands will show up on the standard output of the child process. Ruby expect provides a facility similar to the expect TCL program that is a part of the base LB Linux system and used by the DejaGnu test framework; I’m using the ruby library, rather than expect per se, because I’m way more comfortable with ruby than TCL.

Using expect like this is a bit verbose, but not complicated. Basically, expect lets you tell ruby to examine all of the output arriving from some stream and wait until a specific pattern shows up. Once it does, you can examine what was written and respond accordingly. Mostly, we’re just detecting shell prompts and then running the next command in sequence.

The first time we do this will be to run the host side of the CBL process: we’ll run the host-system QEMU, copy a bunch of stuff to the virtual machine, and then use the serial console to run litbuild and run the host-side script.

If that process completes successfully, the script will copy the kernel and the build log files back to the computer where the script is running, and shut down the VM.

Then we’ll run a second QEMU process that is essentially the same as the one found in the launch-qemu blueprint; that will run the target side of the CBL process without any manual (or automated) interaction.

That’s all there is to CBL itself, but as mentioned earlier there are a few other things I like to do after the build is complete — and, as always, I like to avoid doing manual work whenever possible!

Since I want to be able to boot the resulting LBL system, and some builds don’t set up a boot loader, we’ll want to copy the kernel file out of the target system so we can use QEMU as a boot loader. And as with the default build process, we might as well copy the log files from the target side of the build out as well, so we have all the build logs consolidated in a single location in case we want to review them later on. This will be done using another expect-driven host-architecture QEMU.

A number of packages don’t have their test suites run during the CBL process, because of idiosyncratic issues described in their respective blueprints. So we’ll then use a third expect-driven QEMU to launch the target system and run the blueprint that will repeat those builds with the test suites.

And, last, since we use a copy-on-write image format for all of these QEMU processes, the final disk image that contains the full Little Blue Linux system will be a lot larger than it really needs to be, so we will remaster it using a fourth expect-driven QEMU process.

As with everything else in CBL — if any of that does not sound good to you, change it! This is your system. You can do whatever you want with it.

5. The Host-Side Build

The first thing we need to do is set up disk image files that will be used by QEMU. For the host system, we’ll create an image that will be used for swap storage, and a working image based on the provided lbl.qcow2 image — that working image will be used as the root filesystem during the host-side build, so that the original lbl image will not be affected at all by the build (and can be a read-only file if desired).

Commands:
pushd host-system
qemu-img create -f qcow2 swap.qcow2 50G
qemu-img create -f qcow2 -b lbl.qcow2 -F qcow2 lbl-work.qcow2
popd

We’ll also need images for what will become the root filesystem for the target system, and a swap volume for the target.

Commands:
mkdir -p target-system
pushd target-system
qemu-img create -f qcow2 swap.qcow2 50G
qemu-img create -f qcow2 cbl-initial.qcow2 50G
popd

Some of the normal CBL configuration parameters — especially those dealing with directories to be used during the build — should always be set to specific values for this automated build, ignoring both the normal default values and whatever other values are specified earlier in the configuration file. You may want to make some adjustments here — for example, you may want to modify the hostname or domain name used in the build. Rather than modifying the cbl-configuration.sh script in place, we’ll concatenate a second file with overriding values to it and put the result in a new full-config.sh script; full-config.sh will be the configuration file actually used during the build.

File override-cfg:
export TARGET_BRIDGE="manual"
export LOGIN="lbl"
export HDIR="/home/lbl"
export TARFILE_DIR="${HDIR}/materials"
export PATCH_DIR="${HDIR}/materials"
export BASE_DIR="${HDIR}/work"
export QEMU_IMG_DIR="${BASE_DIR}"
export CROSSTOOLS="${BASE_DIR}/crosstools"
export SYSROOT="${BASE_DIR}/sysroot"
export WORK_SITE="${BASE_DIR}/build"
export LOGFILE_DIR="${BASE_DIR}/logs"
export SCRIPT_DIR="${BASE_DIR}/scripts"
export DOCUMENT_DIR="${BASE_DIR}/docs"
Commands:
cat cbl-configuration.sh override-cfg > full-config.sh

For the host-system QEMU, we’ll use user-mode networking. This is a pretty limited facility — much more limited than virtual networking with a bridge and TAP interfaces as described in the setup-virtual-network blueprint — but does not require any elevated privileges and provides a handy TFTP server so files can be copied to and from the virtual machine without any extra work. Here, we’re using tftp=transfer, which will permit TFTP to copy files to or from the transfer directory.

Notice the use of the -nographic directive — this is key to interacting efficiently with the virtual machine. The other arguments are pretty typical.

File run-host-qemu:
#!/bin/bash
export TAPDEV=$(ip -o link show master bridgevirt | \
  grep 'state DOWN' | head -n 1 | awk '{print $2}' | sed 's@:$@@')
export NUM=$(printf %02d ${TAPDEV#tapvirt})
qemu-system-aarch64 \
  -kernel host-system/kernel \
  -append "root=/dev/vda1 \
    console=ttyAMA0 ro init=/sbin/init" \
  -machine type=virt \
  -cpu cortex-a57 -smp cpus=8 \
  -m size=32768 \
  -drive file=host-system/lbl-work.qcow2,index=0,media=disk \
  -drive file=host-system/swap.qcow2,index=1,media=disk \
  -drive file=target-system/cbl-initial.qcow2,index=2,media=disk \
  -netdev user,id=net0,tftp=transfer \
  -device virtio-net-device,netdev=net0 \
  -nographic
Commands:
chmod 775 run-host-qemu

Now let’s write the ruby script that will launch QEMU and drive it with expect. I like to see what’s going on in the host QEMU system, so I set $expect_verbose in the script. That causes all the console output to be written to the standard output of the script, rather than swallowed silently; we’ll direct it all to a log file.

File run-host-build:
#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-host-qemu')

Once QEMU is running, we need to wait for the network in the virtual machine to be available. We can tell that that’s the case by looking at the kernel messages written out through the serial console and noticing when the default route has been added.

File run-host-build (continued):
r_io.expect(/eth0: adding default route via/)

Now we’re going to log in as root. Depending on how the boot process goes, we may have gotten a login prompt from the getty process running on the console already, or we might not; if we just hit return, getty will repeat its prompt, so we’ll do that.

Everything in the run-host-build script is designed to be idempotent. That is, if any command has already run and set things up as desired, it won’t be repeated in a way that will cause problems. So if the host build crashes partway through, you might be able to just run this script again and have it pick up where it left off. It’s hard to make any guarantees about that, but it’s at least worth a try.

File run-host-build (continued):
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)

Interacting with the host QEMU console through the expect facility is a little clumsy! Every command needs to be typed into the console through the I/O object that has been mapped to QEMU’s standard input, and after each command we need to call expect on the I/O where QEMU’s standard output is showing up. Otherwise, we can’t be sure that the script will wait long enough for the VM to process the command and provide a new shell prompt. Because of this clumsiness, I prefer to do as little interaction through that facility as I can get away with.

Accordingly, let’s write a couple more scripts that we can then run in the QEMU console.

First, we’ll want a script that sets up the disk volumes for the build: set up the swap volume if it’s not already set up (again, we want this to be idempotent), enable it, create a filesystem for the target system, mount it at the correct location, and make sure the lbl user can write to it.

File setup_volumes.sh:
#!/bin/bash
blkid | grep /dev/vdb | grep -q swap || \
  mkswap /dev/vdb
swapon /dev/vdb
cd /home/lbl
source full-config.sh
mkdir -p $SYSROOT
if [ ! -b /dev/vdc1 ]
then
  echo 'start=2048, type=83' | sfdisk /dev/vdc
fi
blkid | grep /dev/vdc1 | grep -q lblroot || \
  mkfs.ext4 -O ext_attr -L lblroot /dev/vdc1
mount /dev/vdc1 $SYSROOT
chown -R lbl:lbl $BASE_DIR
sync

You might notice the sync command there. We’ll run sync a few more times as well. If something goes wrong and the run-host-build script crashes, the QEMU process will get a kill signal as the ruby parent process is cleaned up. If the virtual machine has dirty pages that have not been written to the disk image files, this could potentially leave the filesystem in a bad or unexpected state. While I was writing and testing this blueprint, I had a lot of trouble figuring out why commands run through expect didn’t seem to have been completely processed, and had left lock files on the system that caused problems when restarting. Once I figured out that it was because QEMU was being killed before it flushed any buffers, it was trivial to fix by adding a sync command.

And second, we want a script that we can run as the lbl user to generate the CBL scripts and run the top-level script it produces.

File run_host_cbl.sh:
#!/bin/bash
source full-config.sh
if [ -f $SCRIPT_DIR/cbl.sh ]
then
  echo "CBL scripts already generated, resuming build"
else
  cd cbl
  lb cbl
fi
cd $SCRIPT_DIR
sync
./cbl.sh
sync
cd $DOCUMENT_DIR
asciidoctor -a toc=left -a toclevels=4 -a sectnumlevels=4 cbl.adoc
sync

Now let’s get everything onto the virtual host. Since we’ll be using TFTP for all the file copies, and it doesn’t provide any way to do recursive copies, we’ll create a tar file that contains everything we want to copy in to the VM. The -h directive tells tar to archive the files and directories that symbolic links point to, rather than the links themselves — when I set up build directories for use with this process, I often have symbolic links for the cbl and materials directories.

Commands:
mkdir transfer
tar -c -h -f transfer/copyin.tar cbl full-config.sh setup_volumes.sh \
  run_host_cbl.sh materials

The curl program that is part of the basic LB Linux system can act as a TFTP client, so we can use it to copy the file in. When using user-mode networking, the host system is accessible by default at 10.0.2.2.

File run-host-build (continued):
w_io.puts('cd /home/lbl')
r_io.expect(/^#/)
w_io.puts('curl -O tftp://10.0.2.2/copyin.tar')
r_io.expect(/^#/)
w_io.puts('tar -x -f copyin.tar')
r_io.expect(/^#/)
w_io.puts('rm -f copyin.tar')
r_io.expect(/^#/)

Everything we need for the build is now on the virtual machine, so we can run those handy scripts we just wrote to perform the host build.

File run-host-build (continued):
w_io.puts('chmod 775 *.sh')
r_io.expect(/^#/)
w_io.puts('./setup_volumes.sh')
r_io.expect(/^#/)
w_io.puts('sudo -u lbl ./run_host_cbl.sh > HOST-BUILD.LOG')
r_io.expect(/^#/)
w_io.puts('sync')
r_io.expect(/^#/)

If everything went well, we have a complete scaffolding environment on the target volume. If that’s the case, we should copy the scaffolding kernel and build logs off of the target volume, power down the host virtual machine, and proceed with the target side of the build.

If the host-side build did not run to completion, we’ll skip copying the scaffolding kernel and just power down the host virtual machine: the build failed, so the resulting system needs to be inspected to see how to proceed.

We can determine which of these happened by looking to see if the output includes the TOTAL SUCCESS line.

File run-host-build (continued):
cmdline = ["grep -q '^TOTAL SUCCESS$' HOST-BUILD.LOG",
           '&& echo "RESULT: SUCCESS"',
           '|| echo "RESULT: FAILURE"'].join(' ')
w_io.puts(cmdline)
result = r_io.expect(/^RESULT: ([A-Z]+)\r/)[1]
if result == 'SUCCESS'
  w_io.puts('sync')
  r_io.expect(/^#/)
  w_io.puts('pushd work/sysroot/scaffolding/boot')
  r_io.expect(/^#/)
  w_io.puts('curl -T kernel tftp://10.0.2.2/')
  r_io.expect(/^#/)
  w_io.puts('popd')
  r_io.expect(/^#/)
  w_io.puts('tar -c -f host-build-logs.tar HOST-BUILD.LOG work/logs')
  r_io.expect(/^#/)
  w_io.puts('curl -T host-build-logs.tar tftp://10.0.2.2/')
  r_io.expect(/^#/)
  w_io.puts('curl -T work/docs/cbl.html tftp://10.0.2.2/')
  r_io.expect(/^#/)
end
w_io.puts('poweroff')

There’s one issue with just running poweroff and then waiting for the QEMU process to terminate: as long as the root console is active, the s6 shutdown process will hang. So we need to exit the root console as well.

File run-host-build (continued):
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

Now that we’ve got that script written, we just need to run it.

Commands:
chmod 775 run-host-build
./run-host-build

If the host build succeeds, we’ll find the scaffolding kernel in the transfer directory, so we can put it where it belongs (and also take care of the other files that have been copied out to transfer). If not, we should bail out!

Commands:
if [ ! -f transfer/kernel ]; \
  then \
  echo "Host-side build failed, please inspect."; \
  exit 1; \
  fi
echo "Host build complete: $(date +%s)" >> BUILD-INFO
mv transfer/kernel target-system
rm -f transfer/copyin.tar
mkdir -p log/host
mv transfer/cbl.html .
pushd log/host
tar -x -f ../../transfer/host-build-logs.tar
popd
rm -f transfer/host-build-logs.tar

6. The Target-Side Build

To run the target side of the CBL process, we need to run another QEMU process. This will be essentially the same command that is used in the default (real-computer host, QEMU-emulated target) CBL configuration, so this command is basically a copy of the one in the launch-qemu blueprint. If I were a better person, I’d refactor to have no duplication between blueprints.

As with the launch-qemu blueprint, if you haven’t built the host-prerequisites and are not using an LB Linux system, you’ll need to either build and install the yoyo program before you do this, or remove the reference to yoyo from this blueprint.

Commands:
pushd target-system
qemu-img create -f qcow2 -b cbl-initial.qcow2 -F qcow2 cbl.qcow2
yoyo qemu-system-x86_64 \
  -kernel kernel \
  -append "root=/dev/sda1 \
  console=ttyS0 \
  ro init=/scaffolding/sbin/init" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=cbl.qcow2,index=0,media=disk \
  -drive file=swap.qcow2,index=1,media=disk \
  -nographic
popd
echo "Target build complete: $(date +%s)" >> BUILD-INFO

Once that QEMU command terminates, the entire CBL process is complete! The target-system/cbl.qcow2 disk image will contain the freshly-built Little Blue Linux system.

7. Post-Build Work

7.1. Copying Kernel and Logs From The Target

If a boot loader is installed on the Little Blue Linux system we just built, you’ll be able to run the resulting system via QEMU immediately. It’s not good to assume that a boot loader is installed, though — for many architectures, the CBL project doesn’t have any blueprints for setting up a boot loader, and the easiest way to boot the system is to provide a kernel file (along with the disk images) to a QEMU process. Also, even when the system has a boot loader, processes like this fully-automated build blueprint assume that the system should be booted by providing a kernel file to QEMU, because consistency across architectures is such a helpful simplification.

So, just as the default build does, we’re going to copy the kernel file out of the target image — and we’ll copy the log files from the target build as well.

Commands:
cat run-host-qemu | sed -e 's@cbl-initial.qcow2@cbl.qcow2@' > \
  run-host-qemu-final
chmod 775 run-host-qemu-final

This works just like run-host-build: ruby, expect, QEMU, -nographic, and so on.

File post-build-1:
#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-host-qemu-final')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)

We did most of the necessary setup earlier, so all we need to do now is mount the target system image, then copy the kernel file out of the target image — similar to the way we copied things into the host system QEMU earlier, only in the other direction.

File post-build-1 (continued):
w_io.puts('mount /dev/vdc1 /mnt')
r_io.expect(/^#/)
w_io.puts('pushd /mnt/boot')
r_io.expect(/^#/)
w_io.puts('curl -T kernel-* tftp://10.0.2.2/')
r_io.expect(/^#/)
w_io.puts('popd')
r_io.expect(/^#/)

There are target-side log files in a couple of places — in the /root/cbl-logs directory, and in all the package-user home directories. To make it easy to copy them all out of the target system, we can collect them all in a single place.

The tar command pipeline used to collect all the package user log directories and their contents is somewhat opaque, but it’s an idiom I’ve seen several times. I seem to recall that the first time I saw it was on the tar man or info page, but it doesn’t seem to be present in the latest release.

File post-build-1 (continued):
w_io.puts('mkdir -p /tmp/target/pkgusr')
r_io.expect(/^#/)
w_io.puts('cp -a /mnt/root/cbl-logs /tmp/target')
r_io.expect(/^#/)
w_io.puts('cd /mnt/usr/src')
r_io.expect(/^#/)
logcopy = ['tar -c -f - */logs |',
           '(cd /tmp/target/pkgusr; tar -x -f -)'].join(' ')
w_io.puts(logcopy)
r_io.expect(/^#/)

As with copying things into the host system earlier, TFTP lacks any convenient way to transfer multiple files, especially when the file names are not predictable. So we will tar up all the log files, and transfer them that way.

File post-build-1 (continued):
w_io.puts('cd /tmp')
r_io.expect(/^#/)
w_io.puts('tar -c -f target-logs.tar target')
r_io.expect(/^#/)
w_io.puts('curl -T target-logs.tar tftp://10.0.2.2/')
r_io.expect(/^#/)
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

As before, we need to run the script; we may as well collect timings for this stage, too.

Commands:
chmod 775 post-build-1
./post-build-1 >> log/post-build-1.log 2>&1
echo "Post-Build Copy complete: $(date +%s)" >> BUILD-INFO
mv transfer/kernel-* target-system
pushd log
tar -x -f ../transfer/target-logs.tar
popd
rm -f transfer/target-logs.tar

7.2. Clean Up Scaffolding and Rebuild Untested Packages

Now we want to run the target system via QEMU, to tear down the (no-longer-needed) scaffolding; and, since we are now running the full Little Blue Linux system, we can rebuild the packages that didn’t have their tests run during the original target build. Unlike the QEMU process used during the target-side build, this time we want the system to be accessible over the network, so we add the same user-mode networking options that the original host QEMU process used.

File run-target-qemu:
#!/bin/bash
qemu-system-x86_64 \
  -kernel target-system/kernel-* \
  -append "root=/dev/sda1 \
    console=ttyS0 \
    ro" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=target-system/cbl.qcow2,index=0,media=disk \
  -drive file=target-system/swap.qcow2,index=1,media=disk \
  -netdev user,id=net0,tftp=transfer \
  -device e1000,netdev=net0 \
  -nographic
Commands:
chmod 775 run-target-qemu

By this time, controlling the Little Blue Linux system by using the expect library is getting to be almost second nature!

File post-build-2:
#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-target-qemu')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)
w_io.puts('rm -rf /scaffolding')
r_io.expect(/^#/)
w_io.puts('cd cbl')
r_io.expect(/^#/)
w_io.puts('lb rebuild-untested-packages')
r_io.expect(/^#/)
w_io.puts('/tmp/build/scripts/rebuild-untested-packages.sh')
r_io.expect(/^#/)

I haven’t had any problems with test suites hanging at this point, so I’m not spending any time or energy trying to deal with issues here. I hope that doesn’t change!

File post-build-2 (continued):
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

Run the script we just wrote, collect timings…​

Commands:
chmod 775 post-build-2
./post-build-2 >> log/post-build-2.log 2>&1
echo "Rebuild untested packages complete: $(date +%s)" >> BUILD-INFO

7.3. Remaster the QEMU Image and Package the Build

The disk image format most commonly used by QEMU is "qcow2." The "qcow" part stands for "QEMU Copy-On-Write"; the "2" just indicates that this is the second QEMU Copy-On-Write image format. We’ll talk more about "copy on write" a little later on. An important thing about qcow2 files is that when they are initially created, they take up very little space. As data blocks are written to a virtual disk corresponding to a qcow2 file, the qcow2 file grows as needed to contain the modified blocks.

When we use QEMU for CBL builds, we always specify a size of fifty gigabytes for the disk images we create, because storage is cheap and plentiful and there’s no reason to limit the disk size. Because qcow2 allocates space lazily, the image files initially take only about 200 kilobytes. When the scaffolding (including the source code archives) is added to an image file, it grows to about 1.6 gigabytes. The real increase in space usage, though, comes when the target-side build is executed: the build process creates a lot of files that are subsequently deleted — like the source code directory structures for all packages built during the process — which expands the qcow2 image file to about 25 to 30 gigabytes. After the build is complete, it is usually desirable to rebuild the root filesystem so the disk image is only as large as it needs to be to hold the final LB Linux system: generally I find this is between four and five gigabytes.
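
If you want to see this for yourself at any point during the process, you can compare an image’s virtual size to the space it actually occupies on disk — for example:

# The "virtual size" is the 50G we asked for; the "disk size" reflects only
# the blocks that have actually been written.
qemu-img info target-system/cbl.qcow2
du -h target-system/cbl.qcow2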

Note
If you’d like to have a smaller image, there are some easy things you can do to reduce the space used more than we’re doing here! If you don’t need to be able to rebuild any packages that are already present on the system, you can remove the retained source code archives and patches from the package user home directories by running something like rm -f /usr/src/*/{src,patches}/*. You can also strip symbols from the binary programs and libraries on the system; this will mean you can’t use gdb to debug those programs, but if you don’t ever use gdb or anything similar to it, that may not matter. The strip-binaries blueprint talks about how to do this.
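
For the symbol-stripping part, a rough sketch run as root on the target system might look like this — an illustration only, not the strip-binaries blueprint’s exact procedure:

# Illustration only; see the strip-binaries blueprint for the real procedure.
# Strip unneeded symbols from programs and shared libraries. Errors from
# files that are not ELF objects are simply discarded.
find /usr/bin /usr/sbin -type f \
  -exec strip --strip-unneeded {} \; 2>/dev/null
find /usr/lib -type f -name '*.so*' \
  -exec strip --strip-unneeded {} \; 2>/dev/null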

We’re also going to package everything from this build into a consistent and convenient form, so it’s easy to keep track of and use later on. The package structure I use is just a directory with a bunch of files in it — you could tar this up into a single archive file if you want, but I don’t bother. In this blueprint I’m just putting this into a new subdirectory called package — I move it elsewhere separately.

Commands:
mkdir package
cp BUILD-INFO package
cp cbl.html package
cp target-system/kernel-* package

The BUILD-INFO and rendered cbl.html book go into the package directory, for pretty obvious reasons. The final target system kernel also goes there, since it’s needed to boot the system unless a boot loader is installed.

Commands:
find log -name '*.lz' -exec lzip -d {} \;
tar -c -f - log | lzip -9 > package/log.tar.lz

I also keep all the log files produced throughout the CBL process. Since I pack these up into a compressed tar archive, there’s no point in having the log files compressed within the archive file as well, so we decompress them before constructing the log archive.

I also always produce a run script that is used to launch the LB Linux system using QEMU. This is similar to the other QEMU run scripts we’ve generated and used already, but with some additional complexity. One important difference is that this script uses TAP networking rather than the user-mode networking we have used here. If you’re not familiar with TAP networking, you might want to look at the setup-virtual-network blueprint.

File run-lbl:
#!/bin/bash
if [ ! -f lbl-complete.qcow2 ]
then
    lrzip -d lbl-complete.qcow2.lrz
fi
if [ ! -f lbl.qcow2 ]
then
    qemu-img create -f qcow2 -F qcow2 -b lbl-complete.qcow2 lbl.qcow2
fi
if [ ! -f swap.qcow2 ]
then
    qemu-img create -f qcow2 swap.qcow2 50G
fi

The canonical system image is lbl-complete.qcow2, which I compress using lrzip in the build package to save space — lrzip generally gets the four-to-five gigabyte image file down to about 1.2 gigabytes. If there is no uncompressed version of the file present when running the system, the compressed version is decompressed so it can be used.

As we did earlier, during the build, we can keep the actual target system image read-only and use a second, possibly ephemeral, image file for any virtual machine run using that image. If no lbl.qcow2 image file exists when running the VM, we simply create one. Similarly, I don’t bother to keep a swap.qcow2 disk image around as part of the packaged build, so I create a new one, if needed, any time the system is booted.

File run-lbl (continued):
export TAPDEV=$(ip -o link show master bridgevirt | \
  grep 'state DOWN' | head -n 1 | awk '{print $2}' | sed 's@:$@@')
export NUM=$(printf %02d ${TAPDEV#tapvirt})
qemu-system-x86_64 \
  -kernel kernel-* \
  -append "root=/dev/sda1 \
    console=ttyS0 \
    ro" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=lbl.qcow2,index=0,media=disk \
  -drive file=swap.qcow2,index=1,media=disk \
  -netdev tap,id=net0,ifname=$TAPDEV,script=no,downscript=no \
  -device e1000,netdev=net0,mac=aa:bb:cc:dd:ee:$NUM \
  -nographic

Hopefully nothing else in the run script is surprising.

Commands:
cp run-lbl package/run
chmod 555 package/run
qemu-img create -f qcow2 package/lbl-complete.qcow2 50G
pushd target-system
qemu-img create -f qcow2 -F qcow2 -b cbl.qcow2 lblroot.qcow2
qemu-img create -f qcow2 -F qcow2 -b cbl.qcow2 lblsrc.qcow2
popd

We need to create the lbl-complete.qcow2 image file — this will be the destination for the remastering operation.

The cbl.qcow2 image is the target system image that we are remastering here, and we want to use it in two different ways, without worrying about those different uses conflicting with each other. First, we want to use it as the root filesystem of the QEMU VM where we will execute the remastering operation; and, second, we want to use it as the source for the filesystem copy.

Why Not Just Copy From The Root Filesystem?

I don’t like to use the active root filesystem as the source for a filesystem copy operation, because there are invariably some files that are open for modification at the time you copy the files — log files, if nothing else — so you may get mangled or corrupted file content copied over. If you’re copying the root filesystem directly it’s also easy to get temporary files and server-specific files like sshd server keys. My preference is always to use a read-only filesystem as the source when making a copy, just to be sure that everything is in a consistent state.

This is where the copy-on-write aspect of the qcow2 format becomes really useful: we’ll create two image files, both based on the cbl.qcow2 image. The first one, lblroot.qcow2, will be the root filesystem volume for the QEMU virtual machine, and the second one, lblsrc.qcow2, will be the source for the filesystem copy operation.

The script that runs QEMU for this remastering operation will be just like the other target-system QEMU, except that we will provide it with the source and destination image files as well as the root filesystem and swap volumes.

File run-target-for-remastering:
#!/bin/bash
qemu-system-x86_64 \
  -kernel target-system/kernel-* \
  -append "root=/dev/sda1 \
    console=ttyS0 \
    ro" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=target-system/lblroot.qcow2,index=0,media=disk \
  -drive file=target-system/swap.qcow2,index=1,media=disk \
  -drive file=target-system/lblsrc.qcow2,index=2,media=disk \
  -drive file=package/lbl-complete.qcow2,index=3,media=disk \
  -netdev user,id=net0,tftp=transfer \
  -device e1000,netdev=net0 \
  -nographic
Commands:
chmod 775 run-target-for-remastering

The remastering operation itself will be done by yet another ruby script using expect to drive the QEMU virtual machine through the serial console. The first part of this script is just like it always is.

File post-build-3:
#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-target-for-remastering')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)

As with every time we set up a new storage volume, we need to create a partition table and then create a filesystem on the partition we just created.

File post-build-3 (continued):
cmdline = ["echo 'start=2048, type=83' |",
           'sfdisk /dev/sdd'].join(' ')
w_io.puts(cmdline)
r_io.expect(/^#/)
cmdline = ['mkfs.ext4 -O ext_attr -L lblroot',
           '/dev/sdd1'].join(' ')
w_io.puts(cmdline)
r_io.expect(/^#/)

Remastering the filesystem is not complicated: create directories to serve as mount points for the source and destination volumes, mount those volumes, and then use tar to copy the source filesystem contents to the destination filesystem.

The tar command pipeline is the same trick for copying a full directory structure we used for collecting log files earlier. The other options I’m using here are the same ones I always use for complete file system backups.

File post-build-3 (continued):
w_io.puts('mkdir /tmp/lblsrc /tmp/lbldest')
r_io.expect(/^#/)
w_io.puts('mount -o ro /dev/sdc1 /tmp/lblsrc')
r_io.expect(/^#/)
w_io.puts('mount /dev/sdd1 /tmp/lbldest')
r_io.expect(/^#/)
w_io.puts('cd /tmp/lblsrc')
r_io.expect(/^#/)
cmdline = ['tar --create --numeric-owner --file=- --acl --xattrs . |',
           '(cd /tmp/lbldest; tar --extract --numeric-owner',
           '--file=- --acl --xattrs)'].join(' ')
w_io.puts(cmdline)
r_io.expect(/^#/)
w_io.puts('sync')
r_io.expect(/^#/)

That’s all we need to do! The LB Linux filesystem has been written to a fresh new image, and is only as big as it needs to be. We can shut down the virtual machine.

File post-build-3 (continued):
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)
Commands:
chmod 775 post-build-3
./post-build-3
Caution
The destination filesystem does not have a boot loader installed on it! This does not matter if you’re using QEMU as the boot loader, but if you want to be able to boot the system without anything beyond the root filesystem image file, you will need to reinstall the boot loader on it.
Commands:
lrzip -L9 -D -f package/lbl-complete.qcow2 || echo nevermind

As previously mentioned, I like to use lrzip to compress the root filesystem image in the package directory.

And now the build is completely complete!

8. Complete text of files

8.1. override-cfg

export TARGET_BRIDGE="manual"
export LOGIN="lbl"
export HDIR="/home/lbl"
export TARFILE_DIR="${HDIR}/materials"
export PATCH_DIR="${HDIR}/materials"
export BASE_DIR="${HDIR}/work"
export QEMU_IMG_DIR="${BASE_DIR}"
export CROSSTOOLS="${BASE_DIR}/crosstools"
export SYSROOT="${BASE_DIR}/sysroot"
export WORK_SITE="${BASE_DIR}/build"
export LOGFILE_DIR="${BASE_DIR}/logs"
export SCRIPT_DIR="${BASE_DIR}/scripts"
export DOCUMENT_DIR="${BASE_DIR}/docs"

8.2. post-build-1

#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-host-qemu-final')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)
w_io.puts('mount /dev/vdc1 /mnt')
r_io.expect(/^#/)
w_io.puts('pushd /mnt/boot')
r_io.expect(/^#/)
w_io.puts('curl -T kernel-* tftp://10.0.2.2/')
r_io.expect(/^#/)
w_io.puts('popd')
r_io.expect(/^#/)
w_io.puts('mkdir -p /tmp/target/pkgusr')
r_io.expect(/^#/)
w_io.puts('cp -a /mnt/root/cbl-logs /tmp/target')
r_io.expect(/^#/)
w_io.puts('cd /mnt/usr/src')
r_io.expect(/^#/)
logcopy = ['tar -c -f - */logs |',
           '(cd /tmp/target/pkgusr; tar -x -f -)'].join(' ')
w_io.puts(logcopy)
r_io.expect(/^#/)
w_io.puts('cd /tmp')
r_io.expect(/^#/)
w_io.puts('tar -c -f target-logs.tar target')
r_io.expect(/^#/)
w_io.puts('curl -T target-logs.tar tftp://10.0.2.2/')
r_io.expect(/^#/)
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

8.3. post-build-2

#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-target-qemu')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)
w_io.puts('rm -rf /scaffolding')
r_io.expect(/^#/)
w_io.puts('cd cbl')
r_io.expect(/^#/)
w_io.puts('lb rebuild-untested-packages')
r_io.expect(/^#/)
w_io.puts('/tmp/build/scripts/rebuild-untested-packages.sh')
r_io.expect(/^#/)
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

8.4. post-build-3

#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-target-for-remastering')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)
cmdline = ["echo 'start=2048, type=83' |",
           'sfdisk /dev/sdd'].join(' ')
w_io.puts(cmdline)
r_io.expect(/^#/)
cmdline = ['mkfs.ext4 -O ext_attr -L lblroot',
           '/dev/sdd1'].join(' ')
w_io.puts(cmdline)
r_io.expect(/^#/)
w_io.puts('mkdir /tmp/lblsrc /tmp/lbldest')
r_io.expect(/^#/)
w_io.puts('mount -o ro /dev/sdc1 /tmp/lblsrc')
r_io.expect(/^#/)
w_io.puts('mount /dev/sdd1 /tmp/lbldest')
r_io.expect(/^#/)
w_io.puts('cd /tmp/lblsrc')
r_io.expect(/^#/)
cmdline = ['tar --create --numeric-owner --file=- --acl --xattrs . |',
           '(cd /tmp/lbldest; tar --extract --numeric-owner',
           '--file=- --acl --xattrs)'].join(' ')
w_io.puts(cmdline)
r_io.expect(/^#/)
w_io.puts('sync')
r_io.expect(/^#/)
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

8.5. run-host-build

#!/usr/bin/env ruby
require 'pty'
require 'expect'
$expect_verbose = true
r_io, w_io, pid = PTY.spawn('./run-host-qemu')
r_io.expect(/eth0: adding default route via/)
w_io.puts
r_io.expect(/login:/)
w_io.puts('root')
r_io.expect(/^#/)
w_io.puts('cd /home/lbl')
r_io.expect(/^#/)
w_io.puts('curl -O tftp://10.0.2.2/copyin.tar')
r_io.expect(/^#/)
w_io.puts('tar -x -f copyin.tar')
r_io.expect(/^#/)
w_io.puts('rm -f copyin.tar')
r_io.expect(/^#/)
w_io.puts('chmod 775 *.sh')
r_io.expect(/^#/)
w_io.puts('./setup_volumes.sh')
r_io.expect(/^#/)
w_io.puts('sudo -u lbl ./run_host_cbl.sh > HOST-BUILD.LOG')
r_io.expect(/^#/)
w_io.puts('sync')
r_io.expect(/^#/)
cmdline = ["grep -q '^TOTAL SUCCESS$' HOST-BUILD.LOG",
           '&& echo "RESULT: SUCCESS"',
           '|| echo "RESULT: FAILURE"'].join(' ')
w_io.puts(cmdline)
result = r_io.expect(/^RESULT: ([A-Z]+)\r/)[1]
if result == 'SUCCESS'
  w_io.puts('sync')
  r_io.expect(/^#/)
  w_io.puts('pushd work/sysroot/scaffolding/boot')
  r_io.expect(/^#/)
  w_io.puts('curl -T kernel tftp://10.0.2.2/')
  r_io.expect(/^#/)
  w_io.puts('popd')
  r_io.expect(/^#/)
  w_io.puts('tar -c -f host-build-logs.tar HOST-BUILD.LOG work/logs')
  r_io.expect(/^#/)
  w_io.puts('curl -T host-build-logs.tar tftp://10.0.2.2/')
  r_io.expect(/^#/)
  w_io.puts('curl -T work/docs/cbl.html tftp://10.0.2.2/')
  r_io.expect(/^#/)
end
w_io.puts('poweroff')
r_io.expect(/^#/)
w_io.puts('exit')
Process.wait(pid)

8.6. run-host-qemu

#!/bin/bash
export TAPDEV=$(ip -o link show master bridgevirt | \
  grep 'state DOWN' | head -n 1 | awk '{print $2}' | sed 's@:$@@')
export NUM=$(printf %02d ${TAPDEV#tapvirt})
qemu-system-aarch64 \
  -kernel host-system/kernel \
  -append "root=/dev/vda1 \
    console=ttyAMA0 ro init=/sbin/init" \
  -machine type=virt \
  -cpu cortex-a57 -smp cpus=8 \
  -m size=32768 \
  -drive file=host-system/lbl-work.qcow2,index=0,media=disk \
  -drive file=host-system/swap.qcow2,index=1,media=disk \
  -drive file=target-system/cbl-initial.qcow2,index=2,media=disk \
  -netdev user,id=net0,tftp=transfer \
  -device virtio-net-device,netdev=net0 \
  -nographic

8.7. run-lbl

#!/bin/bash
if [ ! -f lbl-complete.qcow2 ]
then
    lrzip -d lbl-complete.qcow2.lrz
fi
if [ ! -f lbl.qcow2 ]
then
    qemu-img create -f qcow2 -F qcow2 -b lbl-complete.qcow2 lbl.qcow2
fi
if [ ! -f swap.qcow2 ]
then
    qemu-img create -f qcow2 swap.qcow2 50G
fi
export TAPDEV=$(ip -o link show master bridgevirt | \
  grep 'state DOWN' | head -n 1 | awk '{print $2}' | sed 's@:$@@')
export NUM=$(printf %02d ${TAPDEV#tapvirt})
qemu-system-x86_64 \
  -kernel kernel-* \
  -append "root=/dev/sda1 \
    console=ttyS0 \
    ro" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=lbl.qcow2,index=0,media=disk \
  -drive file=swap.qcow2,index=1,media=disk \
  -netdev tap,id=net0,ifname=$TAPDEV,script=no,downscript=no \
  -device e1000,netdev=net0,mac=aa:bb:cc:dd:ee:$NUM \
  -nographic

8.8. run-target-for-remastering

#!/bin/bash
qemu-system-x86_64 \
  -kernel target-system/kernel-* \
  -append "root=/dev/sda1 \
    console=ttyS0 \
    ro" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=target-system/lblroot.qcow2,index=0,media=disk \
  -drive file=target-system/swap.qcow2,index=1,media=disk \
  -drive file=target-system/lblsrc.qcow2,index=2,media=disk \
  -drive file=package/lbl-complete.qcow2,index=3,media=disk \
  -netdev user,id=net0,tftp=transfer \
  -device e1000,netdev=net0 \
  -nographic

8.9. run-target-qemu

#!/bin/bash
qemu-system-x86_64 \
  -kernel target-system/kernel-* \
  -append "root=/dev/sda1 \
    console=ttyS0 \
    ro" \
  -machine type=pc-i440fx-2.8,accel=kvm,usb=off \
  -cpu host -smp cpus=10 \
  -m size=32768 \
  -drive file=target-system/cbl.qcow2,index=0,media=disk \
  -drive file=target-system/swap.qcow2,index=1,media=disk \
  -netdev user,id=net0,tftp=transfer \
  -device e1000,netdev=net0 \
  -nographic

8.10. run_host_cbl.sh

#!/bin/bash
source full-config.sh
if [ -f $SCRIPT_DIR/cbl.sh ]
then
  echo "CBL scripts already generated, resuming build"
else
  cd cbl
  lb cbl
fi
cd $SCRIPT_DIR
sync
./cbl.sh
sync
cd $DOCUMENT_DIR
asciidoctor -a toc=left -a toclevels=4 -a sectnumlevels=4 cbl.adoc
sync

8.11. setup_volumes.sh

#!/bin/bash
blkid | grep /dev/vdb | grep -q swap || \
  mkswap /dev/vdb
swapon /dev/vdb
cd /home/lbl
source full-config.sh
mkdir -p $SYSROOT
if [ ! -b /dev/vdc1 ]
then
  echo 'start=2048, type=83' | sfdisk /dev/vdc
fi
blkid | grep /dev/vdc1 | grep -q lblroot || \
  mkfs.ext4 -O ext_attr -L lblroot /dev/vdc1
mount /dev/vdc1 $SYSROOT
chown -R lbl:lbl $BASE_DIR
sync

1. Actually running the CBL build in a CI pipeline is currently problematic, since any failure in the target-side build causes CBL to drop to a shell prompt. It would be better for the build to terminate and fail outright when it is running fully automated.