Wed 01 June 2016

Lautsprecher Teufel Build Process Improvements Part 2

Last month we wrote about the improvements we made to the firmware build system for the Lautsprecher Teufel Raumfeld multiroom speaker system.

We had got about as far as we could with improving the build-from-scratch time. It still took about an hour to produce any firmware images, which is a great improvement on 8 hours, but it's not enough to make continuous build+test of the firmware a reality.

The next step was to look at doing incremental builds on the build server. The majority of the work done at Raumfeld is on the 'core' modules, and we don't need to rebuild everything from the C compiler to the kernel every time someone commits to those. Developers have always done incremental builds to test their own changes locally, but these can sometimes break. The Buildroot project deliberately doesn't try to make incremental builds completely reliable, because handling every such case would make the code too complex. However, when the CI server starts to do incremental builds, you need a method that is completely reliable and predictable: there's no point speeding up the process of making a broken release.

Inspired by caching functionality built into Baserock, and also by the Maven build tool, we implemented a CMake module that allows us to share and reuse prebuilt artifacts. We considered other options for implementing caching, such as Apache Ivy, but in the end doing it in CMake won out because it meant we would only have to maintain the dependency information in one place.

There isn't really a de facto standard for artifact storage in the embedded software world at this point. Many larger projects use .deb or .rpm packaging to deal with binary artifacts but this brings in some extra complexity and requires ongoing work to maintain the packaging. In the Java world, Maven repositories are the standard (and Java developers sometimes mock us embedded folk for our primitive build tools). Raumfeld already had JFrog Artifactory set up for use by the Android and server-side Java development teams, so it was an obvious choice to use that for storing the artifacts as well.

Artifactory.cmake

The module that we wrote to integrate Artifactory with CMake is released as free software here. The actual implementation is rather specific to Raumfeld's use case, but I think it's still interesting: this is the first public implementation of binary artifact caching for CMake that I know of.

We puzzled for a while over how to 'conditionalise' CMake build commands based on whether a prebuilt artifact was found or not. What we wanted was a command that would take an existing build command and only call it if no prebuilt artifact was available. However, CMake's programming language is quite primitive - it wasn't originally intended to be a Turing-complete programming language at all - so there's no way to pass a build command as a parameter to another command.

The answer came from one of the managers at Raumfeld, who pointed out that in Maven each artifact corresponds to a directory containing a pom.xml file. We rearranged the toplevel build system along the same lines, using a CMakeLists.txt file in place of the pom.xml, which allowed us to pass around the name of the directory containing the artifact rather than the actual command needed to build it. The result was the artifactory_add_artifact() function, which checks for a prebuilt version of an artifact and then calls the CMake add_subdirectory() command.
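
In practice, the toplevel CMakeLists.txt ends up as little more than a series of artifactory_add_artifact() calls, one per artifact directory. The sketch below shows the rough shape of this; the project name and directory names are placeholders rather than the real Raumfeld layout, and the released module may take additional arguments not shown here.

    cmake_minimum_required(VERSION 3.0)
    project(firmware NONE)

    # Make the Artifactory.cmake module available to include().
    list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
    include(Artifactory)

    # Each artifact lives in its own directory with its own CMakeLists.txt,
    # so the directory name is all that needs to be passed around. The module
    # checks the Artifactory server for a suitable prebuilt artifact and then
    # calls add_subdirectory() on the directory.
    artifactory_add_artifact(buildroot-arm)
    artifactory_add_artifact(core-modules)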

The CMakeLists.txt file for a given artifact is responsible for detecting if it needs to generate build instructions, or if it just needs to generate instructions for unpacking and preparing the prebuilt artifact. That separation means that Artifactory.cmake itself doesn't need any special knowledge about how the artifacts are built or packaged. Artifactory.cmake just puts any suitable prebuilt artifact that it finds into a well-known location where the individual build instructions can look for it. It will also generate an artifactory-submit custom target that, when run, pushes artifacts from a different well-known location to the Artifactory server.
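
A per-artifact CMakeLists.txt might look roughly like the sketch below. The 'prebuilt' and 'submit' directory names stand in for the well-known locations described above, and the tarball handling is an assumption about how artifacts could be packaged, not a description of the real Raumfeld build instructions.

    set(prebuilt_tarball "${CMAKE_CURRENT_BINARY_DIR}/prebuilt/buildroot-arm.tar.gz")

    if(EXISTS "${prebuilt_tarball}")
      # Artifactory.cmake found a prebuilt artifact and placed it here, so the
      # only work left is to unpack it into the build tree.
      add_custom_target(buildroot-arm ALL
        COMMAND ${CMAKE_COMMAND} -E tar xzf "${prebuilt_tarball}"
        WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
    else()
      # No prebuilt artifact was found: build from source, then put the packaged
      # result where the artifactory-submit target will look for things to push.
      add_custom_target(buildroot-arm ALL
        COMMAND make -C "${CMAKE_CURRENT_SOURCE_DIR}/buildroot" O=${CMAKE_CURRENT_BINARY_DIR}/output
        COMMAND ${CMAKE_COMMAND} -E tar czf
                "${CMAKE_CURRENT_BINARY_DIR}/submit/buildroot-arm.tar.gz" output/images
        WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
    endif()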

Reusing built artifacts needs to be done carefully. We took the simplest and most conservative approach, which was to track the commit SHA1 of the buildroot.git repository as an Artifactory property and only reuse Buildroot artifacts that were built from exactly the same commit.
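
The commit can be captured at configure time with a standard CMake snippet along these lines; how the resulting value is attached to the artifact as a property is specific to Artifactory.cmake and isn't shown here.

    # Record which buildroot.git commit this build corresponds to. Only
    # prebuilt artifacts whose stored property matches this SHA1 are reused;
    # anything else triggers a build from source.
    execute_process(
      COMMAND git rev-parse HEAD
      WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}/buildroot"
      OUTPUT_VARIABLE BUILDROOT_COMMIT_SHA1
      OUTPUT_STRIP_TRAILING_WHITESPACE)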

Relocatable Buildroot

Buildroot wasn't originally designed with the idea that you can build things on one machine and then use them on another. In particular, the host tools are dynamically linked against the host C library, so you can't, for example, build them on Fedora 22 and then run them on Fedora 20. Since the problem we were solving was speeding up builds on the build server, we could just mandate that artifact reuse is only supported on the same OS that the build server runs.

The host tools also contain some hardcoded paths. At the time of writing, there is a series of patches available to help make them relocatable to anywhere in the filesystem. Those patches are apparently being reworked by the author, but we found the existing ones from July 2015 good enough for our purposes.

Separating the core modules from Buildroot

Until now, the Raumfeld modules had been built as Buildroot packages. Since the modules are where most development happens, a key optimisation was to separate them from Buildroot's build process.

They are now built from the toplevel build system using CMake's ExternalProject module. Getting these to build against the Buildroot trees took a little effort, but we didn't hit any major problems. Buildroot creates a toolchain file which, when passed to CMake using the CMAKE_TOOLCHAIN_FILE option, tells CMake where to find the right cross compiler and where to look for libraries and headers. This and the CMAKE_PREFIX_PATH option were enough to get the core modules building correctly against the Buildroot output.
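
A core module hooked in through ExternalProject looks roughly like this; the module name, the BUILDROOT_OUTPUT_DIR variable and the exact toolchain file path are placeholders (the path varies between Buildroot versions).

    include(ExternalProject)

    ExternalProject_Add(core-module
      SOURCE_DIR "${CMAKE_SOURCE_DIR}/modules/core"
      CMAKE_ARGS
        # Buildroot generates this toolchain file; it tells CMake which cross
        # compiler to use and where the target sysroot lives.
        -DCMAKE_TOOLCHAIN_FILE=${BUILDROOT_OUTPUT_DIR}/host/usr/share/buildroot/toolchainfile.cmake
        # Make find_package() and friends search the Buildroot staging tree.
        -DCMAKE_PREFIX_PATH=${BUILDROOT_OUTPUT_DIR}/staging/usr
      INSTALL_COMMAND "")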

After all this work was done, we finally achieved our original goal. If a developer makes changes to the Buildroot configuration, there's still an hour or so until a clean image is built and tested. But if a developer commits to one of the core modules, the CI server can produce images ready to flash on a device within a few minutes.

Conclusion

We're proud to have helped the Raumfeld firmware developers to achieve radically faster continuous integration. This has enabled unit tests to be run automatically on every commit and it will potentially help the Raumfeld engineering team in other significant ways. Already they are automating more of their QA process.
