Last month we wrote about the improvements we made to the firmware build system for the Lautsprecher Teufel Raumfeld multiroom speaker system.
We had got about as far as we could go with improving the build-from-scratch time. It still took about an hour to produce firmware images, which is a great improvement on the original 8 hours, but still not enough to make continuous build+test of the firmware a reality.
The next step was to look at doing incremental builds on the build server. The majority of the work at Raumfeld is done on the 'core' modules, and we don't need to rebuild everything from the C compiler to the kernel every time someone commits to those. Developers have always done incremental builds to test their own changes locally, but these can sometimes break. The Buildroot project deliberately doesn't try to prevent all such cases, because the code would grow too complex if it tried to make incremental builds completely reliable. However, when the CI server starts to do incremental builds, you need a method that is completely reliable and predictable. There's no point speeding up the process of making a broken release.
Inspired by the caching functionality built into Baserock, and also by the Maven build tool, we implemented a CMake module that would allow us to share and reuse prebuilt artifacts. We considered other options for implementing caching, such as Apache Ivy, but in the end doing it in CMake won out because it meant we would only have to maintain the dependency information in one place.
There isn't really a de facto standard for artifact storage in the embedded software world at this point. Many larger projects use .deb or .rpm packaging to deal with binary artifacts but this brings in some extra complexity and requires ongoing work to maintain the packaging. In the Java world, Maven repositories are the standard (and Java developers sometimes mock us embedded folk for our primitive build tools). Raumfeld already had JFrog Artifactory set up for use by the Android and server-side Java development teams, so it was an obvious choice to use that for storing the artifacts as well.
Artifactory.cmake
The module that we wrote to integrate Artifactory with CMake is released as free software here. The actual implementation is rather specific to Raumfeld's use case, but I think it's still interesting: this is the first public implementation of binary artifact caching for CMake that I know of.
We puzzled for a while over how to 'conditionalise' CMake build commands based on whether a prebuilt artifact was found. What we wanted was a command that would take some build command, check for a prebuilt artifact, and only call the build command if no artifact was available. However, CMake's programming language is quite primitive - it wasn't originally intended to be a Turing-complete programming language at all - so there's no way to pass a build command as a parameter to another command.
The answer came from one of the managers at Raumfeld, who pointed out that in Maven each artifact corresponds to a directory containing a pom.xml file. We rearranged the toplevel build system to follow the same pattern, using a CMakeLists.txt file in place of the pom.xml, which let us pass around the name of the directory containing an artifact's build instructions rather than the command needed to build it. The result was the artifactory_add_artifact() function, which checks for a prebuilt version of an artifact and then calls the CMake add_subdirectory() command.
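To give a feel for the shape of this, here is a minimal sketch of how a toplevel CMakeLists.txt might drive the module. The artifact names, module path and argument style are illustrative assumptions rather than Artifactory.cmake's documented interface:

```cmake
cmake_minimum_required(VERSION 3.0)
project(raumfeld-firmware NONE)

# Placeholder path to wherever Artifactory.cmake is checked out.
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
include(Artifactory)

# Each artifact is a directory with its own CMakeLists.txt, just as a Maven
# artifact is a directory with a pom.xml. artifactory_add_artifact() checks
# the server for a prebuilt artifact, then calls add_subdirectory() on the
# named directory. The directory names here are hypothetical.
artifactory_add_artifact(buildroot-arm)
artifactory_add_artifact(buildroot-host-tools)
```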
The CMakeLists.txt file for a given artifact is responsible for detecting if it needs to generate build instructions, or if it just needs to generate instructions for unpacking and preparing the prebuilt artifact. That separation means that Artifactory.cmake itself doesn't need any special knowledge about how the artifacts are built or packaged. Artifactory.cmake just puts any suitable prebuilt artifact that it finds into a well-known location where the individual build instructions can look for it. It will also generate an artifactory-submit custom target that, when run, pushes artifacts from a different well-known location to the Artifactory server.
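As a rough illustration of that split, an individual artifact's CMakeLists.txt might look something like the sketch below. The PREBUILT_DIR and SUBMIT_DIR variables stand in for the module's well-known locations and are named here purely for illustration:

```cmake
# Hypothetical CMakeLists.txt for one artifact directory.
#   PREBUILT_DIR: where any prebuilt artifact that was found gets placed.
#   SUBMIT_DIR:   where the artifactory-submit target picks up fresh builds.

if(EXISTS "${PREBUILT_DIR}/buildroot-arm.tar.gz")
  # A prebuilt artifact is available: only generate unpack/prepare rules.
  add_custom_target(buildroot-arm ALL
    COMMAND ${CMAKE_COMMAND} -E tar xzf "${PREBUILT_DIR}/buildroot-arm.tar.gz"
    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
else()
  # No prebuilt artifact: generate the real build instructions and leave the
  # result where the artifactory-submit target will find it.
  add_custom_target(buildroot-arm ALL
    COMMAND make -C "${BUILDROOT_SOURCE_DIR}" O=${CMAKE_CURRENT_BINARY_DIR}/output
    COMMAND ${CMAKE_COMMAND} -E tar czf "${SUBMIT_DIR}/buildroot-arm.tar.gz" output
    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}")
endif()
```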
Reusing built artifacts needs to be done carefully. We took the simplest and most conservative approach: track the commit SHA1 of the buildroot.git repository as an Artifactory property, and only reuse Buildroot artifacts which were built from the exact same commit.
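In CMake terms the "exact same commit" rule could look something like the following sketch; the property name and the way it is passed to artifactory_add_artifact() are assumptions for illustration, not the module's actual interface:

```cmake
# Record which buildroot.git commit this build is based on.
execute_process(
  COMMAND git rev-parse HEAD
  WORKING_DIRECTORY "${BUILDROOT_SOURCE_DIR}"
  OUTPUT_VARIABLE BUILDROOT_SHA1
  OUTPUT_STRIP_TRAILING_WHITESPACE)

# Only artifacts stored with a matching property are candidates for reuse;
# anything else falls through to a full rebuild.
artifactory_add_artifact(buildroot-arm
  PROPERTIES "buildroot.sha1=${BUILDROOT_SHA1}")
```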
Relocatable Buildroot
Buildroot wasn't originally designed with the idea that you can build things on one machine and then use them on another. In particular, the host tools are dynamically linked against the host C library, so you can't, for example, build them on Fedora 22 and then run them on Fedora 20. Since the problem we were solving was speeding up builds on the build server, we could simply mandate that artifact reuse is only supported on the same OS that the build server runs on.
The host tools also contain some hardcoded paths. At the time of writing there is a series of patches available to help make them relocatable to anywhere in the filesystem. Those patches are apparently being reworked by the author but we found the existing ones from July 2015 good enough for our purposes.
Separating the core modules from Buildroot
Until now, the Raumfeld modules had been built as Buildroot packages. Since the modules are where most development happens, a key optimisation was to separate them from Buildroot's build process.
They are now built from the toplevel build system using CMake's ExternalProject command. To get these to build against the Buildroot trees took a little effort but we didn’t hit any major problems. Buildroot creates a toolchain file which, when passed in to CMake using the CMAKE_TOOLCHAIN_FILE option, tells CMake where to find the right cross compiler and where to look for libraries and headers. This and the CMAKE_PREFIX_PATH option were enough to cause the core modules to build correctly against the Buildroot builds.
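As a rough sketch, building one core module against a Buildroot output tree looks something like this; the module name and the Buildroot output paths are placeholders, and the exact location of the generated toolchain file depends on the Buildroot version:

```cmake
include(ExternalProject)

ExternalProject_Add(core-module
  SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}/core-module"
  CMAKE_ARGS
    # Toolchain file generated by Buildroot: tells CMake where to find the
    # cross compiler and the target's libraries and headers.
    -DCMAKE_TOOLCHAIN_FILE=${BUILDROOT_OUTPUT}/host/usr/share/buildroot/toolchainfile.cmake
    # Make find_package()/find_library() also search the Buildroot staging area.
    -DCMAKE_PREFIX_PATH=${BUILDROOT_OUTPUT}/staging/usr
  # The module is installed into the image by a later step, so the default
  # install command is skipped in this sketch.
  INSTALL_COMMAND "")
```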
After all this work was done we finally achieved our original goal. If a developer makes changes to the Buildroot configuration, there’s still an hour or so until a clean image is built and tested. But if a developer commits to one of the core modules, the CI server can produce images ready to flash on a device within a few minutes.
Conclusion
We're proud to have helped the Raumfeld firmware developers to achieve radically faster continuous integration. This has enabled unit tests to be run automatically on every commit and it will potentially help the Raumfeld engineering team in other significant ways. Already they are automating more of their QA process.