You’ve made a choice to use open-source software as part of your product release. That’s a great start. Open-source software projects usually have large contributing communities that help improve the software’s quality and functionality over time. They are also usually willing to provide suggestions on how to proceed if you happen to run into difficulties during your development process.
But how do you continue to benefit from something that might be constantly adding new features and also improving the reliability of the features it already has? And how do you manage the development of your own upgrades and fixes in parallel to this changing code base?
One of the most well-known pieces of open-source software is the Linux kernel. It is also probably the most dynamic in terms of code changes, due to its very large contributing community, so let’s take that as an example.
Aligning to the mainline Linux kernel
Mainline - This, as the name suggests, is the main kernel tree. It is where all new kernel features are introduced, and has a release schedule of every 2-3 months.
Stable - When a mainline kernel is released, it is classified as being 'stable'. This stable kernel is maintained until the next mainline kernel release, with bug fixes backported from the mainline kernel tree and released as stable kernel updates, typically once a week.
Long Term Support (LTS) - These kernel releases provide long term support, with important bug fixes backported to older kernel trees. They offer API stability, at the cost of access to the latest kernel features. A new LTS kernel is selected from Mainline once a year, typically the last Mainline release of the calendar year.
As you can see, both the Stable and new LTS releases align with the Mainline release with different periodicity, so the choice between them would be based on your specific requirements. A Stable release offers fast patching of bugs that could be introduced by the new Mainline release, but the API stability of an LTS release is often chosen when there is a need to use proprietary drivers, such as for graphics/wireless, as these are often less well maintained on Mainline releases.
If you opt for an LTS release, then over the weeks and months of development between one LTS release and the next, you may find yourself in the situation where you have added new kernel functionality, or even fixed bugs that were hindering your progress. If that's the case, you have now diverged from Mainline, and you have two options on how to proceed if you want to continue to benefit from the Linux kernel's development community, both for minor bug fixes and for the new features that have arrived since you last took an update.
Option 1 - You could continue to diverge, and when it comes time to move to a future LTS release, apply your changes and fixes as a series of patches on top of it, porting any that conflict.
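As a rough sketch of the mechanics behind Option 1, the downstream work can be carried as a git patch series and replayed on top of each new LTS base. The tiny repository below, and its tag, branch and file names, are stand-ins for a real kernel tree, not anything kernel-specific:

```shell
set -e
# Toy repository standing in for a kernel tree (all names illustrative).
dir=$(mktemp -d); cd "$dir"; git init -q repo; cd repo
git config user.email you@example.com; git config user.name You
echo base > driver.c; git add .; git commit -qm "LTS base"; git tag v1-lts
# Downstream changes carried on top of the old LTS base.
git checkout -qb product
echo fix >> driver.c; git commit -aqm "our: downstream fix"
# A newer LTS release appears upstream.
git checkout -q v1-lts
echo new > other.c; git add other.c
git commit -qm "newer LTS release"; git tag v2-lts
# Export everything since the old LTS base as a numbered patch series...
git format-patch -q -o ../series v1-lts..product
# ...and replay it on the new LTS tag; 'git am' stops at the first
# conflict so each clashing patch can be ported by hand.
git checkout -qb product-next v2-lts
git am -q ../series/*.patch
git log --oneline -2   # downstream fix now sits on top of the new base
```

In a real migration the interesting (and costly) part is resolving the conflicts `git am` stops on, which is exactly where the expertise mentioned below comes in.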
Option 2 - The alternative is to incorporate your changes and fixes into the Mainline tree as you develop, so that the functionality is present, integrated and fully tested by the time a new LTS release is available.
'Option 1' is a typical approach for organizations that wish to retain control of their code and its functionality, perhaps because the additional functionality provides a strategic advantage in their market, ruling out sharing it. However, Option 1 comes at a cost: it is usually a complex task requiring specific expertise, especially if the gap between one release and the next is large. It is precisely this situation that has led customers to contract Codethink to assist with the process in the past.
'Option 2' requires a commitment of both time and resources to the development process, to properly implement a process for submitting changes to the Mainline kernel. This process is usually referred to as 'upstreaming', and it takes some effort: it is not just a matter of submitting patches of your code changes to the Linux kernel's mainline tree and waiting for them to be accepted or rejected.
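The mechanical side of a submission can be sketched on a toy repository (the commit subject and file names are hypothetical). In a real kernel checkout you would also run scripts/checkpatch.pl over each patch, use scripts/get_maintainer.pl to find the right recipients, and post the series with git send-email:

```shell
set -e
# Toy repository standing in for a kernel checkout (names illustrative).
dir=$(mktemp -d); cd "$dir"; git init -q .
git config user.email dev@example.com; git config user.name Dev
echo code > drv.c; git add .; git commit -qm "initial import"; git tag base
echo more >> drv.c; git commit -aqm "drv: add frobnication support"
# One mail per patch plus a cover letter describing the series;
# -v2 marks a resubmission after addressing review comments.
git format-patch --cover-letter -v2 -o out/ base..HEAD
ls out/   # v2-0000-cover-letter.patch, v2-0001-drv-add-...
```

The non-mechanical side, responding to review, rerolling the series and tracking which release cycle it targets, is where the real commitment lies.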
Aside from a smoother transition from one Mainline release to another, the other benefit of aligning to Mainline through upstreaming your code changes is quality. In the beginning, the process can be quite taxing: submitting a patch to the relevant maintainers of the Linux kernel will result in a technically critical review of your code. While this may not sound pleasant, the review can bring to light issues that were not considered when the code was originally developed. The maintainers in the Linux community help ensure quality by encouraging developers to consider how their code could impact other parts of the kernel.
From a security point of view, the fact that open-source software can be scrutinized by members of the security community means it can be probed for vulnerabilities, which ultimately should make it more secure.
Option 2 obviously changes the way an organization consumes a kernel release, moving away from upgrading only with each new LTS kernel release. Aligning to Mainline, in combination with a suitable testing infrastructure, would allow an organization to upgrade with each Stable release relatively easily.
The Linux kernel, like many open-source software projects, is only as functional as it is today thanks to community participation in its development. A Mainline release can consist of changes to several hundred thousand lines of code, from possibly thousands of different developers. For this reason the kernel community introduced the concept of a merge window, which opens for two weeks after each Mainline release. During this window, kernel maintainers send their collected patches to Linus Torvalds, who determines whether they are added to Mainline. Patches that are merged are then further scrutinized and tested in the period before the next Mainline release.
Between the end of the merge window and the next Mainline release, only patches that resolve issues should be submitted to the mainline tree. These patches make their way into Mainline on a weekly basis, as Release Candidate (RC) releases, until the kernel is considered stable enough for a new Mainline release, at which point the process starts all over again.
It is the kernel maintainers that developers have the most interaction with during this upstreaming process. Not only are they responsible for the merging of patches into the Mainline tree, they are also involved in the inspection and approval of submitted patches.
Upstreaming should not be an afterthought
For the Linux kernel, upstreaming is something that needs to be embedded in your development culture. The merge window, and the responsibilities that come with having a patch accepted and merged in the weeks thereafter, require proper planning, an understanding of which release cycles your features are targeting, and active engagement upstream.
There are also advantages to development teams becoming actively involved in the upstream community as part of their job. Engagement with other developers could help them push the direction of development towards functionality that they are interested in, with their implementation ideas benefiting from the many eyes of the community.
Finally, if your development team plans to upstream and align against a Mainline release, it is essential to have an extensive testing infrastructure in place. This will help catch any regressions introduced during your own development, before changes are upstreamed, as well as regressions introduced by other developers' contributions to the Mainline release.
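One tool such a testing infrastructure typically leans on is git bisect, which walks the history between a known-good and a known-bad revision to locate the commit that introduced a regression. A self-contained toy illustration, where both the repository and the grep-based "test suite" are stand-ins:

```shell
set -e
# Toy repository with a deliberately planted regression.
dir=$(mktemp -d); cd "$dir"; git init -q .
git config user.email ci@example.com; git config user.name CI
# Ten commits; commit 7 silently introduces the "regression".
for i in $(seq 1 10); do
  if [ "$i" -ge 7 ]; then echo "rev $i bad" > state
  else echo "rev $i good" > state; fi
  git add state; git commit -qm "commit $i"
done
# Bisect between the newest (bad) and oldest (good) commit, letting the
# automated "test suite" decide pass/fail at every step.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
git bisect run grep -q good state >/dev/null
# refs/bisect/bad now names the first bad commit.
git show -s --format=%s refs/bisect/bad   # -> commit 7
```

With an automated test suite plugged into `git bisect run`, pinpointing a regression among hundreds of upstream commits becomes a logarithmic number of test runs rather than a manual search.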
Want to learn more?
While the Linux kernel may be an extreme example, when it comes to maintaining mainline alignment, the argument for upstreaming holds for open-source projects in general.