Tue 11 May 2021

Does the "Hypocrite Commits" incident prove that Linux is unsafe?

The recent furore around the University of Minnesota’s “Hypocrite Commits” research, which spilled over from the Linux Kernel Mailing List into mainstream tech media, has provoked a lot of discussion about the Linux kernel community’s processes, and arguably provided ammunition to those who have long argued that open source software cannot be trusted.

Over in the ELISA community, which is exploring how to use Linux in safety-critical systems, it was even suggested that this incident demonstrates that, for Linux to be fit for use in a safety application, the kernel process would need to be redefined and coding standards enforced.

For my part, I continue to believe that open source projects deliver software that is as good as, and often better than, proprietary initiatives. A key difference is that the mistakes and breakages happening behind closed doors often go unreported, and the participants learn less as a result.

I expect that the kernel community will continue to learn, and will evolve its processes in response to this and other events over time. And I totally understand Greg KH’s decision to revert all commits in response to the incident, not least because it signals that action must and will be taken when trust is lost.

But thinking about the use of Linux and open source in a safety context, it seems to me this situation only confirms something that we already know:

  • complex software generally has bugs
  • some bugs are likely to slip through whatever process is in place to catch them

It would be extremely naive to think that bad actors haven't already introduced bugs or vulnerabilities into widely used software.

Equally, it would be wildly optimistic to hope that software at the scale and complexity of Linux could ever be considered bug-free, completely deterministic, 100% “safe”, or 100% “secure”.

For engineers thinking about how to use Linux or similarly complex software in safety critical applications, I suggest that the lesson here is not “We need to enforce coding standards and change the process to make the software fit for safety”.

I think it would be much more realistic to say:

“We expect occasional bugs in Linux (or any large-scale software). Our safety design recognises that, and our mitigations aim to minimise the risk of harm arising when bugs occur.”

And as a postscript… it now looks as though the claims of the researchers were fiction all along. In light of this, we can highlight a further benefit of the open source approach. This has all played out in the glare of public scrutiny, with the evidence visible to all parties. As a result we can all consider what happened and learn from it.
