
Buildroot Developers Meeting, 14-16 October 2016, Berlin

Location and date

The Buildroot community is organizing a meeting on October 14th, 15th and 16th 2016 in Berlin (Germany), for Buildroot developers and contributors. The meeting will be a mixture of discussion and hacking sessions around the Buildroot project. It takes place right after the Embedded Linux Conference Europe, to make it easy for participants to attend both events. It is not mandatory to attend all three days of the meeting.

The meeting will take place at In-Berlin, Lehrter Str. 53, 10557 Berlin. The meeting hours will be:

  • Friday: 9 AM to 6 PM, followed by a dinner in a restaurant.
  • Saturday: 9 AM to 10 PM.
  • Sunday: 9 AM to ~5 PM, depending on when participants leave.


We would like to thank Mind, which is sponsoring the Friday night dinner. Mind is the Embedded Software division of Essensium, which provides consultancy and services specifically in the field of Linux and Open Source software for Embedded Systems.

We are still looking for a sponsor for the location. It is a hackerspace that accepts voluntary donations, so any amount is good!

Participants

  1. Thomas Petazzoni, arriving before ELCE, leaving on Oct 16 at 18:45 from TXL airport (flight SN2588 to BRU)
  2. Luca Ceresoli (Oct 14th and 15th)
  3. Yann E. MORIN, arriving Oct. 10th evening; leaving Oct. 17th early morning
  4. Waldemar Brodkorb (Oct 14th - Oct 15th, leaving early Sunday morning)
  5. Maxime Hadjinlian (Oct 10th noonish; Oct 16th evening)
  6. Peter Korsgaard, arriving Oct. 10 evening, leaving Oct 16 at 18:45 from TXL airport (flight SN2588 to BRU)
  7. Arnout Vandecappelle, arriving Oct. 8, leaving on Oct 16 at 20:30 from TXL airport (flight SN2590 to BRU).
  8. Samuel Martin, arriving Oct. 14th noonish; leaving Oct. 16th evening
  9. Julien Rosener, attending on Friday only
  10. Vicente Olivert Riera, attending all three days
  11. Romain Naour, arriving/leaving on the same dates as Samuel.
  12. Martin Thomas, attending only Friday morning


Below are the notes taken by Arnout during the meeting. One can also read the report sent by Thomas on the mailing list.

Introducing gitcache (Julien Rosener)

Julien's use case is that he has a lot of custom git repositories that are hosted in China, so downloading from them is very slow. To improve that, the proposal is to introduce a git cache. Julien proposed to introduce a new BR2_GIT_CACHE_DIR option to define the cache, but after discussion it seems better to do it directly in DL_DIR. The initial proposal is that the user should populate the mirror manually, but it would be a lot nicer if that would be done automatically. So the conclusion is:

  • When a package is first built, it is cloned in DL_DIR and not removed.
  • If the clone already exists, it is updated with a git fetch. git fetch can specify an explicit local ref to make sure we can refer to it later.
  • It's not clear if a tarball still needs to be created. Probably yes, otherwise the extract step has to change.
  • Problem for parallel downloads (important in a shared DL_DIR): if something is locked, git will just fail. So either we have to do locking ourselves, or the download helper has to retry. It is possible to do two fetches in parallel as long as they download to two different local references. Needs more investigation.
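The flow just described could be sketched as below. This is only an illustration: the helper name git_cache_fetch, the DL_DIR layout and the refs/buildroot/ namespace are assumptions, not actual Buildroot code. The flock addresses the parallel-download problem in the simplest possible way, by serializing fetches into the same cache.

```shell
# Sketch of the proposed flow; git_cache_fetch, the cache layout and
# the ref namespace are illustrative, not actual Buildroot helpers.
git_cache_fetch() {
    dl_dir=$1; pkg=$2; repo=$3; rev=$4
    cache="$dl_dir/$pkg.git"
    mkdir -p "$dl_dir"
    (
        # Serialize concurrent fetches into a shared DL_DIR (git
        # itself just fails when the repository is locked).
        flock 9
        [ -d "$cache" ] || git clone --quiet --bare "$repo" "$cache"
        # Fetch into an explicit local ref so the revision can be
        # referred to later and survives a 'git gc'.
        git -C "$cache" fetch --quiet "$repo" "$rev:refs/buildroot/$rev"
    ) 9>"$cache.lock"
}
```

With per-revision local refs, two parallel fetches of different revisions could in principle proceed concurrently, as noted above; the coarse lock here is the "do locking ourselves" variant.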

Normally we would say that we do something simple first and improve later, but in this case it's something very user-visible, so it is not so nice to change the interface after it has been released.

This feature will significantly increase the size of DL_DIR, e.g. the Linux git directory is >4GB. Also there is no garbage collection in this setup.

The shallow cloning/fetching should still work.

Debugging with _OVERRIDE_SRCDIR (Julien Rosener)

When there are errors/warnings during the build, the paths they refer to are in the build directory, not the source directory. When you open such a file in an IDE, fix the issue, and then do a pkg-rebuild, the just-edited file will be overwritten again...

We don't see a good solution except for out-of-tree builds. Using hardlinks during the rsync doesn't work because most editors will destroy hardlinks, and rsync doesn't have an option to create symlinks. Note that a disadvantage of symlinks is that if the build system regenerates a source file, it will be overwritten in the original directory. Out-of-tree builds don't work for everything, but they are at least a step forward.

Out-of-tree builds only work for packages with decent build systems, but this would then encourage people to use decent build systems :-)

It is also possible to post-process the output of buildroot and replace the build directory with the override source directory.
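Such post-processing could be as simple as a sed filter over the build output; the function name and the paths below are made up for illustration:

```shell
# Illustrative filter: map the package build directory in build
# output back to the _OVERRIDE_SRCDIR, so an IDE parsing the
# messages jumps to the real source files.
rewrite_paths() {
    build_dir=$1; srcdir=$2
    sed "s|$build_dir|$srcdir|g"
}

# Usage (paths are made up):
#   make foo-rebuild 2>&1 | rewrite_paths output/build/foo-1.0 /home/dev/foo
```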

Unfortunately, the patches for the out-of-tree builds are lost. Thomas only posted v1 on the list; v2 and v3 were only posted as a reference to a git repo that no longer has that branch.

Multiple br2-external trees (Yann E. Morin)

The external mechanism was initially introduced to help people keep their Buildroot tree clean of proprietary packages, while still allowing them to use the buildroot infra for them. This way the buildroot tree can be provided as-is for license compliance.

Yann sees two use cases for multiple br2-external:

  1. Separate teams that work on separate types of packages (e.g. BSP and application). Each team could maintain their own br2-external.
  2. A br2-external for FLOSS packages that are not in buildroot (yet) (i.e. a kind of staging tree), and another br2-external for proprietary packages.

Peter's original idea was to aggregate all the external trees into a single external tree in the build dir. But that doesn't work because you typically use wildcard constructs in your br2-external, so everything would be included twice. Similarly, the two trees could have files with the same name.

So back to the initial idea. The main roadblock is that Kconfig variables are really global: they don't change value while processing the Kconfig files. Also, the BR2_EXTERNAL variable can't be used anymore in Config.in or .mk files because it contains multiple directories. Therefore, several BR2_EXTERNAL_xxx variables are introduced, one for each external tree. Each external tree has a name, which is defined in the br2-external tree itself.

Arnout proposes that the file that defines the names should just be a makefile, to give maximum flexibility. However, the parsing of that file is done in a shell script, so it's not possible to use .mk fragments there; a simple format is appropriate after all.
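One way the naming could work is sketched below. The file name external.desc, its one-line format, and the resulting BR2_EXTERNAL_<NAME>_PATH variables are illustrative guesses at the scheme discussed, chosen deliberately simple so that a shell script can parse the file:

```shell
# Hypothetical: each br2-external tree names itself in a trivial
# key-value file (name and format are assumptions, not a spec).
br2_external_name() {
    sed -n 's/^name: *//p' "$1/external.desc"
}
```

The build would then be invoked with something like `make BR2_EXTERNAL=/path/to/foo-tree:/path/to/bar-tree`, exposing per-tree variables such as BR2_EXTERNAL_FOO_PATH to Config.in and .mk files.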

Testing (Thomas Petazzoni)

Autobuilder tests a lot, but not:

  • Core infrastructure (BR2_EXTERNAL, patching logic, download methods, OVERRIDE_SRCDIR, scp, ...)
  • Bootloaders, kernel and packages that depend on kernel, filesystem images
  • Toolchain logic
  • Runtime testing

Most likely many of those aspects can be solved with the same basic infrastructure.

Proposed approach:

  • Python-based test suite. Robot framework was considered, but since our test combinations are rather simple we consider that the additional complexity is not worth it. Common functionality can still be gathered in functions.
  • Using Python unittest and nose2 as runner - this allows running tests in parallel. OTOH a CI infra will also allow to run in parallel so nose2 may not be needed.
  • Each test consists of a defconfig and a test_run method that verifies that the output is as expected, e.g. boot it in qemu and check some things. There is a helper to boot in qemu, and there are prebuilt kernel images for a few architectures.

Currently implemented: runtime tests for Python, dropbear, filesystems (ext2/3/4, iso9660 with grub and syslinux, jffs2, squashfs, ubifs, yaffs2 (not runtime)); post-build scripts; post-image scripts; rootfs overlays.

Current issues:

  • nose2 is annoying, but it can be removed.
  • Pre-built artefacts (kernel images) have to be hosted somewhere.
  • Where to put test cases, next to package or in a support/testing/ tree?
  • Runtime tests can't be run in parallel, because qemu is started with a fixed port mapping for telnet, so two instances conflict.
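One possible fix for the port conflict is to give each test instance its own host port for the telnet forward. The sketch below is naive (it derives the port from the PID and does not verify the port is actually free), and the qemu command line is abbreviated:

```shell
# Naive sketch: derive a per-instance host port from the PID so two
# parallel test runs do not collide on a fixed telnet forward. A real
# implementation should check that the port is actually free.
pick_port() {
    echo $(( ($$ % 10000) + 20000 ))
}

# Usage (abbreviated, illustrative qemu command line):
#   port=$(pick_port)
#   qemu-system-arm ... -net nic -net user,hostfwd=tcp:127.0.0.1:${port}-:23
```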

Independent of this test infrastructure, it would also be nice to have a script that a developer can run to test a package in different configurations, e.g. static, musl, blackfin. That would help for new package submissions, which now often trigger autobuild failures very quickly.

Security issues

We currently have no security updates for packages. Thomas has had questions about this from several people. If we don't do anything, we may become progressively irrelevant; e.g. ELBE has the benefit of all the Debian updates. We are otherwise quite relevant for IoT use cases, but from a security perspective it's not so good.

One solution would be to make an LTS branch. It seems like a lot of effort to maintain an LTS.

Rockwell Collins has a prototype tool to check whether CVEs exist for packages. That would be interesting both for maintaining an LTS, and alternatively for a user to check if he's vulnerable.

This would make it easier to distribute the LTS effort. The idea is that we don't actively create LTS patches ourselves, but only apply them when a user submits one. However, in that case, can we really advise people to use the LTS release instead of the latest release? Most likely the LTS will have more security issues than the latest release, because the LTS just gets a few contributed updates.

An LTS branch could be as short as a year. Pengutronix had a presentation at ELC-E 2016 about security and updates, and they advise planning time every year to upgrade the complete distro used by your product. Still, there should be a bit of overlap between LTS releases so people can easily transfer, although Yann doubts that anyone would actually use that overlap.

Anyway, many people doubt that buildroot users really are going to update their environment.

Next problem is what to do with autobuilders. We really do want to run autobuilds on the LTS, but we don't want to see all the stupid failures that didn't get fixed at the time of the release. So at least it should be an autobuilder that doesn't do static or nommu or musl builds. Also, this autobuilder should use the toolchains from the time of the release, not the current toolchains.

The question is really where we want to spend our time: do we want to spend it on reviewing crazy new packages, or do we want to spend it on making sure the core packages are secure?

Also, having an LTS is a bit of a marketing thing. At least, it's better than nothing. Currently people mostly just take one random (release) version, base their product on that and never update. With an LTS, they probably take the LTS release and there's at least a chance that they will update at some point.

For sure, we first want the CVE-check tool in place. We also need the autobuilder to work on several branches. Then in the FOSDEM meeting we can decide if we want to launch an LTS, e.g. 2017.05. We will see what happens to it and if we want to continue that. We don't have to decide on the grand scheme of how to do LTS releases now.


We can more or less reuse the list from last year. So even if the outcome of last year was really discouraging, we don't waste much time by submitting these proposals again. And the topics are still relevant, even if some of them are (or will be) already partially implemented by that time.

CFLAGS override problems (Waldemar Brodkorb)

Not all package build systems properly propagate the CFLAGS that we pass in. Sometimes we need to override optimisation flags because they trigger compiler bugs.

There is a gcc patch that detects when some CFLAGS don't arrive in the compiler, or gives a warning when double or conflicting flags are given. This would allow us to detect such issues in the autobuilders. We should of course do it in the wrapper instead.

But this doesn't solve the problem of a package overriding the optimisation option passed in CFLAGS, because that is still possible. To solve this, we could add an environment variable, defined in TARGET_MAKE_ENV, with additional flags that the wrapper appends to the command line. Individual packages can still override that variable if needed (e.g. for glibc).
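A sketch of that idea (the variable names BR_REAL_CC and BR_EXTRA_CFLAGS are made up, and the wrapper is reduced to a shell function): since gcc honours the last -O option it sees on the command line, appending the variable after the package-provided arguments lets it win.

```shell
# Toolchain-wrapper sketch: append flags from an environment variable
# after the package-provided arguments; with gcc, a later -O option
# overrides an earlier one. Variable names are illustrative.
cc_wrapper() {
    # Word-splitting of ${BR_EXTRA_CFLAGS} is intended: it may hold
    # several flags.
    "${BR_REAL_CC:-gcc}" "$@" ${BR_EXTRA_CFLAGS}
}
```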

Actually, does it really make sense to have -Os as the default? Shouldn't we change it to -O2? Conclusion: it doesn't matter too much, and changing defaults is annoying, so let's keep it.

cmake also has the problem that the RelWithDebInfo-style options append flags at the very end of the argument list, so they can't be overridden. The solution here is to override the options that are appended by cmake and define them as empty. Samuel and Maxime are working on it.
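That fix can be sketched as follows; this is only an illustration of emptying the per-build-type flags on the cmake command line, and the set of variables a given package actually needs may differ:

```shell
# Sketch: neutralize the flags cmake appends for the build type, so
# the flags passed in CMAKE_C_FLAGS/CMAKE_CXX_FLAGS stay in control.
cmake .. \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DCMAKE_C_FLAGS_RELWITHDEBINFO="" \
    -DCMAKE_CXX_FLAGS_RELWITHDEBINFO=""
```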

Deprecation of BR2_DEPRECATED (Arnout Vandecappelle)

As discussed on the mailing list, the legacy handling is better than the deprecated logic. So let's just get rid of BR2_DEPRECATED.

Buildroot'lers Best Practice: How do you do this in your devices? Dual boot and update without losing local configuration changes (Waldemar Brodkorb)

How do the core buildroot-users do field upgrades of their systems? There are two topics here: how to initiate the upgrade, and how to execute the upgrade.

Initiating the upgrade

Yann uses TR-069, which is a remote management protocol: through it, the server tells a device that it should upgrade and what it should upgrade to. Both the server and the client, which do device management in a proprietary way, are open source.

For upgrade by a technician, there are several approaches, usually involving the technician's laptop contacting the devices over either USB or network in a proprietary way.

In general, the approach is that the device is stupid, it just says "this is me" and the server decides what is the appropriate firmware.

Peter uses dpkg to manage upgrades and versions, because it has support for version comparison, signing, and post-install scripts. The post-install script contains the logic of how to upgrade, so you can do anything during the upgrade. The problem is that the upgrade file is present three times: once in the dpkg, once extracted, and once in the actual flash partition. The dpkg file itself is built in a post-image script.

Executing the upgrade

Basically: dual bank setup, perhaps with an additional rescue. Always full image upgrade.

There is also the possibility to always get the image from the network, i.e. always update. There is just a very small "bootloader" kernel that fetches the image. This gives you always up-to-date devices.

There are several generic updaters: swupdate, fwup, rauc. One of them only supports ext2/3/4 filesystems at the moment, so no raw flash.

For configuration, there should be a separate partition from the rootfs. Then you need some tricks at boot time to generate the actual config files based on the static parts of the configuration and the configurable parts. A simple approach could be to overlay it on /etc (using unionfs-fuse or overlayfs). A factory reset then simply wipes the config partition and replaces it with a default. systemd has more or less built-in support for this with tmpfiles. Versioning is also needed because the config may be invalid. Migration is super difficult.
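The overlay-on-/etc idea can be sketched with overlayfs; the mount points and partition layout below are illustrative, and this requires a kernel with overlayfs support:

```shell
# Sketch: overlay a writable config partition on top of the
# read-only /etc from the rootfs. /data is assumed to be the
# mounted config partition; a factory reset wipes /data/etc-upper.
mount -t overlay overlay \
    -o lowerdir=/etc,upperdir=/data/etc-upper,workdir=/data/etc-work \
    /etc
```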

Update the todo list

The [[Buildroot#Todo_list|TODO list on the wiki]] has been updated. Stuff that has already been implemented has been removed.