Revision as of 12:17, 19 February 2018 by ThomasPetazzoni (talk | contribs)

Buildroot Developers Meeting, 5-6 February 2018, Brussels

The Buildroot Developers meeting is a 2-day event for Buildroot developers and contributors. It allows them to discuss the hot topics of Buildroot development, work on patches, and generally meet each other, facilitating further online discussions. Attending the event is free, after registration.

Location and date

The next Buildroot Developers meeting will take place on February 5th and 6th 2018 in Brussels, right after the FOSDEM conference. The meeting will take place in Google offices, located Chaussée d'Etterbeek 180, 1040 Brussels, very close to the Schuman metro station.



We would like to thank our sponsors:

  • Google, which provides the meeting location with Internet connection, as well as free lunch and refreshments for the meeting participants.
  • Mind is the Embedded Software division of Essensium, which provides consultancy and services specifically in the field of Linux and Open Source SW for Embedded Systems. Mind offers the Monday dinner to the participants of the meeting.

We are looking for sponsors to cover travel expenses.


Participants

Note: As of 2018-01-31, registration is now closed.

  1. Thomas Petazzoni
  2. Peter Korsgaard
  3. Matt Weber
  4. Sam Voss
  5. Bryce Ferguson
  6. Romain Naour
  7. Samuel Martin
  8. Valentin Korenblit
  9. Luca Ceresoli (Monday only)
  10. Thomas De Schampheleire
  11. Joris Lijssens
  12. Angelo Compagnucci
  13. Arnout Vandecappelle
  14. Roberto Muzzì


Meeting agenda

Yann wrote: There is no real ordering here, except that I tried to put important things that require an urgent fix first, with other non-critical things later...

  • Local archive generation [Yann]
    • archives locally generated (from git/cvs/svn) depend on the tar version
    • options:
      • investigating more those tar issues, and perhaps with the upstream tar folks, find a way of solving them
      • use a different archiving solution, that is more stable (cpio)
      • stop archiving, but then not clear how to support primary_site/backup_mirror
      • build our own tar
  • Namespace collision in package infra [Yann]
    • for example foo and foo-base will collide with variables $(1)_NAME and $(1)_BASE_NAME
    • already hit by:
      • alljoyn <-> alljoyn-base
      • alljoyn-tcl <-> alljoyn-tcl-base
      • perl-xml-sax <-> perl-xml-sax-base
    • so, we need to better separate the package part from the infra part, for example:
      • two underscores as a separator: FOO__NAME and FOO__BASE_NAME
      • a dot: FOO.NAME and FOO.BASE_NAME
  • Top-Level Parallel Build (TLPB) [Thomas]
    • How to trigger it?
      • option in menuconfig
      • automatic via top-level 'make -jN' ?
    • How to handle non-make tools (e.g. meson/ninja or mksquashfs...) that have their own parallel build that does not talk to a job-server?
  • SELinux - [Adam/Matt]
    • Functional Tests
      • Target test of basic x86 QEMU
        • Checks busybox and/or full tools can correctly interact with an SELinux enabled kernel
        • Verifies audit debug tools function correctly
      • Host tool check that the APOL testing tools are functional against an example policy (APOL is used for doing proofs of levels of policy requiring compromise before an application is reached).
    • Modular policy, with packages including policy for their configuration. (The other option is to document how to use refpolicy and use the SDK to build a custom policy outside of the Buildroot build, instead of trying to integrate a solution for each Buildroot package as part of the build. Maybe the SDK option is a test case?)
  • Merge of staging/ and target/ [Yann]
    • only ever install packages once, in staging/
      • a bit faster
    • generate target/ from staging/
      • target-finalize copies staging/ to target/
      • then resumes with the current cleanups
    • staging contains everything and is not stripped
      • keeps debug symbols and the likes untouched


Local archive generation

  • Tarballs we generate locally are different with recent versions of tar, causing hash mismatches. It eventually works because we fall back to sources.b.o, which has the "right" tarball.
  • This is due to a bugfix in upstream tar, previous versions were in some cases (related to long filenames) generating the wrong content.
    • it changes all tarballs that have filenames longer than 99~100 chars, including our <pkg>-<ver> prefix which can be long (as in "linux-firmware-65b1c68c63f974d72610db38dfae49861117cae2")
  • Building host-tar is the easiest/most immediate option. Which version to choose? Stick to 1.29? And talk with upstream tar in parallel to see if we can find a good solution in the future?
  • CPIO archives? Not really nice; nobody remembers how to manipulate them, and they are not very convenient/widespread
  • Idea of Arnout: hashing the tarball contents instead of the tarball itself. Sounds a bit complicated?
  • Related discussion about Git caching. According to ThomasDS, it should also be possible with Mercurial
  • How do we keep sources.b.o working with Git caching. Answer: keep a Git repository on sources.b.o (as stored by make source) and have Buildroot use this Git repo instead of downloading a tarball.
  • Additional problem if we ever require tar >= 1.30 instead of < 1.30: will have different tarballs than the hashes in older buildroot releases.
    • change the name to .tar.xz when we do this change
    • this would require mass update of hashes, but only about 70 packages at the moment so doable
    • at the same time, we could also try to change to some other tar format that is not affected by the bug, e.g. ustar?
  • Conclusion:
    • Move Git/Mercurial to caching, no tarballs
    • We probably want to verify that a tag really is what it is, so perhaps have a new hash "type" in the .hash file? Indeed, Git tags can be changed, but we want to make sure we really build what we think. This hash type would associate the Git tag name with the corresponding Git commit hash
    • For Subversion and other version control systems, we anyway don't support hashes for the resulting tarballs, and since their use is very limited (only 3 subversion packages in BR today), we shouldn't bother
    • For 2018.02 obviously this is too much work, so we will build host-tar when the version of tar on the machine is wrong
    • ACTION: do the immediate fix for 2018.02
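The kind of normalization that makes an archive reproducible can be sketched as follows. Python's tarfile module is used here purely for illustration (it is not what Buildroot does), but it shows why fixing the entry ordering, timestamps, ownership and archive format makes the output, and thus its hash, stable across runs:

```python
import hashlib
import io
import tarfile

def make_deterministic_tar(files):
    """Create a tar archive with normalized metadata so that the
    byte-for-byte output (and hence its hash) is reproducible."""
    buf = io.BytesIO()
    # Plain POSIX ustar format avoids the format-dependent long-filename
    # handling that changed between tar releases.
    with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tf:
        for name in sorted(files):        # fixed entry ordering
            data = files[name]
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            info.mtime = 0                # fixed timestamp
            info.uid = info.gid = 0       # fixed ownership
            info.uname = info.gname = ""
            tf.addfile(info, io.BytesIO(data))
    return buf.getvalue()

files = {"pkg-1.0/README": b"hello",
         "pkg-1.0/src/main.c": b"int main(void){return 0;}\n"}
a = make_deterministic_tar(files)
b = make_deterministic_tar(files)
assert hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest()
```

The same idea underlies pinning the tar version: as long as every field that tar writes is fixed, the hash stays valid.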

Download script overhaul + Git caching

  • Patches from Maxime, reworked by Peter:
  • Yann's WIP branch is at
  • Initial goal is to have just cache, ideally we have no tarballs anymore
  • Problem with that: legal-info. But the tarball could just be created on the fly during legal-info
  • Other problem: tags may be the same for different repos of the same package, e.g. two linux repos that use the same tag for something different. This is already a problem at the moment since if the tarball exists already, we just assume it to be correct.
  • For submodules the tarballs are also simpler since we have that handling already. The submodules are either not in the cache (in case of bare repo) or are there but under a different name (for non-bare repo).
  • ACTION: continue on the Git caching topic. Since the topic has already been started by Maxime H., taken over by Peter S., and now by Yann E. M. it probably doesn't make much sense to have yet another person take over it. However, reviewing it would be good.
  • For the time being, still keep tarballs. sources.b.o will have the git cache repo, but it's not meant to be used directly (it's not even accessible as a proper git repo over http, since it misses the necessary indexes).

Discussion on LTS releases

  • Used by Nokia, going to be used by Rockwell. Like the 1 year cadence
  • Discussion on how much effort this requires for Peter
  • ThomasDS: do we need to "tag" commits like the kernel does. Peter: we already tag "security bump to ..."
  • Release notes for 2018.02, highlighting major issues/topics that people might face when updating from 2017.02

Namespace collision in package infra

  • Switching to a different separator, or generally changing the naming of all variables, was considered to be way too much work, with lots of consequences for people having external packages. It would also make backporting patches annoying, etc. We are not ready to go down this route.
  • Instead, we will rename internal variables that might clash. Indeed, there are only a few of these: variables with at least two parts (like BASE_NAME), where the second part (NAME) is also a variable, and the first part is likely to be used as the suffix of a package name (perl-xml-sax-base). DL_DIR might be affected as well since we have _DIR and _DL_DIR, which would clash if a package is named <foo>-dl.
  • Potential clashes:
    • <pkg>_DL_DIR and <pkg>_DIR
    • <pkg>_BASE and <pkg>_BASE_NAME
    • <pkg>_SOURCE and <pkg>_TARGET_SOURCE
    • <pkg>_VERSION and <pkg>_DL_VERSION
    • <pkg>_BASE_NAME and <pkg>_RAW_BASE_NAME
    • <pkg>_INSTALL_IMAGES and <pkg>_TARGET_INSTALL_IMAGES (same for _STAGING and _TARGET)
    • there probably are more, a full analysis could be done if needed later.
  • Conclusions:
    • A fundamental solution for even detecting issues like this is terribly complicated.
    • ACTION: We should just fix the one issue we have now, any new one when/if it ever arises. The variable _BASE_NAME can simply be renamed to _BASENAME.
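The collision can be illustrated by mimicking the infra's naming scheme (a hypothetical Python rendition of the upper-casing the package infrastructure performs, not actual Buildroot code):

```python
def var(pkg, attr, sep="_"):
    """Mimic the infra's variable naming: upper-case the package name,
    replace dashes with underscores, append the attribute name."""
    return pkg.upper().replace("-", "_") + sep + attr

# The collision: two different (package, attribute) pairs map to one name.
assert var("perl-xml-sax-base", "NAME") == var("perl-xml-sax", "BASE_NAME")
# → both expand to "PERL_XML_SAX_BASE_NAME"

# A distinct separator (e.g. the proposed double underscore) removes
# the ambiguity, at the cost of renaming every variable.
assert var("perl-xml-sax-base", "NAME", sep="__") != \
       var("perl-xml-sax", "BASE_NAME", sep="__")
```

This also shows why renaming only the few clashing internal variables (e.g. _BASE_NAME to _BASENAME) is the cheaper fix.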

Top-level parallel build

  • Slides from ThomasP:
  • Remark from ThomasDS: the per-package target actually doesn't really need to have all its dependencies installed in there, only the package itself is sufficient. Doing it this way would make it easy to create a tarball or opkg of a single package. However, there is still a problem with the skeleton. Some parts of the skeleton are really needed in the per-package target (e.g. the usr/lib directory, the lib32 or lib64 symlink). But if the entire skeleton is copied in all per-package target directories, there are some files in there that really shouldn't be in an opkg - both in the default skeleton (e.g. /etc/passwd) and certainly in custom skeletons.
  • Remark from Arnout: make pkg-dirclean should also remove pkg's per-package SDK and target directories.
  • Remark from Peter: rpath fixup is not really needed, since the old path with the original package's HOST_DIR hardcoded in it will still exist, so it's fine.
  • Why is a per-package target needed? Because otherwise we can't build the file list, since we wouldn't know whether a file comes from this package or from some other package installing in parallel. Also, we could not detect the case where two packages install the same file, because that detection uses the file list.
  • How to trigger it?
    • option in menuconfig
    • automatic via top-level 'make -jN' ?
  • How to handle non-make tools (e.g. meson/ninja or mksquashfs or gcc -flto ...) that have their own parallel build that does not talk to a job-server?
    • either over-use of CPU (worst case N^2 jobs instead of N)
    • or under-use of CPU (1 job instead of N)
    • gcc can use the jobserver, but probably not with another layer of make or ninja in between
    • See e.g. (we could carry this patch ourselves, if it becomes a problem)
    • Not a problem today, not enough packages are affected by this problem.
    • BR2_JLEVEL is used to parallelize everything except make invocations, so this can be tweaked as needed. This is actually the same as what is done in OpenEmbedded and in Gentoo.
  • Biggest problem foreseen at this time is packages that touch the same file in staging or target.
    • ACTION ThomasP will grep through the autobuild logs to find these conflicts.
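The conflict detection mentioned in the ACTION above could look roughly like this (a hypothetical sketch; the package names and file lists are made-up example data, not autobuild output):

```python
from collections import defaultdict

def find_conflicts(file_lists):
    """Given per-package file lists (package -> set of installed paths),
    report every path installed by more than one package."""
    owners = defaultdict(list)
    for pkg, paths in file_lists.items():
        for path in paths:
            owners[path].append(pkg)
    return {path: pkgs for path, pkgs in owners.items() if len(pkgs) > 1}

# Made-up example: two packages both install /bin/sh.
lists = {
    "busybox": {"/bin/sh", "/usr/share/udhcpc/default.script"},
    "dash":    {"/bin/sh"},
    "tzdata":  {"/usr/share/zoneinfo/UTC"},
}
print(find_conflicts(lists))  # → {'/bin/sh': ['busybox', 'dash']}
```

With per-package target directories and their file lists, such conflicts become mechanically detectable instead of silently last-writer-wins.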

GObject introspection

  • Bottom line: qemu is unavoidable without completely rewriting the scanner part of gobject-introspection
  • The g-ir-scanner tool has to be called under qemu for every package using gobject-introspection, so it is also not possible to have some preconfigured files lying around that could be used.
  • ThomasP is convinced that it should be possible to avoid calling qemu with only moderate hacking of gobject-introspection. But it is certainly work.
  • It sucks, but probably we should accept the qemu approach.
  • Still, review of the patches is needed; there are a few shortcomings. But with those fixed, we should merge it.

Security hardening

  • See slides from Matt Weber.
  • We definitely want to try to change the wrapper instead of fixing zillions of packages
  • This topic might however open the can of worms of packages that do not properly pass CFLAGS/LDFLAGS from the environment down to the actual gcc calls. Some fortify options need specific optimization levels, and optimization options are currently passed through the environment (and not the wrapper), so we will discover whether some packages are incorrectly not passing CFLAGS/LDFLAGS and/or overriding optimization flags
  • Discussion on the fact that we will want genrandconfig to generate random configurations with some hardening options enabled, but we of course need to get to a good level of success/error rate before this can be deployed on the official autobuilders
  • Perhaps it is a good idea to review how other distros (e.g. OpenWrt) are handling this
  • Include option to allow a package to disable flags when getting called by the wrapper
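The per-package opt-out idea could be sketched as follows. Buildroot's real toolchain wrapper is written in C; the BR_NO_HARDENING variable and the Python rendition below are purely hypothetical, chosen just to show the mechanism:

```python
import os

# Hardening flags the wrapper would inject by default (example set).
HARDENING_FLAGS = ["-D_FORTIFY_SOURCE=2", "-fstack-protector-strong"]

def wrap_compiler_args(argv, env):
    """Sketch of a compiler wrapper that injects hardening flags unless a
    (hypothetical) BR_NO_HARDENING variable asks to skip them, e.g. for a
    package whose build breaks with these options."""
    if env.get("BR_NO_HARDENING") == "1":
        return argv
    # Inject right after the compiler name, so that flags the package
    # passes later on the command line can still override them.
    return argv[:1] + HARDENING_FLAGS + argv[1:]

args = wrap_compiler_args(["gcc", "-O2", "-c", "main.c"], dict(os.environ))
```

A per-package variable of this kind gives an escape hatch without touching the package's own build system.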

CPE / CVE management

  • Matt's slides:
  • CPE database = list of software packages, mostly commercial but recently FLOSS packages can be added as well.
  • This database also contains package versions.
  • A fixed CVE will refer to a CPE entry where the issue is fixed.
  • This could be used to get a list of fixed and open CVEs for a particular package version. I.e., look up the CPE, look up CVEs open and closed in that version.
  • Anyone can submit a CPE entry for a new version of the software. However, the linking with CVEs still has to be done by the CVE maintainers.
  • CPE information could be used for various other automation as well, e.g. detecting new versions of a package.
  • CVEs can be traced back with fuzzy matching of package names, but it is a lot easier if metadata is added to Buildroot.
  • For the time being, CPE information and the link to CVEs is not at all complete. However, adding this type of support would help make the information more complete.
  • The relationship between the Mitre CVE and NIST NVD databases was also discussed.
  • Conclusions:
    • Add <PKG>_CPE and <PKG>_VENDOR to the infra
    • Add 'make cpe-info' to the infra
    • Add a script in utils/ that uses cpe-info output to check the CVE database for open vulnerabilities.
      • This script will miss a *lot* of CVEs if it is based just on CPE info.
      • Ideally, the script should also do fuzzy matching on the general CVE list to find possible vulnerabilities, filter out the ones that we are sure that are fixed based on CPE info, and report these as well.
      • That would also motivate people to get CVE/CPE info updated.
    • Some packages have multiple CPE entries, e.g. curl and libcurl are separate CPE entries. Therefore, <PKG>_CPE should be a space-separated list.
    • There will still be bikeshedding about the names on the mailing list.
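A rough sketch of what such a checker could do, assuming CPE 2.3 identifiers of the form cpe:2.3:a:&lt;vendor&gt;:&lt;product&gt;:&lt;version&gt;:... (the CVE ids and database contents below are made-up example data, not real vulnerabilities):

```python
def cpe_id(vendor, product, version):
    """Build a CPE 2.3 application identifier; the remaining fields
    (update, edition, language, ...) are wildcarded."""
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

def open_cves(pkg_cpes, cve_db):
    """Return the ids of CVEs whose affected-CPE list matches one of the
    package's CPE identifiers. cve_db is a hypothetical
    {cve_id: [affected_cpe, ...]} mapping."""
    return sorted(cve for cve, cpes in cve_db.items()
                  if any(c in cpes for c in pkg_cpes))

# curl and libcurl are separate CPE entries, hence a list per package.
pkg_cpes = [cpe_id("haxx", "curl", "7.58.0"),
            cpe_id("haxx", "libcurl", "7.58.0")]
db = {"CVE-0000-0001": [cpe_id("haxx", "libcurl", "7.58.0")],   # fake id
      "CVE-0000-0002": [cpe_id("gnu", "wget", "1.19")]}          # fake id
print(open_cves(pkg_cpes, db))  # → ['CVE-0000-0001']
```

Exact-match lookups like this are what the CPE info enables directly; the fuzzy matching mentioned above would sit on top of it.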


SELinux

  • Would be good to have a runtime test with a config with a number of SELinux-enabled packages in it. Then check if the policies work correctly. It was also discussed to come up with some Manual section to cover special cases for pkgs like SELinux to help users get started.
  • Modular policies allow every package to define its part of the policies. They would be installed as part of the post-install hook; the policy file also has to be compiled into binary form, but remains a separate policy file for each package. Sounds nice, but unless there is an active user out there, there is not much point in adding the feature. Also briefly discussed having each package own a snippet of a policy file and bringing those together into a monolithic policy; maintenance was a concern, and the conclusion followed that of the modular suggestion. Instead, new users should review the runtime test and documentation as a way to get started making their own policy.


LLVM and Clang

  • Slides by Valentin:
  • It's questionable whether there is really a need for clang in Buildroot. You anyway need gcc to build the kernel. It would be possible to build a complete userspace (though not the kernel) with clang, although some packages would certainly break. But the question is how useful this is - gcc has mostly caught up with clang w.r.t. performance, diagnostics, and static analysis.
  • LLVM itself is very useful for other things (not clang), e.g. mesa3d's llvmpipe or OpenJDK's JIT compiler.
  • LLVM doesn't have a stable API between major releases, so we'll need llvm5, llvm6, llvm7.
  • For the time being, what is mainly useful is LLVM as used by other packages.
  • Could also be useful to have a host-clang package that is user selectable and that is not used by any package. An external package can use that to build that specific package only with clang instead of gcc, e.g. because it has better diagnostics or optimisations.
  • The long-term goal is to have a complete clang-based toolchain. The usefulness of this is questionable however.

Merging of staging/ and target/

  • Nice to have. The only externally visible benefit is that debugging becomes easier, because you have unstripped binaries.
  • However, a large number of packages currently have non-identical install-target and install-staging commands, so getting there will be a serious effort.

Caching host packages

  • ThomasDS needs to build 2 configurations for the same target. These configurations contain a lot of host packages that are the same between them. So ideally it should be possible to reuse the host packages from the SDK.
  • This is currently not at all possible in Buildroot. ThomasP suggests switching to Yocto. For some specific cases, hacks are possible, e.g. a tarball containing the host dir and the stamp files for the selected host packages.


Photo of hackers taking part in the meeting