Buildroot:GSoC2019Ideas

Below is the list of topics proposed by the Buildroot project for the Google Summer of Code 2019 edition.

= Mentors =

The following people will be your mentors. Arnout Vandecappelle will be the primary mentor and main point of contact. However, all mentors will be available to help you.


 * Arnout Vandecappelle (arnout@mind.be, arnout on Freenode)
 * Peter Korsgaard (peter@korsgaard.com, Jacmet on Freenode)
 * Romain Naour (romain.naour@gmail.com, Kubu on Freenode)
 * Thomas Petazzoni (thomas.petazzoni@bootlin.com, kos_tom on Freenode)
 * Yann E. MORIN (yann.morin.1998@free.fr, y_morin on Freenode)

(Note that these mentors are all in UTC+0100 or UTC+0200, so take the timezone difference into account in your interactions with them. Presence on IRC fluctuates, but usually someone is available from ~07:00Z to ~22:00Z.)

= Suggestions for candidates =

If you want to apply, we recommend that you first demonstrate some basic knowledge of Buildroot and of open-source contribution:


 * Subscribe to the mailing list and come to the IRC channel
 * Post some patches to the mailing list. Here are some possible contribution ideas:
   * Look at some items in our TODO list at http://elinux.org/Buildroot#Todo_list, implement some of them, and send patches
   * Find some software component that is useful on embedded systems, create a Buildroot package for it, and send patches
   * Find some embedded hardware platform, create a Buildroot defconfig for it, and send patches
   * Look at some build failures in our autobuilders at http://autobuild.buildroot.org and send patches to fix them. Note that some build failures may be difficult to solve.
   * Look at the bugs in our bug tracker at http://bugs.busybox.net and send patches to fix them. Some bugs may be difficult to investigate.

= Topics =

The following topics are suggested by the Buildroot developers; they are not sorted in any specific order, not even of preference. Feel free to propose your own ideas.

== Reproducible builds ==
Short: Ensure that two runs of Buildroot with the same configuration yield the same result.

=== Abstract ===

A very important feature of a build system is to provide reproducible builds: given the same inputs, multiple builds of the same source generate identical output, even when they are executed on different machines, at different times, in different locations, or by different users. This makes it possible to independently verify that a given source tree is indeed what was used to generate a binary, and it guarantees that a developer build and an automated production build yield the same result.

=== Description ===

We would like to assess how reproducible the builds made by Buildroot are, and to identify and fix all the sources of non-reproducibility. This will involve creating an infrastructure to run builds twice and compare the results, providing tools to identify the reasons for any differences, and finally, where possible, fixing the causes of those differences.

Reproducible builds are a hot topic; there is an entire website dedicated to them.
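As a starting point, the duplicate-build comparison could be as simple as hashing every file in the two output trees and listing the paths that differ. A minimal sketch of that idea (the function names are illustrative, not an existing Buildroot tool):

```python
import hashlib
import os

def tree_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def compare_builds(build_a, build_b):
    """Return the relative paths that differ, or exist on only one side."""
    a, b = tree_hashes(build_a), tree_hashes(build_b)
    return {p for p in a.keys() | b.keys() if a.get(p) != b.get(p)}
```

Understanding ''why'' a given file differs (timestamps, build paths, locale...) is the hard part; that is where tools like diffoscope come in.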

=== Skills ===


 * Intermediate Embedded Linux knowledge (cross-compilation, ELF format...)
 * Knowledge of the Python scripting language (the existing testing infrastructures are written in Python, so it makes sense to continue in Python)
 * Knowledge of tools like Jenkins would be a plus.

== Testing infrastructure ==
Short: Improve upon the existing testing infrastructures, build-time and runtime tests.

=== Abstract ===

The Buildroot project uses automated testing to help validate the stability of the project. There are currently three types of automated testing:


 * random configurations are used to help validate that the millions of possible Buildroot configurations build correctly. This automated build testing has been running for about three years (results visible at http://autobuild.buildroot.org),


 * regression testing of a set of known configurations is used to track the sample configurations for a set of commonly-available embedded boards (Beagle Bone, Raspberry Pi, Wandboard...). This regression testing has been running for a few months now as a test-bed and now reports build results every other day.


 * runtime tests, which build a configuration, launch it in Qemu, and check that it does something sensible. These tests are very simple: for a Python package, for example, the test usually just imports it.
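The runtime-test logic boils down to booting the image, capturing the console, and looking for an expected marker. A minimal, hypothetical sketch of that check (the marker strings and the QEMU command are placeholders, not Buildroot's actual test API):

```python
import subprocess

def check_console(log_text, success_marker,
                  failure_markers=("Kernel panic", "Traceback")):
    # Fail fast on well-known error patterns; succeed only if the
    # expected marker actually appeared on the console.
    if any(bad in log_text for bad in failure_markers):
        return False
    return success_marker in log_text

def run_qemu_test(qemu_cmd, success_marker, timeout=300):
    # qemu_cmd is a placeholder for a real invocation such as
    # ["qemu-system-arm", "-M", "versatilepb", "-nographic", ...].
    try:
        out = subprocess.run(qemu_cmd, capture_output=True, text=True,
                             timeout=timeout).stdout
    except subprocess.TimeoutExpired as e:
        out = e.stdout or ""
    return check_console(out, success_marker)
```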

These testing infrastructures have helped improve the quality of Buildroot. However, the project would like to bring a number of further improvements to them.

=== Description ===

We would like to improve all three forms of testing, with a focus on the following topics.


 * Improve the autobuild infrastructure, e.g. improve the randomisation coverage, improve the presentation of the results (in particular searching the database), make it possible to test different branches and different repositories.
 * Improve the presentation of the results of the tests that run on GitLab CI, e.g. make sure that a mail is sent to the mailing list and to the people responsible when a test fails.
 * Add more runtime tests of specific packages, e.g. external toolchains.
 * Add more regression tests of core features, e.g. legal info, version specification of the Linux kernel.
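For the mail-on-failure item, the report itself is straightforward to assemble with the standard library; actually sending it would use whatever SMTP relay the CI environment provides. A sketch (the subject prefix, job names, and addresses are made up for illustration):

```python
from email.message import EmailMessage

def failure_report(job_name, branch, log_url, recipients):
    # Assemble (but do not send) a notification mail for a failed CI
    # job; handing it to smtplib is left to the CI environment.
    msg = EmailMessage()
    msg["Subject"] = "[buildroot-ci] {} failed on {}".format(job_name, branch)
    msg["To"] = ", ".join(recipients)
    msg.set_content(
        "The job {} failed on branch {}.\n"
        "Full log: {}\n".format(job_name, branch, log_url)
    )
    return msg
```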

=== Skills ===


 * Basic Embedded Linux knowledge (cross-compilation, kernel configuration/build, etc.)
 * Knowledge of the Python scripting language, used for the development of the testing infrastructure.
 * Some Web development skills and/or knowledge of tools like Jenkins would be a plus.

== Follow upstream updates and CVEs of packages ==
Short: Provide infrastructure to easily follow version changes in upstream packages and track security issues.

=== Abstract ===

Buildroot can build a large collection of packages, currently over 2200. It is very important that the most up-to-date versions be used, to get new features, security and bug fixes.

However, keeping track of all the new version releases of all those packages is a tedious, if not impossible, task. As a result, we may miss very critical updates or only get them late, and some packages seldom get updated in Buildroot.

=== Description ===

Buildroot already has rudimentary tracking of upstream versions, through https://release-monitoring.org. We would like to extend this with information from the [https://nvd.nist.gov/products/cpe/search CPE database] and reported CVEs. Also, we would like to post reports on the mailing list, e.g. every week.

There have already been a few attempts at providing such a script, but they were far from complete and readily usable. They can however serve as a starting point to be improved upon.
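Whatever the data source (release-monitoring.org, the CPE database), the core of such a report is deciding whether a package lags behind upstream. A naive sketch of that comparison (real version schemes are messier than dotted integers, which is part of what makes this project interesting):

```python
import re

def parse_version(v):
    # Reduce a version string to a tuple of its numeric components:
    # "1.2.11" becomes (1, 2, 11). A deliberate simplification; real
    # versions (rc tags, date-based schemes...) need smarter handling.
    return tuple(int(x) for x in re.findall(r"\d+", v))

def outdated_packages(buildroot_versions, upstream_versions):
    # Given {package: version} maps for Buildroot and upstream, return
    # the sorted list of packages lagging behind upstream.
    return sorted(
        pkg for pkg, cur in buildroot_versions.items()
        if pkg in upstream_versions
        and parse_version(cur) < parse_version(upstream_versions[pkg])
    )
```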

=== Skills ===


 * Basic Embedded Linux knowledge (cross-compilation...)
 * Knowledge of the Python scripting language (to match the existing helper infrastructures)

== Support for distributed builds with IceCC ==
Short: Integrate distributed building using IceCC into Buildroot.

=== Abstract ===

Buildroot cross-compiles a significant amount of source code, which takes a lot of time, especially for large configurations with complex packages that have many dependencies. In order to reduce the build time, the Buildroot community would like to see whether distributing the build across several machines using IceCC helps, and if it does, to integrate support for IceCC into Buildroot to make it easily usable.

=== Description ===

The goals of this project are:
 * Set up IceCC to understand how it works and how it is configured
 * Prototype an integration in Buildroot to validate whether it helps reduce the build time of large configurations
 * Turn the prototype into a production-ready solution integrated in Buildroot
 * Submit the appropriate patches to the Buildroot mailing list and follow up until they are accepted and merged
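A first prototype could be as simple as launching the build with icecc's compiler wrappers first in PATH and a higher parallelism level; a production-ready solution would instead hook into Buildroot's toolchain wrapper. A sketch of such an environment setup (the wrapper directory below is a typical Debian/Ubuntu location and is an assumption):

```python
import os

# Typical location of icecc's gcc/g++ symlinks on Debian/Ubuntu;
# an assumption, adjust for your distribution.
ICECC_BIN_DIR = "/usr/lib/icecc/bin"

def icecc_environment(base_env, parallel_jobs=16):
    # Prepend the wrapper directory so that plain `gcc`/`g++` resolve
    # to icecc, and raise -j so remote nodes actually get used. The
    # result could be passed to subprocess when invoking `make` on a
    # Buildroot tree.
    env = dict(base_env)
    env["PATH"] = ICECC_BIN_DIR + os.pathsep + env.get("PATH", "")
    env["MAKEFLAGS"] = "-j{}".format(parallel_jobs)
    return env
```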

=== Skills ===
 * Basic Embedded Linux knowledge (cross-compilation...)
 * Knowledge of IceCC would be a plus