Revision as of 04:15, 12 April 2021 by Steven100 (talk | contribs) (Timeline)


About Student: Steven Schuerstedt
Mentors: Hunyue Yau
Code: current sample code:


This project is currently just a proposal.


I have completed the requirements on the ideas page. ARM cross compiling pull request:

About you

IRC: steven100
School: Karlsruhe Institute of Technology
Country: Germany
Primary language: German, English
Typical work hours: 5AM - 3PM US Eastern
Previous GSoC participation: First-time participant. I love the idea of open source and especially open hardware.

About your project

Project name: GPGPU with OpenGL ES


The BeagleBoard's ARM Cortex-A8 processor has an integrated graphics accelerator from PowerVR (SGX530, 544 or 550). As the name implies, this chip is mainly built and used for graphics rendering, but over time many other applications have emerged that profit from the parallel nature of graphics chips, such as deep learning, bitcoin mining or analyzing DNA sequences. This is called GPGPU (general-purpose computation on graphics processing units) and is usually done with APIs like OpenCL or CUDA. The PowerVR SGX only supports the OpenGL ES 2.0 specification (there also exists a proprietary OpenCL driver from IT). This API is heavily targeted towards graphics rendering, but it can also be exploited for general-purpose computations. The goal of this project is to show how to use the mostly idle graphics accelerator for general-purpose computations via the OpenGL ES API. To that end I will create samples and a tutorial showing how to do GPGPU, and I will measure the timing difference between doing computations on the CPU versus the GPU, to show which computations benefit from the GPU. Due to the limited nature of OpenGL ES 2.0, its best fit for GPGPU is image processing. The samples and tutorial target the GPUs of all BeagleBoards (SGX530, 544, 550, ...), so I will research the subtle differences between them, e.g. which texture targets, texture formats etc. each of them supports.


The first part of the implementation is to get the GPU drivers up and running, create an EGL rendering context for offscreen rendering, and use OpenGL ES 2.0. Hunyue Yau is willing to help me with that. In the next part I will use OpenGL ES 2.0 to access the GPU of the BeagleBoard and run the sample programs.

OpenGL ES 2.0 is a subset of modern OpenGL targeted towards embedded devices. It is a more lightweight API and, being a fairly old specification, does not support all features of modern OpenGL.

The most important difference to modern OpenGL is that no compute shaders are supported. This means the computation cannot be divided into work-groups, so there is also no possibility for shared memory. Work-groups are a way to separate a computation into smaller chunks, and every work-group has access to very fast shared memory. This shared memory can accelerate computations even further and is a standard technique in OpenCL / CUDA. In OpenGL ES 2.0, on the other hand, no work distribution is possible, so every work-item is independent of the others. However, memory barriers can be simulated with multiple rendering passes, to synchronize the computation when needed.
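The multi-pass idea can be illustrated on the CPU: each "pass" reads only from the previous pass's output, which is exactly the guarantee that a completed render-to-texture pass gives on the GPU. A minimal sketch (function names are illustrative, not from the sample code) that sum-reduces a vector by repeated passes, the CPU analogue of ping-ponging between two textures:

```cpp
#include <cstddef>
#include <vector>

// One "pass": halve the data by pairwise addition, reading only from the
// previous pass's output buffer (like sampling the previous render target).
std::vector<float> reduce_pass(const std::vector<float>& in) {
    std::size_t half = (in.size() + 1) / 2;
    std::vector<float> out(half, 0.0f);
    for (std::size_t i = 0; i < half; ++i)
        out[i] = in[i] + (i + half < in.size() ? in[i + half] : 0.0f);
    return out;
}

// Repeat passes until one value remains; each loop iteration corresponds
// to one rendering pass, and the buffer swap acts as the memory barrier.
float reduce(std::vector<float> data) {
    while (data.size() > 1)
        data = reduce_pass(data);
    return data.empty() ? 0.0f : data[0];
}
```

Within one pass every output element is independent (like fragments in a fragment shader); synchronization only happens between passes.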

There is also limited precision in the texture data. The OpenGL ES 2.0 specification only requires RGBA4 to be supported (4 bits per channel). However, I could find a datasheet with information about available texture formats for the SGX530 (?), which says the SGX530 supports RGBA8 (8 bits per channel). I'm still looking for datasheets for the SGX550 and SGX544; since this kind of information is hard to find, it is probably best to simply test what runs on which device. The implementation could then differ depending on the specific GPU of the BeagleBoard. In the project I would like to clarify and test which formats run on which device.

In general only the GL_RGBA or GL_RGB format is supported as a color-renderable format (see OpenGL ES 2.0 specification, section 4.4.5). This means image processing is a favorable way to use the SGX for GPGPU, and convolution is a good example of that. But one could also store 32-bit floating-point values in textures, by splitting their bits across the color channels. In the project I will give sample code showing how to do that and measure whether it is efficient.
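One straightforward encoding is to spread the 32 bits of an IEEE 754 float over the four 8-bit channels of an RGBA8 texel. The sketch below shows the encoding on the CPU; it is an illustration of the idea, not the exact scheme the sample code will use. Note that on the GPU side the same split would have to be done with floating-point arithmetic (floor/mod), since GLSL ES 1.00 has no bitwise operators.

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Split a 32-bit float into four bytes so it can be stored in one RGBA8
// texel, and reassemble it after readback with glReadPixels.
std::array<std::uint8_t, 4> float_to_rgba8(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // reinterpret the float's bits
    std::array<std::uint8_t, 4> rgba{};
    for (int i = 0; i < 4; ++i)
        rgba[i] = static_cast<std::uint8_t>((bits >> (8 * i)) & 0xFFu);
    return rgba;
}

float rgba8_to_float(const std::array<std::uint8_t, 4>& rgba) {
    std::uint32_t bits = 0;
    for (int i = 0; i < 4; ++i)
        bits |= static_cast<std::uint32_t>(rgba[i]) << (8 * i);
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```

The round trip is exact because no bits are lost, at the cost of using a whole texel per value and extra shader arithmetic.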

I provide a first example of how to add two vectors using OpenGL; I will use this as a starting point for this project.

The samples will be convolution and matrix multiplication. Convolution: input image => perform convolution (e.g. a Sobel filter) => output the convolved image. Convolution can be used for pre-processing, edge detection, feature extraction etc.
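A CPU reference implementation of the convolution is useful both for validating the fragment shader's output and as the CPU side of the timing comparison. A minimal sketch for a single-channel image, using clamp-to-edge addressing to match GL_CLAMP_TO_EDGE (the function name and layout are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Reference 3x3 convolution on a single-channel image stored row-major.
// Out-of-bounds samples are clamped to the edge, matching the behavior
// of GL_CLAMP_TO_EDGE texture sampling on the GPU.
std::vector<float> convolve3x3(const std::vector<float>& img,
                               int w, int h, const float k[9]) {
    std::vector<float> out(img.size(), 0.0f);
    auto clamp = [](int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = clamp(x + dx, 0, w - 1);
                    int sy = clamp(y + dy, 0, h - 1);
                    acc += img[sy * w + sx] * k[(dy + 1) * 3 + (dx + 1)];
                }
            out[y * w + x] = acc;
        }
    return out;
}

// Horizontal Sobel kernel: responds to vertical edges, zero on flat areas.
const float sobel_x[9] = {-1, 0, 1, -2, 0, 2, -1, 0, 1};
```

On the GPU the same inner loop becomes a fragment shader that samples the 3x3 neighborhood of the input texture.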

Data transfer between CPU and GPU will be done using textures. The fragment shader will contain the actual computation on the data, and the result will be written to an output texture attached to a framebuffer.

  • ARM NEON intrinsics
  • BBAI (SGX544)
  • upstream? what happens after GSoC

Code Example:

I provide a first example of how to use OpenGL for general-purpose computations. This example involves adding two vectors of size N.

Architecture of sample program:

  • GPGPU_with_OpenGL.cpp

main code to setup data on CPU, copy to GPU and run rendering

  • shader.cpp / shader.h

helper class to handle shaders

  • gpgpu.vert

vertex transformation with orthogonal projection matrix

  • gpgpu.frag

actual computation on interpolated data from the vertex shader

The program creates two vectors of size N and fills them with random floating-point values. The vectors are then transferred to the GPU as OpenGL textures. This is the most important step, since it is crucial to find a good mapping between the data / the problem one tries to solve and the layout and access of the data on the GPU. In this simple example the mapping is straightforward. I use the GL_TEXTURE_2D texture target and GL_RGBA as the internal texture format. This gives the following mapping:
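One plausible layout (a sketch of the idea; the sample program's exact mapping may differ): with GL_RGBA, each texel packs four consecutive vector elements, so element i lands in channel i % 4 of texel i / 4, laid out row by row across a texture of a chosen width:

```cpp
#include <cstddef>

// Position of one vector element inside a GL_RGBA texture of width
// `tex_w` texels: which texel (x, y) and which color channel holds it.
// Illustrative layout, not necessarily the sample program's exact one.
struct TexelPos {
    int x;        // texel column
    int y;        // texel row
    int channel;  // 0 = R, 1 = G, 2 = B, 3 = A
};

TexelPos element_to_texel(std::size_t i, int tex_w) {
    std::size_t texel = i / 4;  // four elements per RGBA texel
    return {static_cast<int>(texel % tex_w),
            static_cast<int>(texel / tex_w),
            static_cast<int>(i % 4)};
}
```

The fragment shader inverts this mapping: from its fragment position it knows which four elements of each input vector to fetch and add.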

After GSoC:

I would like to extend the approach and provide a library that lets BeagleBoard users run a predefined set of computations on the GPU.


Reference pages for OpenGL ES 2.0

Common profile specification, very detailed information

Datasheet for the PowerVR SGX 530 (?), containing detailed information about which texture targets / texture formats are supported etc.

A good overview of how to do GPGPU with OpenGL, but it needs to be adapted for OpenGL ES 2.0

Master's thesis about GPGPU on mobile devices; also has a chapter about OpenGL ES 2.0 and some sample code


Provide a development timeline with a milestone each of the 11 weeks and any pre-work. (A realistic timeline is critical to our selection process.)

Mar 29 Applications open, students register with GSoC, work on proposal with mentors
Apr 13 Proposal complete, submitted to
May 17 Proposal accepted or rejected
Jun 07 Pre-work: set up OpenGL ES drivers for the BeagleBoard; coding officially begins!
Jun 17 Validate OpenGL calls, add two vectors together, introductory YouTube video
Jun 24 Set up elinux page for the GPGPU tutorial, validate different texture formats
Jun 30 Create matrix multiplication sample program
Jul 12 18:00 UTC Create convolution sample program (separable and non-separable convolution), mentors and students can begin submitting Phase 1 evaluations
Jul 16 18:00 UTC Phase 1 evaluation deadline
Jul 23 Measure timings between CPU / GPU
Jul 30 Finish tutorial on elinux on how to do GPGPU (is this a good place?)
Aug 06 Clean up code, add one more sample if time allows (vector reduction, compute histogram, ...)
Aug 10 Finish everything, completion YouTube video
Aug 16 - 26 18:00 UTC Final week: students submit their final work product and their final mentor evaluation
Aug 23 - 30 18:00 UTC Mentors submit final student evaluations

Experience and approach

I have decent experience in programming, computer graphics and mathematics. I developed a 2D platformer game with C++ and OpenGL (StevieJump) and a Monte Carlo path tracer in C++ (StevieTrace), and I'm very interested in computer architecture and embedded systems. I followed Ben Eater's excellent YouTube series to build an 8-bit breadboard computer (8-Bit). I currently work as a C++ / OpenGL software developer at my university. I have experience in OpenCL and took several GPGPU courses at my university.


I have gotten stuck many times in my life, especially with programming-related tasks. Programming and computer science can sometimes be a very unforgiving and frustrating experience. There is no easy way around this, so I will just keep trying and do my best; there is no shame in failure, only in giving up. So if I don't give up, I will eventually succeed. If I really get stuck I take a break and do some outdoor exercise, which always helps.


Enable more people to use the GPU on a BeagleBoard. Accelerate computations. Free up the main processor for other tasks.


Please complete the requirements listed on the ideas page. Provide link to pull request.


Is there anything else we should have asked you?