Revision as of 14:52, 13 March 2020 by Pradan (talk | contribs) (Status)

Proposal : YOLO models on the X15/AI


The project aims at deploying YOLO models (preferably v2, v3 and tiny) on the BeagleBoard-X15, BeagleBone Black or BeagleBone AI with improved performance. We leverage the state-of-the-art inference library FeatherCNN [1] to reduce the inference time per frame to less than 1 s. The library supports only Caffe models, which are converted to an intermediate format (feathermodel) before inference is performed.

The darknet-based [2] You Only Look Once (YOLO) family of models is popular on embedded devices, performing single-shot detection (SSD) [3] at a reasonable frame rate.

Student: Prashant Dandriyal
Mentors: Hunyue Yau
GSoC: entry


This project is currently just a proposal.


I have completed the requirements listed on the ideas page and have created a pull request here.

About you

IRC: pradan
Github: PrashantDandriyal
School: Graphic Era University, Dehradun
Country: India
Primary languages: English, Hindi
Typical work hours : 12PM-6PM IST
Previous GSoC participation: None. My aim in participating remains bringing inference to edge devices: bringing the computation to the data rather than the other way round.

About your project

Project name: YOLO models on the X15/AI


Through this project, we propose to run widely used SSD-style models (YOLO v2, v2-tiny, v3 and YOLO-tiny) on the BeagleBone boards: BeagleBone AI, BeagleBone Black or the BeagleBoard-X15. Previous works have attained per-frame times of roughly 15 s to 30 s using algorithmic optimizations and brute-force implementations respectively. Our aim is to reduce this time to the order of milliseconds. For this, we use the FeatherCNN library to leverage its embedding of TensorGEMM (generalized matrix multiplication), its reduction of memory movement and its improvement of TensorGEMM efficiency. The methodology can be summarised as: convert the YOLO model to Caffe format, then convert that to an intermediate format (feathermodel). This format holds an optimised form of the model, which is then used to run inference on frames. The approach can be compared with the model-porting approach of TIDL (the Texas Instruments API for Deep Learning [4]). A combination of the two can be tried after a successful implementation with satisfactory results.

The project will be a step towards contributing to the recent and booming niche of Embedded AI/Edge AI. The highly active BeagleBone community will also benefit directly, as the project targets commonly used boards. As a by-product, the pros and cons of newer hardware and software can be benchmarked against this project's results, since it employs a state-of-the-art library.

The project mainly demands software skills: C, C++, and an understanding of neural networks and convolution. Also, as the feathermodel format follows the prototxt schema of the Caffe framework, a good understanding of Caffe is required to follow the data flow through the different layers of the model.
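For reference, the prototxt schema mentioned above describes a network layer by layer. A minimal convolution layer (illustrative only, not taken from an actual YOLO deploy file) looks like:

```protobuf
# Caffe-style prototxt fragment: one convolution layer
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"    # input blob
  top: "conv1"      # output blob
  convolution_param {
    num_output: 32  # number of filters
    kernel_size: 3
    stride: 1
    pad: 1
  }
}
```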


Provide a development timeline with a milestone each of the 11 weeks and any pre-work.

April 27 Pre-work - Community bonding period; discussion of the project and the resources available.
May 25 Milestone #1
  • Introductory YouTube video
  • Discuss and decide the scope of the project with the mentors
  • List and collect all needed resources (documentation, hardware, etc.)
June 1 Milestone #2
  • Set up the development environment
June 8 Milestone #3
  • Convert and maintain the intermediate-format models to be used
  • Test the models by running inference locally (possibly emulating using executables)
June 19 Milestone #4
  • Demonstrate improved performance by running inferences on the local system and comparing with previous works
  • Document the results
  • Submit report for Phase 1 evaluation
June 22 Milestone #5
  • Discuss modifications to the project plan with the mentors
June 29 Milestone #6
  • Finalise the model to be used after comparing performance results
  • Run the first test on a BeagleBone board
  • Document the issues/modifications made
July 6 Milestone #7
  • Optimise the image-feeding method
  • Optimise the other parts of the pipeline
July 13 Milestone #8
  • Test on image and video data
  • Gather performance results and compare with previous works
  • Plan the scope of the second evaluation report
July 17-20 Milestone #9
  • Submit second evaluation report
  • Look for opportunities to use on-board hardware accelerators to further improve performance
  • Discuss possible improvements with mentors
July 27 Milestone #10
  • Investigate a potential combination of hardware accelerators and TIDL and document it
August 3 Milestone #11
  • Completion YouTube video
  • Detailed project tutorial
August 10-17 Final week
  • Have the final report reviewed by the mentors and incorporate the changes advised
  • Submit final report

Experience and approach

I have been familiar with 8-bit microcontrollers, and with 32-bit ones by Texas Instruments, since the sophomore year of my Electronics and Communication Engineering degree. I was a quarter-finalist in the India Innovation Challenge and Design Contest (IICDC-2018), during which our team was provided with Texas Instruments resources such as the CC26X2R1 and the TIVA Launchpad (EK-TM4C123GXL) EVM.

I have been studying machine learning for about a year now, mostly using TensorFlow-Keras as the primary API, and I have participated in some ML competitions for better exposure to practical problems. In my current semester I have Neural Networks as a credit subject, although I am already working on the topic in relation to on-device learning for low-computation devices. I have implemented some simple neural networks in C, which can be found in my GitHub account. I have also studied digital signal processing as a credit subject, which I expect to strengthen my understanding of the convolutional neural networks used in this project.

Regarding languages, I have good experience with C and C++, the primary languages needed for this project; I have used C++ in several coding competitions held by Google. I have also been using Docker containers to speed up the workflow in my ML projects, and I am currently using one to learn and use the Caffe framework in relation to this project.


What will you do if you get stuck on your project and your mentor isn’t around? I will use the following strategy:

  • Look for documentation of the related framework and its GitHub issues section if related to the models. The YOLO models have been around for a while now, so the support is quite good by now.
  • Refer to the communities if the problem is related to BeagleBone boards. I have observed the open-source community (including the Google group) to be quite active.
  • Besides the above methods, I will put my web-searching skills to use; in most cases, owing to the scale of the problem, this will be my first resort.



Please complete the requirements listed on the ideas page. Provide link to pull request.


Is there anything else we should have asked you?