The project aims to deploy YOLO models (preferably v2, v3, and the tiny variants) on the BeagleBoard-X15, BeagleBone Black, or BeagleBone AI with improved performance. We leverage the state-of-the-art inference library FeatherCNN to reduce per-frame inference time to under 1 s. The library supports only Caffe models, which are converted to an intermediate representation (the feathermodel format) before inference is performed.
Student: Prashant Dandriyal
Mentors: Hunyue Yau
GSoC: https://github.com/BeaglePilotGSoC entry
This project is currently just a proposal.
School: Graphic Era University, Dehradun
Primary language: English, Hindi
Typical work hours : 12PM-6PM IST
Previous GSoC participation: None. My aim in participating is to bring inference to edge devices: bringing the computation to the data rather than the other way round.
About your project
Project name: YOLO models on the X15/AI
Through this project, we propose to run widely used single-shot detection models (YOLO v2, v2-tiny, v3, and YOLO tiny) on the BeagleBone boards: BeagleBone AI, BeagleBone Black, or the BeagleBoard-X15. Previous works have attained per-frame times of roughly 15-30 s using algorithm optimizations and brute-force implementations respectively. Our aim is to reduce this time to the order of milliseconds. For this, we use the FeatherCNN library to leverage the embedding of TensorGEMM (generalized matrix multiplication), reduction of memory movement, and improvement of TensorGEMM efficiency. The methodology can be summarised as: convert the YOLO model to Caffe format, which is then converted to the intermediate feathermodel format. This format holds an optimised form of the model, which is then used to run inference on incoming frames. The approach can be compared with the model-porting approach of TIDL (the Texas Instruments API for Deep Learning). A merge of the two can be attempted after successful implementation and satisfactory results.
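To make the GEMM idea concrete: convolution layers (the bulk of YOLO's compute) can be lowered to a single matrix multiplication via the classic im2col transform, which is the general principle behind GEMM-based engines such as FeatherCNN's TensorGEMM. The sketch below is illustrative NumPy only, not FeatherCNN's actual kernels:

```python
import numpy as np

def conv2d_direct(x, w):
    # x: (C, H, W) input, w: (K, C, R, S) filters; stride 1, no padding
    K, C, R, S = w.shape
    _, H, W = x.shape
    oh, ow = H - R + 1, W - S + 1
    out = np.zeros((K, oh, ow))
    for k in range(K):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[:, i:i + R, j:j + S] * w[k])
    return out

def conv2d_gemm(x, w):
    # im2col: unfold each input patch into a column, then do ONE matrix multiply
    K, C, R, S = w.shape
    _, H, W = x.shape
    oh, ow = H - R + 1, W - S + 1
    cols = np.empty((C * R * S, oh * ow))
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = x[:, i:i + R, j:j + S].ravel()
            idx += 1
    out = w.reshape(K, -1) @ cols  # the GEMM: (K, CRS) x (CRS, oh*ow)
    return out.reshape(K, oh, ow)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
w = rng.standard_normal((4, 3, 3, 3))
assert np.allclose(conv2d_direct(x, w), conv2d_gemm(x, w))
```

The payoff is that the single large GEMM can use cache-blocked, vectorized kernels (NEON on the BeagleBone's ARM cores), which is where libraries like FeatherCNN get their speedups.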
The project will be a step towards the recent and fast-growing niche field of Embedded AI/Edge AI. The highly active BeagleBoard.org community will also benefit directly, since the project targets its commonly used boards. As a by-product, the pros and cons of newer hardware and software can be measured against this project's results, as it employs a state-of-the-art library.
The project mainly demands software skills, including C, C++, and an understanding of neural networks and convolution. Also, as the feathermodel format follows the prototxt schema of the Caffe framework, a good understanding of Caffe is required to handle and trace the data passing between the layers of the model.
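For reference, a single convolution layer in the Caffe prototxt schema looks like the fragment below (an illustrative example layer, not taken from any specific YOLO conversion):

```protobuf
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 16
    kernel_size: 3
    stride: 1
    pad: 1
  }
}
```

Reading converted YOLO models layer by layer in this form is how we will verify that shapes and parameters survived the Darknet-to-Caffe-to-feathermodel conversion.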
Provide a development timeline with a milestone each of the 11 weeks and any pre-work.
|April 27||Pre-Work||Community Bonding Period and discussion on the project and resources available.|
|May 25||Milestone #1||- Introductory YouTube video
- Discuss and decide the scope of the project with the mentors
- List and try collecting all the resources (documentation, hardware, etc.)|
|June 1||Milestone #2||- Setup environment for development|
|June 8||Milestone #3||- Convert and maintain the intermediate-format models to be used
- Test the model by running inference locally (possibly emulating using executables)|
|June 19||Milestone #4||- Demonstrate improved performance by running local-system inferences and comparing with previous works
- Document it
- Submit report for Phase 1 evaluation|
|June 22||Milestone #5||- Discuss modifications for the project plan with the mentors|
|June 29||Milestone #6||- Finalise the model to be used after comparing performance results
- Run first test on a BeagleBone board
- Document the issues/modifications made|
|July 6||Milestone #7||- Optimise the image-feeding method
- Optimise the other parts of the pipeline|
|July 13||Milestone #8||- Test on image and video data
- Gather performance results and compare with previous works
- Plan scope of second evaluation report|
|July 17-20||Milestone #9||- Submit second evaluation report
- Look for opportunities to use on-board hardware accelerators to further improve performance
- Discuss possible improvements with mentors|
|July 27||Milestone #10||- Look for potential combination of hardware accelerators and TIDL and document it|
|August 3||Milestone #11||- Completion YouTube video
- Detailed project tutorial|
|August 10 - 17||Final week||- Get the final report reviewed by mentors and incorporate the advised changes
- Submit final report|
Experience and approach
I have been familiar with 8-bit microcontrollers and 32-bit ones from Texas Instruments since the sophomore year of my Electronics and Communications Engineering degree. I was a quarter-finalist in the India Innovation Challenge and Design Contest (IICDC-2018), during which our team was provided with Texas Instruments resources such as the CC26X2R1 and TIVA LaunchPad (EK-TM4C123GXL) EVMs. I have been studying machine learning for about a year now, mostly using TensorFlow-Keras as the primary API. I have also participated in several ML competitions for better exposure to real problems. In my current semester I am taking Neural Networks as a credit subject, although I am already working on the topic in relation to on-device learning for low-compute devices.
What will you do if you get stuck on your project and your mentor isn’t around?
If successfully completed, what will its impact be on the BeagleBoard.org community? Include quotes from BeagleBoard.org
Please complete the requirements listed on the ideas page. Provide link to pull request.
Is there anything else we should have asked you?