
=TensorFlow Lite Compatibility with BeagleBone AI=

 * About Student: Leah Pillsbury
 * Mentors: Contacted Robert C Nelson
 * Code: TensorFlow Lite Support
 * Wiki: Proposal/TensorFlow Lite Compatibility with BeagleBone AI
 * GSoC: N/A

=Status=
This project is currently just a proposal.

=About you=
 * IRC: lpillsbury
 * Github: lpillsbury
 * School: Pasadena City College (Boston University prior to PCC)
 * Country: United States
 * Primary language: English
 * Other languages: Spanish, Swahili, Hebrew, some Hindi/Urdu, Telugu, Bengali
 * Typical work hours: 9AM-5:30PM US Pacific, though I may be on Eastern time part of the summer
 * Previous GSoC participation: First time participating in GSoC.

I am interested in learning more about embedded software and participating in the community. I use open source tools regularly, and it is exciting to make something useful that other people would use too.

=About your project=
Project name: TensorFlow Lite Compatibility with BeagleBone AI

==Description==
Currently, TensorFlow Lite is supported on Raspberry Pi, Arduino, ESP32, multiple Adafruit boards, Android, and iOS, among others. No BeagleBoard is on this list, and it should be. The specs of the BeagleBone AI are impressive, and while it does support other AI platforms such as Caffe, it seems limiting for an AI board aimed at the general public not to have access to TensorFlow Lite. I would like to change that by choosing a current release of TensorFlow Lite and writing the code necessary to run it on the Arm M4 processors that the BeagleBone AI board already has.

This project will be a combination of C coding, working with the Linux kernel, and building example cases with TensorFlow Lite in Python. Given the divergence between OpenCL and TensorFlow, there is no perfect way to seamlessly integrate TensorFlow Lite, but there seem to be enough people in the community who want it that it is worth at least working toward a compatibility solution. I first realized this was a problem when I decided I wanted to learn more about AI on the edge: one of the first things that comes up when searching this topic is TensorFlow Lite and the wide variety of other boards that it plays well with.

Ideally, I'd spend the first few weeks getting TensorFlow Lite working on a BBAI and documenting my hacking process, then create a smooth, stable way to make it work more out of the box. I'd spend the rest of the time building examples that would be highlighted both on the BeagleBoard website and on the TensorFlow Lite Examples page, via a pull request to the owners of the GitHub repo. Given that there are already examples of using TensorFlow Lite on Raspberry Pi with picamera, the starting point would be equivalent BeagleBone examples, most likely doing image capture through OpenCV (OpenCV with BeagleBone Black). I'm also very interested in sound recognition, and I think that recognizing sound patterns is very important for industrial uses.

==Getting TensorFlow Lite Working on BBAI==
There is an outstanding question as to how hard it will be just to get TensorFlow Lite working on the BeagleBone AI board. I know that:
 * There are bits out there, but rcn-ee hasn't integrated them.
 * Anything that sounds easy on an embedded device rarely is.
 * Many people have struggled to get TensorFlow Lite working on BBAI; some have had to switch platforms when they couldn't get it working (see the user quotes in the Benefit section).
 * Communication with @jkridner suggests that the issue still needs to be dealt with:

If you're talking about the native building (building BeagleBone binary on BeagleBone), you need to make sure you have enough RAM installed. As I said, 1G ram + swap could work but it would be slow.
 * Communication with Terry (Woncheol) Heo at TensorFlow Lite suggests that getting TensorFlow Lite working on BeagleBone AI should be relatively simple if it is first cross-compiled on another machine. He said:

That's why I recommend using cross compilation. I don't think we have a known issue for ARM cross compilation now.

FYI, CMake support was added recently. For ARM cross compilation with CMake, you may want to check the following page: https://www.tensorflow.org/lite/guide/build_cmake_arm I think ARMv7 NEON (armhf) binary will work nicely with BeagleBone AI. But please let me know if you have any issues with it.
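For reference, the cross-compilation flow described on that page looks roughly like the following. This is a sketch under assumptions, not a verified BBAI recipe: it assumes an x86 Linux host with CMake and an arm-linux-gnueabihf GCC toolchain, and the toolchain path is a placeholder.

```shell
# Sketch of the CMake ARM cross-compile flow for TensorFlow Lite (armhf).
# The toolchain location below is a placeholder; adjust to where your
# arm-linux-gnueabihf toolchain is actually installed.
git clone https://github.com/tensorflow/tensorflow.git

# Flags targeting ARMv7 with NEON (armhf), the target Terry suggests for BBAI.
ARMCC_FLAGS="-march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations"
ARMCC_PREFIX="${HOME}/toolchains/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-"

mkdir -p tflite_build && cd tflite_build
cmake \
  -DCMAKE_C_COMPILER="${ARMCC_PREFIX}gcc" \
  -DCMAKE_CXX_COMPILER="${ARMCC_PREFIX}g++" \
  -DCMAKE_C_FLAGS="${ARMCC_FLAGS}" \
  -DCMAKE_CXX_FLAGS="${ARMCC_FLAGS}" \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=armv7 \
  ../tensorflow/tensorflow/lite/
cmake --build . -j4   # build on the host, then copy the armhf output to the board
```

Building on the host sidesteps the RAM limits @jkridner mentions for native builds on the board itself.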

Given this information, I expect it will take some time to track down what has been tried before and to try a few different options to develop a recommended procedure for using TensorFlow Lite on BeagleBone AI. I also don't anticipate it being impossible, given the support for ARM processors and the "bits out there". If it turns out to be easy enough that I have extra time, I could also try additional tools that haven't seen much testing, such as PyTorch and mlpack.

==Hardware Required==

 * 1) BeagleBone AI
 * 2) Power cable
 * 3) External fan (the BBAI runs hot, and many blogs say a fan is necessary; one example: A simple start)
 * 4) Serial FTDI cable
 * 5) External microphone (such as BeagleMic)
 * 6) Camera (I have a Logitech USB webcam I could use, but for showing off what the BBAI can do with images I'd probably choose the HD Camera Cape)
 * 7) If time and the scope of the project allow: BeagleBoard-X15

=Timeline=
Provide a development timeline with a milestone each of the 11 weeks and any pre-work. (A realistic timeline is critical to our selection process.)

=Experience and approach=

 * I have experience programming in python and C/C++, both of which I'll need for this project.
 * I have used Arduinos, Fubarino Mini, and Raspberry Pi at work (at a swarm robotics company), and for fun/school projects. At the swarm robotics company, we also made a vehicle communications device that used an STM32 microcontroller.
 * I've switched to using Linux almost exclusively so that I can work with embedded devices more readily.
 * I took machine learning courses at school, and am familiar with many of the main algorithms. So far, I've mainly used Matlab and PyTorch. While I am less familiar with TensorFlow, I expect to be able to pick it up quickly.
 * I also have experience collaborating with teams around the world and I enjoy learning from everyone.
 * I've successfully designed and implemented software projects.

=Contingency=
All of my managers have described me as tenacious. Since becoming an engineer, I've successfully completed many projects in which I had to figure out how to solve all of the sub-tasks independently while having only an initial understanding of the general arc of the problem. At one internship, I built a Python application to do complex modeling; when I started, I didn't know all of the Python features I ended up using, how to use them, or some of the mathematical concepts in the model. Basically, I try to understand the overall picture and then see my way through the piece I'm working on and a few steps ahead. I am also good at asking my peers for help in addition to consulting outside resources. In preparing this application, I contacted people in the BeagleBoard IRC and GSoC chat room as well as on the TensorFlow Lite Google Group.

=Benefit=
BeagleBoard options are strong contenders alongside Raspberry Pi and Arduino as easy-to-use tools for IoT and industrial projects. While BeagleBones are geared more toward engineers and have some special features that make them quite different from the others (PRUs, the ability to do low-level and high-level control simultaneously, power usage, etc.), they are often compared with those other platforms. Given that Raspberry Pi and Arduino have TensorFlow Lite compatibility, I think it is important for BeagleBones to have it as well. Also, some algorithms are easier to access with TensorFlow Lite than in other settings.

For example, I tried tackling YOLO deployment on BBAI but it is currently impossible because of TIDL library restrictions. TFLite is the way to go with DL on BBAI and it uses TIDL underneath
 * -Jakub Duchniewicz

In large laboratories and plants, experts often rely on the “sound” of machines to know quickly that a system is working properly. And most often small changes in pulsing tones or anomalies in the sound environment key us in to identifying problems. Much of this can be automated with machine learning with environmental monitoring sensors. I'd like to train a system to recognize when my helium reliquifiers or pulse tube coolers aren't functioning properly.
 * -Ritoban Basu Thakur, Caltech physicist Section 9.2 on audio monitoring for cryogenic equipment

and several frustrated users have posted on the internet:

I've been attempting to build the TF Lite library on my BeagleBone AI (32-bit ARMv7 MCU) for several days now, to no avail.
 * -Matt

I wish to run my network on the AM5729 found on the BeagleBone AI board. I understand that the ti-processor-sdk-linux-am57xx-06.03.00.106 does not yet support the BeagleBone AI tool. However, the Debian image distributed for the BeagleBone AI has all the TIDL libraries packaged with it.

Therefore the only thing I need to be able to do is convert the tensorflow lite network into the TIDL format using the TIDL import tool. I assume I can do this using the following flow:
 * 1) download the ti-processor-sdk-linux-am57xx-06.03.00.106-linux-x86-Install.bin from TI
 * 2) run the installer on a linux (Ubuntu) host
 * 3) navigate to the tidl_model_import.out tool
 * 4) run the tidl_model_import.out tool on an appropriate configuration file (below)
 * -Alex Beasley

The error is because TIDL does not support importing tensorflow lite models. Please try with Caffe/tensorflow/onnx models; for more details refer to the TIDL datasheet/user-guide.
 * -Praveen

In addition to the requested need for TensorFlow Lite, this is something that BeagleBoard has already promised: We've got a path to deliver the best AI training platform available, but the actual materials are deeply lacking today. The https://github.com/beagleboard/cloud9-examples repository has a starting point for using the TIDL library, but that library is based on C/C++ and requires separate information on training and converting the model from Tensorflow/Caffe/etc. The TI Tensorflow Lite support is coming around the first quarter of 2020.

So, this is a work in progress. We'll be integrating a ton of examples (and associated software tools) to work with stuff like Tensorflow and Coral Accelerator in the short term. The expectation is to both crowd source this around the cloud9-examples repository and grow this over time. The hope is the Cloud9 development environment and mjpg-streamer presentation layer is enough inspiration and platform to get this moving quickly.

Today, conversion from Tensorflow or Caffe models to TIDL such that you can run them accelerated is covered at http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html. You can also choose to run Tensorflow or Caffe natively on ARM.
 * -BBAI FAQ

=Misc=
I've completed the other requirements listed on the wiki. Link to cross compilation pull request

=Suggestions=
I like the collaborative setup of the chat room and IRC. It was a good way to learn more about the platform and my peers.