BeagleBoard/beaglecv stereo

= beaglecv_stereo =

Stereo vision support for the BeagleBone using BeagleCV, a lightweight computer-vision library that offloads image processing to the on-board PowerVR SGX530 GPU through OpenGL ES2 shaders.

Student: Kumar Lekkala
Mentors: Michael Welling, ds2, Jason Kridner
Code: https://github.com/kiran4399/beaglecv
Wiki: http://elinux.org/BeagleBoard/beaglecv_stereo
GSoC: GSoC entry

=Status=

This project is currently just a proposal.

=Proposal=

About you
IRC: kiran4399
Github: https://github.com/kiran4399
School: Indian Institute of Information Technology, SriCity
Country: India
Primary language (We have mentors who speak multiple languages): English
Typical work hours (We have mentors in various time zones): 8AM-5PM IST
Previous GSoC participation: https://summerofcode.withgoogle.com/archive/2016/projects/6295262146330624/

About your project
Project name: Stereo Vision support for BeagleBone using BeagleCV

The aim of the project is to add stereo vision support to the BeagleBone through BeagleCV. This consists of developing the BeagleCV library and creating OpenGL ES2 APIs for the SGX530 3D accelerator present on board, which would allow other users to write their own CV algorithms with faster computation, and then implementing a stereo vision algorithm on the GPU. Finally, if time permits, this work will be added to the BeagleBone Blue APIs.

Description
The kernel I will be using is 4.4.56-bone17. This version ships with the prebuilt kernel modules omaplfb, tilcdc and pvrsrvkm, which are essential for running the SGX530. The following are the complete set of project goals, and their challenges, which I plan to deliver by the end of the tenure:

Creating shaders for utilizing the SGX530: The BeagleBone Blue/Black has a built-in PowerVR SGX530 3D accelerator which is capable of performing image processing. By the end of this project, I will create shaders using the OpenGL ES2 graphics library and the GLSL 1.0 shading language. Images will be loaded as sampler2D textures and preprocessed by converting them to grayscale in the PowerVR Texture Compression (PVRTC) format at an 8:1 compression ratio for better performance. Fragment shaders will be used to modify the pixel values, and vertex shaders will be used for transformation and computation of the sampler2D indices. These shaders will be accessible through the BeagleCV APIs.
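As a sketch of the kind of shader described above (the uniform and varying names are my own illustration, not actual BeagleCV code), here is a GLSL ES 1.00 grayscale fragment shader embedded as a C++ string, together with a CPU reference of the same Rec.601 luma formula that could back a unit test of the shader's output:

```cpp
#include <cassert>
#include <cmath>

// GLSL ES 1.00 fragment shader sketch: sample a texel and emit its luminance.
// Names (u_image, v_texcoord) are illustrative, not from the BeagleCV source.
static const char* kGrayFragShader = R"(
precision mediump float;
uniform sampler2D u_image;
varying vec2 v_texcoord;
void main() {
    vec3 rgb = texture2D(u_image, v_texcoord).rgb;
    float y = dot(rgb, vec3(0.299, 0.587, 0.114)); // Rec.601 luma weights
    gl_FragColor = vec4(vec3(y), 1.0);
}
)";

// CPU reference of the same grayscale formula, usable to verify shader output.
float rgb_to_gray(float r, float g, float b) {
    return 0.299f * r + 0.587f * g + 0.114f * b;
}
```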

Porting libcvd as BeagleCV: One of the most important deliverables of this project is the BeagleCV library itself. BeagleCV is a minimized fork of libcvd to which the OpenGL ES APIs will be added; the sequential execution paths will be removed and replaced with the optimized shaders. With the help of the shader programs created for low-level image processing, I will implement the following two algorithms: SURF features and stereo matching.

Stereo vision implementation: Implementing a stereo matching algorithm to obtain a disparity map from stereo images. Block Matching (BM) is the most widely used stereo matching algorithm in the embedded community because of its favourable characteristics for parallel implementation. The cost function in this local block matching is NCC (Normalized Cross-Correlation). To improve the accuracy of the disparity map, an LR-RL consistency check can be included, which will be implemented in a separate shader.
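A minimal CPU sketch of the NCC cost that the block-matching shader would compute per candidate disparity (patch layout and epsilon are my own choices; this is a reference for testing, not the shader itself). Scores near 1.0 mean the two patches match well, near -1.0 that they are anti-correlated:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Normalized Cross-Correlation between two equally sized patches
// (e.g. the 4x4 support windows planned for BM), flattened row-major.
float ncc(const std::vector<float>& a, const std::vector<float>& b) {
    float ma = 0.f, mb = 0.f;
    for (size_t i = 0; i < a.size(); ++i) { ma += a[i]; mb += b[i]; }
    ma /= a.size(); mb /= b.size();
    float num = 0.f, da = 0.f, db = 0.f;
    for (size_t i = 0; i < a.size(); ++i) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / std::sqrt(da * db + 1e-12f); // epsilon guards flat patches
}
```

The GPU version would evaluate this cost for each disparity candidate along the epipolar line and keep the maximizing disparity.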

Documentation and examples: I will provide extensive and accurate documentation for whatever I build in this project. Functional documentation for BeagleCV will be generated with Doxygen, and code documentation will live as comments in the source files. I will also write example programs showing how to utilize the GPU, and document how to use the SGX530 on the BeagleBone Blue/Black.
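As an illustration of the planned Doxygen style (the function itself is hypothetical, not actual BeagleCV code):

```cpp
#include <cassert>

/// \brief Clamps an intensity value to a displayable range.
///
/// Example of the Doxygen comment style intended for BeagleCV's
/// functional documentation; the function is only an illustration.
///
/// \param v  Input intensity.
/// \param lo Inclusive lower bound of the output range.
/// \param hi Inclusive upper bound of the output range.
/// \return The intensity clamped to [lo, hi].
int clamp_pixel(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}
```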

(Future work) Adding BeagleCV support to the BeagleBone Blue APIs: Once BeagleCV is implemented and v1 is released, I will add support for it to the BeagleBone Blue API repository. This would enable users to implement sensor-fusion algorithms that help in robotic localization, tracking, detection and navigation.

Timeline
Google Summer of Code stretches over a period of 12 weeks, with the Phase-1, Phase-2 and final evaluations in the 4th, 8th and 12th weeks respectively. The following are the timelines and milestones I intend to follow strictly throughout the project tenure:

May 30 - June 13 Aim: Cleaning, minimizing and testing bare BeagleCV Description: I created beaglecv from libcvd, a C++ library designed to be easy to use and portable for real-time applications. All the GUI components, such as the X11, GUI and Viewer headers, imagedisplay, etc., will be removed to reduce the library's footprint. Utility functions for a test shader will be implemented and the APIs will be framed. The SGX530 will be tested and its installation documented.

June 14 - June 27 Aim: Implementing basic OpenGL ES shaders Description: Phase-1 of the shader implementations. The following shaders will be coded:
* Convolution (support for 2x2, 3x3 and 4x4 kernels)
* Gradient computation
* Image thresholding
* Matrix operations
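A CPU reference for the planned convolution shader can pin down the intended semantics before the GLSL version exists. The sketch below assumes clamp-to-edge borders, matching how a fragment shader sampling with GL_CLAMP_TO_EDGE would behave; the layout conventions are my own:

```cpp
#include <cassert>
#include <vector>

// 3x3 convolution over a row-major w x h single-channel image.
// Kernel is row-major; borders are handled by clamping coordinates
// to the image edge (mirroring GL_CLAMP_TO_EDGE texture sampling).
std::vector<float> conv3x3(const std::vector<float>& img, int w, int h,
                           const float k[9]) {
    std::vector<float> out(img.size(), 0.f);
    auto at = [&](int x, int y) {
        x = x < 0 ? 0 : (x >= w ? w - 1 : x);
        y = y < 0 ? 0 : (y >= h ? h - 1 : y);
        return img[y * w + x];
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float s = 0.f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    s += k[(dy + 1) * 3 + (dx + 1)] * at(x + dx, y + dy);
            out[y * w + x] = s;
        }
    return out;
}
```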

June 28 - July 4 Aim: Implementing an example algorithm using OpenGL ES shaders Description: SURF will be implemented as an example program making use of the APIs coded earlier. Unit tests will be run and any bugs in this source code will be fixed.

July 5 - July 11 Aim: Performance evaluation and documentation of the SURF algorithm Description: Reserve week for testing the SURF algorithm. If time permits, LK optical flow using SURF features will be implemented for motion estimation. I will also measure the performance of these examples with CPU-only, GPU-only and CPU+GPU support. All functionality will be properly documented.

July 12 - July 25 Aim: Implementing OpenGL ES shaders for the Block Matching (BM) algorithm Description: Phase-2 of the shader implementations. The following shader functions will be implemented:
* Block operations (with block size 4x4)
* Correlation computation (NCC with a 4x4 support window)
* Epipolar search

July 26 - August 8 Aim: Implementing BM using OpenGL ES shaders Description: The algorithm will be implemented using the shaders created in the previous two weeks. Based on the performance of the algorithm, vertex shaders will be added to minimize the cost of computation.
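The LR-RL consistency check planned for the disparity post-processing can be sketched on the CPU as follows (one scanline only for brevity; the invalid marker of -1 and the tolerance parameter are my own conventions, not fixed by the proposal):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// LR-RL consistency check over one scanline: a left-image disparity d at
// column x is kept only if the right image's disparity at column x - d
// agrees within `tol`; otherwise the pixel is marked invalid (-1).
std::vector<int> lr_rl_check(const std::vector<int>& dispL,
                             const std::vector<int>& dispR, int tol) {
    std::vector<int> out(dispL.size(), -1);
    for (int x = 0; x < (int)dispL.size(); ++x) {
        int d = dispL[x];
        int xr = x - d; // where this left pixel lands in the right image
        if (xr >= 0 && xr < (int)dispR.size() &&
            std::abs(dispR[xr] - d) <= tol)
            out[x] = d;
    }
    return out;
}
```

On the GPU this would run as its own fragment shader pass over the two disparity textures, as noted in the stereo vision section.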

August 9 - August 15 Aim: Testing the stereo matching algorithm and checking performance Description: Generating the final disparity map given two stereo images. The algorithm will be evaluated on the popular Tsukuba stereo dataset; I will check its accuracy, fix bugs and document the approach.

August 16 - August 22 (Week 12) Aim: Final evaluation Description: Checking and fixing bugs, refining the earlier documentation so that it is easier to understand, verifying the final implementation with another full run-through, and making the final commit to the beaglecv repository and releasing v2.

August 23 onwards: Future work Aim: Adding BeagleCV to the BeagleBone Blue APIs Description: Once BeagleCV is stable, it will be added to the BeagleBone Blue APIs to provide vision support alongside the other sensors. I will also maintain the BeagleCV library, adding more algorithms and fixing any BeagleBone-related bugs.

Experience and approach
I am a fourth-year undergraduate student studying in India. Besides a keen interest in robotics, computer vision and machine learning, I like hacking on embedded boards, especially to build agile robots. I would like to work on an open-source project this summer because contributing to such projects is fun and exciting. I have not worked much on open source before, but I have some idea of how the open-source community works, and I find it fascinating.

Object segmentation and tracking in RGB-D images: Developed a robust segmentation method using deep learning which accurately extracts an object from an RGB-D image and subsequently tracks it in the RGB-D stream. This is currently an ongoing project.

Accurate and Augmented Localization and Mapping for Indoor Quadcopters: In this project, a state-estimation system for quadcopters operating in indoor environments was developed that enables the quadcopter to localize itself on a globally scaled map reconstructed by the system. To estimate the pose and the global map, we use ORB-SLAM fused with onboard metric sensors and a 2D LIDAR mounted on the quadcopter, which helps in robust tracking and scale estimation.

Enhancing Visual SLAM using IMU and Sonar: Increased the accuracy and robustness of ORB-SLAM by integrating an Extended Kalman Filter (EKF) that fuses the IMU and sonar measurements. The scale of the map is estimated by a closed-form Maximum Likelihood approach.

Semi-Autonomous Quadcopter for Person Following: Developed an IBVS based robotic system, implemented on Parrot AR Drone, which is capable of following a person or any moving object and simultaneously measuring the localized coordinates of the quadcopter, on a scaled map.

API Support for Beaglebone Blue: Created easy-to-use APIs for Beaglebone Blue. With these APIs, applications can be directly ported onto the board. This project was a collaboration of Beagleboard.org with the University of California, San Diego as part of Google Summer of Code 2016.

Intelligent Parking system: This module is a part of an ADS (Autonomous Driving System) used for accurate autonomous parking. The BeagleBone Black in the robot finds the set point by matching features using SURF descriptors on the template image and directs the output to the actuators (motors) connected to the PRU (Programmable Real-time Unit).

Contingency
If I get stuck on my project and cannot reach my mentor, I will search for the error and research it myself; in my experience, almost every problem has been discussed somewhere on the internet. I will also seek help from the other developers present on IRC.

Benefit
kiran4399: As a robotics researcher, I personally feel that students are limited by working only with data from low-level sensors. By developing BeagleCV, its functionalities and its applications, students will benefit greatly and will be able to apply many high-level concepts like visual tracking, localization, detection and pose estimation. By adding BeagleCV to the BeagleBone APIs, it would also be very easy to implement sensor-fusion algorithms.

Suggestions
I plan my work carefully and sketch out a realistic routine so that the work planned gets completed within the given time. I always set out my priorities, and I keep priority management above time management. My motto is: "Hard work beats talent when talent doesn't work hard." I strongly feel that striving to understand something is the best way to learn it. I can assure you that I will work around 50-55 hours a week without other commitments. I also hope for a great learning experience throughout the program, and to come closer to the open-source world.