ECE597 Project Auto HUD

Project Overview
The goal of this project is to use the BeagleBoard to run image recognition on a camera feed from inside a car and then signal various objects of interest to the driver via a pico projector.

Team Members
Chris Routh

J. Cody Collins

Greg Jackson

Keqiong Xin

Steps

 * Create Minimal Linux Image that can run OpenCV and run the display
 * Determine hardware needed for the project
 * Work on getting a camera functioning on the beagle board
 * OpenCV running natively on the Beagle with a minimal configuration
 * OpenCV working on video stream
 * Projector working on Beagle
 * Car integration (power)
 * Algorithm development

Installing OpenCV (Development Machine)
This is a script that will install OpenCV on a debian-based development machine. The script will add the debian testing repositories and install OpenCV and its dependencies. The repositories are then removed to avoid conflicts with existing packages during regular updates.


#!/bin/bash

echo "deb http://mirrors.kernel.org/debian/ testing main" > /tmp/opencv-temp.list
echo "deb-src http://mirrors.kernel.org/debian/ testing main" >> /tmp/opencv-temp.list
sudo mv /tmp/opencv-temp.list /etc/apt/sources.list.d/opencv-temp.list

sudo apt-get update
sudo apt-get install -y --allow-unauthenticated libcv4 libcv-dev libhighgui4 libhighgui-dev libcvaux4 libcvaux-dev

sudo rm -f /etc/apt/sources.list.d/opencv-temp.list

sudo apt-get update

Installing OpenCV on the Beagle
Probably the easiest place to start is by using narcissus. Choose beagleboard as the machine type and unstable for the release. In order for highgui to work (necessary for camera capture unless you are using GStreamer), you must build an image with X11 support. Therefore, choose X11 for the user environment. The choice for the X11 desktop environment is not critical, but it would be wise to choose something fairly lightweight, such as Enlightenment. It took several hours for Gnome to configure upon first boot. Once the filesystem has been extracted to a properly formatted SD card with an appropriate kernel on the boot partition (we tested this using 2.6.29), you should be able to boot. Upon boot, you will need to run opkg update. After this, you will need to run opkg install with the following packages:


 * gcc
 * gcc-dev
 * binutils
 * binutils-dev
 * opencv
 * opencv-dev
 * g++

You should now be able to compile using:

g++ signdetect.cpp -o signdetect -I /usr/include/opencv/ -L /usr/lib -lm -lcv -lhighgui -lcvaux

Gather Samples
Due to the large volume of sample data needed to create an effective Haar cascade (about 1000 positive images), it is easier to record video of a positive target, break the video apart frame by frame, and use the resulting frames as images. There are two types of images, good (positive) and background (negative). Both types are important in order for the cascade to be trained accurately.

Create Index File
Two index files need to be created in order for the system to train on the images: a background index file listing the background file locations, and a positive index file containing each positive file location, the number of objects in the image, and the rectangle bounding each object.

Creating the Negative Index File
Use the following automated script from within the background images folder:
#!/bin/bash

find . -maxdepth 2 -name '*.jpg'  > background.idx
find . -maxdepth 2 -name '*.png'  >> background.idx
find . -maxdepth 2 -name '*.bmp'  >> background.idx
find . -maxdepth 2 -name '*.jpeg' >> background.idx

Creating the Positive Index File
Use the following source code to create a training program that lets the user click the upper-left and lower-right corners of an object to select it, then press a key to continue.

////////////////////////////////////////////////////////////////////////
//
// Calibrate.cpp
// Date: 4/20/10
// Author: Christopher Routh
// This program brings up the necessary images and allows the user to
// select correct training spots for objects and generates an index file.
//
////////////////////////////////////////////////////////////////////////
#include <cv.h>
#include <highgui.h>
#include <cstdio>
#include <cstdlib>
#include <cerrno>
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <dirent.h>
#include <sys/types.h>
#include <unistd.h>

using namespace std;

static CvSize imageSize;
static CvPoint old_click_pt;
static CvPoint new_click_pt;
static std::vector<CvPoint> pointsClicked;

/* list the files in a directory... might want it in some class? */
int getdir(std::string dir, std::vector<std::string> &files)
{
    DIR *dp;
    struct dirent *dirp;
    if ((dp = opendir(dir.c_str())) == NULL) {
        std::cout << "Error(" << errno << ") opening " << dir << std::endl;
        return errno;
    }

    while ((dirp = readdir(dp)) != NULL) {
        files.push_back(std::string(dirp->d_name));
    }
    closedir(dp);
    files.erase(files.begin(), files.begin() + 2); // drop the "." and ".." entries
    return 0;
}

// handle mouse clicks here
void mouse_callback(int event, int x, int y, int flags, void *obj)
{
    if (event == CV_EVENT_LBUTTONDOWN) {
        cout << "(x,y) = (" << x << "," << y << ")" << endl;

        // reset old_click_pt
        old_click_pt.x = new_click_pt.x;
        old_click_pt.y = new_click_pt.y;

        // get new click point -- note the coordinate change in y
        new_click_pt.x = x;                    // coming in from the window system
        new_click_pt.y = imageSize.height - y; // window system and images have different y axes
        pointsClicked.push_back(cvPoint(new_click_pt.x, new_click_pt.y));
    }
}

int getCorrectObjectLocation(string file, vector<CvRect> &results)
{
    IplImage *img = 0;

    // load an image
    string path = string("./good/") + file;
    cout << "Trying to load: " << path << endl;
    img = cvLoadImage(path.c_str());
    if (!img) {
        printf("Could not load image file: %s\n", path.c_str());
        exit(0);
    }

    // get the image data
    int height   = img->height;
    int width    = img->width;
    int channels = img->nChannels;
    imageSize = cvGetSize(img); // used by the mouse callback to flip the y axis
    printf("Processing a %dx%d image with %d channels\n", height, width, channels);

    // create a window and show the image
    cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
    cvShowImage("mainWin", img);
    cvSetMouseCallback("mainWin", mouse_callback);

    // wait for a key
    cvWaitKey(0);

    int properClicks = pointsClicked.size() % 2;
    if (properClicks != 0 || pointsClicked.size() == 0) {
        cout << "Improper Number of Points Clicked: " << pointsClicked.size() << " Selected" << endl;
        cvReleaseImage(&img);
        return -1;
    } else {
        // Process new rectangles
        for (unsigned int i = 0; i < pointsClicked.size(); i++) {
            cout << "Received Points X: " << pointsClicked.at(i).x
                 << " Y: " << pointsClicked.at(i).y << endl;
        }

        for (unsigned int i = 0; i < pointsClicked.size(); i += 2) {
            CvPoint topLeft = pointsClicked.at(i);
            CvPoint bottomRight = pointsClicked.at(i + 1);
            results.push_back(cvRect(topLeft.x, abs(topLeft.y),
                                     bottomRight.x - topLeft.x,
                                     abs(topLeft.y - bottomRight.y)));
            cout << "Rectangle Created --- X: " << topLeft.x
                 << " Y: " << abs(topLeft.y)
                 << " Width: " << (bottomRight.x - topLeft.x)
                 << " Height: " << abs(topLeft.y - bottomRight.y) << endl;
        }

        pointsClicked.clear();
    }

    // release the image
    cvReleaseImage(&img);
    return 0;
}

int main(int argc, char *argv[])
{
    string dir = string("./good/");
    vector<string> files;
    getdir(dir, files);
    vector<CvRect> objectRects;

    ofstream vecFile;
    vecFile.open("./good/signs.idx");

    for (unsigned int i = 0; i < files.size(); i++) {
        int success = getCorrectObjectLocation(files.at(i), objectRects);
        if (success != 0) {
            i--; // retry the same image
            pointsClicked.clear();
        } else {
            vecFile << "good/" << files.at(i) << " " << objectRects.size();

            for (unsigned int j = 0; j < objectRects.size(); j++) {
                vecFile << " " << objectRects.at(j).x << " " << objectRects.at(j).y
                        << " " << objectRects.at(j).width << " " << objectRects.at(j).height;
            }

            vecFile << endl;
            objectRects.clear();
        }
    }

    // Close the index file
    vecFile.close();

    cout << endl << endl << "****** INDEX FILE CREATED ******" << endl;
    return 0;
}

Generate Samples
Using the positive samples, the createsamples command can apply transforms to the images and composite them onto the background images, creating a wider range of images to train on. The syntax for this command is:

Usage: ./createsamples
  [-info <collection_file_name>]
  [-img <image_file_name>]
  [-vec <vec_file_name>]
  [-bg <background_file_name>]
  [-num <number_of_samples = 1000>]
  [-bgcolor <background_color = 0>]
  [-inv] [-randinv]
  [-bgthresh <background_color_threshold = 80>]
  [-maxidev <max_intensity_deviation = 40>]
  [-maxxangle <max_x_rotation_angle = 1.100000>]
  [-maxyangle <max_y_rotation_angle = 1.100000>]
  [-maxzangle <max_z_rotation_angle = 0.500000>]
  [-show [<scale = 4.000000>]]
  [-w <sample_width = 24>]
  [-h <sample_height = 24>]

For example, a vec file can be built from the positive index created earlier with: ./createsamples -info good/signs.idx -vec signs.vec -num 111 -w 40 -h 40

Run the Training Program
Commands differ based upon application and intended sample size; the command our group used was:

opencv-haartraining -data signs_drivedata_rev1 -vec signs.vec -bg background.idx -nstages 20 -nsplits 2 -minhitrate 0.995 -maxfalsealarm 0.5 -npos 111 -nneg 2239 -w 40 -h 40 -nonsym -mem 512 -mode ALL
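Once training completes, the resulting cascade can be loaded and run against captured frames. The following is a minimal sketch using the OpenCV 1.x C API; the file names signs_drivedata_rev1.xml and frame.jpg are assumptions for the training output and a captured frame, and the 40x40 minimum window matches the -w/-h used above.

```cpp
#include <cv.h>
#include <highgui.h>
#include <cstdio>

int main() {
    // Hypothetical paths -- substitute the real training output and frame source.
    CvHaarClassifierCascade *cascade =
        (CvHaarClassifierCascade *)cvLoad("signs_drivedata_rev1.xml", 0, 0, 0);
    IplImage *img = cvLoadImage("frame.jpg");
    if (!cascade || !img) {
        std::fprintf(stderr, "failed to load cascade or image\n");
        return 1;
    }

    CvMemStorage *storage = cvCreateMemStorage(0);
    // 1.1 scale step, 3 neighbors to confirm a hit, 40x40 minimum window.
    CvSeq *signs = cvHaarDetectObjects(img, cascade, storage, 1.1, 3,
                                       CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40));

    for (int i = 0; i < (signs ? signs->total : 0); i++) {
        CvRect *r = (CvRect *)cvGetSeqElem(signs, i);
        std::printf("sign at (%d,%d) %dx%d\n", r->x, r->y, r->width, r->height);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&img);
    return 0;
}
```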

Display Buffer Mapping
In order to get access to the display buffer on the Beagle, you will need to run the following in U-Boot:

setenv bootargs console=ttyS2,115200n8 root=/dev/mmcblk0p2 rw rootwait omapfb.mode=1024x768MR-16@60 omapfb.debug=y omapdss.def_disp=dvi omapfb.vram=0:10M,1:10M vram=20M

Other sources have mentioned setting a value for mmcargs. However, we were not able to get it to work properly until the options were applied directly to the bootargs variable.

Pico Projector Integration
As of revision C4 of the BeagleBoard, no configuration is needed to display at the projector's native resolution.

GStreamer on the DSP
There is a package available for the Beagle called gst-dsp, a native GStreamer plug-in that gives GStreamer access to the DSP. Along with gst-omapfb and the dsp-bridge driver, this should allow us to access the DSP directly and output video directly to the framebuffer. OpenCV can interact with GStreamer, so this appears to be a very promising direction for the project. See this article for more information and a demonstration. That article also links to a minimal Beagle image that provides a native framebuffer video player without requiring X.