ECE497 Project Set Playing Beagle

Team members: Samuel Allen, Stephen Mayhew, Julian Ametsitsi

Executive Summary

The game Set uses a 3 x 4 grid of 12 cards, each showing symbols that differ in shape, color, fill, and number. The goal is to find a set of three cards: for each attribute, the three cards must be either all the same or all different. See the official Set game instructions for the full rules.

Using the BeagleBoard, we have developed a system that uses a webcam to take a picture of the cards in play and uses OpenCV and Python to find the sets from the picture.

Our system correctly identifies all of the sets in the hand that the image recognition provides.

It runs completely on the BeagleBoard and processes approximately one frame per second.

Currently, image recognition is the weak link: it does not reliably identify the symbols on the cards, so the rest of the system does not find the correct sets.

The sets that are found are valid for the cards as recognized, but not for the actual cards in play.

Our system is almost functional, but the image recognition is the biggest remaining hurdle; once it works reliably, the rest of the system will produce correct results.

Installation Instructions

  • First, clone our GitHub repository:
 $ git clone git@github.com:mayhewsw/BeagleSetGame.git 
  • The SPEd2 image comes with OpenCV already installed; if your image does not include it, install it:
 $ opkg install opencv 
  • Install python-opencv
 $ opkg install python-opencv 
  • You will also need a webcam and a way to mount it. We used a Sony PlayStation Eye, but any webcam with at least 640x480 resolution should work. Our mount held the camera about 7 inches above the playing surface, pointing straight down. A quick check that OpenCV can see the camera is sketched below.
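
Before going further, you may want to confirm that OpenCV can read frames from the webcam. The snippet below is a minimal sketch for that check, assuming the modern cv2 Python API (which may differ from the OpenCV version shipped with the image); it is not part of the project.

 # webcam_check.py -- quick sanity check that OpenCV can grab frames.
 # Minimal sketch, not part of BeagleSetGame; assumes the cv2 Python API.
 import cv2
 
 cap = cv2.VideoCapture(0)                  # first video device (/dev/video0)
 cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # request 640x480, the minimum we used
 cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
 
 ok, frame = cap.read()
 if ok:
     print("Captured a %dx%d frame" % (frame.shape[1], frame.shape[0]))
     cv2.imwrite("test_frame.png", frame)   # save a frame to inspect by hand
 else:
     print("Could not read from the webcam")
 cap.release()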

User Instructions

Once everything is installed, use the program as follows.

  • First, run the autogain executable. If you are using the SPEd2 Angstrom image, this executable is found in the root directory.
  • Next, run trackbartest.py. When it runs, it opens two windows. One shows a live stream from the webcam and has slider bars that set the camera properties; the other shows a reference image. Slide the bars until the live image looks like the reference image.
 $ python trackbartest.py 

Once you have found a configuration you like, press Enter. This saves those values to a file called cameraConfig.cfg, which is read later. Exit the window by pressing Escape.

  • Now you can run the actual program, runner.py.
 $ python runner.py 

This will open two windows. One window shows what the camera sees; the other shows the same image with the results of the processing overlaid on it. Because of the current state of the image recognition, these results are often very wrong; they are also affected by how closely the camera configuration matches the reference.

When the program finds a set in the recognized cards, it prints the result to the console and waits for a keypress to continue. Make sure one of the image windows has focus, and press any key.

You can quit the program either by pressing Enter while it is in the ordinary recognition loop, or by pressing Ctrl-C in the terminal.


Highlights

The main interest of our project is that it demonstrates the use of OpenCV on the BeagleBoard. It shows that OpenCV functionality on the BeagleBoard is very similar to what one would find on any other computer, if a little slower.


Theory of Operation

Program Flow

There are two steps to running the project: configuration and running.

In the configuration step, trackbartest.py creates trackbars that set camera configuration options as they are moved. When the user presses Enter, it saves the three values (brightness, contrast, and hue) to a file, cameraConfig.cfg, as sketched below.
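
Below is a minimal sketch of this configuration step, assuming the modern cv2 Python API; it is not the project's actual trackbartest.py, and the cameraConfig.cfg format shown is only an assumption.

 # config_sketch.py -- illustrative version of the configuration step.
 # Not the project's trackbartest.py; a minimal sketch assuming the cv2 API.
 import cv2
 
 cap = cv2.VideoCapture(0)
 cv2.namedWindow("live")
 
 # Trackbar callbacks push the slider positions straight to the camera driver.
 def set_brightness(v): cap.set(cv2.CAP_PROP_BRIGHTNESS, v / 100.0)
 def set_contrast(v):   cap.set(cv2.CAP_PROP_CONTRAST,   v / 100.0)
 def set_hue(v):        cap.set(cv2.CAP_PROP_HUE,        v / 100.0)
 
 cv2.createTrackbar("brightness", "live", 50, 100, set_brightness)
 cv2.createTrackbar("contrast",   "live", 50, 100, set_contrast)
 cv2.createTrackbar("hue",        "live", 50, 100, set_hue)
 
 while True:
     ok, frame = cap.read()
     if not ok:
         break
     cv2.imshow("live", frame)
     key = cv2.waitKey(30) & 0xFF
     if key == 13:                      # Enter: save the three slider values
         with open("cameraConfig.cfg", "w") as f:
             for name in ("brightness", "contrast", "hue"):
                 f.write("%s %d\n" % (name, cv2.getTrackbarPos(name, "live")))
     elif key == 27:                    # Escape: quit
         break
 
 cap.release()
 cv2.destroyAllWindows()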

In the running step, runner.py contains the main loop. Before entering the main loop, it opens and reads cameraConfig.cfg and sets the camera properties accordingly. Inside the main loop, it runs three functions (all found in processCards.py) to recognize the cards:

  • extractCards finds the symbols in the image, groups them by location, and returns those groups. Each group is made up of rectangles representing the bounding boxes of the symbols on a given card, so each group has size one, two, or three.
  • getMeaningFromCards takes those groups, classifies each card, and returns a list of Card objects (defined in processCards).
  • SolveGame takes the list of Card objects and finds the sets; the rule it applies is sketched below.
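
For reference, three cards form a set when, for each attribute, the three cards are either all the same or all different. The sketch below reimplements that rule on its own; the Card fields and function names are illustrative and are not necessarily those used by processCards.py or SolveGame.

 # set_logic_sketch.py -- the "all same or all different" rule behind set finding.
 # Illustrative only; the field names are assumptions, not the project's Card class.
 from collections import namedtuple
 from itertools import combinations
 
 Card = namedtuple("Card", ["number", "shape", "fill", "color"])
 
 def is_set(a, b, c):
     # For every attribute, the three values must be all equal or all distinct.
     for attr in Card._fields:
         if len({getattr(a, attr), getattr(b, attr), getattr(c, attr)}) == 2:
             return False
     return True
 
 def find_sets(cards):
     # Brute force over all triples; 12 cards give only 220 combinations.
     return [trio for trio in combinations(cards, 3) if is_set(*trio)]
 
 if __name__ == "__main__":
     hand = [Card(1, "oval", "solid", "red"),
             Card(2, "diamond", "striped", "green"),
             Card(3, "squiggle", "empty", "purple"),
             Card(1, "oval", "solid", "green")]
     print(find_sets(hand))   # only the first three cards form a set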

Image Recognition

  • To find fill: the program examines the center line of the symbol and a line of the same length on the white part of the card, just to the left of the symbol, and compares their intensities. To set the intensity thresholds correctly, there is a preprocessing loop at the beginning of the extractMeaningFromCards function.
  • To find color: the program masks the image so that the background becomes black. It then converts the image to HSV, increases the saturation by some amount, and converts back to BGR. It splits the result into channels and sums each channel; the channel with the largest sum decides the color (see the sketch after this list).
  • To find shape: the program finds the contour of the symbol subimage and divides the bounding box of the symbol by the perimeter of the symbol. This ratio is used naively as the decider.
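
The following is a minimal sketch of the color-classification idea described above, assuming the cv2 and NumPy Python APIs; the saturation boost and the channel-to-color mapping are simplifications, not the project's actual values.

 # color_sketch.py -- classify a symbol's color by boosting saturation and
 # summing the BGR channels, as described above. Illustrative values only.
 import cv2
 import numpy as np
 
 def classify_color(symbol_bgr, mask):
     # Black out everything outside the symbol so the white card background
     # does not contribute to the channel sums.
     masked = cv2.bitwise_and(symbol_bgr, symbol_bgr, mask=mask)
 
     # Boost the saturation in HSV space, then convert back to BGR.
     h, s, v = cv2.split(cv2.cvtColor(masked, cv2.COLOR_BGR2HSV))
     s = np.clip(s.astype(int) + 60, 0, 255).astype(np.uint8)   # arbitrary boost
     boosted = cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)
 
     # Whichever channel dominates decides the color. Treating the blue
     # channel as a proxy for purple is a simplification.
     b, g, r = (float(np.sum(c)) for c in cv2.split(boosted))
     sums = {"purple": b, "green": g, "red": r}
     return max(sums, key=sums.get)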


Work Breakdown

Finished:

  • Sam Allen: worked on getting the camera values set correctly so that the image was more usable.
  • Julian Ametsitsi: tested using the TI Pico Projector to project an indication of the sets onto the cards; developed the algorithm with Stephen to identify individual cards in a single image.
  • Stephen Mayhew:

To be finished:

  • Better image classification (Stephen): if it gets done, it will be done during 10th week.
  • Espeak integration (Sam): if it gets done, it will be done during 10th week.


Conclusions

We were pleased with how well the BeagleBoard runs OpenCV. It was not at all difficult to install and run, and while it is noticeably slower than on larger machines, it is not prohibitively so.

Originally, it was in scope to have a projector project an indication of the sets back onto the actual cards. We had planned to use a DLP Pico projector for this, but we found early on that the Pico projector projects at a very small resolution, and we would have had to mount it significantly higher than the camera for it to cover the whole playing area. We experimented with putting lenses in front of the projector to magnify the image, but this turned out to be much more difficult than we imagined, so we focused our efforts on the image recognition and on making the system coherent and smooth.

Another facet of the original idea was to have the computer actually playing against humans. This would involve the computer accepting input when a human found a set. Since the computer turned out to be so bad at actually playing the game, it was never worth putting this sort of functionality into it. If the recognition ever becomes reliable, this would be interesting to implement.

There is a trade-off between camera quality and image recognition success: in the first stages of our image recognition work, we used images of much higher resolution and had much better success. Alternatively, we could have used more powerful classifiers (support vector machines, for example) to classify the symbols, instead of our rather fragile, manually tweaked approach; a sketch of that idea follows.
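
As a rough illustration of that alternative, the sketch below trains a support vector machine on hand-labelled symbols, assuming scikit-learn were available and that each symbol had already been reduced to a small feature vector; the features, labels, and numbers shown are hypothetical.

 # svm_sketch.py -- the kind of classifier we could have used instead of the
 # hand-tweaked thresholds. Hypothetical feature vectors; assumes scikit-learn.
 from sklearn import svm
 
 # Each row: [perimeter_ratio, mean_intensity, red_sum, green_sum, blue_sum]
 train_features = [[0.62, 110, 0.8, 0.1, 0.1],
                   [0.48, 200, 0.1, 0.7, 0.2],
                   [0.55, 160, 0.3, 0.2, 0.5]]
 train_labels = ["solid-red-oval", "striped-green-diamond", "empty-purple-squiggle"]
 
 clf = svm.SVC(kernel="rbf", gamma="scale")
 clf.fit(train_features, train_labels)
 
 # Classify a new symbol's feature vector.
 print(clf.predict([[0.60, 115, 0.75, 0.15, 0.1]]))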

Thanks

Thanks to:

  • Mark Crosby and Gary Meyer for making us a camera mount.
  • Dr. Rob Bunch for providing advice and lenses for the projector.
  • Dr. Zach Dodds at Harvey Mudd College for providing starting image recognition code.