ECE434 Project: Image Recognition with OpenCV and Siri
Latest revision as of 12:49, 8 April 2021
Team members: Haoxuan Sun, Heda Wang
In this project, we use OpenCV to do image recognition. A servo keeps the camera centered on the desired target, and the system is controlled by Siri through Shortcuts and SSH.
Currently, the project only supports three colors, but you can easily add any color you want to find. The project is limited by the processing speed of the BeagleBone: because each frame takes a long time to process, we can only point at a color instead of tracking it. With enough optimization and computational power, this could easily be turned into an image tracker.
To achieve this project you need:
1. BeagleBone Black
2. A servo
3. A USB webcam
4. An LED or other colorful object as a target
5. (optional) An iPhone or iPad that supports Siri
A demo video is available on YouTube: https://youtu.be/_aZgPR_QLXI
Connect the USB camera to the USB port. Connect the servo's power to 5V on the BeagleBone; if the servo is not powered, you may need to plug a 5V power supply into the BeagleBone. The servo signal wire is connected to pin P8_13.
Tie or glue the servo and the camera together. We will use the camera for other purposes later, so we tied the camera to the servo with some wire.
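Driving the pan servo on P8_13 can be sketched as below. This is a minimal sketch, not the project's exact servo code: Adafruit_BBIO is one common BeagleBone PWM library, and the 5–10% duty range assumes a standard hobby servo expecting 1–2 ms pulses at 50 Hz.

```python
def angle_to_duty(angle, min_duty=5.0, max_duty=10.0):
    """Map a servo angle in [0, 180] degrees to a PWM duty cycle in percent.

    At 50 Hz the period is 20 ms, so 5% duty = 1 ms pulse (0 degrees)
    and 10% duty = 2 ms pulse (180 degrees) on a typical hobby servo.
    """
    angle = max(0.0, min(180.0, angle))
    return min_duty + (max_duty - min_duty) * angle / 180.0


def point_servo(angle, pin="P8_13", freq=50):
    """Move the servo to the given angle (only runs on the board)."""
    from Adafruit_BBIO import PWM  # hardware-only import
    PWM.start(pin, angle_to_duty(angle), freq)


if __name__ == "__main__":
    point_servo(90)  # center the camera
```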
We need to install OpenCV. To install it, connect to the BeagleBone and type:
sudo apt install python-opencv
sudo pip3 install opencv-python
and if an error occurs, type:
pip3 install opencv-python
sudo apt-get install libcblas-dev
sudo apt-get install libhdf5-dev
sudo apt-get install libhdf5-serial-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
sudo apt-get install libqtgui4
sudo apt-get install libqt4-test
Connect your BeagleBone to your router with a cable. Also, make sure the Shortcuts app is on your iOS device; you can install it from the App Store. Tap to create a new shortcut, choose Scripting, and select Run Script Over SSH. Tap Show More to see all the options. Go to your router's configuration page and find your BeagleBone's IP address. Enter that IP address under "Host" and set the port to 22. The user is usually debian; also enter the password. In the script field, enter sudo python3 red.py. Tap the + under the block you just added and select Scripting again. Select Show Notification and configure it as "I am pointing red". Finally, rename the shortcut "point red" or whatever you want Siri to recognize. If you want to handle more colors, just copy this shortcut and change the command it runs.
After configuring all the shortcuts, the screen should look like this.
Activate Siri by pressing the home button or saying "Hey Siri". Tell Siri "point red/blue/green" and wait for the servo and camera to point at the color. After the process is done, a notification will pop up on your device indicating it is finished.
This is a demo video with our explanation: https://youtu.be/ch4CfHEJSlM
Theory of Operation
1. Scanning for color. Because the image processing is slow and our servo is not very accurate, we chose to scan several angles and find the angle that best aligns the camera with the color. You can change the scanning step in the code, but more steps means more time until you can see the result.
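The scan loop can be sketched as follows. This is a sketch, not the project's exact code: `score_at_angle` is a stand-in name for a function that moves the servo, grabs a frame, and counts pixels of the target color.

```python
def scan_for_color(score_at_angle, start=0, stop=180, step=30):
    """Try each angle and return the one whose frame best matches the target.

    score_at_angle(angle) should move the servo, capture a frame, and return
    how many pixels match the target color. A smaller step gives a more
    accurate pointing angle but takes longer, since every step costs one
    slow image-processing pass.
    """
    best_angle, best_score = start, -1
    for angle in range(start, stop + 1, step):
        score = score_at_angle(angle)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```

For example, with a scorer that peaks when the camera faces the target at 90 degrees, `scan_for_color` returns 90.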
2. Image processing and recognition
To process the image and find the position of the target, many OpenCV functions are used: VideoCapture, bitwise_and, GaussianBlur, cvtColor, inRange, erode, threshold, adaptiveThreshold, HoughCircles, and imwrite.
(a) First, use VideoCapture to keep capturing frames from the webcam. Due to the limited bandwidth of the USB port on the BeagleBone Black board, we have to set the frames to a smaller size, which hurts the resolution. One solution is to use a webcam with the H.264 compression protocol built in: compressing the frame on the camera and decompressing it on the board lets the USB port transmit a higher-resolution frame. Use VideoCapture.read() to capture the current frame and save it as a matrix. imwrite is used to save the image to a local file.
(b) Use cvtColor with COLOR_BGR2HSV to transform the picture into an HSV picture. Set a range of HSV values for the chosen color, then use inRange(hsv, lower_range, upper_range) to separate the target color from the surrounding environment. The function bitwise_and is then used to keep the separated part of the original picture and remove the environment.
(c) GaussianBlur is used to blur the picture.
(d) Set the picture to grayscale using cvtColor(frame, cv2.COLOR_RGB2GRAY).
(e) At this point, the target object has already been separated from the environment. However, the picture usually contains many noise points, caused by similar colors in a complex background. To eliminate the noise, erode takes on that role.
(f) Choose either threshold or adaptiveThreshold to make the picture binary. A threshold value must be typed in manually when using threshold, while adaptiveThreshold computes an adaptive threshold value in each small area of the image.
(g) Finally, a coordinate needs to be obtained from the picture. One easy way is to use a for loop to go through each pixel. Since the picture is binary and the target has already been separated, the pixels that differ from their surroundings mark the target position. Another way is to use HoughCircles, which draws circles based on any curves found in the picture; the circle information is saved, and the center position is taken as the target position.
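The pixel-scan approach can be written with NumPy instead of an explicit loop (a sketch, assuming the binary mask from the previous steps; `target_center` is our name, not the project's):

```python
import numpy as np

def target_center(mask):
    """Return the (x, y) center of the nonzero pixels in a binary mask,
    or None if the target color was not found in the frame."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())

# Example: a 100x100 mask with the target at rows/cols 40..59.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 255
```

The returned (x, y) can then be compared against the frame center to decide which scanned angle points closest to the target.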
Using this method, we can find the target with over a 95% success rate and over 90% accuracy.
Heda Wang: learned and configured the Siri shortcuts, wrote the servo code and the scan algorithm, and integrated the servo code with the image recognition code.
Haoxuan Sun: installed OpenCV, wrote the image processing and recognition code, and figured out how to find the IP address on the campus network.
We did everything else together.
Our initial plan was to make the servo track the object, but the image processing speed is very limited: every frame needs several seconds to process. If we can optimize the algorithm, we can achieve object tracking.
Also, Siri shortcuts can accept parameter input. With more development, we could let Siri accept any color input with only one Python file.
There is a lot of potential in Siri shortcuts. Adding more features to the shortcut will make this project connect better to iOS devices.