ECE497 Project - Object Detection w/ DNN




Team members: Paul Wilda, Leela Pakanati

Grading Template

I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5=OK, 10=Wow!!

00 Executive Summary
00 Installation Instructions 
00 User Instructions
00 Highlights
00 Theory of Operation
00 Work Breakdown
00 Future Work
00 Conclusions
00 Demo
00 Late
Comments: I'm looking forward to seeing this.

Score:  10/100

(Inline Comment)

== Executive Summary ==

'''GET PICTURES OF IMAGE RECOGNITION'''

We are using TensorFlow and OpenCV to detect objects in the frame of a web camera. The camera is mounted on a tilt pan kit so that we can also track the objects in frame. Because object detection is computationally intensive, we use a local web server to process each image and find the objects within it. The web server returns an error vector, which the Pi converts to a control vector and uses to adjust the camera's angle, keeping the tracked object in the middle of the frame. To reduce the complexity of the project we would have liked to perform all of the processing on the Pi itself; however, we were unable to get a reasonable response time from either the Pi or the BeagleBone.

== Packaging ==

In the spirit of [http://cpprojects.blogspot.com/2013/07/small-build-big-execuition.html Small Build, Big Execution], we created an enclosure for our project out of MDF. We CNC'd two pieces that were then glued together, sanded, and painted. A notch was cut into the back so the cables can reach the Raspberry Pi, which is mounted on the underside with four M3 plastic standoffs and M3 screws. The tilt pan kit is mounted on the top piece, and a hole was drilled through it to route the servo motor cables to the Pi.

== Installation Instructions ==

Give step by step instructions on how to install your project.

* Include your GitHub path as a link to the read-only git site, like this: https://github.com/MarkAYoder/gitLearn.
* Be sure your README.md includes an up-to-date and clear description of your project, so that someone who comes across your git repository can quickly learn what you did and how to reproduce it.
* Include a Makefile for your code if using C.
* Include any additional packages installed via apt, along with install.sh and setup.sh files (a hypothetical install.sh is sketched after this list).
* Include kernel mods.
* If extra hardware is needed, include links to where it can be obtained.
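
Since this page does not yet list the project's packages, here is a guess at what an install.sh for a setup like ours might contain; every package name below is an assumption, not taken from the actual repository.

<pre>
#!/bin/bash
# Hypothetical install.sh -- package names are assumptions, not from the repo.
sudo apt update
sudo apt install -y python3-pip python3-opencv   # OpenCV for capture on the Pi
pip3 install flask requests numpy                # Flask (server) and requests (Pi client)
pip3 install tensorflow                          # detector runs on the server, not the Pi
</pre>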

== User Instructions ==

Once everything is installed, how do you use the program? Give details here, so if you have a long user manual, link to it here.

== Highlights ==

Here is where you brag about what your project can do.

Include a YouTube demo with an audio description.

== Theory of Operation ==

=== Hardware ===

[[File:Fritzing Diagram.png|400px|thumb|none|left|Schematic]]

=== Software ===

[[File:High level diagram.png|400px|thumb|none|left|High-level Hardware Overview]]


The camera sends the image to the Raspberry Pi over USB. The Pi then sends the image to the web server. The web server processes the image, picks the detected person it has the highest confidence in, and returns an error vector to the Pi: the distance between the identified object and the center of the frame. Using a PID control loop, the Pi converts this error vector into a control vector. Finally, the control vector is turned into PWM signals that are sent to each servo.
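
To make that loop concrete, here is a minimal sketch of the Pi-side code. The server address, the JSON field names, the GPIO pins, and the PID gains are all assumptions for illustration; the project's actual code may differ.

<pre>
# Minimal sketch of the Pi-side tracking loop described above.
# SERVER_URL, JSON fields, GPIO pins, and PID gains are assumptions.
import cv2
import requests
import RPi.GPIO as GPIO

SERVER_URL = "http://192.168.1.100:5000/detect"   # hypothetical local server

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)                          # pan servo (assumed pin)
GPIO.setup(19, GPIO.OUT)                          # tilt servo (assumed pin)
pan_pwm = GPIO.PWM(18, 50)                        # standard 50 Hz servo signal
tilt_pwm = GPIO.PWM(19, 50)
pan_pwm.start(7.5)                                # ~90 degrees to start
tilt_pwm.start(7.5)

KP, KI, KD = 0.02, 0.0, 0.005                     # illustrative, not our tuned gains

def pid_step(error, state):
    """One PID update for a single axis; state is (integral, last_error)."""
    integral, last = state
    integral += error
    output = KP * error + KI * integral + KD * (error - last)
    return output, (integral, error)

def angle_to_duty(angle):
    """Map 0-180 degrees to a 2.5-12.5% duty cycle (0.5-2.5 ms pulse at 50 Hz)."""
    return 2.5 + (angle / 180.0) * 10.0

cap = cv2.VideoCapture(0)                         # USB webcam
pan, tilt = 90.0, 90.0
pan_state = tilt_state = (0.0, 0.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    _, jpg = cv2.imencode(".jpg", frame)          # compress before sending
    r = requests.post(SERVER_URL, files={"image": jpg.tobytes()})
    err = r.json()                                # assumed form: {"dx": px, "dy": px}
    # Error vector -> control vector via PID, then clamp to the servo range
    dpan, pan_state = pid_step(err["dx"], pan_state)
    dtilt, tilt_state = pid_step(err["dy"], tilt_state)
    pan = min(180.0, max(0.0, pan + dpan))
    tilt = min(180.0, max(0.0, tilt + dtilt))
    # Control vector -> PWM duty cycles for the two servos
    pan_pwm.ChangeDutyCycle(angle_to_duty(pan))
    tilt_pwm.ChangeDutyCycle(angle_to_duty(tilt))
</pre>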

This whole process takes anywhere from 100 to 130 ms. Our greatest bottleneck is the time it takes to transfer the image to and from the web server. Even with the delays inherent in sending files over the network, this is still significantly faster than doing all the processing on the Pi or the BeagleBone: because of their hardware limitations, processing a single image took nearly 3 s on the Pi and 6 s on the BeagleBone.
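
For reference, the server's side of that round trip could look roughly like the sketch below. Flask and OpenCV's DNN module with a MobileNet-SSD model are used as stand-ins here; the endpoint name, model files, and class index are assumptions rather than the project's actual configuration.

<pre>
# Hypothetical detection endpoint; model files and endpoint name are assumed.
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
# MobileNet-SSD (VOC) as a stand-in detector; class 15 is "person"
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON = 15

@app.route("/detect", methods=["POST"])
def detect():
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    img = cv2.imdecode(data, cv2.IMREAD_COLOR)
    h, w = img.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    det = net.forward()                     # shape (1, 1, N, 7)
    best, best_conf = None, 0.0
    for i in range(det.shape[2]):
        conf = float(det[0, 0, i, 2])
        if int(det[0, 0, i, 1]) == PERSON and conf > best_conf:
            best, best_conf = det[0, 0, i, 3:7], conf
    if best is None:
        return jsonify({"dx": 0, "dy": 0})  # nothing found: hold position
    x1, y1, x2, y2 = best * np.array([w, h, w, h])
    # Error vector: offset of the detection's center from the frame center
    return jsonify({"dx": float((x1 + x2) / 2 - w / 2),
                    "dy": float((y1 + y2) / 2 - h / 2)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
</pre>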

To further optimize the system, we used both cores on the Pi to parallelize some of the tasks. Currently, image transfer and display are handled on one core while the control loop runs on the other. This helps preserve the timing of the control loop and takes some load off the core handling the image.
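
A rough sketch of that split, using Python's multiprocessing module; the placeholder bodies stand in for the capture and PID code sketched above, and the exact division of work in our code may differ.

<pre>
# Sketch of the two-process split: one process talks to the server, the
# other runs the control loop at its own rate. Bodies are placeholders.
import time
from multiprocessing import Process, Queue

def image_worker(errors):
    """Capture frames, send them to the server, queue up error vectors."""
    while True:
        # frame = capture(); err = post_to_server(frame)  (see earlier sketch)
        err = {"dx": 0.0, "dy": 0.0}        # placeholder result
        errors.put(err)
        time.sleep(0.1)                     # ~100 ms per server round trip

def control_worker(errors):
    """Run the PID loop on the latest error vector and drive the servos."""
    err = {"dx": 0.0, "dy": 0.0}
    while True:
        while not errors.empty():           # drain to the freshest measurement
            err = errors.get()
        # pid_step(...) and ChangeDutyCycle(...) would go here
        time.sleep(0.02)                    # control loop at ~50 Hz

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=image_worker, args=(q,)),
             Process(target=control_worker, args=(q,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
</pre>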


== Work Breakdown ==

=== Milestones: ===

* Getting OpenCV on the Pi/BeagleBone (10/28)
* Testing Pi vs BeagleBone operation (10/29)
* Image sending and receiving (11/5)
* Web server configuration (11/5)
* Servo and tilt pan kit assembly (11/10)
* Control loop and tuning for servos (11/14)
* Enclosure design and construction (11/16)
* Documentation (11/19)

== Future Work ==

* Creating our own image library to train the model on would be a very interesting addition to this project. It would allow us to detect and recognize specific individuals and track only certain people.
* Making the tilt pan kit more robust would let us mount a nicer camera on the system, significantly improving both image quality and recognition accuracy.

== Conclusions ==

Give some concluding thoughts about the project. Suggest some future additions that could make it even more interesting.