ECE497 Project Web Face Recognition

Team members: Yvette Weng, John Wang

Grading Template
I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5=OK, 10=Wow!

00 Executive Summary
00 Installation Instructions
00 User Instructions
00 Highlights
00 Theory of Operation
00 Work Breakdown
00 Future Work
00 Conclusions
00 Demo
00 Late

Comments: I'm looking forward to seeing this.

Score: 10/100

(Inline Comment)

Executive Summary
The goal of this project is to use a webcam to capture a face and compare it with a database to identify the person. We use Microsoft's Face API to compare the photos. The database contains students' pictures and can be modified at any time.

There are two buttons connected to the Bone. One is for capturing and identifying a student's face, and the other is for adding a new face to the database.

Here is the picture of the overall system.

Packaging
There is no special packaging for this project.

Installation Instructions

Here is the github repository: github. In this project we use a PlayStation Eye camera and an ili9341 LCD screen. The LCD screen should be included in your lab kit, and you can borrow the camera from Dr. Yoder or the ECE instrument room.

To install the required libraries and set up the environment, follow the instructions below. These installation instructions are for Debian and Ubuntu distributions:

Please make sure you have Python 3 installed. You can find the instructions here.

1 - Update your operating system and install the dependencies:

bone$ sudo apt-get update
bone$ sudo apt-get install v4l-utils libv4l-dev fbset fbi imagemagick

2 - After installation, some of the v4l2 utilities should be available to use. You can confirm this by issuing the following command:

bone$ v4l2-ctl -h

Next we are going to make sure the camera is suitable for the job. You can find a detailed video tutorial from Derek Molloy here.

3 - Make sure the camera is plugged in and run:

bone$ lsusb

You should be able to see your camera listed there. To learn more about your camera, run:

bone$ v4l2-ctl --all

If v4l2 gives you an error or won't recognize your camera, sorry buddy, you have to use a different camera that is supported by v4l2.

4 - Install hardware

To install the ili9341 LCD screen, connect it to GPIO0 and SPI1 as shown in HW6.

Besides that, the red button is connected to GP1-3 and the green button is connected to GP1-4.

5 - Setup program

Now we are ready to set up the main program. First, go to the project directory:

bone$ cd ECE497_Final
bone$ ls -l

You should see a file called apitest.py. Open it with your favorite editor; in this case, we simply use nano:

bone$ nano apitest.py

Once you open the Python file, you need to change subscription_key and uri_base:

1) Replace the subscription_key string value with your valid subscription key. For example, ours is:

subscription_key = '7ffe7a3ee1844bc2aa3211c4f02bbc55'

2) Set uri_base to your region's endpoint. Ours is:

uri_base = 'eastus.api.cognitive.microsoft.com'

You can get yours for free from the Microsoft Azure Face API webpage. One thing to note is that a free trial key only allows up to 20 transactions per minute. We are using a paid version here for faster transactions.
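To show how subscription_key and uri_base are actually used, here is a minimal sketch of an Azure Face API "detect" call using the requests library. The key and host values are placeholders; substitute your own from the Azure portal.

```python
# Minimal sketch of an Azure Face API "detect" call over raw image bytes.
# subscription_key and uri_base are the same two values edited in apitest.py.
import requests

def build_detect_request(subscription_key, uri_base):
    """Return (url, headers) for a Face API detect call on raw image bytes."""
    url = 'https://{}/face/v1.0/detect'.format(uri_base)
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/octet-stream',  # raw JPEG bytes in the body
    }
    return url, headers

def detect_faces(image_bytes, subscription_key, uri_base):
    """POST an image to the detect endpoint; returns a list of face dicts."""
    url, headers = build_detect_request(subscription_key, uri_base)
    resp = requests.post(url, headers=headers, data=image_bytes)
    resp.raise_for_status()
    return resp.json()
```

The response is a JSON list with one entry per detected face; an empty list means no face was found in the picture.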

Finally, make sure you have saved your changes and run:

bone$ ./setup.sh

You are now good to try the face recognition system.

User Instructions
Programming Tools and Instructions

Our group used C for capturing frames with the webcam and Python for face identification. You can do multiple things with the code base that is in Github.

database

The database folder contains all of the students' pictures, and all of the names are stored in face.dat.
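For illustration, here is a small sketch of reading that index file. The exact format of face.dat is an assumption here (one student name per line, matching the picture files in database/); check the repository for the real layout.

```python
# Hypothetical reader for the database index. We assume face.dat stores one
# student name per line; blank lines are skipped.
def load_names(dat_path='database/face.dat'):
    """Return the list of names stored in face.dat."""
    with open(dat_path) as f:
        return [line.strip() for line in f if line.strip()]
```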

How to Run

1. Connect your BeagleBone Blue to your PC using the USB cable.

2. ssh into your Bone and make sure it has an Internet connection. You can test this by pinging Google:

bone$ ping -c2 google.com

This program won't work without an Internet connection.

3. Run the following line:

bone$ sudo ./main.py

4-1. To add a new face to the database: push the green button, enter your name, and wait for the program to finish adding it.

4-2. To run face identification: push the red button, wait for the program to identify you, and watch your name show up on the LCD screen.
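The two-button behavior above can be sketched as a simple dispatch in the main loop. The pin reading itself depends on the GPIO library used, so only the dispatch logic is shown; the action names are illustrative.

```python
# Sketch of the button-to-action mapping described in steps 4-1 and 4-2.
# How red_pressed/green_pressed are read from GP1-3/GP1-4 is left to the
# GPIO layer; this shows only the decision the main loop makes.
def choose_action(red_pressed, green_pressed):
    """Map button state to the action the main loop should run."""
    if green_pressed:
        return 'add_face'   # green button: capture and add to database
    if red_pressed:
        return 'identify'   # red button: capture and identify
    return None             # no button pressed: keep polling
```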

Highlights
This project works with both pictures and live people. The program can successfully identify a person even when the face is partially covered or partly out of the frame.

You can see a pretty cool demo here, which uses pictures of Justin Bieber.

Theory of Operation
A high resolution pdf version of our operation flowchart can be found here:
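At a high level, the flow is: the C program grabs a frame from the webcam, Python sends it to the Face API, and the best match is shown on the LCD. The sketch below outlines that pipeline; the capture binary name and the identify-response handling are illustrative assumptions, not the exact code in the repository.

```python
# Illustrative pipeline sketch: capture a frame, then pick the best match
# from a Face API "identify" response. './capture' is a hypothetical name
# for the C capture program.
import subprocess

def capture_frame(out_path='capture.jpg'):
    """Invoke the C capture program to grab one webcam frame."""
    subprocess.run(['./capture', out_path], check=True)
    return out_path

def pick_best_candidate(identify_result, threshold=0.5):
    """Return the personId of the highest-confidence candidate, or None.

    identify_result follows the Face API identify shape: a list of faces,
    each with a 'candidates' list of {'personId', 'confidence'} dicts."""
    best_id, best_conf = None, threshold
    for face in identify_result:
        for cand in face.get('candidates', []):
            if cand['confidence'] >= best_conf:
                best_id, best_conf = cand['personId'], cand['confidence']
    return best_id
```

The chosen personId is then mapped back to a student name from the database and written to the LCD.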

Future Work
There are a lot of things you can do with this project.

You could make the program auto-run, so that each time you power up your Bone, the program will start running.

You could also add an error handler for adding new faces. Right now the program adds a face whether or not the captured picture actually contains one, so if there is a "bad" picture, someone has to go into the database and delete the photo manually. An error handler could simplify this process by automatically deleting a "bad" picture when it detects no face in the photo.
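That error handler could be quite small: after adding a picture, run a detect call and delete the file if the response contains no faces. The sketch below assumes a detect wrapper that returns the Face API's list of detected faces; the function names are illustrative.

```python
# Sketch of the proposed error handler. detect_result is assumed to be the
# Face API detect response: a list of face dicts, empty when no face is found.
import os

def prune_if_no_face(photo_path, detect_result):
    """Delete photo_path when the detect response contains no faces.

    Returns True if the photo was kept, False if it was removed."""
    if detect_result:           # non-empty list: at least one face detected
        return True
    os.remove(photo_path)       # "bad" picture: clean it up automatically
    return False
```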

One cool idea we came up with at the very end of this project is to use the touch screen to replace the two physical buttons. This needs more research and experimentation on using the LCD screen as an input device, but it would definitely make the whole system more compact and useful.

Conclusions
The design featured in this project is a good foundation for IoT devices like smart locks. We finished the first part, which is face analysis and verification. As a next step, we can focus on the Internet part. We have looked into Blynk, an Internet of Things platform with a drag-and-drop mobile application builder, and saw great potential in it. By utilizing its command-line display and streaming functions, we could shrink the physical size one step further. Besides that, as mentioned in a previous section, we haven't unlocked the full potential of the LCD touch screen; currently we only use it as a display. There is still much to do in this project, and it could be a good topic for future classes.