ECE497 Project Web Face Recognition

Team members: Yvette Weng, John Wang

Grading Template
I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5=OK, 10=Wow!

00 Executive Summary
00 Installation Instructions
00 User Instructions
00 Highlights
00 Theory of Operation
00 Work Breakdown
00 Future Work
00 Conclusions
00 Demo
00 Late
Comments: I'm looking forward to seeing this.

Score: 10/100


Executive Summary
The goal of this project is to use a webcam to capture a face and compare it against a database to identify the person. Beyond that, we also aim to create an easy-to-use face-adding mechanism and an error-handling system. We use Microsoft's Face API to generate faceIDs and compare photos. The database contains students' pictures and can be modified at any time.

There are two buttons connected to the Bone. One is for capturing and identifying the student's face, and the other is for adding a new face to the database.

Here is a picture of the overall system.

Packaging
There is no special packaging for this project. As shown in the system overview picture, the wiring is simple and clear; most of the interesting work happens under the hood.

Installation Instructions
Here is the GitHub repository: github.

In this project we use a PlayStation Eye camera and an ili9341 LCD screen. The LCD screen should be included in your lab kit, and you can borrow the camera from Dr. Yoder or the ECE instrument room.

To install the required libraries and set up the environment, follow the instructions below. These installation instructions are for Debian and Ubuntu distributions:

Please make sure you have python3 installed. You can find the instructions here.

1 - Update your operating system and install the dependencies:

bone$ sudo apt-get update
bone$ sudo apt-get install v4l-utils libv4l-dev fbset fbi imagemagick

2 - After installation, some v4l2 utilities should be available. You can confirm this by issuing the following command:

bone$ v4l2-ctl -h

Next we are going to make sure the camera is suitable for the job. You can find a detailed video tutorial from Derek Molloy here.

3 - Make sure the camera is plugged in and run:

bone$ lsusb

You should be able to see your camera listed there. To learn more about your camera, run:

bone$ v4l2-ctl --all

If v4l2 gives you an error or won't recognize your camera, sorry buddy, you have to use a different camera that is supported by v4l2.

4 - Install hardware

To install the ili9341 LCD screen, connect it to GPIO0 and SPI1 as shown in HW6.

Besides that, the red button is connected to GP1-3, and the green button is connected to GP1-4.

5 - Set up the program

Now we are ready to set up the main program. First, go to the project directory:

bone$ cd ECE497_Final
bone$ ls -l

You should see a file called apitest.py. Open it with your favorite editor; in this case, we simply use nano:

bone$ nano apitest.py

Once you open the Python file, you need to change subscription_key and uri_base. For example, ours are:

subscription_key = '7ffe7a3ee1844bc2aa3211c4f02bbc55'
Replace the subscription_key string value with your own valid subscription key.

uri_base = 'eastus.api.cognitive.microsoft.com'

You can get your own key from the Microsoft Azure Face API for free. One thing to notice: a free-trial key only allows up to 20 transactions per minute. We are using a paid version here for faster transactions. You can, of course, try our key, but keep in mind that it will expire in 20 days, so there is very little chance it will still work when you try.
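As a rough sketch of how apitest.py can talk to the service: the request carries the key in the Ocp-Apim-Subscription-Key header and posts raw image bytes to the /face/v1.0/detect path, per Microsoft's published Face API v1.0 convention. The helper names below are illustrative, not our exact code.

```python
import http.client
import json
import urllib.parse

FACE_DETECT_PATH = '/face/v1.0/detect'

def build_detect_request(subscription_key):
    """Return the (path, headers) pair for a Face API detect call."""
    headers = {
        'Content-Type': 'application/octet-stream',
        'Ocp-Apim-Subscription-Key': subscription_key,
    }
    params = urllib.parse.urlencode({'returnFaceId': 'true'})
    return FACE_DETECT_PATH + '?' + params, headers

def parse_face_ids(response_body):
    """Pull the faceId strings out of the JSON array the API returns."""
    return [face['faceId'] for face in json.loads(response_body)]

def detect(subscription_key, uri_base, image_bytes):
    """POST the raw jpg bytes to uri_base and return the detected faceIDs."""
    path, headers = build_detect_request(subscription_key)
    conn = http.client.HTTPSConnection(uri_base)
    conn.request('POST', path, image_bytes, headers)
    body = conn.getresponse().read().decode()
    conn.close()
    return parse_face_ids(body)
```

A call like detect(subscription_key, uri_base, open('1.jpg', 'rb').read()) would then return the faceIDs later fed to verification.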

Finally, make sure you have saved your changes and run:

bone$ ./setup.sh

You are now ready to try the face recognition system by running main.py. The instructions will be shown on the LCD screen.

User Instructions
Programming Tools and Instructions

Our group used C for capturing images with the webcam and Python for face identification. You can do multiple things with the code base that is on GitHub.

Database

The database folder contains all the students' pictures, and all the names are stored in face.dat.
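We don't spell out the face.dat format above, so as an illustrative sketch, suppose it stores one name per line in the same order as the numbered pictures (this layout is an assumption for the example, not a description of our exact file):

```python
import os

def add_name(dat_path, name):
    """Append a newly enrolled student's name to face.dat."""
    with open(dat_path, 'a') as f:
        f.write(name + '\n')

def load_names(dat_path):
    """Return the enrolled names; picture N.jpg would belong to names[N-1]."""
    if not os.path.exists(dat_path):
        return []
    with open(dat_path) as f:
        return [line.strip() for line in f if line.strip()]
```

With a scheme like this, matching a verified picture back to a name is just an index lookup.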

How to Run

1. Connect your BeagleBone Blue to your PC using the USB cable.

2. ssh into your Bone and make sure it has an Internet connection. You can test this by pinging Google:

bone$ ping -c2 google.com

This program won't work without an Internet connection.

3. Run the following:

bone$ sudo ./main.py

4-1. To add a new face to the database: push the green button, enter your name, and wait for the program to finish adding.

4-2. To run face identification: push the red button, wait for the program to identify you, and see your name show up on the LCD screen.

Image Grabber

For this specific webcam, the PlayStation Eye camera, you MUST set the resolution to 320*280 or lower. Otherwise the camera won't save any picture or video clip at all.
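One way to enforce that limit before every capture is to pin the format with v4l2-ctl's standard --set-fmt-video option. A minimal sketch, assuming the camera sits at /dev/video0 (the guard value 320*280 is simply the limit we observed above):

```python
def set_format_cmd(width=320, height=280, device='/dev/video0'):
    """Build the v4l2-ctl argument list that pins the PS Eye to a
    resolution it will actually save (320*280 or lower)."""
    if width > 320 or height > 280:
        raise ValueError('PS Eye would not save frames above 320*280')
    return ['v4l2-ctl', '-d', device,
            '--set-fmt-video=width=%d,height=%d' % (width, height)]
```

Running subprocess.run(set_format_cmd(), check=True) before invoking the grabber would apply the setting.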

Highlights
This project works with both face pictures and human faces. The program can successfully identify a person even if the person's face is partially covered or partly out of the frame. In most cases, the program can even recognize a person with or without accessories like a hat or glasses. For the sake of simplicity, we didn't show this feature in the video demo, but it was demonstrated during the physical demo session.

You can see a pretty cool demo here. We used two pictures of Justin Bieber: one was added to the database and the other was used to verify the identity. Although the two pictures share some similarities, they are visually distinguishable, with different face angles, shadow levels, and facial expressions. After we pressed the green button and typed the user name, the first picture was put into the database. We then presented the camera with the second picture, and after running verification, the result was shown on screen.

Theory of Operation
The program can be divided into three parts: image grabber, face detection, and face verification. Pressing either button runs the image grabber and saves a picture to the program's root directory. The camera's default output is a ppm file, which can't be displayed or processed easily, so the ppm file is then converted to a jpg file whose name depends on the work type. If the user presses the green button (add a new face), the file is named as the current number of pictures plus one and moved into the database. In the other scenario, where the user presses the red button (verify a face), the file is named 1.jpg. The detailed face detection, verification, and error handling are shown in the flowchart. One thing to notice is that the input for face verification is not a picture but the faceIDs returned from face detection. All ppm and jpg files in the root directory are removed when verification or detection finishes, or when an error is detected.
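The file-handling side of that flow can be sketched as follows. The function names and the DATABASE path are placeholders for illustration; the naming scheme itself is the one described above, and ImageMagick's convert (installed in step 1) handles the ppm-to-jpg step:

```python
import glob
import os
import subprocess

DATABASE = 'database'  # assumed name of the project's picture folder

def target_name(mode, existing_count):
    """green: the next numbered slot in the database;
    red: the staging file 1.jpg used for verification."""
    if mode == 'green':
        return os.path.join(DATABASE, '%d.jpg' % (existing_count + 1))
    return '1.jpg'

def convert_and_file(ppm_path, mode):
    """Convert the grabber's ppm output to jpg and put it where the
    chosen mode expects it."""
    count = len(glob.glob(os.path.join(DATABASE, '*.jpg')))
    jpg_path = target_name(mode, count)
    subprocess.run(['convert', ppm_path, jpg_path], check=True)
    return jpg_path

def cleanup():
    """Remove leftover ppm and jpg files from the root directory."""
    for leftover in glob.glob('*.ppm') + glob.glob('*.jpg'):
        os.remove(leftover)
```

cleanup() corresponds to the final step above, run after verification or detection finishes or an error is caught.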

A high-resolution PDF version of our operation flowchart can be found here:

Future Work
There are a lot of things you can do with this project.

You could make the program auto-run, so that each time you power up your Bone, the program will start running.

You could also add an error handler for adding new faces. Right now the program adds a face whether or not the captured picture contains one, so if there is a "bad" picture, you need to go into the database and delete the photo manually. An error handler could simplify this by automatically deleting a "bad" picture when it detects no face in the photo.
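A minimal sketch of such a handler, assuming some detect callable that returns the list of faceIDs the Face API found in a photo (the name add_face_checked is hypothetical):

```python
import os

def add_face_checked(photo_path, detect):
    """Keep the photo only if detection found at least one face;
    otherwise delete it so no "bad" picture reaches the database.

    detect is any callable returning the list of faceIDs found in a
    photo, e.g. a wrapper around the Face API detect call."""
    if not detect(photo_path):
        os.remove(photo_path)  # no face found: discard the capture
        return False
    return True
```

main.py could call this right after the green-button capture and only then append the name to face.dat.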

You can, if you really want to, use OpenCV instead of a commercial face recognition service like the Microsoft Face API. We would not recommend that, as face verification requires a huge amount of training, which is not the focus of this class.

One cool thing we came up with at the very end of this project: you could use the touch screen to replace the two physical buttons. This needs more research and experimentation on using the LCD screen as an input device, but it would definitely make the whole system more compact and useful.

Conclusions
The design featured in this project is a good foundation for IoT devices like smart locks. We finished the first part, face analysis and verification; next we can focus on the Internet part. We have looked into Blynk, an Internet of Things platform with a drag-and-drop mobile application builder, and saw great potential in it. By utilizing its command-line display and streaming functions, we could shrink the physical size one step further. Besides that, as mentioned in a previous section, we haven't unlocked the full potential of the LCD touch screen; currently we only use it as a display.

A good number of hours was spent figuring out how to take a picture, as the sample code from Derek Molloy won't work directly with our camera. Putting that main problem aside, our exploration of sending requests, parsing JSON responses, and forming and organizing the database went pretty smoothly. We also put a lot of effort into optimizing the user instructions, and it paid off: those instructions helped users get through the confusing parts without our help during the demo session. Although we achieved a high degree of completion in this project, there is much more that could be done, and it can still be a good topic for future classes.