ECE497 Project Smart Glass

Team members: Hazen Hamather and Luke Kuza

Draft Feedback
Good start, but much is missing. Add pictures of your Smart Glass.

I look forward to seeing your finished report.

I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5=OK, 10=Wow!

 00 Executive Summary
 00 Installation Instructions
 00 User Instructions
 00 Highlights
 00 Theory of Operation
 00 Work Breakdown
 00 Future Work
 00 Conclusions
 00 Demo
 00 Late

Comments: I'm looking forward to seeing this.

Score: 10/100


Executive Summary
Smart Glass will be a spin-off of the original Magic Mirror developed by Michael Teeuw. The team plans to build a setup very similar to his but take it a step further by implementing motion control with an Xbox Kinect. The hardware aspect of this project is 95% complete. The frame and housing have been constructed, and the mirror film has been applied to the glass, which is now installed in the frame. The monitor that rests behind the glass has been securely fastened into place, and an external button has been soldered to it so the user can turn the monitor on and off as desired. So far, we have discovered a way to access some of the Kinect's functionality, such as IR distance and the camera itself, using OpenKinect, an open-source Kinect library. We have also designed a sleek (at least we think so) user interface that displays information the user might find important or interesting. One of the major problems continuing to plague the team is the lack of access to the Kinect: OpenKinect has limited capabilities with respect to our project scope, leading us to seek out other methods such as OpenNI2.

Packaging
Smart Glass was designed and built with possible repairs and upgrades in mind. The front facade provides a nice framed look while the rear is all business. The display sitting behind the glass can be easily removed with only a few screws: two screws hold the monitor in place laterally, and gravity (combined with a couple of shims) holds it securely in place vertically. This was done to make removal easy, something we did numerous times during both the build phase and the programming phase.

Maybe on an unlucky day the glass gets bumped and shatters. As sad as that would be, replacing it is quite simple. The facade holding the now-shattered glass can be removed by taking out the eight screws seen around its edges. Behind those, another eight screws around the edges of the facade hold the glass in; once they are removed, the facade comes apart, allowing new glass to be inserted.

These are only a few of the ways in which design and packaging were taken into account. Should anybody need to access the internal components, we feel they will be pleased with our overall design.

Installation Instructions
The repository for this project can be found on GitHub. Installation is simple: just clone the repository.
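As a minimal sketch (the repository URL is not reproduced here, so substitute the real one from the GitHub page):

 1) cd ~
 2) git clone <repository-url> SmartGlass

This leaves the project at ~/SmartGlass, matching the paths used in the user instructions below.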

User Instructions
Once again, getting the mirror up and running is as easy as installation; an optional full-screen variant is sketched after these steps.
 1) cd ~/SmartGlass/SmartGlass/FlipClock-master
 2) Open TestingAPI.html in Chrome
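
Optionally, for a true edge-to-edge mirror display, the page can be opened in kiosk mode instead of a normal browser window. This is a sketch that assumes the browser is installed as chromium-browser; adjust the command name to match your setup:

 chromium-browser --kiosk ~/SmartGlass/SmartGlass/FlipClock-master/TestingAPI.html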

Highlights
The project was built with a strong, durable frame and a clean, simple design.

Here is a YouTube demo of the project in action with an explanation.

Theory of Operation
The BeagleBone runs a simple webpage that constantly makes JSON requests to various APIs to keep the interface current. The Bone will also do image processing in order to alter what the user sees.
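
As a minimal sketch of that polling pattern (the endpoint, JSON field, and element id below are hypothetical, not the ones TestingAPI.html actually uses):

 // Poll a hypothetical weather endpoint and write the result into the page.
 var WEATHER_URL = 'https://api.example.com/weather?zip=47803';

 function refreshWeather() {
   fetch(WEATHER_URL)
     .then(function (response) { return response.json(); })
     .then(function (data) {
       // Push the fresh value into the interface.
       document.getElementById('temperature').textContent =
         data.temperature + ' °F';
     })
     .catch(function (err) {
       console.error('Weather update failed:', err);
     });
 }

 // Refresh immediately, then every five minutes to keep the display current.
 refreshWeather();
 setInterval(refreshWeather, 5 * 60 * 1000);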

Work Breakdown
The large milestones we have encountered have been designing the frame, constructing the frame, and setting up our interface combined with the motion-detection controls. Luke originally had the idea of how to mount the glass in a frame, and Hazen took that idea and created a SolidWorks model of what the final product should look like. This provided the group with a Bill of Materials (BOM) that we were able to take to the department in order to get funding. Using that BOM, we gathered the right construction materials and began the building process. Hazen took control of the build and Luke handled the electrical work of altering our monitor to suit our needs. Currently, Hazen is finishing up the user interface and Luke is attempting to access the Kinect, hoping to implement simple skeleton tracking for gesture recognition. The Kinect is not working as expected right now, and we aim to have that ironed out by Wednesday night (11/9).

Future Work
One neat thing we would like to see done at some point is to recreate this same project running Windows instead. Using Windows for development opens up most (if not all) of the Kinect capabilities granted to it by Microsoft. It would make for easier gesture recognition and also give the ability to do some very complex motion tracking or even facial recognition. Advanced techniques such as these could allow the same mirror to be customized for multiple users.

Conclusions

The project was a lot more challenging than the team originally anticipated. Creating a reflective surface that still allows light to pass through was difficult, especially at a low price point. In addition, mounting the glass in a frame was difficult; the entire mount/frame took about 13 hours to complete. The Kinect was another difficult problem. While getting skeleton tracking working on a Windows/x86 host was trivial, the problem arose when getting the software to work on ARM systems such as the BBB. The software originally chosen for skeleton tracking was OpenNI, NiTE, and the PrimeSense SensorKinect driver. The only issue is that Apple bought PrimeSense and shut it down, and the old software is no longer updated. The ARM package of NiTE had segmentation faults on the BBB and, after countless hours, was determined to be unworkable. A switch to ROS (Robot Operating System) was considered, but it was not made in time to get tracking working. What *did* work, however, was retrieving the camera image and depth map from the Kinect.

The project was an extremely good learning experience, and the team cannot wait to add tracking functionality in the future. While this may not even be possible on ARM or on a slow core like the BBB's, it will continue to be researched. The tasks we did complete were difficult and made for excellent experience.