ECE497 Project: Makeshift Drums

Team members: Will Elswick, James Savage.

Grading Template
I'm using the following template to grade. Each slot is 10 points. 0 = Missing, 5 = OK, 10 = Wow!

08 Executive Summary - Check the tense: "project will communicate" vs. "project communicates"
10 Installation Instructions - Nice and detailed
09 User Instructions - Clear
09 Highlights - Cool video
10 Theory of Operation - Good. Has the source of the seg faults been found?
09 Work Breakdown - A lifetime of work?
09 Future Work
10 Conclusions - Good
10 Demo
10 Late (not)

Comments: Nice project. You pulled together a number of interesting things to make it work.

Score: 94/100

Executive Summary
Our project communicates with an accelerometer to make music. The sensor can be attached by the user to everyday objects like books or a table top, and the BeagleBone Black uses the sensor's output to play sounds. In this way, a user can play the drums, bongos, maracas, et cetera without a drum kit or other bulky equipment.

The implementation of this project combines four major parts.


 * An ADXL345 accelerometer,
 * A C++ node.js module to perform polling and data processing,
 * A simple node.js server, which uses socket.io to send motion events, and
 * A client side app which uses HTML5 WebAudio APIs to play back sound in response to motion events.

Installation Instructions
Building this project is easiest to do on the BeagleBone itself; however, it should be possible to cross-compile if you wish.


 * Download the project source code from https://github.com/axiixc/beagle-band
 * Building node.js add-ons uses a special build manager called node-gyp; to install it you will first need node and npm on the board.
 * You can then install node-gyp using npm install -g node-gyp (note: if you are not already root this will require root privileges).
 * Build the native module:
 * Change directory to the add-on's subfolder.
 * Use node-gyp rebuild to fully rebuild the module.
 * If you wish, you can test the module with the included test script, which simply prints out every motion event it receives.
 * Install dependencies for the server:
 * In the root project directory, run npm install. This will download the final-fs and socket.io libraries required by the server.
 * Start the server by running the server script with node. It will listen on port 3001.

The only hardware we used, other than the BeagleBone, was the aforementioned ADXL345 accelerometer.


 * VCC should be tied to 3.3V and GND to GND.
 * We used the I2C interface which required the /CS pin to be tied high. We tied the SDO/ALT ADDRESS pin low to give the device an address of 0x53.
 * We used I2C bus 1 by connecting SCL and SDA to pins P9_19 and P9_20, respectively.
 * The INT1 pin, which lets the BeagleBone know when samples are ready for processing, was connected to GPIO60 (P9_12).

User Instructions/Highlights
Our project lets the user navigate to a page served on a port on the BeagleBone. The board then provides an interface which allows the user to select which instrument he or she wishes to play.

At this point, the user can shake, strike, or otherwise agitate the ADXL345 accelerometer connected to the BeagleBone. The BeagleBone processes these movements and plays sounds corresponding to the selected instrument through the browser. In this way the user can treat the sensor like a maraca, bongo, or other percussion instrument and emulate a real percussionist.

A video of our working prototype is available at https://vimeo.com/79573880.

Theory of Operation
When the program first runs it initializes the accelerometer to sample at 100 Hz with a range of +/-2g. Whenever a sample is collected, the accelerometer sends an interrupt signal to one of the BBB's GPIO pins. Our code then uses the I2C protocol to retrieve that sample and any others collected in the meantime. The accelerometer has a FIFO queue which will hold up to 32 samples in the event that the BBB does not service the interrupt before the next samples arrive.
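At the register level, this configuration maps onto a handful of ADXL345 registers. The addresses and bit values below come from the Analog Devices ADXL345 datasheet, but since the project's actual init code is not reproduced in this writeup, treat this as an illustrative sketch rather than the verbatim source; the I2C transfers themselves (to address 0x53 on bus 1) are omitted.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative ADXL345 register map excerpts (from the datasheet).
constexpr uint8_t REG_BW_RATE     = 0x2C; // output data rate
constexpr uint8_t REG_POWER_CTL   = 0x2D; // measurement enable
constexpr uint8_t REG_INT_ENABLE  = 0x2E; // interrupt sources
constexpr uint8_t REG_DATA_FORMAT = 0x31; // range / resolution
constexpr uint8_t REG_DATAX0      = 0x32; // first of 6 sample bytes
constexpr uint8_t REG_FIFO_CTL    = 0x38; // FIFO mode

constexpr uint8_t RATE_100HZ   = 0x0A; // 100 Hz output data rate
constexpr uint8_t RANGE_2G     = 0x00; // +/-2g range
constexpr uint8_t MEASURE      = 0x08; // start measuring
constexpr uint8_t INT_DATA_RDY = 0x80; // DATA_READY interrupt (on INT1)
constexpr uint8_t FIFO_STREAM  = 0x80; // stream mode: keep the newest 32 samples

// Each sample read starting at DATAX0 is 6 bytes: x, y, z as
// little-endian signed 16-bit values.
struct Sample { int16_t x, y, z; };

Sample decode(const uint8_t raw[6]) {
    auto s16 = [](uint8_t lo, uint8_t hi) {
        return static_cast<int16_t>(lo | (hi << 8));
    };
    return { s16(raw[0], raw[1]), s16(raw[2], raw[3]), s16(raw[4], raw[5]) };
}
```

Draining the FIFO amounts to repeating the 6-byte burst read from REG_DATAX0 once per queued sample.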

The BBB then conditions the signal so it can ignore any offsets (such as those due to the force of gravity) and small, slow changes, such as noise from the device being rotated. By changing a few parameters of our conditioning algorithm we can change how strong a strike or shake must be before the device detects it.

To condition the signal, we first considered the fact that the accelerometer is affected by the force of gravity. This adds an offset that depends on the orientation in which the user is holding the accelerometer and skews our calculation. To factor it out, we took the derivative of the acceleration by subtracting the previous sample from each incoming one. Accumulating the resulting differences then reconstructs the signal without the offset.

However, if we reorient the device the offset comes back--we have merely calibrated out the offset for the accelerometer's starting position. To continuously factor out the offset as the orientation changes, we made the accumulator add together only the latest handful of samples. This essentially creates a highpass filter. By changing the number of points accumulated at once we can control how quickly a change must occur to avoid being filtered out. This makes shakes and strikes much easier to detect.
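The two conditioning steps above (differencing consecutive samples, then summing only the most recent differences) can be sketched as follows. The class name and window length are illustrative, not taken from the project source.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Difference consecutive samples to remove the (gravity-induced) DC
// offset, then sum only the most recent `window` differences so the
// offset stays cancelled even after the device is reoriented.
class HighPass {
public:
    explicit HighPass(std::size_t window) : window_(window) {}

    // Feed one raw sample; returns the filtered (high-passed) value.
    double step(double sample) {
        double diff = sample - prev_;
        prev_ = sample;
        diffs_.push_back(diff);
        sum_ += diff;
        if (diffs_.size() > window_) {  // drop differences older than the
            sum_ -= diffs_.front();     // window so slow drifts (e.g. a
            diffs_.pop_front();         // reorientation) are forgotten
        }
        return sum_;
    }

private:
    std::size_t window_;
    double prev_ = 0.0;
    double sum_ = 0.0;
    std::deque<double> diffs_;
};
```

Note that summing the last N differences is equivalent to x[n] minus x[n-N], so any constant offset (including a new one after a reorientation) cancels within N samples, which is exactly the highpass behavior described above.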

Next we simply accumulate our filtered acceleration values to get a velocity. When the velocity crosses zero (and the acceleration is non-zero) we have found a shake or a strike, and our program reports it to the browser. We also take the acceleration value at that point and use it to set the volume of the sound we make.
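A minimal sketch of this detection step, assuming a hypothetical intensity threshold (the writeup does not give the actual parameter values):

```cpp
#include <cassert>
#include <cmath>

struct Strike { bool hit; double intensity; };

// Integrate filtered acceleration into a velocity estimate and report a
// strike when the velocity crosses zero while the acceleration is still
// significant; |accel| at that instant doubles as the volume/intensity.
class StrikeDetector {
public:
    explicit StrikeDetector(double threshold) : threshold_(threshold) {}

    Strike step(double accel) {
        double prevV = velocity_;
        velocity_ += accel;  // accumulate acceleration -> velocity
        bool crossed = ((prevV > 0) != (velocity_ > 0)) && prevV != 0;
        if (crossed && std::fabs(accel) > threshold_)
            return { true, std::fabs(accel) };
        return { false, 0.0 };
    }

private:
    double threshold_;   // illustrative; tune against real sensor data
    double velocity_ = 0.0;
};
```

Feeding it a burst of positive acceleration followed by a sharp reversal (the profile of a strike) trips the detector at the reversal, which is when the drum sound should fire.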

In designing our architecture we encountered a problem: how best to both read data from the accelerometer and transfer it to our client web application. While it would be easiest to do data collection and processing in C or a similar language, we wanted to use WebSockets to communicate with the web browser. As a novel compromise, we opted to write our own node.js add-on in C++ and pass relevant updates back to JavaScript, which in turn integrates with the higher-level networking libraries to communicate with the browser.

The basics of writing a node.js add-on are described in the node.js API documentation; however, we also consulted numerous other sources in order to build this module. Specifically, due to node.js's event-driven model, we had to integrate with libuv (node.js's runloop and thread library) in order to preserve the asynchronous style of node.js. This proved to be a relatively low technical hurdle, as most of the libuv API closely mirrors its synchronous counterparts in standard OS libraries (e.g., uv_poll vs. poll). However, we did encounter some concurrency issues that we have not yet been able to address, involving segmentation faults somewhere between our data processing and JavaScript callbacks. Our add-on appears to run smoothly for 100 to 300 invocations of our callback, at which point it mysteriously crashes, seemingly due to a bad pointer address. Were we to continue the project, this would be our final remaining bug to fix.
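A likely culprit for crashes of that shape is the handoff between the polling thread and the JavaScript callback: uv_async_send is the only libuv call documented as safe to invoke from another thread, and it may coalesce several signals into one callback, so any payload shared with the callback needs its own locking. The sketch below shows a lock-protected event queue, with plain std::thread standing in for the libuv plumbing; names are illustrative, not from the project source.

```cpp
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Events produced on the polling thread, consumed where the JS callback
// would run. Without the mutex, the consumer can observe a half-written
// queue: the kind of intermittent bad-pointer crash described above.
struct MotionEvent { double intensity; };

std::mutex queueLock;
std::queue<MotionEvent> pending;

void producer(int count) {          // stands in for the I2C polling loop
    for (int i = 0; i < count; ++i) {
        std::lock_guard<std::mutex> g(queueLock);
        pending.push({ double(i) });
        // In the real add-on: uv_async_send(&handle) here, after
        // releasing (or while holding only briefly) the lock.
    }
}

std::vector<MotionEvent> drain() {  // stands in for the uv_async callback
    std::lock_guard<std::mutex> g(queueLock);
    std::vector<MotionEvent> out;
    while (!pending.empty()) {
        out.push_back(pending.front());
        pending.pop();
    }
    return out;
}
```

Because uv_async_send may collapse several signals into one wakeup, draining the whole queue per callback (rather than assuming one event per wakeup) is the usual pattern.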

As for the actual implementation of the module, the data we pass back to JavaScript includes only an intensity measure, scaled based on observed ranges of the accelerometer rather than its theoretical limits in order to produce a better-sounding result. The callback in JavaScript then uses the socket.io library to emit an event to all listening clients, along with the intensity value, which the clients use to adjust the volume of the played sound.

On the client side we used WebAudio to play back static MP3 files, which are also hosted by our node.js server. When the client opens a new websocket connection, the server emits an event enumerating all available audio files. The client uses this list to initialize a SoundList structure on its end, which holds the list of available sounds and contains logic to download and retain cached buffers of those sounds, allowing for seamless playback. Then, when a sound is selected by the client (either by explicit user action, or implicitly when a new SoundList is initialized), the relevant resources are downloaded from the server, and playback begins as soon as playback events are received from the server.

Work Breakdown
Accelerometer interface -- Will Elswick - 7 hours

Signal conditioning -- Will Elswick - 9 hours

Node.js addon -- James Savage - 12 hours

Web browser interface -- James Savage - 7 hours

Inspiration/sarcasm -- James Savage - his entire life

Future Work
This project could easily be expanded to take inputs from multiple accelerometers. We didn't have any other ADXL345 accelerometers, so we couldn't duplicate our inputs, but with more sensors we could easily modify our interface to recognize each additional accelerometer and assign it its own unique sounds. This way we could build a whole percussion section or even a complete drum kit.

Additionally, we could add some composing tools to the interface so that the user could record, play back, or loop a performance. By switching instruments, someone could create an entire rhythmic track using only the one accelerometer we have.

Conclusions
The ADXL345 accelerometer proved very easy to interface with as we could use the same I2C techniques we had learned previously in class. It had many different configurable options that allowed us many degrees of freedom in our design. Altering the sampling frequency, ranging, and interrupt mode of the device allowed us to most easily grab samples from the device and process them while still responding in real time to the user.

We found that by using simple math and the knowledge gained in ECE signal processing courses we could easily condition the inputs from the accelerometer and differentiate between shakes and slower movements. All of our filters proved to be accumulators or differentiators that were entirely implemented using simple math operators in C. This allowed us to use C++ to service all of our interrupt handling quickly enough that the whole system could respond in real time.