MPA03 Kinects and Motors

Due: Wednesday, April 16 (before class, meet in HCIL)

We've now acquired significant experience working with the Arduino hardware prototyping platform. In MPA01 and 02, we designed and implemented new interactive experiences using custom hardware that we built. With MPA01, the focus was on creating new input devices and interactions with desktop computers. With MPA02, we left the desktop environment altogether and explored embedding computation in low-tech materials (paper, fabric, wood, cardboard). In this assignment, we continue the theme of combining the physical and virtual worlds in unique ways, but our approach changes. Enter: the Microsoft Kinect, which combines an IR camera (for depth) and a traditional RGB camera (for visuals) into a single sensor.

What To Do

In this assignment, your goal is to use the Microsoft Kinect to create a physical interaction that is digitized, analyzed, and then used to control/actuate something in the real world. Hence, Kinects and Motors. :) You will use the Kinect along with computer vision to translate the physical into the virtual, and the Arduino (or the Raspberry Pi or BeagleBone) to translate the virtual back into the physical.
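To make that "physical → virtual → physical" loop concrete, here is a minimal sketch in Python. Everything here is a hypothetical illustration, not required code: it assumes depth frames arrive as 2D lists of millimeter values (Kinect SDKs typically report depth in mm), the thresholds are made up, and the serial link to the Arduino is left as a comment.

```python
# Hypothetical sketch of the Kinect -> analysis -> actuation pipeline.
# Assumes depth frames are 2D lists of millimeter values (0 = no reading).

def person_present(depth_frame, near_mm=800, far_mm=2500, min_pixels=50):
    """Return True if enough pixels fall in the 'person' depth band."""
    count = 0
    for row in depth_frame:
        for d in row:
            if near_mm <= d <= far_mm:
                count += 1
    return count >= min_pixels

def make_command(present):
    """Encode the analysis result as a one-byte serial command for the
    Arduino (hypothetical protocol: b'1' = actuate, b'0' = idle)."""
    return b'1' if present else b'0'

# In a real app you would grab frames from the Kinect and write the
# command to the Arduino, e.g. with pyserial:
#   serial.Serial('/dev/tty.usbmodem1411', 9600).write(make_command(p))
```

On the Arduino side, a matching sketch would read that byte off the serial port and drive a motor accordingly.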

Here are some examples (all of which use motors but not necessarily cameras/computer vision):

Here are some successful examples from the first time I taught the course:
  • The Force: Uses a Kinect and an IR emitter to control a wireless helicopter using gestures.
  • The Friendly Waving and Dancing Bunny: Uses a Kinect, LEGOs, a stuffed animal, and servo motors to create an interactive animatronic bunny that responds to various gestural interactions (like dancing and waving).
  • Kinect: And Then There Was Light! To save power, many rooms in AVW turn off the lights if no motion is sensed for ~20 minutes. In this project, the Kinect is used to count the number of people in a room and, if the count > 0, the system activates an arm attached to a servo motor to trigger the room's motion sensor to turn the lights back on.

Here are some cool Kinect hacks that could possibly be remixed, improved upon, or extended with motors--if nothing else, these examples are inspiring. :)
  1. Theo Watson and Emily Gobeille used openFrameworks and libFreenect to build an "Interactive Puppet Prototype with XBox Kinect." Here's version 2.0, which is even more impressive!
  2. Kinect-based touchscreen by combining depth sensing with projection (link). More on using Kinect for multitouch (video).
  3. Oliver Kreylos' famous 3D-video capture with a Kinect (more info here).
  4. Many people have tried using the Kinect for 3D-scanning (e.g., Kinect Fusion link1 and link2). These models can then be imported into video games or even 3D-printed (e.g., Kinect-Based 3D-scanning, link2, link3).

Note: the Microsoft Kinect also has a microphone and a speech recognition SDK. You are welcome to experiment with this as well but it is not required for this assignment. We also have a Microsoft Surface 2 (an interactive tabletop computer) that you are welcome to use for this assignment.

Note 2: some of you have asked if you can use the Leap Motion instead of the Kinect. Yes, this is fine. We have one Leap Motion in the HCIL Hackerspace; however, I believe its SDK is restricted to a login that only I have access to. We will need to explore this.

The Kinect

Setting up the Kinect Development Environment

We will be using the Microsoft "Kinect for Windows" sensor (rather than the XBox360 Kinect sensor) for this assignment. What's the difference, you ask? Good question. According to the Microsoft Kinect Developer FAQ, the Kinect for Windows sensor "is a fully-tested and supported Kinect experience on Windows with features such as 'near mode,' skeletal tracking control, API improvements, and improved USB support across a range of Windows computers and Windows-specific 10' acoustic models."

So, it would seem that the Windows-based sensor has a special "near mode" to deal with the fact that people will be much closer to the Kinect sensor on a PC/laptop than they would be to an XBox360 Kinect sensor (i.e., one used in a living room). Again, according to the Kinect Developer FAQ, "'near mode' enables the depth sensor to see objects as close as 40 centimeters and also communicates more information about depth values outside the range than was previously available. There is also improved synchronization between color and depth, mapping depth to color, and a full frame API."
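As a concrete illustration of these ranges, the sketch below filters raw depth readings by sensor mode. The 40 cm near-mode floor comes from the FAQ quote above; the other bounds (80 cm default floor, 3 m/4 m ceilings) are typical figures I am assuming here, so check them against the SDK documentation.

```python
# Sketch: filtering raw depth readings (in mm) by sensor mode.
# The 400 mm near-mode floor is from the FAQ; the other bounds are
# assumed typical values -- verify against the Kinect SDK docs.
NEAR_MODE = (400, 3000)     # mm
DEFAULT_MODE = (800, 4000)  # mm

def usable_depths(readings_mm, mode=NEAR_MODE):
    """Keep only readings inside the mode's reliable range."""
    lo, hi = mode
    return [d for d in readings_mm if lo <= d <= hi]
```

For a desktop app where a user's hands may be half a meter from the sensor, near mode keeps readings that default mode would have to discard.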

On Windows

The "officially supported" way of setting up your development environment for the Microsoft Kinect on Windows involves the following steps:
  1. Download and Install Microsoft Visual Studio. You can get the full version from your Microsoft Dreamspark account or download Visual Studio Express (which is freely available).
  2. Download and install the Kinect for Windows SDK
  3. Set up the Kinect for Windows Developer Toolkit
  4. Plug the Kinect into your laptop/PC
  5. You can also install language packs if you so desire (French, German, Italian, Japanese, and Spanish are supported)

On Mac OS X

It used to be that the only way to get a Kinect sensor working on Mac OS X was by using open source software. Now, however, there are multiple approaches:
  1. The new Kinect SDK works on Windows running in a virtual machine. To get this to work, see: Using Kinect for Windows with a Virtual Machine. Note: you can download a copy of Windows using your Microsoft Dreamspark account.
  2. If you want to go the open source route, you could try the OpenNI framework (which Apple will sadly be shutting down on April 23, 2014) or the SimpleOpenNI framework for Processing.

Installing OpenNI

Installing OpenNI is not entirely straightforward--especially if you are not comfortable or familiar with command line tools. Here are the steps I took to get it to work. I am running OS X Mavericks (10.9.2).
  1. You must have XCode installed.
  2. Download the Mac version of the OpenNI framework.
  3. Create a Kinect directory somewhere to store all the Kinect-related things. I created '/Users/jonf/Kinect'
  4. Unzip/extract the OpenNI tarball to this folder (so, I have '/Users/jonf/Kinect/OpenNI-MacOSX-x64-2.2')
  5. Go into this directory and run the install script (> sudo ./ ). Nothing will happen. :)
  6. OpenNI relies on libfreenect, which is part of the Open Kinect project. libfreenect, however, has its own set of prerequisites including: libusb, CMake, and Python. We'll go through each in turn.

  7. Use homebrew to install libusb: `brew install libusb`. OR follow these instructions.
    1. Download libusb here (direct 1.0.9 link).
    2. Unzip libusb into /Users/jonf/Kinect (so, I have '/Users/jonf/Kinect/libusb-1.0.18'). Then do the following:
    3. > cd libusb-1.0.18
      > ./configure
      > make
      > sudo make install
    4. libusb should be successfully installed now
  8. Use homebrew to install cmake: `brew install cmake`. OR follow these instructions.
    1. Download cmake here (the 'cmake-' file). Unlike libusb, you can install a precompiled binary. However, the Mac will likely give you a security warning. You must go to "Security & Privacy" on your Mac, click on the "lock" icon in the bottom left-hand corner of the dialog box, and change "Allow apps downloaded from:" to "Anywhere." You only have to change this temporarily while you install cmake.
    2. Once you've changed your security settings, double click on the 'cmake-' file and install cmake. Go through the installation screens (I clicked on "yes" for installing command line tools though I'm not sure if that makes a difference). Once cmake has installed, change back your security settings to your defaults.
  9. Don't brew this! Now we can install libfreenect. Open terminal and go to /Users/jonf/Kinect. Then do this:
    > git clone
    > cd libfreenect
    > mkdir build
    > cd build
    > cmake -L ..
    > make
    > cmake .. -DBUILD_OPENNI2_DRIVER=ON
    > make
  10. The -DBUILD_OPENNI2_DRIVER=ON flag builds a bridge to libfreenect implemented as an OpenNI driver. It allows OpenNI to use Kinect hardware on Linux and OS X.

  11. The final step is to copy the driver to the OpenNI driver directory (see step 3 here). Here's what I did to get the samples in '/Users/jonf/Kinect/OpenNI-MacOSX-x64-2.2/Samples/Bin' to work:
    > cd /Users/jonf/Kinect/libfreenect/build
    > Repository="/Users/jonf/Kinect/OpenNI-MacOSX-x64-2.2/Samples/Bin/OpenNI2/Drivers"
    > cp -L lib/OpenNI2-FreenectDriver/libFreenectDriver* ${Repository}

  12. In total, '/Users/jonf/Kinect/OpenNI-MacOSX-x64-2.2/Samples/Bin/OpenNI2/Drivers' should now contain:
Jons-MacBook-Air-2:Drivers jonf$ pwd
Jons-MacBook-Air-2:Drivers jonf$ ls -al
total 7128
drwxr-xr-x@ 11 jonf  staff      374 Mar 31 14:49 .
drwxr-xr-x@  4 jonf  staff      136 Mar 31 14:42 ..
-rw-r--r--@  1 jonf  staff     6148 Mar 31 14:47 .DS_Store
-rw-r--r--@  1 jonf  staff     4090 Nov 12 09:15 PS1080.ini
-rw-r--r--@  1 jonf  staff     1304 Nov 12 09:15 PSLink.ini
-rwxr-xr-x   1 jonf  staff   147672 Mar 31 14:49 libFreenectDriver.0.4.1.dylib
-rwxr-xr-x   1 jonf  staff   147672 Mar 31 14:49 libFreenectDriver.0.4.dylib
-rwxr-xr-x   1 jonf  staff   147672 Mar 31 14:49 libFreenectDriver.dylib
-rwxr-xr-x@  1 jonf  staff   708384 Nov 12 09:15 libOniFile.dylib
-rwxr-xr-x@  1 jonf  staff  1600964 Nov 12 09:15 libPS1080.dylib
-rwxr-xr-x@  1 jonf  staff   866508 Nov 12 09:15 libPSLink.dylib
That's it! The samples should now work in '/Users/jonf/Kinect/OpenNI-MacOSX-x64-2.2/Samples/Bin.' Open terminal, go to that directory, and type:
  • > ./SimpleViewer

The following app should open:
OpenNI SimpleViewer
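If SimpleViewer fails to find the Kinect, a quick sanity check is whether the driver copy in step 11 actually landed. Something like this (a hypothetical helper, not part of OpenNI) verifies that a libFreenectDriver library is present in the Drivers directory:

```python
import os

def freenect_driver_installed(driver_dir):
    """Return True if a libFreenectDriver library is present in the
    given OpenNI Drivers directory (hypothetical sanity check)."""
    if not os.path.isdir(driver_dir):
        return False
    return any(name.startswith("libFreenectDriver")
               for name in os.listdir(driver_dir))

# e.g.: freenect_driver_installed(
#     "/Users/jonf/Kinect/OpenNI-MacOSX-x64-2.2/Samples/Bin/OpenNI2/Drivers")
```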

Kinect Resources

Helpful Links


  • Jared St. Jean (editor of, Kinect Hacks: Tips & Tools for Motion and Pattern Detection, O'Reilly, 2012, Amazon
  • Greg Borenstein, Making Things See: 3D vision with Kinect, Processing, Arduino, and MakerBot, Make:Books, 2012, Amazon, Safari Online


Motors

There are three main types of motors: DC Motors, Servo Motors, and Stepper Motors.
  • DC Motors are fast, continuous-rotation motors often used for things that need high RPM, like car wheels and fans.
  • Servo Motors are responsive, high-torque motors but with a limited angle of rotation (e.g., 120-180 degrees). A hobby servo uses internal feedback to reach the angle you command, but it cannot report what is happening in the world: if its motion were blocked by, say, a wall or some other obstruction, your system would not know this without additional sensing circuitry.
  • Stepper Motors offer precise rotation that is easy to set up and control but are slower than servo motors. Because a stepper rotates in fixed-size increments, its position can be tracked open loop simply by counting steps. Thus, stepper motors are used in 3D printers, CNC machines, and other devices that require precise positioning.
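A little arithmetic makes these positioning trade-offs concrete. The sketch below assumes typical hobby values (servo pulses of 1.0-2.0 ms over a 0-180 degree range; a 1.8-degree-per-step stepper), which you should check against your motor's datasheet:

```python
def servo_pulse_us(angle_deg, min_us=1000, max_us=2000, max_angle=180):
    """Map a target servo angle to a PWM pulse width in microseconds.
    Typical hobby servos expect ~1.0-2.0 ms pulses every 20 ms
    (assumed values -- check your servo's datasheet)."""
    angle_deg = max(0, min(max_angle, angle_deg))
    return min_us + (max_us - min_us) * angle_deg / max_angle

def stepper_steps(target_deg, step_angle_deg=1.8):
    """Number of whole steps to rotate target_deg with a 1.8 deg/step
    motor (200 steps/rev). Position is tracked open loop by counting."""
    return round(target_deg / step_angle_deg)
```

On the Arduino, the Servo library's writeMicroseconds() takes exactly this kind of pulse-width value, and a stepper driver is pulsed once per computed step.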

HCIL Hackerspace Motors

We have a bunch of random motors in the HCIL Hackerspace including hobby DC motors and servos as well as motors embedded in built artifacts like the Parrot AR.Drone and the IR RC Helicopters. In addition, I just put in the following order to Adafruit. These parts will arrive Tuesday, April 1st.

Motor Tutorials

Feel free to post additional helpful tutorials or forum entries to Piazza and I'll add them to this list.

Motor Controllers

DC Motors

Servo Motors

Stepper Motors

Tools/Library Usage

As before, you can use whatever developer tools, IDEs, debuggers, libraries, and/or code snippets you find to support turning your ideas into reality. Of course, you must keep track of and cite any code or libraries you use in your project. You must also include citations to projects that inspired your own. Don't be shy about including as many links as you can to things that influenced your project's form or function in some way.

Remember to also include citations (with URLs) in your code via comments for all code that you borrowed or extended from blogs, forums, open source, etc. If I find code that was copied and not appropriately cited, I will consider this a direct violation of the UMD Academic Integrity policy. You will not be penalized for re-using or re-appropriating cool things in this class; you will be penalized for not properly attributing them.

Assignment Deliverables

The assignment deliverables are due before lecture begins.

  • Utilize github to store and post your code. This should be publicly viewable and accessible. You are welcome to use any license you like on the code itself (including no license at all--e.g., None). When you use other people's code, you must cite your source--even if it's just a blog post and a small snippet. I believe github provides academic accounts (for additional features, please check the website).

  • Post a Wiki write-up to your own wiki subpage on this wiki (example).

  • Upload a video demoing your submission to YouTube. You should include the link to the YouTube video in your Wikipage. Please take the video creation process seriously--video is one of the best ways to portray the interactivity and sheer awesomeness of your inventions. I hope you create something you would feel proud to show your friends and family.

  • Presentation/demo. On Wed, April 16, we'll have a presentation/demo day. We will dedicate the whole 75 minutes to this (if not more!). We will likely have around 10 teams, so each presentation should be 4 minutes + 1 min Q/A. It's up to you how you want to present your work--you could do a live demo for the class, play all or part of your video, show slides, or do an interpretive dance. After all presentations are complete, we'll use the remaining time in class to interact with each other's demos.

Assignment Grading and Rubric

Most, if not all, assignments in this class will be graded on novelty, aesthetics, fun, creativity, technical sophistication, and engagement. All assignments (including the project) will be peer-reviewed by everyone in the class, including me. Everyone, including me, will fill out the same feedback form. We will rank our favorite projects, and the top two or three teams will receive a prize.

Completed Assignments

As before, please list your completed assignments below. You and your partner do not have to list your real names unless you want to. Instead, you can come up with a team name (I know some students are sensitive about online privacy and I respect that). The format should be similar to before.

1. KineChroma

Alina Goldman, HCI Masters
Jon Gluck, CS PhD

Draw with light using photoluminescence and lasers.

2. SketchEngine

Fan Du, CS PhD
Meethu Malu, CS PhD

Creating Art using Kinect and Lasers!


Ruofei Du, CS PhD
Kent Wills, CS Masters
Max Potasznik, CS Masters

Zen gardens are a peaceful way to visualize space and tranquillity. Our project seeks to connect the zen garden to the room in which it sits, imbuing the garden with properties of the room's occupants. In this way, the zen garden becomes not just a visualization of an ideal space and tranquillity but of the actual tranquillity of its surroundings... or lack thereof.

4. (motor)chestra

Chris Imbriano, CS PhD
Sana Malik, CS PhD

Conduct a two-motor orchestra.

5. Polite Bunny

Ankit Abboud, CS Masters
Hitesh Maidasani, CS Masters
Ankit Shrivastava, ISchool Masters

When you pass next to the window, our Polite Bunny will track you and follow you.
When you stop moving, the bunny will wave its hand at you to say hello :-)

6. Somebody's Watching You

Richard Johnson, CS PhD
Joshua Bradley, CS PhD

7. Mr. Crab

Tiffany Chao
Peter Enns

8. Picoku

Brendan Fruin, CS Masters
Kristin Williams, HCI Masters

Play Sudoku with gestures!