January 25, 2011
tags: depth-sensor, hack, interface, kinect

Kinect Magic

Finally had some time this weekend to play with the Kinect. Nothing too special yet, just segmenting pixels based on depth ranges. It’s implemented on the GPU and runs pretty fast (450 fps on a MacBook Pro’s 9600M GT).
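The core idea is simple enough to sketch on the CPU: keep only the pixels whose depth reading falls inside a chosen range. Here is a minimal NumPy illustration (the GPU version does the same comparison per-pixel in a shader); the depth values, thresholds, and function name are my own assumptions, not the actual implementation.

```python
import numpy as np

def segment_depth(depth_mm, near_mm=800, far_mm=1500):
    """Return a boolean mask of pixels whose depth lies in [near_mm, far_mm).

    depth_mm: 2D array of per-pixel depth in millimeters (0 = no reading).
    """
    return (depth_mm >= near_mm) & (depth_mm < far_mm)

# Tiny synthetic "depth map": a near object on a far background.
depth = np.full((4, 4), 3000, dtype=np.uint16)  # background at 3 m
depth[1:3, 1:3] = 1000                          # 2x2 object at 1 m
mask = segment_depth(depth)
print(mask.sum())  # only the 4 object pixels fall in the 0.8-1.5 m band
```

Because the test is independent per pixel, it maps directly onto a fragment shader, which is why it runs at hundreds of frames per second on the GPU.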

I’d like to extend it so that you can drag 3D shapes into the “room” and interact with them (almost like a GUI builder for things like Microsoft LightSpace). 2D planes can emulate cursors (I do have a cursor working for 2D tracking, but no video yet), a cylinder can act as a slider, and other regions can have custom functionality. Then we can design interactions and interactive spaces the way game designers design video games.
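Making a shape in the “room” interactive comes down to testing whether back-projected depth pixels fall inside that shape’s volume. A hedged sketch of the idea, using a pinhole back-projection and an axis-aligned box (the intrinsics below are assumed placeholder values, not a real calibration, and the function names are mine):

```python
import numpy as np

# Hypothetical Kinect-style intrinsics (placeholder values, not calibrated).
FX, FY, CX, CY = 594.2, 591.0, 339.5, 242.7

def backproject(u, v, depth_mm):
    """Pinhole back-projection of pixel (u, v) at a given depth to a 3D point (meters)."""
    z = depth_mm / 1000.0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def in_box(point, box_min, box_max):
    """Axis-aligned bounding-box containment test."""
    return bool(np.all(point >= box_min) and np.all(point <= box_max))

# An interactive region: a 1 m cube sitting 1-2 m in front of the camera.
box_min = np.array([-0.5, -0.5, 1.0])
box_max = np.array([ 0.5,  0.5, 2.0])

p = backproject(320, 240, 1500)   # near the image center at 1.5 m
print(in_box(p, box_min, box_max))
```

Swapping the box test for a cylinder or plane test gives the slider and cursor behaviors; counting how many depth pixels land inside a region is then the trigger signal.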

Big kudos to the OpenKinect group for doing so much amazing work at record pace, and to openFrameworks for being generally awesome and having Kinect support integrated already.

We can see a lot of uses for this: anything from an interactive shopping window to installations where touchscreens are not an option. Let us know if you’re interested in exploring the possibilities.

GPU based bounding box intersection of kinect depth map from fresk on Vimeo.
