Archive for the ‘Computational Cameras’ Category

Spatial Literacy

The essay we were required to read this week shows different approaches to the space between written text and spoken language. I find it frightening to think that without the emotion of the spoken word and the information of the written text, human development might have been very different. I’m of course stating the extreme. But it reminded me of when I was still in film school, where we actually shot on film, compared to today’s digital tools. There was a stark contrast in what the cinematography course required to pass. It was not “arbitrary,” where we would make films, show them for critique and opinions, and then be graded on our story and performance. Instead we had to follow everything to the letter. We essentially had “to learn the science behind the art.”

This meant learning what ASA settings, f-stops, footcandles, gels and the like were in order to create a spectacular image that pushed our stories forward. There was no room for error: either you got it right, or everything was black. Cinematography was something specific and precise, requiring an entirely new subset of knowledge in order to put an image onto celluloid. By acquiring this “cinematographic language” we were able to control the factors that define the image.

Following this analogy, I think that by further examining and defining the spaces around us it’s possible to show more. Computers and data accelerate this process. By deconstructing that information and expressing it as text, we can expand the syntax that computers currently understand. For example, if we could create a system where a program actually detects emotion from a variety of factors, or even displays emotion, I think it would push our understanding of the physical world even further than the text that runs across our screens.

Kinect Hand Tracking

It’s quite unique that a single device can essentially do two things. It’s far from perfect, but it’s still a joy to work with. The Xbox Kinect is actually two cameras put together: a “traditional” RGB camera and an infrared camera assembled side by side. Because of this assembly, what you see through one lens is not the same as the other; there is a parallax effect, since the cameras are not capturing exactly the same image. Most notably, there is a small gap of a centimeter or so between the two lenses.

What makes this unique is the Kinect’s ability to detect depth via the IR camera. The Kinect projects a pattern of infrared light, which the IR camera can see; from that, it returns values that measure the depth, or distance, of whatever is in front of it.
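For reference, here is a minimal sketch of reading those depth values in Processing. I’m assuming the SimpleOpenNI wrapper here (the usual Processing library for OpenNI, and the one the rest of these examples assume as well); its depthMap() call returns one distance in millimeters per pixel.

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();                        // turn on the IR/depth stream
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);            // grayscale depth view

  // depthMap() holds one distance in millimeters per pixel
  int[] depthValues = context.depthMap();
  int centerIndex = (context.depthHeight() / 2) * context.depthWidth()
                  + context.depthWidth() / 2;
  println("distance at center: " + depthValues[centerIndex] + " mm");
}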

The assignment for the first week of Computational Cameras is to work with the depth camera and see what we can do with it.

Based on the OpenNI framework and using the Processing library, we are able to track a hand with the Kinect, as seen with the red dot.
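Roughly, the tracking works like this (a sketch rather than a drop-in example, since the SimpleOpenNI callback names vary between versions of the library): a “RaiseHand” gesture starts the hand tracker, and the tracked 3D position comes back through a callback, where it is converted to pixel coordinates and drawn as the red dot.

import SimpleOpenNI.*;

SimpleOpenNI context;
PVector handPos = null;              // latest hand position in depth-image pixels

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableGesture();
  context.enableHands();
  context.addGesture("RaiseHand");   // raising a hand starts the tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
  if (handPos != null) {
    noStroke();
    fill(255, 0, 0);
    ellipse(handPos.x, handPos.y, 20, 20);     // the red dot
  }
}

// OpenNI recognized the focus gesture: start tracking a hand at that point
void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition) {
  context.startTrackingHands(endPosition);
}

void onCreateHands(int handId, PVector pos, float time) {
  onUpdateHands(handId, pos, time);
}

// called every frame while the hand is tracked, with a 3D position in millimeters
void onUpdateHands(int handId, PVector pos, float time) {
  PVector screenPos = new PVector();
  context.convertRealWorldToProjective(pos, screenPos);   // 3D point -> depth-image pixel
  handPos = screenPos;
}

// the hand was lost, e.g. it left the frame or was occluded
void onDestroyHands(int handId, float time) {
  handPos = null;
}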

Notice that when my hand turns black, the Kinect is technically unable to see it. The dot follows my right hand even when I put it down and extend my left hand. However, I can pick up the dot with my left hand and pass it on.

What if we use a shape?

For this example I decided to use a katamari.

I decided to use an SVG file instead of a traditional image file because of its ability to keep its resolution when scaled. I assume it can be placed and moved around the sketch in much the same way as an image loaded with PImage.
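In Processing, that is the difference between loadShape(), which keeps an SVG crisp at any size, and loadImage()/PImage, which interpolates a raster file when it is scaled. A small illustration, with placeholder file names:

PShape katamariShape;   // vector graphic: stays sharp at any size
PImage katamariImage;   // raster graphic: gets interpolated when scaled up

void setup() {
  size(640, 480);
  katamariShape = loadShape("katamari.svg");    // placeholder file name
  katamariImage = loadImage("katamari.png");    // placeholder file name
}

void draw() {
  background(0);
  // both drawn at an arbitrary size; only the SVG keeps its resolution
  shape(katamariShape, 50, 100, 200, 200);
  image(katamariImage, 390, 100, 200, 200);
}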

It was simple enough to call the katamari into the sketch, and it behaves the same way as the dot. But seeing the katamari in gray isn’t fun, so I activated the RGB camera and put the two views side by side.
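The side-by-side view is just a wider window with both streams drawn next to each other; a sketch of that, again assuming SimpleOpenNI:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(1280, 480);                   // room for two 640x480 streams
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableRGB();               // turn on the color camera as well
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);     // IR/depth view on the left
  image(context.rgbImage(), 640, 0);     // RGB view on the right
  // the katamari gets drawn on top at the tracked hand position,
  // so it can wander out of the depth frame and across the RGB side
}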

Notice how the katamari goes beyond the frame of the IR camera but is still visible across the entire sketch.

And by putting the two together, it seems like magic! I wish this were real and the katamari would roll up the mess in my room.
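Putting the two together is really just drawing the katamari on top of the RGB image at the tracked hand position. Because of the parallax between the two lenses, the depth and RGB views don’t line up exactly; SimpleOpenNI’s alternativeViewPointDepthToImage() roughly registers the depth data to the RGB viewpoint. A sketch of the whole thing, with the same assumptions (and placeholder file name) as above:

import SimpleOpenNI.*;

SimpleOpenNI context;
PShape katamari;
PVector handPos = null;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableRGB();
  context.enableGesture();
  context.enableHands();
  context.addGesture("RaiseHand");
  // roughly registers the depth data to the RGB camera's viewpoint,
  // compensating for the parallax between the two lenses
  context.alternativeViewPointDepthToImage();
  katamari = loadShape("katamari.svg");      // placeholder file name
}

void draw() {
  context.update();
  image(context.rgbImage(), 0, 0);           // color image as the background
  if (handPos != null) {
    shape(katamari, handPos.x - 50, handPos.y - 50, 100, 100);
  }
}

void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition) {
  context.startTrackingHands(endPosition);
}

void onCreateHands(int handId, PVector pos, float time) {
  onUpdateHands(handId, pos, time);
}

void onUpdateHands(int handId, PVector pos, float time) {
  PVector screenPos = new PVector();
  context.convertRealWorldToProjective(pos, screenPos);
  handPos = screenPos;
}

void onDestroyHands(int handId, float time) {
  handPos = null;
}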

Next up: I want to add more objects that I can interact with and manipulate across multiple screens.
