Posts Tagged ‘kinect’

Hanami


Sakura in Fukuoka

Hanami, or flower viewing, is commonly associated with the act of viewing cherry blossoms.

I think I’ve found my idea for my Computational Cameras final. Hanami has an interesting history dating back to the 18th century, in which a single flower can symbolize both extreme beauty and a quick death, owing to the fleeting nature of the cherry blossoms.


Cherry trees at Nagasaki ground zero

I’ll be incorporating this into my final project by tying the lifespan of the cherry blossoms to your movement through a given space.

So for the next few weeks I’ll be spending time at the Brooklyn Botanical Gardens to shoot pictures and video. I’m not sure yet if the petals will be falling, but if not, I’ll attempt to recreate them in Processing.

http://vimeo.com/40166788
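
If I do end up recreating the petals in Processing, something like the minimal sketch below is what I have in mind. The petal count, color, and drift values are placeholder guesses, not the final look.

int numPetals = 100; // placeholder count
float[] px = new float[numPetals];
float[] py = new float[numPetals];
float[] drift = new float[numPetals];

void setup() {
  size(640, 480);
  noStroke();
  for (int i = 0; i < numPetals; i++) {
    px[i] = random(width);
    py[i] = random(-height, 0);   // start above the frame
    drift[i] = random(-0.5, 0.5); // sideways sway per frame
  }
}

void draw() {
  background(30);
  fill(255, 183, 197); // sakura pink
  for (int i = 0; i < numPetals; i++) {
    py[i] += random(1, 3);                 // fall speed
    px[i] += drift[i] + sin(py[i] * 0.05); // gentle sway
    if (py[i] > height) {                  // recycle petals off the bottom
      py[i] = random(-50, 0);
      px[i] = random(width);
    }
    ellipse(px[i], py[i], 8, 6);
  }
}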

Help me decide

What should I do for my final? I have a bunch of ideas and need to narrow them down and focus on ONE project for the next two weeks.

1. Enhance the AR project.

I would move the AR codes onto playing cards; laying them down on the table would project different words. I was thinking of making it a language learning tool: by arranging the cards in the correct order/syntax, you would get the English translation.

OR

A haptic interface via IR: same as above, but as a touchscreen keyboard on plexi.

OR

Hang a bunch of AR codes around, and when a camera or phone is pointed at them, display a “virtual forest” with vines or branches linking the AR codes.

2. Enhance the midterm project.

Instead of projecting random, unrelated images as I walk through space using the Kinect, I’d like to project an actual scene that gives you the illusion of actually walking through it.

OR

Generate an interactive scene that responds to your proximity and facial reactions, much like a camera-enabled ELIZA psychotherapist bot.

http://en.wikipedia.org/wiki/ELIZA

3. I’d like to make a digital camera obscura box. But it seems to me that the illusion is lost when it moves to digital form, compared to the actual physics involved in projecting the image.

http://ngm.nationalgeographic.com/2011/05/camera-obscura/camera-obscura-video

So I’m reaching out to the class to help me decide.

Camera Walk?

I think I’ve posted before that I’ve always wanted a holodeck. But of course this is nothing like that. For this project I finally got the network camera at my place in Queens working and uploading images every 15 minutes. I shot some video from a window at ITP and put the two together.

I knew I was going to use the Kinect, and initially began with the depth map, measuring in inches and using those values to determine distance. That didn’t work for me. I then decided to use the Center of Mass, or CoM, command to determine position, but then depth would be another problem.

Soooo for the purposes of this project I just mounted the Kinect overhead to simplify the position.
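
For reference, here is a minimal sketch of the CoM approach, assuming SimpleOpenNI’s getCoM() call; with the Kinect mounted overhead, the x/y of the center of mass roughly track a person’s position on the floor. The profile constant and the red dot are just one way to wire it up.

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  // user detection without full skeleton calibration
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_NONE);
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = userList.get(0);
    PVector com = new PVector();
    if (kinect.getCoM(userId, com)) {
      // with an overhead Kinect, the CoM's x/y track floor position
      PVector screenPos = new PVector();
      kinect.convertRealWorldToProjective(com, screenPos);
      fill(255, 0, 0);
      noStroke();
      ellipse(screenPos.x, screenPos.y, 20, 20);
    }
  }
}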

Adding a front-facing camera turned out to be more challenging. For some reason, OpenNI takes over all the cameras on the computer and only wants the Kinect. Solution? Add another Kinect.

I’d like to expand this further with head tracking, such as simulating looking around an area, or even creating the illusion of depth without 3D glasses.

http://youtu.be/ac9mx180aTo

Skeletal tracking

This week Kim Ash and I worked together on skeletal tracking with the Kinect using OpenNI. The idea is that when you strike a pose, a “nuclear” explosion occurs. Using sample code from ITP resident Greg Borenstein’s book Making Things See (2011), it was fairly straightforward to get the skeletal tracking in place.

We wanted the explosion to occur once the two outstretched arms were in place.

skeletal tracking

In this image, we just wanted to track the arms. This is possible using the OpenNI commands:

  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

Then it was just a matter of an “if” statement comparing the positions of the joints that make up the outstretched-arms pose.

  // right elbow above right shoulder
  // AND
  // right elbow right of right shoulder
  if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
    stroke(255);
  }
  else {
    tint(255, 255);
    image(cloud, 840, 130, 206, 283);
    explosion.play();
    // stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

  // right hand above right elbow
  // AND
  // right hand right of right elbow
  if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
    stroke(255);
  }
  else {
     tint(255, 255);
     image(cloud, 840, 130, 206, 283);
     explosion.play();
 //   stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
}

Which results in this:

We wanted a better screen capture but for some reason this sketch didn’t like Ambrosia’s SnapzPro.

Full code:

import ddf.minim.*;
import ddf.minim.signals.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;

Minim minim;
AudioPlayer explosion;

import SimpleOpenNI.*;
SimpleOpenNI kinect;
PImage back;
PImage cloud;

void setup() {
  size(640*2, 480);
  back = loadImage("desert.png");
  cloud = loadImage("cloud.png");
  // imageMode(CENTER);

  minim = new Minim(this);
  explosion = minim.loadFile("explosion.mp3");

  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
  strokeWeight(5);
}

void draw() {
  background(0);
  kinect.update();
  image(kinect.depthImage(), 0, 0);
  // image(kinect.rgbImage(), 640, 0);
  image(back, 640, 0, 640, 480);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = userList.get(0);
    if (kinect.isTrackingSkeleton(userId)) {
      PVector rightHand = new PVector();
      PVector rightElbow = new PVector();
      PVector rightShoulder = new PVector();
      PVector leftHand = new PVector();
      PVector leftElbow = new PVector();
      PVector leftShoulder = new PVector();

      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

      // right elbow above right shoulder
      // AND
      // right elbow right of right shoulder
      if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
        stroke(255);
      }
      else {
        tint(255, 255);
        image(cloud, 840, 130, 206, 283);
        explosion.play();
        // stroke(255, 0, 0);
      }
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

      // right hand above right elbow
      // AND
      // right hand right of right elbow
      if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
        stroke(255);
      }
      else {
        tint(255, 255);
        image(cloud, 840, 130, 206, 283);
        explosion.play();
        // stroke(255, 0, 0);
      }
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
    }
  }
}

// user-tracking callbacks!
void onNewUser(int userId) {
  println("start pose detection");
  kinect.startPoseDetection("Psi", userId);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    println("User calibrated !!!");
    kinect.startTrackingSkeleton(userId);
  }
  else {
    println("Failed to calibrate user !!!");
    kinect.startPoseDetection("Psi", userId);
  }
}

void onStartPose(String pose, int userId) {
  println("Started pose for user");
  kinect.stopPoseDetection(userId);
  kinect.requestCalibrationSkeleton(userId, true);
}

void keyPressed() {
  switch(key) {
    case ' ':
      kinect.setMirror(!kinect.mirror());
      break;
  }
}

// Processing calls stop() when the sketch shuts down; name it stop() so it actually runs
void stop() {
  explosion.close();
  minim.stop();
  super.stop();
}

10 years from now, the Kinect will be…

  1. It will be used to track a person’s movements in a moving vehicle. I can see the technology being used to observe the effects of crash and impact tests on vehicles.
  2. This could actually be used to take body measurements for various applications such as customized furniture and equipment like bicycles.
  3. All connected and, in a Batman Dark Knight sort of way, spying on us, giving intelligence agencies a real-time 3D visual map of any area that the cameras see.
  4. The Star Trek holodeck could actually be real in my lifetime. With Kinect cameras to capture the real world in real time, it could be recreated in a holodeck somewhere for us to interact in.
  5. We are no longer limited by the size of our screen to use our computer. Minority Report gesture control has now arrived in our homes.
  6. Assistive technology for those who have no depth perception. The sad thing about current 3D technology is that it requires viewers to have both eyes to view the 3D image. Using Kinect technology, I think we can scan the world and display it in such a way that we won’t get dizzy with the fancy images.
  7. Virtual presence scanner. Imagine being “physically” present anywhere, with your 3D-scanned image captured by the Kinect and brought elsewhere.
  8. TV is now a thing of the past. Shows are now projected directly into your room. Video cameras will have Kinect technology that will allow projectors to display the action right in your living room.
  9. It will be used to automate food preparation. Imagine: you’ll never have to debone a fish or chicken for dinner. Current technology relies on X-ray snapshots in a machine that only belongs in a factory. What if this could be in your house? I think it would be awesome.

  10. And lastly, Kinect technology could be used to follow the human body as it approaches a screen, so that the image on the screen adjusts to the depth of field of the user. Objects in the mirror may appear closer.

So there you have it: my ten predictions for the future. Some of them are already here, but who knows what the future brings.

Kinect Hand Tracking

It’s remarkable that a single device can essentially do two things. It’s far from perfect, but it’s still a joy to work with. The Xbox Kinect is actually two cameras put together: a “traditional” RGB camera and an infrared camera assembled side by side. Because of this assembly, what you see in one lens is not the same as the other; there is a parallax effect where the cameras do not capture the exact same image, most notably because of the less-than-1 cm gap between the two cameras.

What makes this unique is the Kinect’s ability to detect depth via the IR camera. The Kinect projects a pattern of infrared light; this pattern is seen by the IR camera, which returns values measuring the depth, or distance, of objects in front of it.

The assignment for the first week of Computational Cameras is to work with the depth camera and see what we can do with it.

Based on the OpenNI framework and using its Processing library, we are able to track a hand with the Kinect, as seen with the red dot.

Notice that when my hand turns black, the Kinect is, technically, unable to see it. The dot follows my right hand even when I put it down and extend my left hand; however, I can pick up the dot with my left hand and pass it on.
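
For reference, here is a minimal reconstruction of that kind of hand-tracking sketch, assuming SimpleOpenNI’s NITE gesture and hand callbacks; the “RaiseHand” gesture and the dot size are my own choices.

import SimpleOpenNI.*;

SimpleOpenNI kinect;
PVector handPos = null;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableGesture();
  kinect.enableHands();
  kinect.addGesture("RaiseHand"); // raising a hand starts the tracking
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);
  if (handPos != null) {
    PVector screenPos = new PVector();
    kinect.convertRealWorldToProjective(handPos, screenPos);
    fill(255, 0, 0);
    noStroke();
    ellipse(screenPos.x, screenPos.y, 20, 20); // the red dot
  }
}

// NITE recognizes the gesture, then hands off to hand tracking
void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition) {
  kinect.startTrackingHands(endPosition);
  kinect.removeGesture(strGesture);
}

void onCreateHands(int handId, PVector position, float time) {
  handPos = position;
}

void onUpdateHands(int handId, PVector position, float time) {
  handPos = position;
}

void onDestroyHands(int handId, float time) {
  handPos = null;
  kinect.addGesture("RaiseHand"); // listen for the gesture again
}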

What if we use a shape?

For this example I decided to use a katamari.

I decided to use an SVG file instead of a traditional image file because of its ability to keep its resolution when scaled. I assume the behavior is otherwise the same when using a PImage as well.

It was simple enough to bring the katamari into the sketch, and it behaves the same way as the dot. But seeing the katamari in gray isn’t fun, so I activated the RGB camera and put the two side by side.
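
Swapping the dot for the SVG is only a couple of lines. Here is a minimal sketch, with mouseX/mouseY standing in for the tracked hand position and katamari.svg as a placeholder file name:

PShape katamari;

void setup() {
  size(640, 480);
  shapeMode(CENTER);
  katamari = loadShape("katamari.svg"); // SVG keeps its resolution when scaled
}

void draw() {
  background(0);
  // in the real sketch, the tracked hand's projective coordinates go here
  shape(katamari, mouseX, mouseY, 100, 100);
}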

Notice how the image goes beyond the frame of the IR camera but is still visible across the entire sketch.

And by putting the two together, it seems like magic! I wish this were real, so the katamari would roll up the mess in my room.

Next up: I want to add more objects that I can interact with and manipulate across multiple screens.

Pampanga lantern – Physical Computing Final


Sooo my last idea didn’t exactly fly, but that’s the way it is here. My classmates instead gave me better ideas to make this more interesting. One that resonated, suggested by Lisa Park, was to look at installation artist Olafur Eliasson and his London installation The Weather Project.

The Weather Project

This inspired me to make something smaller using ordinary light bulbs and AC power: a servo motor turning a dimmer switch, of the kind found at Lowes or Home Depot, to control the brightness of the lamp.

I then started designing the lamp itself, using 6 x 60W bulbs hanging overhead.

But then the lamp looked too ordinary. I felt like I could buy something at Ikea and it would still be better than anything I would make. I wanted to use a lamp that my family makes, but unfortunately none are in the country right now.

While talking to my brother and sister, I got the idea of using shells.

Photo by flickr user Beth (Nautilus Shell Studios) under a CC BY 2.0 License


These are abundant in the Philippines, but I don’t have the time to procure them right now. I’m just having my sister bring a bunch that I can use for future projects.

So then I finally settled on a Pampanga Lantern.

Photo by flickr user dementia under a CC BY-NC-SA 2.0 Creative Commons license.

The Pampanga lantern has Christian significance. A spin-off of the more traditional “parol” or Christmas lantern, it symbolizes the star that led the three wise men to the baby Jesus. This version is made in the province of Pampanga in the Philippines and is only manufactured for the holidays. Each array of lights is controlled in a sequence, and a parade is held every year displaying various designs from different provinces. These lanterns can range from the size of a plate to twenty feet across. The one displayed here took a year to make.


As far as I know, they don’t use Arduinos to power this.

So now I’ve decided to make mine a lantern.

I wanted to make it the traditional way, using either stiff wire or wood to build the frame and wrapping paper around it. But after an afternoon of bending wire to my will, I was quickly frustrated by the slow pace and the difficulty of handling 16 gauge wire. While at church I had the insanely great idea of using a laser cutter. DUH!

So an afternoon of hanging out in the shop and talking to Tak Cheung and Oya Kosebay yielded great fabrication ideas on how to build it.

I went home, designed the pattern in Illustrator for the laser cutter, and came up with this.

A Saturday trip to Canal Plastics got me a 1/8″, 1′ x 2′ sheet of transparent acrylic, and I headed to the lab, where Jackson and Michael Columbo helped with the laser cutter as well as with conforming my pattern to it. For future reference, laser people: make sure your design is inside the 1′ x 2′ dimensions of the material/laser cutter. Like, really in there, with a border. Think of it as a safe area.

After roughly an hour.

Now, the holes I originally designed turned out too small for LEDs to pass through, so I had to use a hand drill with a 1/8″ bit for the LED pins to fit. Lesson learned. But at the same time it gave me practice using a hand drill while watching Pac-12 football over the weekend. Go Trojans!!

Now for the wiring and programming part. I’m planning to use copper tape, with the wiring board in a separate device that will hang from the ceiling, keeping just the lights on the board. I’ve etched the colors as well and will use paper to cover it and diffuse the light. Think of it like the USS Enterprise saucer section, but hanging on the wall.

Photo copyright of Paramount Pictures

I’m thinking of being able to switch between two sensors, or one, or whichever works. I’m currently in possession of the RHT03 humidity and temperature sensor from SparkFun, instead of a thermistor. A wonderful library available on the Arduino website maps the RHT03 readings in much the same way. It’s technically a digital sensor, so it’s pretty much straightforward.

To measure a person’s presence, an infrared sensor would be perfect, but I’d spent my money on LEDs. Then the Kinect happened.

Yes, the Xbox Kinect: once a toy, now a hacker’s device, and something I already have. I plan on using Processing to get the depth data from the IR cam of the Kinect using the OpenKinect library (since I don’t really need to map out a person right now, I’ll skip the OpenNI library) and send that data to the Arduino to measure people’s presence.
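
Here is a rough sketch of that pipeline, assuming Shiffman’s OpenKinect library for Processing and the standard Serial library. The serial port index, the sampling stride, and the presence threshold (raw depth is an 11-bit value, not millimeters) are all guesses I’d tune later.

import org.openkinect.*;
import org.openkinect.processing.*;
import processing.serial.*;

Kinect kinect;
Serial arduino;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
  kinect.processDepthImage(false); // raw depth only, no depth image needed
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  int[] depth = kinect.getRawDepth(); // 640 x 480 raw 11-bit values
  if (depth == null) return;

  // count pixels closer than roughly 1 m (raw value ~760 on a Kinect v1)
  int closePixels = 0;
  for (int i = 0; i < depth.length; i += 10) { // sample every 10th pixel
    if (depth[i] > 0 && depth[i] < 760) closePixels++;
  }
  boolean present = closePixels > 500;
  arduino.write(present ? 1 : 0); // single presence byte for the Arduino
}

void stop() {
  kinect.quit();
  super.stop();
}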

The other option is to use the data I’m already getting for my ICM final, which pulls the weather from the Yahoo Weather API, and send it over serial to the Arduino.
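
A rough sketch of that option too, assuming Yahoo’s forecastrss format and New York’s WOEID (2459115); the crude string parse and one-shot fetch are just placeholders.

import processing.serial.*;

Serial arduino;

void setup() {
  arduino = new Serial(this, Serial.list()[0], 9600);

  // fetch the forecast RSS (2459115 is the WOEID for New York City)
  String[] lines = loadStrings("http://weather.yahooapis.com/forecastrss?w=2459115&u=f");
  if (lines != null) {
    String rss = join(lines, "");
    // crude parse: pull temp="..." out of the yweather:condition element
    int i = rss.indexOf("yweather:condition");
    if (i >= 0) {
      int j = rss.indexOf("temp=\"", i) + 6;
      int temp = int(rss.substring(j, rss.indexOf("\"", j)));
      arduino.write(temp); // a single temperature byte for the Arduino
    }
  }
  // in practice this would re-run every few minutes, not just once in setup()
}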

It’s something to work on over the holidays before the big crunch in December. Hopefully I’ll be finished with most of it this weekend. Happy Holidays!


Xbox360 @ E3

I know these updates are coming pretty slowly, but they’re coming.

The Xbox360 had quite a showing at E3.

Project Natal is now Kinect
I knew there was something weird about the name Natal, and apparently audiences did not warm up to it after it was announced to the public a year ago. It’s still pretty much the same device and will still use the camera peripheral to track your movements. There’s no wand or Wii-style remote to help guide the camera, so setting this up will be quite interesting. With that in mind, I’ll assume you can’t play this in a dimly lit room, since the cameras need some light in order to track what they see. The freedom of movement looks interesting, with nothing in your hand to accidentally throw at your very expensive LCD TV. The list of games is pretty much what you would expect with motion control. You have your usual sports games, but the dancing is a new thing for motion control, since there is no floor pad. But is this new?

No. Sony had a similar device called the EyeToy, originally released for the PS2, which had its share of motion detection. Nothing new for Kinect other than the response time and the graphics. We’ll just have to wait and see when Kinect comes out in the fall.

Microsoft Kinect for Xbox 360

Kinect Dance Central | E3 trailer XBox 360

There’s a new Xbox design

New XBOX 360 250GB E3 2010 [HD]

The new Xbox finally gets it right: built-in WiFi, a 250 GB hard drive, and Kinect-ready, all wrapped up in a shiny black box. Doesn’t that remind you of another console?

But most of us who might buy this console already have one. How on earth do we transfer all our stuff to the new console? You now can, with a hard drive transfer kit for only $15 USD. You’d think they’d at least include it in the box.

Despite all of this, the big question remains: has Microsoft fixed their dreaded red ring of death? I for one have gone through at least four consoles, thankfully all fixed under warranty, but it was the first time in all my years of gaming that a console actually died on me. Not to mention that it’s the one console where all my friends and I share the same dilemma: it has died on each of us at least once. It may have the most popular games, but it is technically the worst console I have owned. Gone are the days when gaming consoles could take a beating; I try to limit moving the console from one place to another for fear of it getting broken. So have they fixed it? Reports say they have. Only time will tell. It’s available now for $299 USD.

But after all of their announcements, there’s still something missing in my opinion: the handheld. Yes, the handheld. I’ve been waiting, or clamoring, for the Xbox team to come up with a portable gaming device to go up against Nintendo, Sony, and of course Apple. It is the one space they haven’t shown any inclination of entering. Is it necessary for them to enter this space? Yes. Handhelds sell. Nintendo has sold more DS units than all the consoles sold by Sony and Microsoft combined. I think it’s only a matter of time before they do. Maybe in five years, the same amount of time it took them to fix most of the Xbox 360.