Archive for the ‘mobile’ Category

Queensborough bridge at dawn



Queensborough bridge at dawn, originally uploaded by mdelamerced.


Sandy Timelapse / Upper East Side, NY

ITP Spring Show 2012


Today’s the last day for the ITP Spring Show. Last chance to see awesome stuff.

Help me decide

What should I do for my finals? I have a bunch of ideas and need to narrow them down and focus on ONE project for the next two weeks.

1. Enhance the AR project.

I would move the AR codes onto playing cards, so that laying them down on the table projects different words. I was thinking of making it a language-learning tool: by arranging the cards in the correct order/syntax, you would get the English translation.

OR

A haptic interface via IR: the same idea as above, but with a touchscreen keyboard on plexi.

OR

Hang a bunch of AR codes around the space, and when a camera or phone is pointed at them, it displays a “virtual forest” with vines or branches linking the AR codes together.

2. Enhance the midterm project.

Instead of projecting random, unrelated images as I walk through space using the Kinect, I’d like to project an actual scene that would give you the illusion of actually walking through that space.

OR

Generate an interactive scene that would respond to your proximity and facial reactions, much like a camera-enabled ELIZA psychotherapist bot.

http://en.wikipedia.org/wiki/ELIZA

3. I’d like to make a digital camera obscura box. But it seems to me that the illusion is lost when it moves to digital form, compared to the actual physics involved in projecting the image.

http://ngm.nationalgeographic.com/2011/05/camera-obscura/camera-obscura-video

So I’m reaching out to the class to help me decide.

Skeletal tracking

This week Kim Ash and I worked together on skeletal tracking with the Kinect using OpenNI. The idea is that when you strike a pose, a “nuclear” explosion occurs. Using the code samples from ITP Resident Greg Borenstein’s book “Making Things See” (2011), it was fairly straightforward to get the skeletal tracking in place.

We wanted the explosion to occur once the two outstretched arms were in place.

skeletal tracking

In this image, we just wanted to track the arms. This is possible using the following SimpleOpenNI calls:

  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

Then, with an “if” statement, we simply compared the joint positions to detect the outstretched-arms pose.

  // right elbow above right shoulder
  // AND
  // right elbow right of right shoulder
  if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
    stroke(255);
  }
  else {
    tint(255, 255);
    image(cloud, 840, 130, 206, 283);
    explosion.play();
    // stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

  // right hand above right elbow
  // AND
  // right hand right of right elbow
  if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
    stroke(255);
  }
  else {
    tint(255, 255);
    image(cloud, 840, 130, 206, 283);
    explosion.play();
    // stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
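One tweak we might make later (not part of the sketch as it stands) would be to wrap those joint comparisons in a small helper so the pose test reads as a single call. A minimal sketch, using the same PVectors as above; armsOutstretched() is our own hypothetical name:

    // Hypothetical helper: true when both elbows sit above and outside their
    // shoulders, mirroring the comparisons in the excerpt above.
    boolean armsOutstretched(PVector rElbow, PVector rShoulder, PVector lElbow, PVector lShoulder) {
      boolean rightRaised = rElbow.y > rShoulder.y && rElbow.x > rShoulder.x;
      boolean leftRaised  = lElbow.y > lShoulder.y && lElbow.x > lShoulder.x;
      return rightRaised && leftRaised;
    }

The draw() loop could then call armsOutstretched(rightElbow, rightShoulder, leftElbow, leftShoulder) in place of the long condition.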

Running the sketch results in this:

We wanted a better screen capture, but for some reason this sketch didn’t get along with Ambrosia’s Snapz Pro.
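If we revisit this, a fallback that doesn’t depend on Snapz Pro would be Processing’s built-in saveFrame(), dumping numbered PNGs we could stitch into a video afterwards. A minimal sketch of how the keyPressed() handler in the full listing below could be extended; the ‘s’ key is our own choice and this is untested with this sketch:

    void keyPressed() {
      switch (key) {
        case ' ':
          kinect.setMirror(!kinect.mirror());
          break;
        case 's':
          // grab the current frame as capture-0001.png, capture-0002.png, ...
          saveFrame("capture-####.png");
          break;
      }
    }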

Full code:

import ddf.minim.*;
import ddf.minim.signals.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import SimpleOpenNI.*;

Minim minim;
AudioPlayer explosion;

SimpleOpenNI kinect;
PImage back;
PImage cloud;

void setup() {
  size(640*2, 480);
  back = loadImage("desert.png");
  cloud = loadImage("cloud.png");
  // imageMode(CENTER);

  minim = new Minim(this);
  explosion = minim.loadFile("explosion.mp3");

  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
  strokeWeight(5);
}

void draw() {
  background(0);
  kinect.update();
  image(kinect.depthImage(), 0, 0);
  // image(kinect.rgbImage(), 640, 0);
  image(back, 640, 0, 640, 480);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = userList.get(0);
    if (kinect.isTrackingSkeleton(userId)) {
      PVector rightHand = new PVector();
      PVector rightElbow = new PVector();
      PVector rightShoulder = new PVector();
      PVector leftHand = new PVector();
      PVector leftElbow = new PVector();
      PVector leftShoulder = new PVector();

      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

      // right elbow above right shoulder
      // AND
      // right elbow right of right shoulder
      if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
        stroke(255);
      }
      else {
        tint(255, 255);
        image(cloud, 840, 130, 206, 283);
        explosion.play();
        // stroke(255, 0, 0);
      }
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

      // right hand above right elbow
      // AND
      // right hand right of right elbow
      if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
        stroke(255);
      }
      else {
        tint(255, 255);
        image(cloud, 840, 130, 206, 283);
        explosion.play();
        // stroke(255, 0, 0);
      }
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
    }
  }
}

// user-tracking callbacks!
void onNewUser(int userId) {
  println("start pose detection");
  kinect.startPoseDetection("Psi", userId);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    println("User calibrated !!!");
    kinect.startTrackingSkeleton(userId);
  }
  else {
    println("Failed to calibrate user !!!");
    kinect.startPoseDetection("Psi", userId);
  }
}

void onStartPose(String pose, int userId) {
  println("Started pose for user");
  kinect.stopPoseDetection(userId);
  kinect.requestCalibrationSkeleton(userId, true);
}

void keyPressed() {
  switch (key) {
    case ' ':
      kinect.setMirror(!kinect.mirror());
      break;
  }
}

// Processing calls stop() when the sketch shuts down, so clean up Minim here.
void stop() {
  explosion.close();
  minim.stop();
  super.stop();
}
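One quirk worth noting in the listing above: explosion.play() gets called on every frame inside the else branches, and a Minim AudioPlayer won’t restart on its own once it reaches the end of the file. A guarded rewind along these lines would let the sound retrigger cleanly; this is a sketch of the idea, not something we wired into the code above:

    // Possible retrigger guard: only restart the clip when it isn't already playing.
    void playExplosion() {
      if (!explosion.isPlaying()) {
        explosion.rewind();  // jump back to the start of the file
        explosion.play();
      }
    }

The two else branches in draw() would then call playExplosion() instead of explosion.play().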

10 years from now, the Kinect will be…

  1. It will be used to track a person’s movements inside a moving vehicle. I can see the technology being used to study the effects of crash and impact tests on vehicles.
  2. This could actually be used to take body measurements for various applications such as customized furniture and equipment like bicycles.
  3. All connected and, in a Batman Dark Knight sort of way, spying on us and giving intelligence agencies a real-time 3D visual map of any area the cameras can see.
  4. The Star Trek holodeck could actually be real in my lifetime. With Kinect cameras to capture the real world in real time, it could be recreated in a holodeck somewhere for us to interact in.
  5. We are no longer limited by the size of our screens to use our computers. Minority Report gesture control has arrived in our homes.
  6. Assistive technology for those who have no depth perception. The sad thing about current 3D technology is that it requires viewers to have both eyes to see the 3D image. Using Kinect technology, I think we can scan the world and display it in a way that won’t make us dizzy with the fancy images.
  7. Virtual presence scanner. Imagine being “physically” present anywhere: your 3D image is scanned with the Kinect and brought elsewhere.
  8. TV is now a thing of the past. Shows are projected directly into your room. Video cameras will have Kinect technology that allows projectors to display the action right in your living room.
  9. It will be used to automate food preparation. Imagine: you’ll never have to debone a fish or chicken for dinner. Current technology relies on X-ray snapshots inside a machine that only belongs in a factory. What if this could be in your house? I think it would be awesome.
  10. And lastly, Kinect technology could be used to follow the human body as it approaches a screen, so the image on the screen adjusts to the user’s depth of field. Objects in the mirror may appear closer.

So there you have it. My ten predictions for the future. Some of them are already here. But who knows what the future brings.

Max Sonar EZ1 critique

In the Sensor Workshop class, I was paired with Mark Breneman to find, update, edit, and critique a sensor report already posted on the ITP wiki as a guide for the ITP community. We chose the MaxSonar-EZ1 sensor from MaxBotix.

Most of the code posted is based on PBASIC and must be translated to Arduino C, since most of ITP uses the Arduino microcontroller for our projects. I will edit the wiki further before Wednesday and add links to pictures and diagrams.
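For reference, reading the EZ1 on an Arduino is only a few lines. A rough sketch, assuming the sensor’s analog (AN) pin is wired to A0 on a 5 V board; the datasheet gives the analog scaling as roughly Vcc/512 per inch, which works out to about 2 ADC counts per inch here:

    // LV-MaxSonar-EZ1: read the analog output and print the range in inches.
    const int sonarPin = A0;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int reading = analogRead(sonarPin);  // 0-1023 on a 10-bit ADC
      float inches = reading / 2.0;        // ~2 counts per inch at 5 V
      Serial.print("Range: ");
      Serial.print(inches);
      Serial.println(" in");
      delay(100);                          // the sensor updates about every 50 ms
    }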