Archive for the ‘mobile’ Category

Queensborough bridge at dawn



Queensborough bridge at dawn, originally uploaded by mdelamerced.

Sandy Timelapse/ Upper East Side NY

ITP Spring Show 2012

ITPshow

Today’s the last day for the ITP Spring Show. Last chance to see awesome stuff.

Help me decide

What should I do for my finals? I have a bunch of ideas and need to narrow them down and focus on ONE project for the next two weeks.

1. Enhance the AR project.

I would actually move the AR codes onto playing cards, and laying them down on the table would project different words. I was thinking of making it a language-learning tool: by arranging the cards in the correct order/syntax, you get the English translation (a rough sketch of that logic follows below).

OR

A haptic interface via IR: the same as above, but with a touchscreen keyboard on plexi.

OR

Hang a bunch of AR codes around the space, and when a camera or phone is pointed at them, it displays a “virtual forest” with vines or branches linking the AR codes.
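For the language-card version, the core logic is just a lookup from detected marker IDs to words plus an order check. Here is a minimal Processing sketch of only that piece, assuming the AR library reports the visible marker IDs from left to right; the IDs, vocabulary, and sentence below are made-up placeholders, not anything from a real marker set:

  // Hypothetical mapping from AR marker IDs to vocabulary words.
  HashMap<Integer, String> words = new HashMap<Integer, String>();
  // Hypothetical "correct" card order and its English translation.
  int[] correctOrder = { 3, 1, 2 };
  String translation = "I eat rice";

  void setup() {
    words.put(1, "como");   // placeholder vocabulary
    words.put(2, "arroz");
    words.put(3, "yo");
    // pretend the AR tracker reported these card IDs, left to right
    int[] detected = { 3, 1, 2 };
    println(checkSentence(detected));
  }

  // Returns the English translation when the cards are laid out in the
  // correct order; otherwise echoes back the words as placed.
  String checkSentence(int[] detected) {
    if (java.util.Arrays.equals(detected, correctOrder)) {
      return translation;
    }
    String attempt = "";
    for (int id : detected) {
      attempt += words.get(id) + " ";
    }
    return "Not yet: " + attempt;
  }

The real version would swap the hard-coded detected array for whatever the marker-tracking library actually reports each frame.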

2. Enhance the midterm project.

Instead of putting up random, unrelated images as I walk through space using the Kinect, I’d like to project an actual scene that gives you the illusion of actually walking through space (a rough sketch of this idea follows below).

OR

Generate an interactive scene that responds to your proximity and facial reactions, much like a camera-enabled ELIZA psychotherapist bot.

http://en.wikipedia.org/wiki/ELIZA
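For the walk-through-a-scene version, one rough approach is to pan and zoom a wide background image based on where the tracked user’s torso is, so the scene shifts as you move. Below is a minimal sketch of just that piece, reusing the same SimpleOpenNI skeleton setup as the skeletal tracking post further down; the panorama file name and the map() ranges are placeholder guesses, not measured values:

  import SimpleOpenNI.*;

  SimpleOpenNI kinect;
  PImage scene;  // a wide panorama, wider than the window

  void setup() {
    size(640, 480);
    kinect = new SimpleOpenNI(this);
    kinect.enableDepth();
    kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
    scene = loadImage("panorama.png");  // placeholder file name
  }

  void draw() {
    background(0);
    kinect.update();

    IntVector userList = new IntVector();
    kinect.getUsers(userList);
    if (userList.size() > 0 && kinect.isTrackingSkeleton(userList.get(0))) {
      PVector torso = new PVector();
      kinect.getJointPositionSkeleton(userList.get(0), SimpleOpenNI.SKEL_TORSO, torso);

      // Side-to-side movement pans the scene; walking closer zooms it,
      // which is a cheap stand-in for "walking through" the space.
      float panX = map(torso.x, -1000, 1000, 0, scene.width - width);
      float zoom = map(torso.z, 3000, 1000, 1.0, 1.6);
      pushMatrix();
      translate(width / 2, height / 2);
      scale(zoom);
      image(scene, -panX - width / 2, -height / 2);
      popMatrix();
    }
    else {
      image(scene, 0, 0);  // no tracked user yet: show the scene as-is
    }
  }

  // same calibration callbacks as in the skeletal tracking sketch below
  void onNewUser(int userId) { kinect.startPoseDetection("Psi", userId); }
  void onStartPose(String pose, int userId) {
    kinect.stopPoseDetection(userId);
    kinect.requestCalibrationSkeleton(userId, true);
  }
  void onEndCalibration(int userId, boolean successful) {
    if (successful) kinect.startTrackingSkeleton(userId);
    else kinect.startPoseDetection("Psi", userId);
  }

The map() ranges would need tuning against the real-world millimeter coordinates SimpleOpenNI reports in a given room.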

3. I’d like to make a digital camera obscura box. But it seems to me that the illusion is lost when it moves to digital form, compared to the actual physics involved in projecting the image.

http://ngm.nationalgeographic.com/2011/05/camera-obscura/camera-obscura-video

So I’m reaching out to the class to help me decide.

Skeletal tracking

This week Kim Ash and I worked together on skeletal tracking with the Kinect using OpenNI. The idea is that when you strike a pose, a “nuclear” explosion occurs. Using the code samples from ITP resident Greg Borenstein’s book “Making Things See” (2011), it was fairly straightforward to get the skeletal tracking in place.

We wanted the explosion to occur once the two outstretched arms were in place.

skeletal tracking

In this image, we just wanted to track the arms. This is possible using the OpenNI commands:

  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

Then it was just a matter of an “if” statement comparing the positions of the joints that make up the outstretched-arms pose.

  if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
    stroke(255);
  }
  else {
    tint(255, 255);
    image(cloud, 840, 130, 206, 283);
    explosion.play();
    // stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

  // right hand above right elbow
  // AND
  // right hand right of right elbow
  if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
    stroke(255);
  }
  else {
     tint(255, 255);
     image(cloud, 840, 130, 206, 283);
     explosion.play();
 //   stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
}

Which results in this:

We wanted a better screen capture but for some reason this sketch didn’t like Ambrosia’s SnapzPro.
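A workaround we may try next time is Processing’s built-in saveFrame(), which writes numbered image files that can later be stitched into a video. Something like this at the end of draw() would do it (the folder name is an arbitrary choice):

  // save every 4th frame as frames/capture-0001.png, frames/capture-0002.png, ...
  if (frameCount % 4 == 0) {
    saveFrame("frames/capture-####.png");
  }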

Full code:

import ddf.minim.*;
import ddf.minim.signals.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;

Minim minim;
AudioPlayer explosion;

import SimpleOpenNI.*;
SimpleOpenNI kinect;
PImage back;
PImage cloud;

void setup() {
  size(640*2, 480);
  back = loadImage("desert.png");
  cloud = loadImage("cloud.png");
  // imageMode(CENTER);

  minim = new Minim(this);
  explosion = minim.loadFile("explosion.mp3");

  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
  strokeWeight(5);
}

void draw() {
  background(0);
  kinect.update();
  image(kinect.depthImage(), 0, 0);
  // image(kinect.rgbImage(), 640, 0);
  image(back, 640, 0, 640, 480);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = userList.get(0);
    if (kinect.isTrackingSkeleton(userId)) {
      PVector rightHand = new PVector();
      PVector rightElbow = new PVector();
      PVector rightShoulder = new PVector();
      PVector leftHand = new PVector();
      PVector leftElbow = new PVector();
      PVector leftShoulder = new PVector();

      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

      // right elbow above right shoulder
      // AND
      // right elbow right of right shoulder
      if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
        stroke(255);
      }
      else {
        tint(255, 255);
        image(cloud, 840, 130, 206, 283);
        explosion.play();
        // stroke(255, 0, 0);
      }
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

      // right hand above right elbow
      // AND
      // right hand right of right elbow
      if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
        stroke(255);
      }
      else {
        tint(255, 255);
        image(cloud, 840, 130, 206, 283);
        explosion.play();
        // stroke(255, 0, 0);
      }
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
    }
  }
}

// user-tracking callbacks!
void onNewUser(int userId) {
  println("start pose detection");
  kinect.startPoseDetection("Psi", userId);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    println(" User calibrated !!!");
    kinect.startTrackingSkeleton(userId);
  }
  else {
    println(" Failed to calibrate user !!!");
    kinect.startPoseDetection("Psi", userId);
  }
}

void onStartPose(String pose, int userId) {
  println("Started pose for user");
  kinect.stopPoseDetection(userId);
  kinect.requestCalibrationSkeleton(userId, true);
}

void keyPressed() {
  switch(key) {
    case ' ':
      kinect.setMirror(!kinect.mirror());
      break;
  }
}

// cleanup: Processing calls stop() when the sketch closes
void stop() {
  explosion.close();
  minim.stop();
  super.stop();
}

10 years from now, the Kinect will be…

  1. It will be used to track a person’s movements inside a moving vehicle. I can see the technology being used to study the effects of crash and impact tests on vehicles.
  2. This could actually be used to take body measurements for various applications such as customized furniture and equipment like bicycles.
  3. All the cameras will be connected and, in a Batman Dark Knight sort of way, they will spy on us and give intelligence agencies a real-time 3D visual map of any area a camera sees.
  4. The Star Trek holodeck could actually be real in my lifetime. With Kinect cameras to capture the real world in real time, it could be recreated in a holodeck somewhere for us to interact in.
  5. We are no longer limited by the size of our screens to use our computers. Minority Report gesture control has now arrived in our homes.
  6. Assistive technology for those who have no depth perception. The sad thing about current 3D technology is that it requires viewers to have both eyes to see the 3D image. Using Kinect technology, I think we can scan the world and display it in such a way that we won’t get dizzy with the fancy images.
  7. Virtual presence scanner. Imagine you can be “physically” present anywhere with your 3D scanned image using the Kinect and brought elsewhere.
  8. TV is now a thing of the past. Shows are now projected directly into your room. Video cameras will have kinect technology that will allow projectors to display the action right in your living room.
  9. It will be used to automate food preparation. Imagine: you’ll never have to debone a fish or chicken for dinner again. Current technology relies on X-ray snapshots in a machine that belongs only in a factory. What if this could be in your house? I think it would be awesome.

  10. And lastly, Kinect technology could be used to follow the human body as it approaches a screen, so that the image on the screen adjusts to the user’s depth of field. Objects in the mirror may appear closer.

So there you have it. My ten predictions for the future. Some of them are already here. But who knows what the future brings.

Max Sonar EZ1 critique

In the Sensor Workshop class, I was paired with Mark Breneman to find, update, edit, and critique a sensor report already posted in the ITP wiki as a guide for the ITP community. We chose the MaxSonar EZ-1 sensor from MaxBotix.

Most of the code posted is based on PBASIC and must be translated to Arduino C, since most of ITP uses the Arduino microcontroller for projects. We will edit the wiki further before Wednesday and add links to pictures and diagrams.

Documentary ideas

We were asked to list possible ideas for a documentary that we can complete this semester, so on the flight back to NY I came up with this list.

1.) *Woody Allen’s Manhattan/ New York
2.) Recycling day/ reusable bags/ plastic bag consumption
    I hate that the recycling system of NY has to be in bags. It feels so third world; that’s how they collect garbage in my neighborhood back home.
3.)* hybrid or petrol cars
    why hybrid
    why petrol
4.) Winter in New York
5.) Do you know where your water comes from?
6.) Email and letters
    Do letters have a greater impact?
7.) What’s in a New York minute? Why does it feel that time moves faster in New York than in any other city I’ve been to?
8.) What’s in the cameraphone pictures?
    why use your cameraphone?
    has it turned into something else?
9.) Train ride along the atlantic coast or cross country during spring break
10.) All in the family.

The ones with a “*” were presented in class.

My comments on each topic:

1. I’ve been a fan of Woody Allen’s films since I saw Manhattan in one of my film classes in 1994. I think no other filmmaker has shot that many films in one city over more than 30 years. Using geolocation, it’s possible to map the locations as well as the scenes from his movies. There are places he revisits ten movies later and uses again. My documentary would visit those places and research each film, its location, and its significance for the American auteur.

2. Recycling here in NY is madness. I find it weird that you have to store your recyclables, or let’s just say segregate your garbage, in plastic bags. Isn’t the point of recycling to get rid of the plastic? Not to mention that someone has to physically pick up the garbage from your curb, because trash bins don’t work here: someone will steal them. Seriously?!

3. The debate around alternative energy is of interest to me, especially when it comes to automobiles. There is more to commercial hybrid cars and petrol cars than most people realize. For example, a petrol car consumes the gas in its tank, and as the fuel burns off the vehicle gets lighter and less fuel hungry (unless of course you’re driving a 3.0 liter engine that just guzzles the gas); roughly speaking, gasoline weighs about six pounds per US gallon, so a full 15-gallon tank is some 90 pounds the car sheds over a long drive. A hybrid’s batteries, on the other hand, don’t get lighter as you drive, so the car stays at almost the same weight it started with. Then there’s the issue with electric cars: how far will they actually run?

4. Winter in New York, I was told, is quite an experience. I was here this time last year, in between blizzards, for the group interview at ITP, and it was brutal. But sadly I don’t think this one will work out, since it’s only snowed twice the whole time I’ve been at ITP, and one of those was in October.

5. Just a thought on where the city water comes from. On second thought I don’t think I’d like to know.

6. There is a digital divide when it comes to emails and letters. Why does it feel more special when you receive a letter from someone compared to an email? Why do letters mean more to your senator or congressman (1 letter = 2000 constituents) than an email campaign?

7. The title speaks for itself.

8. Cameraphones have replaced the Polaroid and the snapshot in an instant. Why? Why, why, why? Is there room left for the professional?

9. I was fascinated by American trains after one business trip to DC. I was coming from Richmond, VA, and instead of renting a car, I decided to take the train. It was a pleasant surprise seeing another side of the country. I think railways were built first, before roads, because roads were just dirt trampled over by horses and herds. Railways, on the other hand, are deliberate paths built to get from point A to point B, even more so in the United States, where they connected the East and West coasts as Americans settled further west. I think a spring break train ride would be perfect for this.

10. I would like to build an online documentary about WWII survivors in the Philippines. Though most of the survivors are gone, I would like to focus on my family. My grandparents had very different backgrounds, and their stories of survival are something interesting to document. Sadly, there might not even be enough pictures to document the whole thing. But still, I think I may return to this idea when I go back home.

So there you have it! Ten ideas for documentaries, but I’m focusing on 1 and 9. Will post more as I elaborate on the idea.

The self image

This is the second time in my entire academic life that I’ve been fortunate enough to be in a comic book class. The first one was taught by Professor Emil Flores from the Department of English and Comparative Literature at the University of the Philippines; it was mostly a writing class, and that was almost twenty years ago (gasp!).

I’m not a natural-born artist like other people. But through time and perseverance I can actually draw or paint some stuff, though nothing that would get me published anytime soon.

I based my self-portrait on my Mii profile. The Nintendo Wii and 3DS have built-in programs that let you create a 3D avatar of yourself and (if a game supports it) be included in the action. Microsoft and Sony also released similar avatar creators for the Xbox and PS3, but sadly they aren’t cute.

The Mii creator actually simplifies the process by letting you select from pre-existing templates and start from there. Some people have faces that are easy to draw, but somehow my face has proven difficult: there is no given set of images that suits me.

After a while I finally settled on a particular image for myself.

Comics

The image on the left is the avatar I created on the 3DS, and the one on the right is the one created on the Wii. My sister argues that neither looks like me. From here I started sketching myself.

Comics

I’m really out of practice at this stage.

comics

I noticed that my glasses, the shape of my face, and my hair are the most noticeable features of my face. My eyes aren’t that significant unless I take off my glasses, but that would be something else. It’s hard to draw myself when I can’t see without my glasses. But it’s a start.

Spatial Literacy

The essays we were asked to read this week show different approaches to the space between written text and spoken language. It’s frightening to think that without the emotion of the spoken word and the information of the written text, human development might have been very different; I’m of course stating the extreme. But it reminded me of when I was still in film school, where we actually used film, compared to today’s digital tools. There was a stark contrast in what the cinematography course required to pass. It was not “arbitrary,” where we would make films, show them for critique and opinions, and then be graded on our story and performance. Instead, we had to follow everything to the letter. We essentially had “to learn the science behind the art.”

This meant learning what ASA settings, f-stops, footcandles, gels, and the like were, in order to create a spectacular image that pushed our stories forward. There was no room for error: either you got it right, or everything was black. Cinematography was specific and precise, requiring an entirely new subset of knowledge in order to put an image onto celluloid. By acquiring this “cinematographic language” we were able to control the factors that define the image.

Using this analogy, I think that by further examining and defining the spaces around us, it’s possible to show more. Computers and data accelerate this process. By deconstructing information and putting it forward as text, it is possible to further expand the current syntax known by computers. For example, if we could create a system where a program detects emotion from a variety of factors, or even displays emotion, I think it would push our understanding of the physical world even further than the text that runs by on our screens.