The road narrows

In two weeks I’ll be embarking on a journey that will take me across the country. But after taking a breather and examining my capabilities, I’ve shortened the journey and changed it.

The cross-country train ride would have been amazing, but since the route isn’t a road, it isn’t covered by Google Street View. That alone made it difficult. Not to mention that when I mapped the ride just to Chicago, it was already very long.

I’ve decided to drive up the California coast instead. Specifically the area of California Highway 1 between San Luis Obispo and San Francisco.

I drove that stretch for the first time in the spring of 2010, in a Nissan hybrid with my sister. This time around I’ll be travelling knowing what I’ll need. The difference today is that I’ll be equipping the car with a number of cameras, making it look almost like a simpler version of a Google Street View car. I’ve marked the series of stops and places I intend to visit, and the entire journey will take three days.

There’s something to be said about a road trip. This documentary narrates some of the history of the road and will attempt to capture the unique views only visible from it. One of the driving factors for me is the possibility that this road may no longer exist in its current state ten years from now, through no fault of man: the Pacific Ocean is slowly eating away at the cliffs and eroding the land beneath the highway.

The drive is also very special. The trip will take me to a twisting part of the highway where I never bothered to look at the speed limit; the limit is a distraction at that point, and it isn’t posted anyway. I think it would be even better if I could get my hands on a manual-transmission car for this, but I doubt I’ll be able to find one.

I have part of the script for the narrative bits; the rest will be written in the car. This is very exciting.

Cameras of the world 5 years from now

There are views that more cameras out there could mean two things. One is that we can finally get a grasp on what’s going on in the world: dictators and criminals can no longer hide from us, and for once we can form our own opinions on subjects that used to take an army of journalists to capture and analyze. Then of course there’s the downside: who is behind the cameras is the scary part. We already live partly in that world. Our every movement is captured and stored on servers for who knows how long.

Cameras enabled their creators to preserve their time and place, and they continue to do so today. The 2011 Japan earthquake was so devastating that we were getting live images as the tsunami swept through the northern region. People shared videos of the quake as it was happening, and for the first time the world could watch a terrible disaster live.

I, for one, would like to be optimistic about where camera technology is leading us. I long for images of the old cities back home. I wish I could re-create the city the way it was before World War II, or even better, re-create Old Manila during the Spanish era. We would be able to take a walk into history, so to speak, and understand and experience the place and time where my grandparents and great-grandparents lived. Like a living holodeck built from information from the past.

Cameras are something we fear today, but they are something our descendants will go looking for in the future.

Adaptation

apesketches

This week in Comics, we adapted T.C. Boyle’s short story “The Ape Lady in Retirement” into six panels of comic-book form. The challenge was figuring out what to keep and what to take away, and my process resulted in the following.

While on the train, I sketched out the scenes that were important and significant in moving the story forward. Even though the story is told through the eyes of the female protagonist, Beatrice, the actions all belong to Konrad the ape.

apesketches

This is the final sketch in panels. I chose to focus on Konrad’s face because of his ability to express emotions similar to ours. Though my sketches are crude, I was attempting to show how Konrad reacts to the events around him. Given more time, I would have narrated it through Beatrice’s words but Konrad’s actions.

Apple bite

Watch here

import oscP5.*;
OscP5 oscP5;

//crane[] crane;
PImage crane1;
PImage crane2;
PImage crane3;
PImage crane4;
PImage crane5;
PImage apple;

String crane= "crane1, crane2, crane3, crane4, crane5";

PVector posePosition;
boolean found;
float eyeLeftHeight;
float eyeRightHeight;
float mouthHeight;
float mouthWidth;
float nostrilHeight;
float leftEyebrowHeight;
float rightEyebrowHeight;

float[] chew = new float [5];
//float[] crane = new float [crane1, crane2, crane3, crane4, crane5];

//float chew = 0;

PVector[] meshPoints;

float poseScale;

void setup() {
size(640, 480);
frameRate(30);

for (int i =0; i < chew.length; i++) {
chew[i] += 1;
}

// crane = new crane [crane1, crane2, crane3, crane4, crane5];
crane1 = loadImage("crane01.JPG");
crane2 = loadImage("crane02.JPG");
crane3 = loadImage("crane03.JPG");
crane4 = loadImage("crane04.JPG");
crane5 = loadImage("crane05.JPG");
apple = loadImage("apple.jpg");

meshPoints = new PVector[66];

for (int i = 0; i < meshPoints.length; i++) {
meshPoints[i] = new PVector();
}

oscP5 = new OscP5(this, 8338);
oscP5.plug(this, "mouthWidthReceived", "/gesture/mouth/width");
oscP5.plug(this, "mouthHeightReceived", "/gesture/mouth/height");
oscP5.plug(this, "eyebrowLeftReceived", "/gesture/eyebrow/left");
oscP5.plug(this, "eyebrowRightReceived", "/gesture/eyebrow/right");
oscP5.plug(this, "eyeLeftReceived", "/gesture/eye/left");
oscP5.plug(this, "eyeRightReceived", "/gesture/eye/right");
oscP5.plug(this, "jawReceived", "/gesture/jaw");
oscP5.plug(this, "nostrilsReceived", "/gesture/nostrils");
oscP5.plug(this, "found", "/found");
oscP5.plug(this, "poseOrientation", "/pose/orientation");
oscP5.plug(this, "posePosition", "/pose/position");
oscP5.plug(this, "poseScale", "/pose/scale");
oscP5.plug(this, "loadMesh", "/raw");
}

void draw() {
background(0);
stroke(100);

/*
for (int i = 0; i < chew.length; i++) {
chew[i] += 1;
}
*/

// show the apple while the mouth is open wide enough
if (mouthHeight > 1) {
image(apple, 0, 0, 640, 480);
}

/* if (found) {
fill(255);
for (int i = 0; i < meshPoints.length; i++) {
PVector p = meshPoints[i];
if (i > 0) {
PVector prev = meshPoints[i-1];
line(prev.x, prev.y, p.x, p.y);
}
}

translate(posePosition.x, posePosition.y);
scale(poseScale);
noFill();
// ellipse(0,0, 3,3);
ellipse(-20, eyeLeftHeight * -9, 20, 7);
ellipse(20, eyeRightHeight * -9, 20, 7);
ellipse(0, 20, mouthWidth * 3, mouthHeight * 3);
ellipse(-5, nostrilHeight * -1, 7, 3);
ellipse(5, nostrilHeight * -1, 7, 3);
rectMode(CENTER);
fill(0);
rect(-20, leftEyebrowHeight * -5, 25, 5);
rect(20, rightEyebrowHeight * -5, 25, 5);
} */
}

public void mouthWidthReceived(float w) {
// println("mouth Width: " + w);
mouthWidth = w;
}

public void mouthHeightReceived(float h) {
println("mouth height: " + h);
mouthHeight = h;
}

public void eyebrowLeftReceived(float h) {
// println("eyebrow left: " + h);
leftEyebrowHeight = h;
}

public void eyebrowRightReceived(float h) {
// println("eyebrow right: " + h);
rightEyebrowHeight = h;
}

public void eyeLeftReceived(float h) {
// println("eye left: " + h);
eyeLeftHeight = h;
}

public void eyeRightReceived(float h) {
// println("eye right: " + h);
eyeRightHeight = h;
}

public void jawReceived(float h) {
// println("jaw: " + h);
}

public void nostrilsReceived(float h) {
// println("nostrils: " + h);
nostrilHeight = h;
}

public void found(int i) {
println("found: " + i); // 1 == found, 0 == not found
found = i == 1;
}

public void posePosition(float x, float y) {
//println("pose position\tX: " + x + " Y: " + y );
posePosition = new PVector(x, y);
}

public void poseScale(float s) {
//println("scale: " + s);
poseScale = s;
}

public void poseOrientation(float x, float y, float z) {
//println("pose orientation\tX: " + x + " Y: " + y + " Z: " + z);
}

public void loadMesh(float x0, float y0, float x1, float y1, float x2, float y2, float x3, float y3, float x4, float y4, float x5, float y5, float x6, float y6, float x7, float y7, float x8, float y8, float x9, float y9, float x10, float y10, float x11, float y11, float x12, float y12, float x13, float y13, float x14, float y14, float x15, float y15, float x16, float y16, float x17, float y17, float x18, float y18, float x19, float y19, float x20, float y20, float x21, float y21, float x22, float y22, float x23, float y23, float x24, float y24, float x25, float y25, float x26, float y26, float x27, float y27, float x28, float y28, float x29, float y29, float x30, float y30, float x31, float y31, float x32, float y32, float x33, float y33, float x34, float y34, float x35, float y35, float x36, float y36, float x37, float y37, float x38, float y38, float x39, float y39, float x40, float y40, float x41, float y41, float x42, float y42, float x43, float y43, float x44, float y44, float x45, float y45, float x46, float y46, float x47, float y47, float x48, float y48, float x49, float y49, float x50, float y50, float x51, float y51, float x52, float y52, float x53, float y53, float x54, float y54, float x55, float y55, float x56, float y56, float x57, float y57, float x58, float y58, float x59, float y59, float x60, float y60, float x61, float y61, float x62, float y62, float x63, float y63, float x64, float y64, float x65, float y65) {
println("loading mesh...");
meshPoints[0].x = x0;
meshPoints[0].y = y0;
meshPoints[1].x = x1;
meshPoints[1].y = y1;
meshPoints[2].x = x2;
meshPoints[2].y = y2;
meshPoints[3].x = x3;
meshPoints[3].y = y3;
meshPoints[4].x = x4;
meshPoints[4].y = y4;
meshPoints[5].x = x5;
meshPoints[5].y = y5;
meshPoints[6].x = x6;
meshPoints[6].y = y6;
meshPoints[7].x = x7;
meshPoints[7].y = y7;
meshPoints[8].x = x8;
meshPoints[8].y = y8;
meshPoints[9].x = x9;
meshPoints[9].y = y9;
meshPoints[10].x = x10;
meshPoints[10].y = y10;
meshPoints[11].x = x11;
meshPoints[11].y = y11;
meshPoints[12].x = x12;
meshPoints[12].y = y12;
meshPoints[13].x = x13;
meshPoints[13].y = y13;
meshPoints[14].x = x14;
meshPoints[14].y = y14;
meshPoints[15].x = x15;
meshPoints[15].y = y15;
meshPoints[16].x = x16;
meshPoints[16].y = y16;
meshPoints[17].x = x17;
meshPoints[17].y = y17;
meshPoints[18].x = x18;
meshPoints[18].y = y18;
meshPoints[19].x = x19;
meshPoints[19].y = y19;
meshPoints[20].x = x20;
meshPoints[20].y = y20;
meshPoints[21].x = x21;
meshPoints[21].y = y21;
meshPoints[22].x = x22;
meshPoints[22].y = y22;
meshPoints[23].x = x23;
meshPoints[23].y = y23;
meshPoints[24].x = x24;
meshPoints[24].y = y24;
meshPoints[25].x = x25;
meshPoints[25].y = y25;
meshPoints[26].x = x26;
meshPoints[26].y = y26;
meshPoints[27].x = x27;
meshPoints[27].y = y27;
meshPoints[28].x = x28;
meshPoints[28].y = y28;
meshPoints[29].x = x29;
meshPoints[29].y = y29;
meshPoints[30].x = x30;
meshPoints[30].y = y30;
meshPoints[31].x = x31;
meshPoints[31].y = y31;
meshPoints[32].x = x32;
meshPoints[32].y = y32;
meshPoints[33].x = x33;
meshPoints[33].y = y33;
meshPoints[34].x = x34;
meshPoints[34].y = y34;
meshPoints[35].x = x35;
meshPoints[35].y = y35;
meshPoints[36].x = x36;
meshPoints[36].y = y36;
meshPoints[37].x = x37;
meshPoints[37].y = y37;
meshPoints[38].x = x38;
meshPoints[38].y = y38;
meshPoints[39].x = x39;
meshPoints[39].y = y39;
meshPoints[40].x = x40;
meshPoints[40].y = y40;
meshPoints[41].x = x41;
meshPoints[41].y = y41;
meshPoints[42].x = x42;
meshPoints[42].y = y42;
meshPoints[43].x = x43;
meshPoints[43].y = y43;
meshPoints[44].x = x44;
meshPoints[44].y = y44;
meshPoints[45].x = x45;
meshPoints[45].y = y45;
meshPoints[46].x = x46;
meshPoints[46].y = y46;
meshPoints[47].x = x47;
meshPoints[47].y = y47;
meshPoints[48].x = x48;
meshPoints[48].y = y48;
meshPoints[49].x = x49;
meshPoints[49].y = y49;
meshPoints[50].x = x50;
meshPoints[50].y = y50;
meshPoints[51].x = x51;
meshPoints[51].y = y51;
meshPoints[52].x = x52;
meshPoints[52].y = y52;
meshPoints[53].x = x53;
meshPoints[53].y = y53;
meshPoints[54].x = x54;
meshPoints[54].y = y54;
meshPoints[55].x = x55;
meshPoints[55].y = y55;
meshPoints[56].x = x56;
meshPoints[56].y = y56;
meshPoints[57].x = x57;
meshPoints[57].y = y57;
meshPoints[58].x = x58;
meshPoints[58].y = y58;
meshPoints[59].x = x59;
meshPoints[59].y = y59;
meshPoints[60].x = x60;
meshPoints[60].y = y60;
meshPoints[61].x = x61;
meshPoints[61].y = y61;
meshPoints[62].x = x62;
meshPoints[62].y = y62;
meshPoints[63].x = x63;
meshPoints[63].y = y63;
meshPoints[64].x = x64;
meshPoints[64].y = y64;
meshPoints[65].x = x65;
meshPoints[65].y = y65;
}

void oscEvent(OscMessage theOscMessage) {
if (theOscMessage.isPlugged()==false) {
println("UNPLUGGED: " + theOscMessage);
}
}
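
The crane images and the chew[] counter above are loaded but never drawn. Here is a minimal sketch of how they might be wired in, assuming the intent was to advance one crane frame per detected bite; the craneFrames array, the drawCrane() helper, and the threshold are my own guesses, not part of the original sketch.

// Hypothetical helper: step through the crane frames as the mouth "bites".
// Assumes craneFrames[] is filled in setup(), e.g.
// craneFrames = new PImage[] { crane1, crane2, crane3, crane4, crane5 };
PImage[] craneFrames;
int biteCount = 0;          // completed open-then-close mouth cycles
boolean mouthWasOpen = false;

void drawCrane() {
  boolean mouthIsOpen = mouthHeight > 1;   // same threshold used for the apple
  if (mouthWasOpen && !mouthIsOpen) {
    biteCount++;                           // a bite finishes when the mouth closes
  }
  mouthWasOpen = mouthIsOpen;

  int frame = min(biteCount, craneFrames.length - 1);
  image(craneFrames[frame], 0, 0, width, height);
}

Calling drawCrane() from draw() in place of the apple block would then step through crane01.JPG to crane05.JPG as the face keeps chewing.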

Face tracking projects

  • In-car face tracking – to detect driver fatigue and other behavior.
  • PS Eye head tracking to move the POV of the game according to the position of the head.
  • Example
  • Facial tracking in cars to control objects in the car and identify the driver.
  • Facial recognition for identification of people in photos as depicted in Apple’s iPhoto and Aperture
  • To target objects in real life and fire deadly missiles, as Apache helicopter pilots do.

How to get from point A to point B

I’ve made documentaries before, so this isn’t entirely new. We basically tell a story, and one way of telling a story is simply to move forward. “Connecting” turns out to be just as complicated as it sounds.

For one, you no longer have a captive audience that will digest your work in one sitting. The audience can now jump from one part of your story to another; they can view your documentary on their own time and watch only the parts they are interested in. So is there a way for the filmmaker’s view to translate properly into hyperlinked form?

I think it’s possible, but the material requires two different approaches resulting in the same ending. One path is determined by the author as the “true vision” for the work: exactly as the creator intended it, to be experienced as such. The other path is the one created by users as they move through the story on their own time, following their own interests.

With this in mind, I think traditional documentary methods still hold true for two of my proposals. One is the train ride and the other is a Woody Allen map of Manhattan, though I think I can narrow that one down to his film Manhattan (1979).

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=new+york+city&aq=&sll=37.0625,-95.677068&sspn=14.84512,90&t=w&ie=UTF8&hq=&hnear=New+York&ll=40.714353,-74.005973&spn=0.080645,0.175781&z=13&output=embed


For “Manhattan”, there are various sources, such as the Internet Movie Database and Google Maps, for determining the film’s locations. For instance, the iconic scene with Woody Allen and Diane Keaton was shot on exactly this spot. As you can see, today there is no park bench. I wonder what this place looks like at sunrise, as depicted in the film.

http://maps.google.com/maps?f=q&source=embed&hl=en&geocode=&q=E+59th+St,+New+York,+NY&aq=0&oq=59&sll=40.761958,-73.973443&sspn=0.027565,0.175781&t=w&ie=UTF8&hq=&hnear=E+59th+St,+New+York&ll=40.761553,-73.966584&spn=79.695795,180&z=3&layer=c&panoid=69sIySJ7ZVcr3Yfe3RQIPQ&cbll=40.757793,-73.959552&cbp=13,107.65876970508172,,0,5.099324376664313&output=svembed

http://maps.google.com/maps?f=q&source=embed&hl=en&geocode=&q=E+59th+St,+New+York,+NY&aq=0&oq=59&sll=37.0625,-95.677068&sspn=42.987658,90&ie=UTF8&hq=&hnear=E+59th+St,+New+York&t=h&layer=c&cbll=40.757793,-73.959552&panoid=69sIySJ7ZVcr3Yfe3RQIPQ&cbp=13,109.18,,0,0.44&ll=40.751825,-73.96142&spn=0.020416,0.048237&z=14&output=svembed


Woody Allen and Diane Keaton in Manhattan

Then I would embed this scene:

That should work. It would allow the film to be enjoyed in a different way. Something like that.

As for the train, I’ve downloaded maps and brochures from Amtrak for the routes that would get me to the west coast. Sadly, I couldn’t find an API for Amtrak, but Google Maps covers the same ground anyway.

Based on the US railway system, there are two ways west, either through the north or the south; the tracks don’t cross the country quite the way the highways do. It would also take two trains, the Lake Shore Limited and the California Zephyr. I think this is where the two paths I mentioned above can be created. One is my own journey on the train, including the videos, pictures, and posts made along the way, which would be the filmmaker’s point of view. The other lets the user follow the train along the track in real time via Google Earth, seeing the stops marked along the way from point A to point B and whatever historical significance they hold. This would also work along the east coast, but I think riding across the country would be very exciting. Or maybe California Highway 1.

The technologies I see myself integrating would be:

  • HTML5
  • Google Maps API
  • YouTube API
  • a bit of JavaScript

That’s all I can think of for now.

Moments in panels

This week’s homework examines the different ways we can combine words and images.
comicspanel1interdependent
Interdependent

comicspanel1wordspecific

Word Specific

comicspanel1parallel

Parallel

comicspanel1picturespecific

Picture specific

Personally, I preferred the picture-specific method for this particular story about crashing my bike. I think telling the story through images conveys more emotion. I leave it to the reader’s imagination how painful the crash was, or the feeling of your life flashing before your eyes. No words can properly explain that.

Skeletal tracking

This week Kim Ash and I worked together on skeletal tracking with the Kinect using OpenNI. The idea is that when you hit a certain pose, a “nuclear” explosion occurs. Using the code sample from ITP resident Greg Borenstein’s book Making Things See (2011), it was fairly straightforward to get the skeletal tracking in place.

We wanted the explosion to occur once the two outstretched arms were in place.

skeletal tracking

In this image, we just wanted to track the arms. This is possible using the OpenNI commands:

  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

Then, using an “if” statement, it was just a matter of comparing the joint positions that make up the outstretched-arms pose.

if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
stroke(255);
}
else {
tint(255, 255);
image(cloud, 840, 130, 206, 283);
explosion.play();
// stroke(255, 0, 0);
}
kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

  // right hand above right elbow
  // AND
  // right hand right of right elbow
  if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
    stroke(255);
  }
  else {
     tint(255, 255);
     image(cloud, 840, 130, 206, 283);
     explosion.play();
 //   stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
}

Which results in this:

We wanted a better screen capture, but for some reason this sketch didn’t get along with Ambrosia’s Snapz Pro X.

Full code:

import ddf.minim.*;
import ddf.minim.signals.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;

Minim minim;
AudioPlayer explosion;

import SimpleOpenNI.*;
SimpleOpenNI kinect;
PImage back;
PImage cloud;

void setup() {
size(640*2, 480);
back = loadImage("desert.png");
cloud = loadImage("cloud.png");
// imageMode(CENTER);

minim = new Minim(this);
explosion = minim.loadFile("explosion.mp3");

kinect = new SimpleOpenNI(this);
kinect.enableDepth();
kinect.enableRGB();
kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
strokeWeight(5);
}

void draw() {
background(0);
kinect.update();
image(kinect.depthImage(), 0, 0);
// image(kinect.rgbImage(),640,0);
image(back, 640, 0, 640, 480);

IntVector userList = new IntVector();
kinect.getUsers(userList);
if (userList.size() > 0) {
int userId = userList.get(0);
if ( kinect.isTrackingSkeleton(userId)) {
PVector rightHand = new PVector();
PVector rightElbow = new PVector();
PVector rightShoulder = new PVector();
PVector leftHand = new PVector();
PVector leftElbow = new PVector();
PVector leftShoulder = new PVector();

  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);

  // right elbow above right shoulder
  // AND
  // right elbow right of right shoulder
  if (rightElbow.y > rightShoulder.y && rightElbow.x > rightShoulder.x && leftElbow.y > leftShoulder.y && leftElbow.x > leftShoulder.x) {
    stroke(255);
  }
  else {
     tint(255, 255);
     image(cloud, 840, 130, 206, 283);
     explosion.play();
   // stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);

  // right hand above right elbow
  // AND
  // right hand right of right elbow
  if (rightHand.y > rightElbow.y && rightHand.x > rightElbow.x && leftHand.y > leftElbow.y && leftHand.x > leftElbow.x) {
    stroke(255);
  }
  else {
     tint(255, 255);
     image(cloud, 840, 130, 206, 283);
     explosion.play();
 //   stroke(255, 0, 0);
  }
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HAND, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HAND, SimpleOpenNI.SKEL_LEFT_ELBOW);
}

}
}

// user-tracking callbacks!
void onNewUser(int userId) {
println("start pose detection");
kinect.startPoseDetection("Psi", userId);
}

void onEndCalibration(int userId, boolean successful) {
if (successful) {
println("User calibrated !!!");
kinect.startTrackingSkeleton(userId);
}
else {
println("Failed to calibrate user !!!");
kinect.startPoseDetection("Psi", userId);
}
}

void onStartPose(String pose, int userId) {
println("Started pose for user");
kinect.stopPoseDetection(userId);
kinect.requestCalibrationSkeleton(userId, true);
}

void keyPressed() {

switch(key)
{
case ' ':
kinect.setMirror(!kinect.mirror());
break;
}
}

// Minim cleanup; Processing calls stop() when the sketch window closes
void stop() {
explosion.close();
minim.stop();
super.stop();
}

10 years from now, the Kinect will be…

  1. It will be used to track a person’s movements in a moving vehicle. I see the technology being able to show the effects of crash and impact tests on vehicles.
  2. This could actually be used to take body measurements for various applications such as customized furniture and equipment like bicycles.
  3. all connected and, in a Batman Dark Knight sort of way, spying on us, giving intelligence agencies a real-time 3D visual map of any area the cameras can see.
  4. The Star Trek holodeck could actually be real in my lifetime. With Kinect cameras to capture the real world in real time, it could be recreated in a holodeck somewhere for us to interact in.
  5. We are no longer limited by the size of our screens to use our computers. Minority Report gesture control has now arrived in our homes.
  6. Assistive technology for those who have no depth perception. The sad thing about current 3D technology is that it requires viewers to have both eyes to see the 3D image. Using Kinect technology, I think we can scan the world and display it in a way that won’t make us dizzy with the fancy images.
  7. Virtual presence scanner. Imagine being able to be “physically” present anywhere, with your 3D-scanned image captured by the Kinect and transported elsewhere.
  8. TV is now a thing of the past. Shows are projected directly into your room. Video cameras will have Kinect technology that lets projectors display the action right in your living room.
  9. It will be used to automate food preparation. Imagine: you’ll never have to debone a fish or a chicken for dinner. Current technology relies on X-ray snapshots in a machine that belongs only in a factory. What if this could be in your house? I think it would be awesome.
  10. And lastly, Kinect technology could be used to follow the human body as it approaches a screen, so that the image on screen adjusts to the user’s depth of field. Objects in the mirror may appear closer.

So there you have it: my ten predictions for the future. Some of them are already here, but who knows what the future brings.

Max Sonar EZ1 critique

In the Sensor Workshop class, I was paired with Mark Breneman to find, update, edit, and critique a sensor report already posted on the ITP wiki as a guide for the ITP community. We chose the MaxSonar EZ1 sensor from MaxBotix.

Most of the code posted is written in PBASIC and must be translated to Arduino C, since most of ITP uses the Arduino microcontroller for projects. I will edit the wiki further before Wednesday and add links to pictures and diagrams.
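
As a starting point for that translation, here is a minimal sketch of what the analog-read version might look like on an Arduino. The pin choice is arbitrary, and the inch conversion assumes the EZ1’s roughly Vcc/512-per-inch analog scaling against the Arduino’s 10-bit ADC, so treat it as a rough guide rather than the wiki’s final code.

// Hypothetical Arduino translation of the usual MaxSonar EZ1 analog example.
// Assumes the sensor's AN output is wired to analog pin 0 on a 5 V Arduino.
const int sonarPin = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  // The EZ1 scales its analog output at about Vcc/512 per inch, so with a
  // 10-bit ADC referenced to Vcc each inch spans roughly two counts.
  int reading = analogRead(sonarPin);
  float inches = reading / 2.0;
  Serial.println(inches);
  delay(50); // the sensor updates its reading at about 20 Hz
}

Reading the PW pin with pulseIn() (about 147 µs per inch) is the other common approach, but the analog pin is the simplest like-for-like replacement for the PBASIC example.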