Archive for the ‘Computational Cameras’ Category

Hanami


Sakura in Fukuoka

Hanami, or flower viewing, is commonly associated with the act of viewing cherry blossoms.

I think I’ve found my idea for my Computational Cameras final. Hanami has an interesting history dating back to the 18th century, in which a single flower came to symbolize both extreme beauty and a quick death, owing to the fleeting nature of the cherry blossoms.


Cherry trees at Nagasaki ground zero

I’ll be drawing on this in my final project by tying the lifespan of the cherry blossoms to how you move through a given space.

So for the next few weeks I’ll be spending time at the Brooklyn Botanical Gardens to shoot pictures and video. I’m not sure yet if the petals will be falling, but if not, I’ll attempt to recreate them in Processing.
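If the petals aren’t falling yet, something like this bare-bones particle system is the kind of thing I’d try in Processing to fake them. The counts, sizes, and colors here are just placeholders, and the flat background stands in for a photo or video frame:

// simple falling-petal particle system (placeholder values throughout)
int numPetals = 120;
float[] x = new float[numPetals];
float[] y = new float[numPetals];
float[] drift = new float[numPetals];
float[] speed = new float[numPetals];

void setup() {
  size(640, 480);
  noStroke();
  for (int i = 0; i < numPetals; i++) {
    x[i] = random(width);
    y[i] = random(-height, 0);     // start above the frame
    drift[i] = random(-0.5, 0.5);  // sideways sway
    speed[i] = random(0.5, 2);     // fall speed
  }
}

void draw() {
  background(200, 225, 255);       // stand-in for a photo or video frame
  fill(255, 200, 210);             // pale pink petals
  for (int i = 0; i < numPetals; i++) {
    ellipse(x[i], y[i], 6, 4);
    y[i] += speed[i];
    x[i] += drift[i] + random(-0.3, 0.3); // a little wind jitter
    if (y[i] > height) {           // recycle petals that reach the bottom
      y[i] = random(-50, 0);
      x[i] = random(width);
    }
  }
}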

http://vimeo.com/40166788


Tokyo 1954

Notes to follow.

Flash cards


While most everyone else is using Junaio, I decided to use the AR toolkit instead. I wanted to make something like magnetic letters on a refrigerator. I got the idea in church, where each marker code represents a word.

One of the first things I ran into was that the words were coming out in reverse. Even after I used the pushMatrix() and popMatrix() commands in Processing 2.04a, the video now mirrored me but the text was still reversed. I attribute this to a bug, but I’m not so sure. To solve this annoying problem, I projected the sketch onto the wall and rear-projected it, so the letters finally came out the right way.

I just used a short phrase composed of 8 words, but the current library does not seem to allow the same AR marker image to be used twice. Or I may need to fiddle with the code some more.

Putting the AR cards side by side gives me a phrase.


What could this secret message mean?


This may give me an idea for the final project, if I decide to expand it.

// Processing 2.04a + NyARToolkit 1.1.7
// Pared down from Amnon Owed: http://www.creativeapplications.net/processing/augmented-reality-with-processing-tutorial-processing/

import processing.video.*;   // Capture
import java.io.*;            // for the loadPatternFilenames() function
import processing.opengl.*;  // for OPENGL rendering
import jp.nyatla.nyar4psg.*; // the NyARToolkit Processing library

PFont myFont;
// one word per marker: "there is no I, there is only us"
String[] words = { "there", "is", "no", "I", "there", "is", "only", "us" };

Capture cam;
MultiMarker nya;

void setup() {
  size(640, 480, OPENGL);
  myFont = loadFont("Helvetica-48.vlw");
  cam = new Capture(this, 640, 480);
  cam.start();
  frameRate(15);

  // create a new MultiMarker at a specific resolution (arWidth x arHeight),
  // with the default camera calibration and coordinate system
  nya = new MultiMarker(this, width, height, "camera_para.dat", NyAR4PsgConfig.CONFIG_DEFAULT);
  // set the delay after which a lost marker is no longer displayed;
  // by default this is fairly high, uncomment to make it immediate
  //nya.setLostDelay(1);

  // you have to print out the corresponding PDF files and put the .patt files in the data folder
  for (int i = 1; i <= words.length; i++) {
    nya.addARMarker("4x4_" + i + ".patt", 80);
  }
}

void draw() {
  background(255); // a background call is needed for correct display of the marker results
  cam.read();
  //image(cam, 0, 0, width, height); // display the image at the width and height of the sketch window

  // flip the image horizontally
  pushMatrix();
  scale(-1, 1);
  translate(-cam.width, 0);
  //image(cam, 0, 0, width, height);
  popMatrix();

  nya.detect(cam); // detect markers in the input image at the correct resolution (incorrect resolution will give an assertion error)

  // draw one word on top of each detected marker
  for (int i = 0; i < words.length; i++) {
    if (nya.isExistMarker(i)) {
      setMatrix(nya.getMarkerMatrix(i)); // use this marker's transform to translate and rotate the Processing drawing
      translate(0, 0);                   // could offset by half the marker size here
      fill(0);
      textFont(myFont, 24);
      text(words[i], 0, 0);
    }
    perspective(); // reset the projection before handling the next marker
  }
}

http://www.youtube.com/watch?v=Y0Ye7ODytIM

Ultra HD Digital 8k

It’s been a long time coming, but its application remains to be seen.

In 2006, researchers at NHK demonstrated a transmission of Ultra High Definition Television with 22.2 surround sound in Las Vegas. The broadcast was sent from Tokyo to Osaka over an IP network running at 1 Gbps. Uncompressed, the sound signal alone ran at 20 Mbps, while the video signal ran at 24 Gbps.

Current broadcast standards run on MPEG-2 compression with a maximum resolution of 1920 x 1080. Ultra HD runs at 7680 x 4320 pixels.

Developed by NHK in Japan, Ultra HD has 4,320 scanning lines compared to just 1,080 for the current broadcast system.

In 2007, SMPTE approved Ultra HDTV as a standard format.

The BBC will broadcast the London Summer Olympics in Ultra HD.

Each frame is equal to about 33 megapixels.
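To sanity-check those numbers: 7680 x 4320 is about 33.2 million pixels per frame, and at 60 frames per second with roughly 12 bits per pixel (8-bit 4:2:0 sampling, which is my assumption here), that comes out to roughly 24 Gbps, which lines up with the uncompressed video figure above.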

I can see this as digital IMAX, but more. The 100-degree viewing angle allows for an image that can approach human perception. It’s quite hard to describe since the image is so huge, but the experience is almost realistic.

This type of imaging is a step toward building that holodeck. The amount of detail the resolution provides will give computers far more information to see. The current limitation is having enough processing power to handle all that data.


Camera Walk?

I think I’ve posted before that I’ve always wanted a holodeck. Of course this is nothing like that. For this project I finally got the network camera at my place in Queens working and uploading images every 15 minutes. I shot some video from a window at ITP and put the two together.

I knew I was going to use the Kinect. I initially started with the depth map, measuring in inches and using those values to determine distance, but that didn’t work for me. I then decided to use the Center of Mass (CoM) to determine position, but that makes depth another problem.

So for the purposes of this project I just mounted the Kinect overhead to simplify the position tracking.
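For reference, this is roughly what reading the center of mass looks like with the SimpleOpenNI wrapper in Processing. Treat it as a sketch rather than my exact code; the calls (enableUser(), getUsers(), getCoM()) differ a little between SimpleOpenNI versions.

import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser(); // user tracking gives us a center of mass per person
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  int[] users = kinect.getUsers();
  for (int i = 0; i < users.length; i++) {
    PVector com = new PVector();
    kinect.getCoM(users[i], com);                     // center of mass in real-world mm
    PVector screen = new PVector();
    kinect.convertRealWorldToProjective(com, screen); // map it into depth-image pixels
    fill(255, 0, 0);
    noStroke();
    ellipse(screen.x, screen.y, 15, 15);              // with the Kinect overhead, this is floor position
  }
}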

Adding a front-facing camera turned out to be more challenging. For some reason, OpenNI takes over all the cameras on the computer and only wants the Kinect. Solution? Add another Kinect.

I’d like to expand this further with head tracking, such as letting you look around an area, or even creating the illusion of depth without 3D glasses.

http://youtu.be/ac9mx180aTo

Cameras midterm

I have no name for this as of yet, but over the past few weeks I’ve been trying to recreate what I built five years ago. Sadly, through the updates that Apple has made, it is no longer possible with their technology. Streaming video from one place to another seems simple enough, and is actually simpler today than it was when I first built it. But getting it to play nice with Java is another thing completely.

I was using both VLC and QuickTime Broadcaster to send video from one place to another without the aid of a server, to be imported into a Processing applet, and that didn’t turn out too well. Apple has discontinued further development on QuickTime for Java in favor of AVFoundation, so the QuickTime that we all grew up with no longer exists. The core of AVFoundation is what runs movies in iOS, iMovie, and FCPX, which is why iMovie runs faster and somewhat better than FCP 7. Oh, the pains of 64-bit processing.

The QT Broadcaster stream, though encoded in Motion JPEG, which Java recognizes, still starts with a QT header. So when the stream is imported into the sketch, Java does not recognize it as Motion JPEG but as a QT streaming file. It would take a separate decoder to transcode it into Motion JPEG that Java accepts.

Installing a Lion server will not help. I learned that the hard way.

So currently I’m using an application called EvoCam. It’s standalone software that recognizes practically any camera I plug in and offers a ton of options, from motion-triggered recording and streaming video to grabbing stills and sending them through an AppleScript or Automator workflow. I’ve been having FTP connectivity issues with the ITP server, so I’m uploading to my personal site instead. So far it’s been working great, save for the times the remote end gets disconnected for one reason or another.


Refresh the page to get the latest image.

Once the image was finally out on the web, it was easy enough to pull it into Processing. I’ve always wanted a holodeck, no matter how crude it may be. I think the fact that you are walking through a given space that changes would alter how we interact with man-made images and environments. So this is a very crude version of how I imagine things. More updates to come.
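Pulling the remote still into Processing is just a matter of fetching the image by URL on a timer that matches the 15-minute upload cycle. A rough sketch of that part (the URL below is a placeholder, not the real one):

// fetch a periodically updated remote still (placeholder URL)
String stillUrl = "http://example.com/remote-still.jpg";
PImage remoteStill;
int lastFetch;
int fetchInterval = 15 * 60 * 1000; // the camera uploads every 15 minutes

void setup() {
  size(640, 480);
  remoteStill = loadImage(stillUrl); // blocking load for the first frame
  lastFetch = millis();
}

void draw() {
  if (millis() - lastFetch > fetchInterval) {
    remoteStill = requestImage(stillUrl); // non-blocking fetch of the latest upload
    lastFetch = millis();
  }
  if (remoteStill != null && remoteStill.width > 0) { // width is 0 while loading, -1 on error
    image(remoteStill, 0, 0, width, height);
  }
}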

Cameras of the world 5 years from now

There are views that more cameras out there could mean two things. One is that we can finally get a grasp on what’s going on in the world. No longer can dictators and criminals hide from us. For once we can generate our own opinions on subjects that before took an army of journalists to capture and analyze. Then of course there’s the downside: the scary part is who is behind the cameras. We already live partly in that world, our every movement captured and stored on servers for who knows how long.

Cameras enabled their creators to preserve their time and space, and they continue to do so today. The 2011 Japan earthquake was so devastating that we were getting live images as the tsunami swept through the northern region. People shared videos of the quake as it was happening, and for the first time the world could watch a terrible disaster live.

I for one would like to be optimistic about where technology is leading us in terms of cameras. I long for images of the old cities back home. I wish I could re-create the city the way it was before World War II or, even better, re-create Old Manila during the Spanish era. We would be able to take a walk into history, so to speak, and understand and experience the place and time where my grandparents and great-grandparents lived. Like a living holodeck built from information from the past.

Cameras are something we fear today. But they are something our descendants will look to in the future.