Wednesday, November 10, 2010

libfreenect

is it just me, or is this kind of exciting?

not exciting because it's a new "gadget",
but because it's different kind of tool.

without the ps3eye,
the eyewriter wouldn't exist in its current form.
what should we make with kinect?
is there anything we couldn't do before?

how long until we tell students
"to detect someone,
first you need to threshold the depth image" or,
"for a full 3d map of a space,
you'll need about 4 kinects in the center of the room"

how long until the new posture is "hands forward" instead of "hands up"?
"superman" instead of "surrender"?

how long until we just wave a kinect around,
get a complete 3d map of a space
feed it into our projection mapping toolkit
and start making interesting work
instead of worrying about the mapping?

and finally, what kind of work is inevitable with 3d sensing?
how long until there is a clear 3d interaction aesthetic?
and we say "i've seen this before, i bet they did it with a kinect" ;)

Sunday, August 29, 2010

In Response to "Glitching vs..."

A couple of days ago Evan Meaney wrote a blog post titled "Glitching vs. Processing vs. Moshing vs. Signal Interference". I really appreciate Evan's glitch work, from "Ceibas Cycle" to his writings in "on glitching", where he describes the inevitable collaboration with information theory present in all digital work. But this most recent post just doesn't make any sense.

He states that the post is inspired by the diverse work submitted to GLI.TC/H. I can understand wanting to make some loose categories to help group submissions, but the language in the post sounds like he's building a framework. He asks readers to "use this space as a means to explore and delineate, to observe and report, to enumerate...". But it's hard to do that with just four independent categories (with names like "glitching" and "processing") with no unifying structure other than the context of visual arts.

I think I understand where Evan is coming from, so I'd like to try again.

Let's start with noise.

Noise is what happens when we don't understand something. Noise can be manifest in any media: confusion about the clothing of a particular culture, inability to separate a visual foreground from background, the misunderstanding of a foreign rhythm or melody as arrhythmic or atonal. In our failure to contextualize, we create noise.

Glitch means finding noise when we expect to understand. Glitch is an experience, driven by expectation, emerging from consciousness rather than computation. Just as noise would not exist without us to misunderstand it, glitch would not exist without us to misexpect it.

Glitch art is about dwelling in and exploring these experiences, which sometimes means attempting to reproduce them. These reproductions may be executed in a variety of ways. Sometimes it will involve imitating the processes that regularly lead to glitches. This includes direct memory corruption at the byte level, redirection of streams, removal of key frames, analog interference, and circuit bending. Other times it eschews these processes, and opts to evoke the sensation by other means: through the "Bad TV" effect, or in the choice of palette, shapes, motion, melody, etc.

Most of the time, glitch art falls somewhere in between, drawing on the processes that give rise to glitches, but ultimately focused on evoking the experience by whatever means necessary.

Evan suggests that "a true glitch is not reproducible". I believe "true" glitch is unrelated to reproducibility. "True" glitch is tied solely to expectation. The reason it seems like something "stable" is no longer a glitch is simply because it's packaged as such (i.e., a "glitch") removing the possibility of expecting anything else.

That said, acknowledging that glitch is an experience gives us freedom as artists to share that experience regardless of the procedural purity of our practice. Saying that we're just "imperfect" is a cop-out based on a misguided understanding of what glitch artists are aiming for.

Monday, May 10, 2010

3D Scanning as Dense Microphone Array

Sound is the displacement of matter over time.

A microphone detects sound at a single point, either via direct physical coupling, or using optical methods (as with laser microphones).

3D scanning can also detect displacement of reflective matter over time. Using a 3D scanning setup with a very large angle between the camera and projector, very minor displacement variations can be detected. Using a high framerate camera, this displacement can be measured at audio frequencies. Every pixel then corresponds to a virtual laser microphone: instead of the usual microphone at a point, a fringe-analysis microphone comprises N points as determined by the camera resolution.
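As a sketch of the idea, assuming you already have a stack of depth frames captured at audio rate (the function name and conventions here are hypothetical):

```python
import numpy as np

def pixel_microphone(depth_frames, x, y):
    """Treat the depth time series at pixel (x, y) as an audio signal.

    depth_frames: array of shape (n_frames, height, width), captured at
    audio rate. Returns a zero-mean, peak-normalized waveform.
    """
    signal = depth_frames[:, y, x].astype(float)
    signal -= signal.mean()          # remove the static depth offset
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# synthetic example: a surface vibrating at 440 Hz, 1000 frames at 48 kHz
t = np.arange(1000) / 48000.0
frames = 2.0 + 0.001 * np.sin(2 * np.pi * 440 * t)[:, None, None] * np.ones((1, 4, 4))
audio = pixel_microphone(frames, 0, 0)
```

Every pixel gives its own `audio` stream, so the full scan is an array of N such microphones.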

Saturday, May 08, 2010

Gaze-informed Perceptual Compression

A video chat program that tracks your eye movement and sends gaze information to the other user. The other user's computer compresses the entire image heavily, with the exception of what you're looking at. To you, it just looks like the entire image is clear.
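One way to sketch the receiving side, assuming a block-based codec whose quality can vary per block (all names and parameter values here are made up for illustration):

```python
import numpy as np

def quality_map(width, height, gaze_x, gaze_y, block=16,
                fovea_radius=100.0, q_high=95, q_low=20):
    """Per-block quality map: sharp near the gaze point, heavily
    compressed everywhere else, with a smooth falloff between.
    """
    bw, bh = width // block, height // block
    ys, xs = np.mgrid[0:bh, 0:bw]
    # distance from each block center to the gaze point, in pixels
    cx = xs * block + block / 2
    cy = ys * block + block / 2
    d = np.hypot(cx - gaze_x, cy - gaze_y)
    # falloff from q_high inside the fovea to q_low outside
    w = np.clip(1.0 - (d - fovea_radius) / fovea_radius, 0.0, 1.0)
    return (q_low + (q_high - q_low) * w).astype(int)

q = quality_map(640, 480, gaze_x=320, gaze_y=240)
```

The map would be recomputed every time a new gaze position arrives from the other user.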

Monday, April 19, 2010

Piece for HTTP

  1. Post a link to a website online, but remove one of the Ts from "http" so it becomes "htp".
  2. When this link is clicked, the participant will be forced by their browser to pause for a moment and reflect on the syntax of a URL, adding the missing "t" by hand.

Tuesday, April 06, 2010

Empty Art for the Web

A web-enabled computer in a gallery, allowing visitors to browse the internet. A ready-made for the information age.

Wednesday, March 24, 2010

Eigenanalysis for Lossy Compression

Eigenanalysis is a method for reducing a set of data to the principal dimensions along which that data varies. In the context of imaging data, it has been applied very successfully to eigenfaces:

Where a set of faces is broken down into a smaller set of face "prototypes" that can be recombined in varying proportions to recreate the original data set with limited accuracy.

In the context of music, I can imagine that the spectral characteristics of songs have some self-similarity: portions repeat, chords are repeated in different voices and different octaves, rhythms repeat, etc. I can imagine a lossy compression algorithm that takes the frequency domain representation of a song, does Eigenanalysis on these vectors, and stores the song simply as the collection of N eigenvectors and the reduced representation of each frequency-domain chunk.
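A minimal sketch of that idea using numpy. The STFT step is skipped; a synthetic low-rank spectrogram stands in for a real song, and all function names are my own:

```python
import numpy as np

def eigen_compress(spectrogram, n_components):
    """Compress a (n_chunks, n_bins) magnitude spectrogram by keeping
    only the top n_components eigenvectors (via SVD on the mean-centered
    chunks), storing each chunk as n_components coefficients.
    """
    mean = spectrogram.mean(axis=0)
    centered = spectrogram - mean
    # right singular vectors = eigenvectors of the covariance matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                  # (n_components, n_bins)
    coeffs = centered @ basis.T                # reduced representation
    return mean, basis, coeffs

def eigen_decompress(mean, basis, coeffs):
    return coeffs @ basis + mean

# toy spectrogram: 64 chunks, 128 bins, built from 3 repeating "voices"
rng = np.random.default_rng(0)
voices = rng.random((3, 128))
mix = rng.random((64, 3))
spec = mix @ voices
mean, basis, coeffs = eigen_compress(spec, n_components=3)
rebuilt = eigen_decompress(mean, basis, coeffs)
err = np.abs(spec - rebuilt).max()
```

Because the toy spectrogram really is built from three repeating components, three eigenvectors reconstruct it almost exactly; a real song would need more components, and the error would be the "lossy" part.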

Quantization methods may be employed for further reducing bit usage due to similarity between adjacent chunks. Or different portions of the spectrum can be analyzed separately, which allows for better representation of lower frequencies and less information dedicated to higher frequencies. This unfortunately does not account for the obvious relationship between the lower and higher frequencies.

A more advanced implementation may involve doing eigenanalysis on multiple chunks simultaneously in a moving window, or at different scales, which would help with rhythmic repetition.

The octave or overtone relationship is a little more complicated, and would require something like a constant-Q transform to get a logarithmic frequency domain.

Tuesday, March 16, 2010

Google Earth Live

"Google Earth Live" is a hypothetical service offered by Google in the not-too-distant future. It is predicated upon Google launching a constellation of satellites into orbit that regularly image large sections of the Earth at high resolution, and offering this data for free via the Google Earth interface.

When this is available, how would you use it (practically)? And what sort of art would you make with it?

The obvious: make timelapse videos of yourself as you go throughout your day, from the perspective of the satellite.

Saturday, March 13, 2010

Non-Metamer Monochromes

Monochromatic paintings have a tradition going back to the early 1900s, exemplified by Malevich and Rodchenko, and later by Rauschenberg.

I'd like to produce a series of monochromes, each using a single non-metamer. By non-metamer, I mean a color that has the same frequency spectrum as the color being replicated: for example, the green of a leaf, the blue of the sky, or the red of a sunset. Instead of just resembling these colors, various paints would be analyzed for their spectral response and mixed in the correct proportions so that they precisely recreate these colors.

Friday, March 12, 2010

Alternative Prime Spirals

The Ulam spiral is based on the idea of arranging the integers in a rectilinear 2D spiral, marking the primes, and noticing that certain diagonal patterns fall out that aren't fully explained by the simple quadratic equations describing some of the "holes".

What other orderings might reveal interesting patterns? How about a 2D Hilbert curve?

Or maybe a 3D one? How might you continue a spiral in a cubic 3D space? Would you get diagonal planes describing the primes? How about a higher dimensional space — maybe higher dimensional planes?
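As a starting point for trying other orderings, here is a sketch of the classic construction (the counterclockwise turn direction and coordinate convention are my own choices):

```python
def is_prime(n):
    """Simple trial-division primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def ulam_spiral(size):
    """Arrange 1..size*size in a rectilinear spiral around the origin
    and return the set of (x, y) coordinates where primes land.
    Plotting these points makes the diagonal runs visible.
    """
    x = y = 0
    dx, dy = 1, 0
    primes = set()
    step = 1
    n = 1
    while n <= size * size:
        for _ in range(2):                # each run length is used twice
            for _ in range(step):
                if n > size * size:
                    break
                if is_prime(n):
                    primes.add((x, y))
                n += 1
                x, y = x + dx, y + dy
            dx, dy = -dy, dx              # turn counterclockwise
        step += 1
    return primes

marks = ulam_spiral(11)
```

Swapping `ulam_spiral` for a Hilbert-curve ordering (or a 3D spiral) only changes the walk, not the marking, so the comparison is easy to set up.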

Jesus Glitch

The holy is often found in unexpected places. Jesus in naan, Mary in a Chicago underpass. Dan Paluska has immortalized this concept with his Holy Toaster.

Why don't we ever see Jesus in corrupted image files?

Image compression algorithms are generally rated on their ability to convincingly ignore non-perceptually-relevant features. I propose a new metric for these algorithms: how likely they are, when corrupted, to produce an image of a holy figure.

Six Pieces for Life

Live like you only have until the next:

  1. day
  2. week
  3. month
  4. year
  5. decade
  6. century

Wednesday, February 24, 2010

Lossy Vector Compression

An evenly sampled vector outline is essentially a 2D signal. This isn't the 2D of a raster image, where you have a 2D space with a 3D (RGB) value at each point. It's a 1D space with a 2D (XY) value at each point. You can do a frequency domain decomposition on this signal, which is the foundation for most image compression algorithms. What would it look like to do the usual compression tricks? Quantization of the amplitudes, high frequency removal, etc.?
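A minimal sketch of the frequency-domain decomposition, treating each XY point as a complex number (only high-frequency removal is shown; quantization of the amplitudes would follow the same pattern):

```python
import numpy as np

def compress_outline(points, keep):
    """Frequency-domain compression of a closed, evenly sampled outline.

    points: (n, 2) array of XY coordinates. Each point becomes a complex
    number x + iy; take the FFT and keep only the `keep` lowest positive
    and negative frequency coefficients, discarding the rest.
    """
    z = points[:, 0] + 1j * points[:, 1]
    spectrum = np.fft.fft(z)
    compressed = np.zeros_like(spectrum)
    compressed[:keep] = spectrum[:keep]      # low positive frequencies (and DC)
    compressed[-keep:] = spectrum[-keep:]    # low negative frequencies
    rebuilt = np.fft.ifft(compressed)
    return np.column_stack([rebuilt.real, rebuilt.imag])

# a circle survives aggressive truncation almost perfectly,
# since all its energy sits in a single frequency bin
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
rebuilt = compress_outline(circle, keep=2)
err = np.abs(circle - rebuilt).max()
```

On a hand-drawn outline, lowering `keep` progressively rounds off corners and fine detail, which is exactly the kind of artifact the usual compression tricks would produce.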

The interesting thing about this transformation is that line drawings as frequency-decomposable entities already have a tradition established in harmonographs. To recreate any drawing with a harmonograph would simply require N pendulums on each axis, each with a length inversely proportional to the square of the frequency represented (given the mathematical definition of a pendulum). You would give all the pendulums equal mass, place them at angles corresponding to the amplitudes, and then release them at the right time. This could recreate any line drawing.

Sunday, February 21, 2010

The Real and the Virtual

I'd like to create an installation using a standard multitouch interface. The interface would be approximately 1 m wide and fairly high resolution. It would be mounted in a table-top configuration. A small pool of water, of similar construction and equivalent size, would be sitting directly next to the interface. The interface would be running a water simulation that resembles the real water as much as possible.

3D Video Scanner for Cheap

Here's a way you might try making a 3D video scanner for the cost of a webcam:

  • Webcam with VSYNC broken out
  • Bright LED or LED array
  • Ambient illumination

Mount the LED at approximately the same location as the camera lens. Turn the LED on for alternating VSYNC pulses. The 3D decoding process is as follows: the light intensity at every point can be modeled using the equation i = r * (a + s), where:

  • i is the captured intensity at that pixel
  • r is the reflectivity at that point
  • a is the ambient illumination at that point
  • s is the illumination due to the LED source at that point

Sampling with the LED on and off yields two equations:

  1. i_on = r * (a + s)
  2. i_off = r * (a + 0)

And s corresponds to distance proportionally to an inverse square law:

  • s(d) = f / d^2

Where f is a scaling factor determined by the brightness of the LED. Solving for d yields:

  • i_off = r * a
  • i_off / a = r
  • i_on = (i_off / a) * (a + (f / d^2))
  • ((a * i_on) / i_off) - a = f / d^2
  • a * ((i_on / i_off) - 1) = f / d^2
  • d = sqrt(f / (a * ((i_on / i_off) - 1)))
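The derivation above can be sketched per pixel with numpy (the `eps` guard against division by zero and the function name are my additions):

```python
import numpy as np

def depth_from_intensity(i_on, i_off, a, f, eps=1e-6):
    """Per-pixel depth from the two captures, following the derivation:
    d = sqrt(f / (a * (i_on / i_off - 1))).

    a is the (assumed uniform) ambient illumination and f the LED
    scaling factor; eps guards against dark or unlit pixels.
    """
    ratio = i_on / np.maximum(i_off, eps)
    return np.sqrt(f / np.maximum(a * (ratio - 1.0), eps))

# synthetic check: a surface at d = 2 with reflectivity 0.5, a = 1, f = 4
d_true, r, a, f = 2.0, 0.5, 1.0, 4.0
i_off = r * a
i_on = r * (a + f / d_true ** 2)
d = depth_from_intensity(np.array([i_on]), np.array([i_off]), a, f)
```

Note that the reflectivity r cancels out entirely, which is the whole trick: only the on/off ratio matters.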

The values for a and f can be approximated by hand, or calibrated using a reference plane. a must be truly uniform, but if the LED is approximately at the same location as the lens, then f can be calibrated automatically to account for the LED's non-point-source qualities.

The disadvantages here are primarily the assumption about ambient illumination, and the simplified material model. The advantages would be the cost and utter simplicity. The fact that it relies on a non-coded point source for illumination means you can work with infrared just as easily as visible light. Furthermore, it actually relies on ambient illumination while many other systems try to minimize it.

Thursday, February 11, 2010

Projection Mapping with a 3D Projector

Projection mapping is the art of working with non-planar projection surfaces.

APPARATI EFFIMERI Tetragram for Enlargment from Apparati Effimeri on Vimeo.

I'd like to explore this idea with a 3D projector. Normally, 3D projection happens on a plane, which allows for a rectilinear 3D space. If you project onto anything but a plane, the 3D space will be distorted. But if you account for these distortions in advance (for example, with a 3D scan of the scene to be projected on) then you can augment the scene with an overlaid 3D form.

While installations like the video above rely on the observer's large focal distance and visual tricks (like drop shadows) for implying a depth offset, with a 3D projector and shutter glasses you can create genuine depth offsets.

Thursday, January 28, 2010

Precision CD Glitching

"Wounded" CDs have been prepared by artists like Yasunao Tone and Oval, encompassing the experimental and pop domains of music, respectively. In both cases, the music has to be re-recorded from the glitched CD to be heard (and in Oval's case, it is subject to further production). Why not use a laser cutter to make precision glitched CDs, allowing them to be distributed directly?

Thursday, January 21, 2010

Chocolate WTC

This sculpture is a September 11th, 2001 memorial, designed by C. Boym. As best I can tell, it was cast in nickel. The color and texture give me a wonderful, terrible idea: why not use chocolate? You know, the same way we have chocolate bunnies? The target market could be Al-Qaeda. Or, perhaps it would be cathartic for those who are still recovering to take a bite out of the past.

Wednesday, January 20, 2010

Connected Everything

Here are some things I'm going to be covering in my thesis.

Tuesday, January 19, 2010

I Am Sitting in a Google Ad

Set up a simple web page with only the text of Alvin Lucier's famous composition, and Google text ads.

Visit the page, and take note of the Google text ads. Create a copy of the page, replacing Lucier's text with the text from the ads. Repeat this process indefinitely, or until Google starts repeating itself.

The Bucket Piece

At a popular contemporary art gallery in a large city, in a small white room, place a nondescript bucket on a centered pedestal with a single light above it. A guard is stationed outside, allowing only one person in at a time. Next to the bucket is a plaque reading:

At the end of the day, any money collected in this bucket will be given to the artist. You are free to add or remove money as you wish.

This is the most elegant manifestation of the piece, but a better manifestation will get others involved. For example, donating the money to a charity instead of the artist. Or multiple buckets going to different causes. Or some kind of system measuring the contributions in real time and reporting how many children have been saved from starvation for another day.

Sunday, January 17, 2010

Big Wave Surfing

Why is big wave surfing so engaging?

There's something about huge waves that inspires fear. From the shore it's possible to write them off as passively destructive. But from the water, they can look positively evil. The wave itself can't even be identified — it's no specific body of water, but a general force. A collective action of the entire ocean. An unseen force manifest in a mountain of water.

The big wave surfer confronts this: an unidentifiable, shape-shifting, destructive force backed by the entire ocean. They collaborate, redirecting all of that destructive energy into a single creative action. They name the unnameable.

From the shore, the ocean has no scale. There is nothing to compare it to. But when you see a surfer on a wave, you know exactly how big it is. Big wave surfing is the humanization of infinity.

Friday, January 08, 2010

Flash Mob 3D Scanning

  1. Pick a local monument.
  2. Organize a flash mob via Craigslist.
  3. Instruct everyone to take photos of the monument.
  4. Everyone then uploads and tags their photos.
  5. These photos are then fed into Photosynth for 3D reconstruction.

As a variant, people can just record video and walk around. This relies on the videos being high resolution.