Sunday, August 29, 2010

In Response to "Glitching vs..."

A couple of days ago Evan Meaney wrote a blog post titled "Glitching vs. Processing vs. Moshing vs. Signal Interference". I really appreciate Evan's glitch work, from "Ceibas Cycle" to his writings in "on glitching", where he describes the inevitable collaboration with information theory present in all digital work. But this most recent post just doesn't make any sense.

He states that the post is inspired by the diverse work submitted to GLI.TC/H. I can understand wanting to make some loose categories to help group submissions, but the language in the post sounds like he's building a framework. He asks readers to "use this space as a means to explore and delineate, to observe and report, to enumerate...". But it's hard to do that with just four independent categories (with names like "glitching" and "processing") that share no unifying structure other than the context of visual arts.

I think I understand where Evan is coming from, so I'd like to try again.

Let's start with noise.

Noise is what happens when we don't understand something. Noise can manifest in any medium: confusion about the clothing of a particular culture, an inability to separate a visual foreground from its background, the misunderstanding of a foreign rhythm or melody as arrhythmic or atonal. In our failure to contextualize, we create noise.

Glitch means finding noise when we expect to understand. Glitch is an experience, driven by expectation, emerging from consciousness rather than computation. Just as noise would not exist without us to misunderstand it, glitch would not exist without us to misexpect it.

Glitch art is about dwelling in and exploring these experiences, which sometimes means attempting to reproduce them. These reproductions may be executed in a variety of ways. Sometimes it will involve imitating the processes that regularly lead to glitches. This includes direct memory corruption at the byte level, redirection of streams, removal of key frames, analog interference, and circuit bending. Other times it eschews these processes, and opts to evoke the sensation by other means: through the "Bad TV" effect, or in the choice of palette, shapes, motion, melody, etc.

Most of the time, glitch art falls somewhere in between, drawing on the processes that give rise to glitches, but ultimately focused on evoking the experience by whatever means necessary.

Evan suggests that "a true glitch is not reproducible". I believe "true" glitch is unrelated to reproducibility. "True" glitch is tied solely to expectation. The reason something "stable" seems to no longer be a glitch is simply that it's packaged as such (i.e., as a "glitch"), removing the possibility of expecting anything else.

That said, acknowledging that glitch is an experience gives us freedom as artists to share that experience regardless of the procedural purity of our practice. Saying that we're just "imperfect" is a cop-out based on a misguided understanding of what glitch artists are aiming for.

Monday, May 10, 2010

3D Scanning as Dense Microphone Array

Sound is the displacement of matter over time.

A microphone detects sound at a single point, either via direct physical coupling or using optical methods (as with laser microphones).

3D scanning can also detect the displacement of reflective matter over time. With a very large angle between the camera and projector, a 3D scanning setup can detect very minor displacement variations. With a high-framerate camera, this displacement can be measured at audio frequencies. Every pixel then corresponds to a virtual laser microphone: instead of the usual microphone at a single point, a fringe-analysis microphone comprises N points, as determined by the camera resolution.
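A minimal sketch of the idea in numpy, using a synthetic displacement field in place of real scanner output (the framerate, resolution, and vibration are all illustrative assumptions): each pixel's depth-over-time series is treated as one microphone channel, the static geometry is removed, and the channels are averaged to suppress uncorrelated noise.

```python
import numpy as np

# Assumed parameters -- a real setup would supply measured
# per-pixel displacement from a high-framerate fringe-analysis scan.
fps = 48000          # camera framerate doubles as the audio sample rate
seconds = 0.1
h, w = 4, 4          # tiny "camera" for illustration
t = np.arange(int(fps * seconds)) / fps

# Fake displacement field: a 440 Hz vibration plus per-pixel sensor
# noise, shape (frames, height, width) -- one depth sample per pixel.
vibration = 1e-3 * np.sin(2 * np.pi * 440 * t)
displacement = vibration[:, None, None] + 1e-5 * np.random.randn(len(t), h, w)

# Each pixel's time series is one virtual microphone channel.
channels = displacement.reshape(len(t), -1)   # (frames, h*w)

# Remove the static geometry (per-channel mean), then average the
# channels to reduce uncorrelated noise, and normalize to [-1, 1].
channels = channels - channels.mean(axis=0)
audio = channels.mean(axis=1)
audio = audio / np.abs(audio).max()
```

Averaging is the crudest way to combine the N virtual microphones; keeping the channels separate would preserve spatial information about where on the surface the sound originates.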

Saturday, May 08, 2010

Gaze-informed Perceptual Compression

A video chat program that tracks your eye movement and sends gaze information to the other user. The other user's computer compresses the entire image heavily, with the exception of what you're looking at. To you, it just looks like the entire image is clear.
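The degradation step could be sketched like this (a toy stand-in for real video compression: block averaging outside an assumed-size disc around the reported gaze point, with full detail preserved inside it):

```python
import numpy as np

def foveate(frame, gaze_xy, radius=30, block=8):
    """Crudely degrade a grayscale frame everywhere except a disc
    around the gaze point. The radius and block size are illustrative
    assumptions; a real codec would vary quantization instead."""
    h, w = frame.shape
    out = frame.astype(float).copy()
    # Low-detail version: average each block x block tile, then upsample.
    hb, wb = h // block, w // block
    coarse = frame[:hb * block, :wb * block] \
        .reshape(hb, block, wb, block).mean(axis=(1, 3))
    out[:hb * block, :wb * block] = \
        np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    # Restore full detail inside the gaze disc.
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    mask = (xx - gx) ** 2 + (yy - gy) ** 2 <= radius ** 2
    out[mask] = frame[mask]
    return out

frame = np.linspace(0, 1, 120 * 160).reshape(120, 160)  # toy image
result = foveate(frame, gaze_xy=(80, 60))
```

The same mask-driven idea transfers to a real codec by lowering the quantization quality per macroblock as distance from the gaze point grows, so bandwidth is spent only where the viewer can perceive detail.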

Monday, April 19, 2010

Piece for HTTP

  1. Post a link to a website online, but remove one of the Ts from "http" so it becomes "htp".
  2. When this link is clicked, the participant will be forced by their browser to pause for a moment and reflect on the syntax of a URL, adding the missing "t" by hand.

Tuesday, April 06, 2010

Empty Art for the Web

A web-enabled computer in a gallery, allowing visitors to browse the internet. A ready-made for the information age.

Wednesday, March 24, 2010

Eigenanalysis for Lossy Compression

Eigenanalysis is a method for reducing a set of data to the principal dimensions along which that data varies. In the context of imaging data, it has been applied very successfully in Eigenfaces:

Where a set of faces is broken down into a smaller set of face "prototypes" that can be recombined in varying proportions to recreate the original data set with limited accuracy.

In the context of music, I can imagine that the spectral characteristics of songs have some self-similarity: portions repeat, chords are repeated in different voices and different octaves, rhythms repeat, etc. I can imagine a lossy compression algorithm that takes the frequency-domain representation of a song, performs eigenanalysis on these vectors, and stores the song simply as a collection of N eigenvectors plus the reduced representation of each frequency-domain chunk.
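The scheme above can be sketched with numpy, using a toy two-tone "song" and SVD to do the eigenanalysis (the sample rate, chunk size, and N are all illustrative assumptions):

```python
import numpy as np

fs, chunk = 8000, 256
t = np.arange(fs) / fs
# Toy stand-in for a song: two tones an octave apart.
signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)

# Frequency-domain chunks: magnitude spectrum of each frame.
frames = signal[:len(signal) // chunk * chunk].reshape(-1, chunk)
spectra = np.abs(np.fft.rfft(frames, axis=1))   # (n_chunks, chunk/2 + 1)

# Eigenanalysis via SVD of the mean-centered chunk matrix:
# rows of Vt are the eigenvectors (spectral "prototypes").
mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

N = 4                                           # eigenvectors to keep
coeffs = (spectra - mean) @ Vt[:N].T            # N numbers per chunk
approx = coeffs @ Vt[:N] + mean                 # lossy reconstruction

err = np.linalg.norm(spectra - approx) / np.linalg.norm(spectra)
```

The stored representation is just the N eigenvectors, the mean spectrum, and N coefficients per chunk, rather than a full spectrum per chunk; note that discarding phase as this sketch does is itself a lossy choice a real codec would have to address.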

Quantization methods could further reduce bit usage by exploiting the similarity between adjacent chunks. Alternatively, different portions of the spectrum could be analyzed separately, allowing better representation of the lower frequencies and less information dedicated to the higher frequencies. This unfortunately does not account for the obvious relationship between the lower and higher frequencies.

A more advanced implementation might perform eigenanalysis on multiple chunks simultaneously in a moving window, or at different scales, which would help with rhythmic repetition.

The octave or overtone relationship is a little more complicated, and would require something like a constant-Q transform to get a logarithmic frequency domain.
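The point of the logarithmic frequency axis is that octaves become fixed offsets: with geometrically spaced bins, a note and its octave are always the same number of bins apart. A sketch of the bin centers (the minimum frequency, bin count, and 12 bins per octave are illustrative assumptions):

```python
import numpy as np

f_min, bins_per_octave, n_bins = 55.0, 12, 48
k = np.arange(n_bins)

# Geometrically spaced center frequencies: f_k = f_min * 2^(k / b).
centers = f_min * 2 ** (k / bins_per_octave)

# The quality factor (center frequency / bandwidth) is the same for
# every bin -- hence "constant-Q".
Q = 1 / (2 ** (1 / bins_per_octave) - 1)
```

Here `centers[12]` lands exactly one octave above `centers[0]`, so an overtone relationship shows up as a constant shift along the bin axis, which is exactly the kind of structure the eigenanalysis could exploit.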

Tuesday, March 16, 2010

Google Earth Live

"Google Earth Live" is a hypothetical service offered by Google in the not-to-distant future. It is predicated upon Google releasing a matrix of satellites into orbit that regularly poll large sections of the Earth at high resolution, and offering this data for free via the Google Earth interface.

When this is available, how would you use it (practically)? And what sort of art would you make with it?

The obvious: make timelapse videos of yourself as you go throughout your day, from the perspective of the satellite.