Sunday, May 27, 2007


Throughout history it's been necessary for time to be standardized into units and moments. As we continue developing approaches to personalized information presentation and representation-independent communication, we can start making up our own units. What habits of mind would change if one person worked on a 30 "hour" day, and another on a 10 "hour" day, corresponding to units that were intuitively meaningful to those individuals?

Would we be more effective at time management if we divided time into more meaningful units? Who decided that the optimal length for a lecture is quantized in units of hours?

Edit: As generally happens, I'm simultaneously out of my league and describing central questions of entire disciplines. See "The Culture of Time and Space" by Stephen Kern for an introduction to the history of timekeeping.

Saturday, May 26, 2007


The emerging field of bio-art raises the question: what other science/art "mashups" are possible? What would astronomy look like if taken over by artists? Can we track space junk and use it as a projection medium in the manner of Ken Perlin's holodust? Can we launch satellite arrays, using each satellite as a pixel that turns "on" when it orients itself to reflect the sun towards Earth? Pauline Oliveros has already improvised with the Moon; can we improvise with the Sun by bouncing sound off its corona? What more distant interactions and displays are possible?

William Larson

In the 70s and 80s artist William Larson took advantage of the duality of the fax machine — transmitting images as sound — by transmitting both images and sound in the first electronically mediated collages.

The Pantelegraph

Precursor to the fax machine, the pantelegraph was developed by Giovanni Caselli in the 1860s. A needle attached to the end of a pendulum "scanned" a sheet of paper, with ink and bare paper producing different amounts of resistance; the resulting signal was sent over telegraph lines to another location with a closely synchronized pendulum.

Friday, May 25, 2007

Sound Modulated Light

Sound Modulated Light by Edwin van der Heide transcodes sound (relative air pressure over time) as light (relative brightness over time). A "light receiver" is provided to visitors, allowing them to listen to the light.
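This kind of one-to-one transcoding can be sketched in a few lines. The linear mapping and 8-bit brightness range below are assumptions for illustration; the installation's actual transfer characteristics aren't described here.

```python
def sound_to_light(samples):
    """Map audio samples in [-1, 1] to 8-bit brightness values in [0, 255].
    A linear mapping is assumed purely for illustration."""
    return [round((s + 1.0) / 2.0 * 255) for s in samples]

def light_to_sound(levels):
    """The inverse mapping, as a 'light receiver' might perform it."""
    return [lvl / 255 * 2.0 - 1.0 for lvl in levels]
```

Because the mapping is invertible, a receiver pointed at the light recovers (a quantized version of) the original air-pressure signal.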

Thursday, May 24, 2007

Playlist Creation as a TSP

A playlist (or mixtape) can be considered an ordering over a set of songs. Defining this ordering can be a fairly difficult process, and if we frame it as a traveling salesman problem (TSP), it's clear why.

Every song has a number of features (tempo, key, genre, theme, etc.), and the difference between songs is some distance metric over these feature vectors. Each song is a node, and between every pair of nodes there is an edge whose length is that distance. As with a traditional TSP, the solution is the shortest path that visits every node.

This alone is a difficult problem, but it's compounded by the possibility of a context-dependent metric, or even a total time constraint shorter than the sum of all the song lengths (with the ability to drop expensive songs).
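A rough sketch of the framing above, using a greedy nearest-neighbor approximation rather than an exact solver. The feature vectors and the Euclidean metric are invented for illustration; any distance over song features would do.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def order_playlist(songs):
    """Greedy nearest-neighbor ordering: start from the first song and
    repeatedly append the closest unvisited song. A cheap approximation
    to the exact (NP-hard) shortest path through all nodes."""
    remaining = list(songs)
    path = [remaining.pop(0)]
    while remaining:
        last = path[-1]
        nxt = min(remaining,
                  key=lambda s: distance(s["features"], last["features"]))
        remaining.remove(nxt)
        path.append(nxt)
    return path

# Hypothetical features: (tempo / 200 bpm, key / 12, energy)
songs = [
    {"title": "A", "features": (0.60, 0.1, 0.8)},
    {"title": "B", "features": (0.62, 0.2, 0.7)},
    {"title": "C", "features": (0.30, 0.9, 0.2)},
    {"title": "D", "features": (0.58, 0.1, 0.9)},
]
print([s["title"] for s in order_playlist(songs)])  # smooth transitions first, the outlier last
```

The time-constrained variant with droppable songs would turn this into a prize-collecting TSP, which the greedy loop above doesn't attempt.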

Monday, May 14, 2007

Visible Speech

The IPA presents a visual, written representation of spoken sounds, with an orthography derived mostly from standard alphabets. Alexander Melville Bell (father of Alexander Graham Bell) developed Visible Speech towards a similar end, but he attempted to preserve the physiological characteristics of each sound in the orthography itself.

Sonic Visualiser

Sonic Visualiser transcodes audio visually. Meant for "studying a musical recording", it presents a number of "layers" of the audio — a spectral representation, signal representation, note onsets, note values, etc.

Tuesday, May 08, 2007

4 Performer/Instrument Interaction Models

Joel Chadabe has some thoughts on performer/instrument interaction models in the latest SEAMUS newsletter. His classification system goes:

  1. Simple: there is a "simple, predictable response" to input (he critiques these models in a NIME 2002 paper)
  2. Fly-by-wire: the performer has an abstracted, high-level control of the system
  3. Interactive: the instrument acts autonomously, responding to the performer
  4. Interacting with life: the performer shares control of the sound with the system, as a sailor shares control of the boat with the waves

The difference between the third and fourth models seems a little unclear, but besides that there are useful distinctions here. "Simple" vaguely corresponds to what I'll call transcoding, where you interpret one type of data directly as another (gestures as sound parameters). "Fly-by-wire" is a few-to-many transcoding. "Interactive" and "Interacting with life" can be modeled by including a noise source in the transcoding or by adding memory.
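These distinctions can be sketched concretely. The pitch mapping, noise level, and memory rule below are invented for illustration; they aren't taken from the newsletter article.

```python
import random

def simple(gesture):
    """Model 1: a direct, predictable mapping of gesture to a sound
    parameter (here, gesture in [0, 1] mapped linearly to pitch in Hz)."""
    return 220.0 * (1.0 + gesture)

def interactive(gesture, noise=0.1):
    """Models 3/4, one way: the same mapping plus a noise source, so
    control of the sound is shared between performer and system."""
    return simple(gesture) * (1.0 + random.uniform(-noise, noise))

class WithMemory:
    """Models 3/4, another way: the response depends on the history of
    gestures, so identical input need not produce identical output."""
    def __init__(self):
        self.history = []

    def respond(self, gesture):
        self.history.append(gesture)
        drift = sum(self.history) / len(self.history)  # running average
        return simple(gesture) + 10.0 * drift
```

In this framing, the "sailor and waves" model is just a transcoding whose output the performer can influence but never fully determine.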