Wednesday, February 24, 2010

Lossy Vector Compression

An evenly sampled vector outline is essentially a 2D signal. This isn't the 2D of a raster image, where you have a 2D space with a 3D (RGB) value at each point. It's a 1D space with a 2D (XY) value at each point. You can do a frequency domain decomposition on this signal, which is the foundation for most image compression algorithms. What would it look like to do the usual compression tricks? Quantization of the amplitudes, high frequency removal, etc.?
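
As a concrete sketch of what those tricks could look like (this uses numpy's FFT; the function name and parameter choices are placeholders, not a fixed design), treat the XY samples as one complex-valued signal, keep only the lowest-frequency coefficients, and quantize what survives before reconstructing:

    import numpy as np

    def compress_outline(points, keep=16, levels=256):
        # points: (N, 2) array of evenly sampled XY positions along the outline.
        # keep:   number of low-frequency coefficients retained at each end of
        #         the spectrum (should be < N/2).
        # levels: number of quantization levels for the retained coefficients.

        # Treat the XY outline as a single complex-valued 1D signal.
        z = points[:, 0] + 1j * points[:, 1]
        spectrum = np.fft.fft(z)

        # High-frequency removal: zero everything but the lowest frequencies.
        filtered = np.zeros_like(spectrum)
        filtered[:keep] = spectrum[:keep]
        filtered[-keep:] = spectrum[-keep:]

        # Quantize the surviving coefficients (real and imaginary parts).
        step = np.abs(filtered).max() / (levels - 1)
        if step > 0:
            filtered = np.round(filtered / step) * step

        # Reconstruct the smoothed, slightly wobbly outline.
        z_out = np.fft.ifft(filtered)
        return np.column_stack([z_out.real, z_out.imag])

Dropping coefficients smooths the outline, while coarser quantization adds a low-amplitude wobble, roughly analogous to the artifacts of DCT quantization in image compression.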

The interesting thing about this transformation is that line drawings as frequency-decomposable entities already have an established tradition in harmonographs. To recreate any drawing with a harmonograph would simply require N pendulums on each axis, each with a length inversely proportional to the square of the frequency it represents (following from the mathematical definition of a pendulum). You would give all the pendulums equal mass, release each from an angle corresponding to its amplitude, and time the releases to match the phases. This could recreate any line drawing.
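
For reference, the ideal simple pendulum relation behind that scaling, with g the gravitational acceleration, f_k the frequency of the k-th component, and L_k the required pendulum length:

  • f = (1 / (2 * pi)) * sqrt(g / L)
  • L_k = g / (2 * pi * f_k)^2

So doubling a component's frequency calls for a pendulum one quarter as long.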

Sunday, February 21, 2010

The Real and the Virtual

I'd like to create an installation using a standard multitouch interface. The interface would be approximately 1 m wide and fairly high resolution. It would be mounted in a table-top configuration. A small pool of water, of similar construction and equivalent size, would be sitting directly next to the interface. The interface would be running a water simulation that resembles the real water as much as possible.

3D Video Scanner for Cheap

Here's a way you might try making a 3D video scanner for the cost of a webcam:

  • Webcam with VSYNC broken out
  • Bright LED or LED array
  • Ambient illumination

Mount the LED at approximately the same location as the camera lens. Turn the LED on for alternating VSYNC pulses. The 3D decoding process is as follows: the light intensity at every point can be modeled using the equation i = r * (a + s), where:

  • i is the captured intensity at that pixel
  • r is the reflectivity at that point
  • a is the ambient illumination at that point
  • s is the illumination due to the LED source at that point

Sampling with the LED on and off yields two equations:

  1. i_on = r * (a + s)
  2. i_off = r * (a + 0)

And s falls off with distance d according to an inverse square law:

  • s(d) = f / d^2

Where f is a scaling factor that puts the source term in the same units as a. Solving for d yields:

  • i_off = r * a
  • i_off / a = r
  • i_on = (i_off / a) * (a + (f / d^2))
  • ((a * i_on) / i_off) - a = f / d^2
  • a * ((i_on / i_off) - 1) = f / d^2
  • d = sqrt(f / (a * ((i_on / i_off) - 1)))

The values for a and f can be approximated by hand, or calibrated against a reference plane. a must be truly uniform, but if the LED is mounted approximately at the same location as the lens, then f can be calibrated automatically to account for the LED's non-point-source qualities.
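
Here's a minimal numpy sketch of that decoding, assuming the LED-on and LED-off frames have been captured as floating-point grayscale arrays (the function name and the default values of a and f are placeholders for whatever calibration gives you):

    import numpy as np

    def depth_from_pair(i_on, i_off, a=0.2, f=1.0, eps=1e-6):
        # i_on, i_off: grayscale frames captured with the LED on and off.
        # a: ambient illumination level (assumed uniform across the scene).
        # f: scaling factor for the LED's inverse-square falloff.

        # The reflectivity r cancels in the ratio of the two frames.
        ratio = i_on / np.maximum(i_off, eps)

        # From a * ((i_on / i_off) - 1) = f / d^2, solve for d.
        source = a * np.maximum(ratio - 1.0, eps)
        return np.sqrt(f / source)

Pixels where the LED contributes almost nothing (distant surfaces, shadows) push the ratio toward 1 and the depth estimate toward infinity, so in practice you would mask or clamp those.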

The disadvantages here are primarily the assumption of uniform ambient illumination and the simplified material model. The advantages are the cost and utter simplicity. Because it relies on a non-coded point source for illumination, it works just as easily with infrared as with visible light. Furthermore, it actually relies on ambient illumination, while many other systems try to minimize it.

Thursday, February 11, 2010

Projection Mapping with a 3D Projector

Projection mapping is the art of working with non-planar projection surfaces.

Video: "APPARATI EFFIMERI Tetragram for Enlargment" from Apparati Effimeri on Vimeo.

I'd like to explore this idea with a 3D projector. Normally, 3D projection happens on a plane, which allows for a rectilinear 3D space. If you project onto anything but a plane, the 3D space will be distorted. But if you account for these distortions in advance (for example, with a 3D scan of the scene to be projected on) then you can augment the scene with an overlaid 3D form.

While installations like the video above rely on the observer's large focal distance and visual tricks (like drop shadows) to imply a depth offset, a 3D projector and shutter glasses can create genuine depth offsets.
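
As a rough sketch of the geometric correction (assuming the projector has been calibrated against the 3D scan, so its intrinsics K and pose R, t are known; all the numbers below are placeholders), virtual points are pushed through the projector's own pinhole model so they land on the physical surface where they should appear:

    import numpy as np

    # Hypothetical projector calibration, e.g. recovered from the 3D scan.
    K = np.array([[1400.0,    0.0, 640.0],
                  [   0.0, 1400.0, 400.0],
                  [   0.0,    0.0,   1.0]])   # projector intrinsics
    R = np.eye(3)                             # projector rotation
    t = np.array([0.0, 0.0, 2.0])             # projector translation (meters)

    def project_to_projector(points_3d):
        # Map virtual 3D points (N, 3, world space) to projector pixels (N, 2).
        cam = points_3d @ R.T + t        # world -> projector coordinates
        pix = cam @ K.T                  # apply pinhole intrinsics
        return pix[:, :2] / pix[:, 2:3]  # perspective divide

Creating a genuine depth offset would additionally require rendering the virtual form once per eye and warping each rendering through this mapping via the scanned surface, in sync with the shutter glasses.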