tag:blogger.com,1999:blog-338922752024-03-23T14:04:01.104-04:00erratic semaphoreundeveloped ideas awaiting attention, musings i'd like to rememberAnonymoushttp://www.blogger.com/profile/17427077588637523185noreply@blogger.comBlogger273125tag:blogger.com,1999:blog-33892275.post-87622394676159526392012-05-19T14:47:00.003-04:002012-05-19T14:48:57.995-04:00Rewriting Things<p>In the life of every nontrivial project, there is a point where you look back and ask yourself: "How did we get here? What a tangled web we have weaved! Can we ever recover? If we push ahead, are we digging ourselves into a deeper hole, or is there a light at the other end?"</p><p>I'd like to offer some personal advice.</p><p>If you look at your code base and say "There is so much that's right, even if it's really messy and has some serious bugs," then you should keep going, especially if other people are involved in the project. If you look at your code and say "Actually, no one is really using this yet, and if I switch things around before people have to deal with it, they'll thank me later," then you should head back. Start over. Get it right from the beginning while you have a chance, or it's really going to hurt later.</p><p>You'll know when it's ok to start over, because you won't be too worried about it. You won't deliberate very long; the choice will be pretty obvious.</p><p>But if there is some turmoil in your heart, you need to step back and get some perspective. If it's taking you weeks or months to figure out what to do, don't give up. The answer is that you need to work through it. You need to spend that time reflecting on exactly what is wrong, and what you can do about it. If you abandon all your work at that point, you're going to regret it later. Especially if you divide the community that is contributing to it: they'll end up duplicating their effort just to learn the same lessons.</p><p>Take OpenCV as an example. 
The C interface was a solid backbone for thousands of developers, but the community got to a point where it needed to work past the limitations of that language. C++ looked incredibly appealing, but it would have required some significant changes to how OpenCV works in order to support those features. What did they do? They didn't rewrite OpenCV from scratch, but they started building a new foundation by reflecting on what worked and what didn't work in the original interface. They took the whole community along with them and now OpenCV 2 is stronger and better than ever. One of the most important things is that a lot of the same people are involved. If there was a fork, or someone decided to start from scratch, then the lessons learned during the development of OpenCV 1 would never be completely applied to OpenCV 2. Some of the same paradigms might be there, but a complete, continuous understanding would be missing.</p><p>On the other hand, look at pd and Max. They could have combined efforts, and learned from each other in a more direct way. But now we have two slightly different environments that are each missing features of the other.</p><p>This might all sound unnecessarily verbose for such a simple message. But that's because this is actually a metaphor for something completely unrelated to software development.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-45832843772396843592012-05-19T14:21:00.000-04:002012-05-19T14:21:05.802-04:00Not the Best<p>I want to be the best, in everything that I do. I try to always push myself, which is great, but this competitive spirit isn't always a good thing. I've been learning recently that an unbalanced competitive spirit can have a bunch of terrible side effects. It can cause:</p><ul><li>Jealousy of the success of others.</li>
<li>A tendency to seek out ways to disadvantage others.</li>
<li>Condescending behavior in order to discourage others.</li>
<li>Unnecessary frustration when you're not at your best.</li>
</ul><p>All of these things can also lead to passive aggressive behavior. Passive aggression is a way of internalizing these effects, sinking deeper into them, while putting up a front of being above them. Passive aggressive behavior puts you on a pedestal, by acknowledging that you <em>could</em> sink to jealousy, or frustration, or anything else — but you're "better than that". If you can actually rise above these things, there is no need to explicitly acknowledge your progress.</p><p>A great example of passive aggressive behavior is providing positive sentiments after a negative statement. Telling someone you would appreciate it if they changed their behavior, and following it up with "thanks!" is one way of pushing the point that you're "better than that," and you're not "really frustrated," when in fact you're trying to mask your frustration. Better responses include: not saying anything and letting it go, or stating clearly, without any masking, how you feel. If it feels like you're exposed and your frustration is out in the open, you're probably doing it right.</p><p>An unbalanced desire to "be the best" can also cause an unwillingness to empathize. In order to have empathy for others, you need to join them where they are, and relate to their state of mind. This requires a loss of ego, and a loss of pride. Instead, the opposite behavior is chosen: the competitive person becomes sarcastic, or offers outlandish advice, or attempts to make the problem seem insignificant or trivial. I don't think I'm very sarcastic, but I regularly offer outlandish advice when people come to me with problems. Sometimes it's an effective way of dealing with your own problems, but it doesn't mean the same thing when offered as advice.</p><p>Sarcasm is maybe the most dangerous of these responses. When someone hears sarcasm, the first reaction is disbelief: the statement sounds ridiculous. 
Then, they're forced into a reversal of their understanding, where they accept the true intention of the speaker. Sarcasm forces the person hearing it into the mindset of the person speaking it. This is key: sarcasm is a shield against empathy. With sarcasm, you reject the validity of another person's situation and instead force them to empathize with you.</p><p>I'm still learning these lessons, but right now they're informed by a healthy dose of criticism from a variety of open source software developers, and exactly one failed romance.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com2tag:blogger.com,1999:blog-33892275.post-18272830808119674252012-01-17T21:08:00.004-05:002012-01-17T22:21:14.064-05:00Make Fewer Decisions, Write Less Code<p>I have two suggestions for aspiring C++ programmers:</p>
<ul>
<li>Make fewer decisions.</li>
<li>Write less code.</li>
</ul>
<p>These might seem counterintuitive at first, so let me explain.</p>
<p>Let's say you have a setter method for an object.</p>
<pre>
class MyObject {
public:
    void setSize(int);
private:
    int size;
};
</pre>
<p>When you implement that method, how do you name the argument? Here is one way:</p>
<pre>
void MyObject::setSize(int size) {
    this->size = size;
}
</pre>
<p>You may have also seen "int _size" or "int inSize", or maybe MyObject has a variable "mSize" instead, or any number of other combinations. I personally do it the above way because I have a thorough understanding of variable scope, and this happens to minimize the number of unique symbols.</p>
<p>The important thing isn't which approach you pick, it's that you consistently use that solution. Each time you see a setter method, you should be able to write out those two lines without thinking twice. In other words, when you come across a mundane problem you should always have the same solution. You shouldn't even have to make a decision: every time there is more than one way to do something, pick the way that works most of the time, and always use it.</p>
<p>As another example, perhaps you have a collection of things you want to loop through. Here is the first thing I write:</p>
<pre>for(int i = 0; i < n; i++)</pre>
<p>Code formatting is a kind of recurring decision. What is the smallest set of rules you can formulate for the way you format code? These are two for the above line:</p>
<ul>
<li>Binary operators are surrounded by spaces.</li>
<li>Semicolons are followed by spaces.</li>
</ul>
<p>By minimizing the number of rules you have to follow, and having them cover as many situations as possible, you can reduce the number of decisions you have to make. Maybe the above rules can be further simplified by removing the word "binary"?</p>
<p>Similarly important are your design pattern choices, all the way down to variable and enumeration idioms:</p>
<ul>
<li>Always use "i" as your index variable.</li>
<li>Always use "n" as your terminating condition.</li>
<li><a href="http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html">Always use < when enumerating a range.</a></li>
</ul>
<p>After writing the above line, you can define "n" on the line above. You'll probably need it again later. And sometimes you can even use something like boost to write the whole thing with a single idiom.</p>
<p>If you're ever asking yourself how your code should be formatted or whether to use < or <= for a loop condition, you're probably wasting time that could be better spent on high level decisions. When I'm using Xcode I use its terrible auto-indent feature to make sure my code is consistent, even though I aesthetically disagree with some of its decisions — it's the fastest way of normalizing my code.</p>
<p>Besides making fewer decisions (and, in the process, writing more consistent code), it's important to write less. Writing less means finding creative ways to use fewer symbols, fewer objects, fewer control statements, fewer loops. Writing less should never make things more complicated. Consider the two functions below:</p>
<pre>
bool both(bool a, bool b) {
    if(a) {
        if(b) {
            return true;
        }
        if(!b) {
            return false;
        }
    }
}

bool both(bool a, bool b) {
    return a && b;
}
</pre>
<p>It's an extreme example, but the point is: when you write less code, there are <a href="http://www.neverworkintheory.org/?p=58">fewer opportunities to make mistakes</a> (there is actually a mistake in the first one, can you see it?). In the situation above, you can simplify your code with truth table analysis. To sum the numbers from 1 to n you can (arguably) simplify your code with recursion, or the more efficient analytic version:</p>
<pre>
int sum(int n) {
    int sum = 0;
    for(int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}

int sum(int n) {
    if(n > 0) {
        return n + sum(n - 1);
    } else {
        return 0;
    }
}

int sum(int n) {
    return (n * (n + 1)) / 2;
}
</pre>
<p>For each of those techniques, how many ways could they be wrong? Which one is the simplest?</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com6tag:blogger.com,1999:blog-33892275.post-83916011941904821672011-03-17T11:10:00.006-04:002011-03-17T11:49:00.865-04:00Median Filtering in OpenCV<p>I was just browsing through the <a href="https://code.ros.org/trac/opencv/browser/trunk/opencv/">OpenCV source</a> to learn more about how it implements smoothing. I noticed a few interesting things that say something more general about how OpenCV2 is structured.</p>
<p>In OpenCV1 there is cvSmooth(), which lets you pass a parameter like CV_GAUSSIAN or CV_MEDIAN to specify what kind of smoothing you want. In OpenCV2, this function coexists with CV2-style functions like cv::medianBlur() and cv::GaussianBlur() (note that Gaussian is capitalized because it is a proper name). If you scroll to the very bottom of <a href="https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/imgproc/src/smooth.cpp">smooth.cpp</a>, you'll find cvSmooth(), where it becomes evident that the newer cv::medianBlur() and cv::GaussianBlur() are the implementations, while cvSmooth() is a wrapper that simply calls them.</p>
<p>Reading the <a href="http://opencv.willowgarage.com/documentation/cpp/imgproc_image_filtering.html?highlight=smooth#cv-medianblur">documentation</a>, I was surprised to find that many of the blurring functions support in-place processing. Due to <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Median_filter">the way median filtering works</a>, in-place operation is a non-trivial property. Digging into cv::medianBlur(), you'll find:</p>
<pre>
void medianBlur(const Mat& src0, Mat& dst, int ksize) {
    ...
    dst.create( src0.size(), src0.type() );
    ...
    cv::copyMakeBorder( src0, src, 0, 0,
                        ksize/2, ksize/2, BORDER_REPLICATE );
    ...
}
</pre>
<p>First, it calls Mat::create() on the dst Mat (in CV1, this would be an IplImage*). Mat::create() ensures that dst is the right size and type. If you pass it an unallocated Mat then this step allocates it for you, which makes it easy to use but less efficient. Then it does a copyMakeBorder(), which makes it safe to run the median filter on the edges of the image. So even if you give medianBlur() an allocated dst Mat, it's still going to be allocating a big working image for doing the blur! Finally, there's this mess of an if statement:</p>
<pre>
double img_size_mp = (double)(size.width*size.height)/(1 << 20);
if( ksize <=
    3 + (img_size_mp < 1 ? 12 : img_size_mp < 4 ? 6 : 2)*
    (MEDIAN_HAVE_SIMD && checkHardwareSupport(CV_CPU_SSE2) ? 1 : 3)) {
    medianBlur_8u_Om( src, dst, ksize );
} else {
    medianBlur_8u_O1( src, dst, ksize );
}
</pre>
<p>This is actually a really pleasant surprise. There are two things that might happen here: medianBlur_8u_Om() or medianBlur_8u_O1(). The _Om() function's per-pixel cost grows with the size of the median kernel (called O(n) time, or <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Linear_time#Linear_time">linear time</a>), while the _O1() function takes a constant amount of time per pixel, regardless of how big your kernel is (O(1) time, or constant time). The O(1) implementation isn't trivial, and was only described in <a href="http://nomis80.org/ctmf.html">a 2007 paper</a>. If the O(1) function is available, why not just always use that? The answer is in the if statement above: sometimes when your kernel size is smaller (relative to your total image size) it's actually faster to use the O(n) function. OpenCV has gone to the trouble of figuring out where that cutoff is, and this if statement encodes that cutoff — automatically switching between the implementations for us.</p>
<p>In conclusion, if you need the most blazingly-fast median filtering code ever, first you need to figure out which side of the if statement you're on (O(n) or O(1)). Then you should prepare a reusable buffer for yourself using cv::copyMakeBorder(), and call medianBlur_8u_O1() or medianBlur_8u_Om() directly.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com2tag:blogger.com,1999:blog-33892275.post-6710889728937753332011-03-14T22:04:00.002-04:002011-03-14T22:11:39.223-04:00Social Media Predictors<p>Who is winning on the internet right now?</p>
<p>Let's say you watch a video on YouTube. You're the 500th viewer, and later that day it explodes to 100k views. This gives you a score of 500/100000 = .005. The next day you watch a video, you're the 500th viewer, but the video never goes beyond 1k views. So your score that day is 500/1000 = .5. Your average score is (.005 + .5) / 2 = ~.25.</p>
<p>Let's say the person with the lowest score is winning. Unfortunately, the only institution that's really in a position to calculate this score is Google.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com1tag:blogger.com,1999:blog-33892275.post-68017524257415646862011-02-10T22:28:00.006-05:002011-02-10T22:49:11.061-05:00libfreenect, three months in<i>
it's been <a href="https://github.com/OpenKinect/libfreenect/commit/4094151eb0a8eb71f24df9d204e04b89b1724ea1">three months</a>,<br/>
we're already telling students,<br/>
"you need to threshold the depth image"<br/>
and <a href="http://nicolas.burrus.name/index.php/Research/KinectRgbDemoV4?from=Research.KinectRgbDemoV3">waving around our kinect</a> for a more complete perspective<br/>
<br/>
now go get your kinect<br/>
and put it in the same spot you put it<br/>
when you first brought it home.<br/>
<br/>
do you remember the feeling<br/>
of a new eye in your house?<br/>
a welcome intruder?<br/>
<br/>
watch it sitting there and try to remember<br/>
the feeling that things are somehow "more 3d"<br/>
now that the computer can see it too.<br/>
<br/>
when you first brought it home<br/>
was it pointing away from you?<br/>
proving itself to you,<br/>
identifying the scale of a scene<br/>
larger than itself?<br/>
<br/>
it's been three months,<br/>
which direction are you pointing it now?<br/>
</i>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-66146541896782678142010-11-10T17:35:00.004-05:002010-11-10T18:22:12.941-05:00libfreenect<span style="font-style:italic;">
<p>is it just me, or is <a href="http://git.marcansoft.com/?p=libfreenect.git">this</a> kind of exciting?</p>
<p>not exciting because it's a new "gadget",<br/>
but because it's different kind of tool.</p>
<p>without the <a href="https://secure.wikimedia.org/wikipedia/en/wiki/PlayStation_Eye#PC_drivers">ps3eye</a>,<br/>
the <a href="http://www.eyewriter.org/">eyewriter</a> wouldn't exist in its current form.<br/>
what should we make with kinect?<br/>
is there anything we couldn't do before?</p>
<p>how long until we tell students<br/>
"to detect someone,<br/>
first you need to <a href="http://www.youtube.com/watch?v=EPUkueinGK4">threshold the depth image</a>" or,<br/>
"for a full 3d map of a space,<br/>
you'll need about 4 kinects in the center of room"</p>
<p>how long until the new posture is "hands forward" instead of "<a href="http://www.flong.com/texts/essays/essay_pose/">hands up</a>"?<br/>
"superman" instead of "surrender"?</p>
<p>how long until we just wave a kinect around,<br/>
get a complete 3d map of a space<br/>
feed it into our projection mapping toolkit<br/>
and start making interesting work<br/>
instead of <a href="http://vvvv.org/documentation/how-to-project-on-3d-geometry">worrying about the mapping</a>?</p>
<p>and finally, what kind of work is inevitable with 3d sensing?<br/>
how long until there is a clear 3d interaction aesthetic?<br/>
and we say "i've seen this before, i bet they did it with a kinect" ;)</p></span>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com3tag:blogger.com,1999:blog-33892275.post-51420103254968176552010-08-29T17:10:00.003-04:002010-08-29T17:17:49.970-04:00In Response to "Glitching vs..."<p>A couple of days ago <a href="http://www.evanmeaney.com/">Evan Meaney</a> wrote a blog post titled <a href="http://gli.tc/h/blog/?p=55">"Glitching vs. Processing vs. Moshing vs. Signal Interference"</a>. I really appreciate Evan's glitch work, from "Ceibas Cycle" to his writings in "on glitching", where he describes the inevitable collaboration with information theory present in all digital work. But this most recent post just doesn't make any sense.</p>
<p>He states that the post is inspired by the diverse work submitted to <a href="http://gli.tc/h">GLI.TC/H</a>. I can understand wanting to make some loose categories to help group submissions, but the language in the post sounds like he's building a framework. He asks readers to "use this space as a means to explore and delineate, to observe and report, to enumerate...". But it's hard to do that with just four independent categories (with names like "glitching" and "processing") with no unifying structure other than the context of visual arts.</p>
<p>I think I understand where Evan is coming from, so I'd like to try again.</p>
<p>Let's start with noise.</p>
<p>Noise is what happens when we don't understand something. Noise can be manifest in any media: confusion about the clothing of a particular culture, inability to separate a visual foreground from background, the misunderstanding of a foreign rhythm or melody as arrhythmic or atonal. In our failure to contextualize, we create noise.</p>
<p>Glitch means finding noise when we expect to understand. Glitch is an experience, driven by expectation, emerging from consciousness rather than computation. Just as noise would not exist without us to misunderstand it, glitch would not exist without us to misexpect it.</p>
<p>Glitch art is about dwelling in and exploring these experiences, which sometimes means attempting to reproduce them. These reproductions may be executed in a variety of ways. Sometimes it will involve imitating the processes that regularly lead to glitches. This includes direct memory corruption at the byte level, redirection of streams, removal of key frames, analog interference, and circuit bending. Other times it eschews these processes, and opts to evoke the sensation by other means: through the "Bad TV" effect, or in the choice of palette, shapes, motion, melody, etc.</p>
<p>Most of the time, glitch art falls somewhere in between, drawing on the processes that give rise to glitches, but ultimately focused on evoking the experience by whatever means necessary.</p>
<p>Evan suggests that "a true glitch is not reproducible". I believe "true" glitch is unrelated to reproducibility. "True" glitch is tied solely to expectation. The reason it seems like something "stable" is no longer a glitch is simply because it's packaged as such (i.e., as a "glitch"), removing the possibility of expecting anything else.</p>
<p>That said, acknowledging that glitch is an experience gives us freedom as artists to share that experience regardless of the procedural purity of our practice. Saying that we're just "imperfect" is a cop-out based on a misguided understanding of what glitch artists are aiming for.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com1tag:blogger.com,1999:blog-33892275.post-87492134982633917282010-05-10T16:38:00.005-04:002010-05-10T17:55:17.104-04:003D Scanning as Dense Microphone Array<p>Sound is the displacement of matter over time.</p>
<p>A microphone detects sound at a single point, either via direct physical coupling, or using optical methods (as with <a href="http://en.wikipedia.org/wiki/Laser_microphone">Laser microphones</a>).</p>
<p>3D scanning can also detect displacement of reflective matter over time. Using a 3D scanning setup with a very large angle between the camera and projector, very minor displacement variations can be detected. Using a high framerate camera, this displacement can be measured at audio frequencies. Every pixel then corresponds to a virtual laser microphone: instead of the usual microphone at a point, a fringe analysis microphone is comprised of N points as determined by the camera resolution.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com1tag:blogger.com,1999:blog-33892275.post-17605449057052636622010-05-08T04:38:00.003-04:002010-05-08T04:59:46.339-04:00Gaze-informed Perceptual Compression<p>A video chat program that tracks your eye movement and sends gaze information to the other user. The other user's computer compresses the entire image heavily, with the exception of what you're looking at. To you, it just looks like the entire image is clear.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com1tag:blogger.com,1999:blog-33892275.post-46800680811030290442010-04-19T01:47:00.005-04:002010-04-19T01:54:50.120-04:00Piece for HTTP<ol>
<li>Post a link to a website online, but remove one of the Ts from "http" so it becomes "htp".</li>
<li>When this link is clicked, the participant will be forced by their browser to pause for a moment and reflect on the syntax of a URL, adding the missing "t" by hand.</li>
</ol>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-24575499975228785832010-04-06T05:06:00.002-04:002010-04-06T05:09:39.068-04:00Empty Art for the Web<p>A web-enabled computer in a gallery, allowing visitors to browse the internet. A ready-made for the information age.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-76206330649239137592010-03-24T13:04:00.002-04:002010-03-24T13:21:41.123-04:00Eigenanalysis for Lossy Compression<p>Eigenanalysis is a method for reducing a set of data to the principal dimensions along which that data varies. In the context of imaging data, it has been applied very successfully to <a href="http://en.wikipedia.org/wiki/Eigenfaces">Eigenfaces</a>:</p>
<img src="http://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Eigenfaces.png/220px-Eigenfaces.png"/>
<p>Where a set of faces is broken down into a smaller set of face "prototypes" that can be recombined in varying proportions to recreate the original data set with limited accuracy.</p>
<p>In the context of music, I can imagine that the spectral characteristics of songs have some self-similarity: portions repeat, chords are repeated in different voices and different octaves, rhythms repeat, etc. I can imagine a lossy compression algorithm that takes the frequency domain representation of a song, does Eigenanalysis on these vectors, and stores the song simply as the collection of N eigenvectors and the reduced representation of each frequency-domain chunk.</p>
<p>Quantization methods may be employed for further reducing bit usage due to similarity between adjacent chunks. Or different portions of the spectrum can be analyzed separately, which allows for better representation of lower frequencies and less information dedicated to higher frequencies. This unfortunately does not account for the obvious relationship between the lower and higher frequencies.</p>
<p>A more advanced implementation may involve doing eigenanalysis on multiple chunks simultaneously in a moving window, or at different scales, which would help with rhythmic repetition.</p>
<p>The octave or overtone relationship is a little more complicated, and would require something like a constant-Q transform to get a logarithmic frequency domain.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-2441343644825333192010-03-16T15:13:00.002-04:002010-03-16T15:27:04.233-04:00Google Earth Live<p>"Google Earth Live" is a hypothetical service offered by Google in the not-too-distant future. It is predicated upon Google releasing a constellation of satellites into orbit that regularly poll large sections of the Earth at high resolution, and offering this data for free via the Google Earth interface.</p>
<p>When this is available, how would you use it (practically)? And what sort of art would you make with it?</p>
<p>The obvious: make timelapse videos of yourself as you go throughout your day, from the perspective of the satellite.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-2151373597287640072010-03-13T19:58:00.004-05:002010-03-13T20:06:09.684-05:00Non-Metamer Monochromes<a href="http://en.wikipedia.org/wiki/File:Marevich,_Suprematist_Composition-_White_on_White_1917.jpg"><img src="http://upload.wikimedia.org/wikipedia/en/thumb/a/ad/Marevich%2C_Suprematist_Composition-_White_on_White_1917.jpg/300px-Marevich%2C_Suprematist_Composition-_White_on_White_1917.jpg"/></a>
<p><a href="http://en.wikipedia.org/wiki/Monochrome_painting">Monochromatic paintings</a> have a tradition going back to the early 1900s, exemplified by Malevich and Rodchenko, and later by Rauschenberg.</p>
<p>I'd like to produce a series of monochromes that each use a single non-<a href="http://en.wikipedia.org/wiki/Metamerism_%28color%29">metamer</a>. By non-metamer, I mean a color that has the same frequency spectrum as the color being replicated. For example, the green of a leaf, the blue of the sky, or the red of a sunset. Instead of just resembling these colors, various paints would be analyzed for their spectral response and mixed in the correct proportions so they precisely recreate these colors.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com1tag:blogger.com,1999:blog-33892275.post-3603622170335265632010-03-12T21:07:00.003-05:002010-03-12T21:20:48.921-05:00Alternative Prime Spirals<p>The <a href="http://en.wikipedia.org/wiki/Ulam_spiral">Ulam spiral</a> is based on the idea of arranging integers in a rectilinear 2D spiral.</p>
<img src="http://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Ulam_spiral_howto_primes_only.svg/200px-Ulam_spiral_howto_primes_only.svg.png"/>
<p>And noticing that certain diagonal patterns fall out, which aren't fully explained by the simple equations that describe some of the "holes".</p>
<img src="http://upload.wikimedia.org/wikipedia/commons/thumb/6/69/Ulam_1.png/250px-Ulam_1.png"/>
<p>What other orderings might reveal interesting patterns? How about a 2D <a href="http://en.wikipedia.org/wiki/Hilbert_curve">Hilbert curve</a>?</p>
<img src="http://upload.wikimedia.org/wikipedia/commons/4/46/Hilbert_curve.gif"/>
<p>Or maybe a 3D one? How might you continue a spiral in a cubic 3D space? Would you get diagonal planes describing the primes? How about a higher dimensional space — maybe higher dimensional planes?</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-1657246030860208982010-03-12T13:43:00.005-05:002010-03-12T20:21:55.476-05:00Jesus Glitch<p>The holy is often found in unexpected places. Jesus in <a href="http://www.metro.co.uk/weird/808516-face-of-jesus-appears-in-naan-bread">naan</a>, Mary in a <a href="http://www.cbsnews.com/stories/2005/04/20/national/main689630.shtml">Chicago underpass</a>. Dan Paluska has immortalized this concept with his <a href="http://plainfront.com/theholytoaster/">Holy Toaster</a>.</p>
<a href="http://plainfront.com/theholytoaster/"><img src="http://farm1.static.flickr.com/250/456124655_30491edce5_m.jpg"/></a>
<p>Why don't we ever see Jesus in corrupted image files?</p>
<a href="http://www.flickr.com/photos/r00s/4360443285/"><img src="http://farm5.static.flickr.com/4042/4360443285_eaab3d2f1d_m.jpg"/></a>
<p>Image compression algorithms are generally rated on their ability to convincingly ignore non-perceptually-relevant features. I propose a new metric for these algorithms: how likely they are, when corrupted, to produce an image of a holy figure.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com2tag:blogger.com,1999:blog-33892275.post-66003075170327514052010-03-12T13:43:00.001-05:002010-03-12T13:43:26.469-05:00Six Pieces for Life<p>Live like you only have until the next:</p>
<ol>
<li>day</li>
<li>week</li>
<li>month</li>
<li>year</li>
<li>decade</li>
<li>century</li>
</ol>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-33440574381772173182010-02-24T09:59:00.000-05:002010-06-13T21:57:45.180-04:00Lossy Vector Compression<p>An evenly sampled vector outline is essentially a 2D signal. This isn't the 2D of a raster image, where you have a 2D space with a 3D (RGB) value at each point. It's a 1D space with a 2D (XY) value at each point. You can do a frequency domain decomposition on this signal, which is the foundation for most image compression algorithms. What would it look like to do the usual compression tricks? Quantization of the amplitudes, high frequency removal, etc.?</p>
<p>The interesting thing about this transformation is that line drawings as frequency-decomposable entities already have a tradition established in <a href="http://en.wikipedia.org/wiki/Harmonograph">Harmonographs</a>. To recreate any drawing with a harmonograph would simply require N pendulums on each axis, each with a length inversely proportional to the square of the frequency represented (L = g/(2πf)², given the <a href="http://en.wikipedia.org/wiki/Pendulum_%28mathematics%29">mathematical definition of a pendulum</a>). You would give all the pendulums equal mass, place them at an angle corresponding to the amplitude, and then release them at the right time. This could recreate any line drawing.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-68818118624510920322010-02-21T16:57:00.002-05:002010-02-21T17:03:05.092-05:00The Real and the Virtual<p>I'd like to create an installation using a standard multitouch interface. The interface would be approximately 1 m wide and fairly high resolution. It would be mounted in a table-top configuration. A small pool of water, of similar construction and equivalent size, would be sitting directly next to the interface. The interface would be running a water simulation that resembles the real water as much as possible.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com4tag:blogger.com,1999:blog-33892275.post-23384416744031803852010-02-21T01:50:00.006-05:002010-02-21T02:22:09.283-05:003D Video Scanner for Cheap<p>Here's a way you might try making a 3D video scanner for the cost of a webcam:</p>
<ul>
<li>Webcam with VSYNC broken out</li>
<li>Bright LED or LED array</li>
<li>Ambient illumination</li>
</ul>
<p>Mount the LED at approximately the same location as the camera lens. Turn the LED on for alternating VSYNC pulses. The 3D decoding process is as follows: the light intensity at every point can be modeled using the equation i = r * (a + s), where:</p>
<ul>
<li>i is the captured intensity at that pixel</li>
<li>r is the reflectivity at that point</li>
<li>a is the ambient illumination at that point</li>
<li>s is the illumination due to the LED source at that point</li>
</ul>
<p>Sampling with the LED on and off yields two equations:</p>
<ol>
<li>i_on = r * (a + s)</li>
<li>i_off = r * (a + 0)</li>
</ol>
<p>And s falls off with distance according to an inverse-square law:</p>
<ul>
<li>s(d) = f / d^2</li>
</ul>
<p>Where f is a scaling factor that relates s to a. Solving for d yields:</p>
<ul>
<li>i_off = r * a</li>
<li>i_off / a = r</li>
<li>i_on = (i_off / a) * (a + (f / d^2))</li>
<li>((a * i_on) / i_off) - a = f / d^2</li>
<li>a * ((i_on / i_off) - 1) = f / d^2</li>
<li>d = sqrt(f / (a * ((i_on / i_off) - 1)))</li>
</ul>
<p>The values for a and f can be approximated by hand, or calibrated using a reference plane. The ambient term a must be truly uniform, but if the LED sits at approximately the same location as the lens, then f can be calibrated automatically to account for its non-point-source qualities.</p>
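Under those assumptions, the decoding step is a few lines of per-pixel arithmetic. A minimal sketch (assuming NumPy; the constants a and f, and the clamping epsilons, are made up for illustration):

```python
import numpy as np

# Hypothetical calibration constants (in practice, from a reference plane).
a = 0.5   # uniform ambient illumination
f = 2.0   # LED intensity scaling factor relating s to a

def depth_from_pair(i_on, i_off):
    """Recover per-pixel distance d from frames captured with the LED on
    and off, via d = sqrt(f / (a * (i_on / i_off - 1)))."""
    ratio = i_on / np.maximum(i_off, 1e-6)      # avoid division by zero
    denom = a * np.maximum(ratio - 1.0, 1e-6)   # clamp: LED must add light
    return np.sqrt(f / denom)

# Synthetic check: simulate one pixel with reflectivity r at distance d.
r, d = 0.8, 2.0
s = f / d**2          # source illumination at that distance
i_off = r * a         # frame with LED off
i_on = r * (a + s)    # frame with LED on
print(depth_from_pair(i_on, i_off))  # recovers d = 2.0
```

Note that the reflectivity r cancels out entirely, which is the whole appeal: only the ratio of the two frames matters.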
<p>The disadvantages here are primarily the assumption about ambient illumination, and the simplified material model. The advantages would be the cost and utter simplicity. The fact that it relies on a non-coded point source for illumination means you can work with infrared just as easily as visible light. Furthermore, it actually relies on ambient illumination while many other systems try to minimize it.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-45613512747903184072010-02-16T19:25:00.000-05:002010-06-13T21:57:45.201-04:00Google Insights<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=internet+explorer%7Cfirefox%7Cchrome&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=afghanistan%7Ciraq%7Ciran%7Ckorea&up__location=empty&up__category=0&up__time_range=12-m&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=three%7Cfour%7Cfive%7Csix%7Cseven&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=swine%7Cflu&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=yours%7Cmine%7C&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=kazaa%7Ctorrent&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=grandma%7Cgrandpa&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=photo%7Cmusic%7Cvideo&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=%7Csee%7Ctouch%7C%7C&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=legs%7Carms%7Cfeet%7Ceyes%7Chands&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=summer%7Cwinter%7C&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=hot+cocoa%7Cice+tea&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=coat%7Cbikini&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=skiing%7Cswimming&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=san+francisco%7Cnyc&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=north%7Csouth%7Cwest%7Ceast&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=breakfast%7Clunch%7Cdinner&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>
<script type="text/javascript" src="http://www.gmodules.com/ig/ifr?url=http%3A%2F%2Fwww.google.com%2Fig%2Fmodules%2Fgoogle_insightsforsearch_interestovertime_searchterms.xml&up__property=empty&up__search_terms=facebook%7Cporn&up__location=empty&up__category=0&up__time_range=empty&up__compare_to_category=false&synd=ig&w=320&h=350&lang=en-US&title=Google+Insights+for+Search&border=%23ffffff%7C3px%2C1px+solid+%23999999&output=js"></script>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-12092873251451943902010-02-11T17:25:00.004-05:002010-02-11T17:44:16.844-05:00Projection Mapping with a 3D Projector<p>Projection mapping is the art of working with non-planar projection surfaces.</p>
<object width="400" height="300"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=5374101&server=vimeo.com&show_title=0&show_byline=0&show_portrait=0&color=ffffff&fullscreen=1" /><embed src="http://vimeo.com/moogaloop.swf?clip_id=5374101&server=vimeo.com&show_title=0&show_byline=0&show_portrait=0&color=ffffff&fullscreen=1" type="application/x-shockwave-flash" allowfullscreen="true" allowscriptaccess="always" width="400" height="300"></embed></object><p><a href="http://vimeo.com/5374101">APPARATI EFFIMERI Tetragram for Enlargment</a> from <a href="http://vimeo.com/user1284538">Apparati Effimeri</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<p>I'd like to explore this idea with a 3D projector. Normally, 3D projection happens on a plane, which allows for a rectilinear 3D space. If you project onto anything but a plane, the 3D space will be distorted. But if you account for these distortions in advance (for example, with a 3D scan of the scene to be projected on) then you can augment the scene with an overlaid 3D form.</p>
<p>While installations like the video above rely on the observer's large focal distance and visual tricks (like drop shadows) for implying a depth offset, with a 3D projector and shutter glasses you can create genuine depth offsets.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com3tag:blogger.com,1999:blog-33892275.post-50849763037324055042010-01-28T23:49:00.002-05:002010-01-28T23:55:24.953-05:00Precision CD Glitching<p>"Wounded" CDs have been prepared by artists like <a href="http://en.wikipedia.org/wiki/Yasunao_Tone">Yasunao Tone</a> and <a href="http://en.wikipedia.org/wiki/Oval_%28band%29">Oval</a>, encompassing the experimental and pop domains of music, respectively. In both cases, the music has to be re-recorded from the glitched CD to be heard (and in Oval's case, it is subject to further production). Why not use a laser cutter to make precision glitched CDs, allowing them to be distributed directly?</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0tag:blogger.com,1999:blog-33892275.post-10990515099832334082010-01-21T01:27:00.004-05:002010-01-21T03:15:03.754-05:00Chocolate WTC<a href="http://radicalart.info/destruction/ArtificialDisasters/WTC/index.html"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 250px; height: 307px;" src="http://1.bp.blogspot.com/_cW3PDUfh7bo/S1f0fgdx4dI/AAAAAAAAAB4/JtwWAjMGbFU/s320/CBoymSept11thMemSet-S.jpg" border="0" alt="September 11th, 2001 memorial" id="BLOGGER_PHOTO_ID_5429076697946382802"/></a>
<p>This sculpture is a September 11th, 2001 memorial, designed by C. Boym. As best I can tell, it was cast in nickel. The color and texture give me a wonderful, terrible idea: why not use chocolate? You know, the same way we have chocolate bunnies? The target market could be Al-Qaeda. Or, perhaps it would be cathartic for those who are still recovering to take a bite out of the past.</p>Kylehttp://www.blogger.com/profile/15336246897173047011noreply@blogger.com0