libfreenect
is it just me, or is this kind of exciting? not exciting because it's a new "gadget", but because it's a different kind of tool. without the ps3eye, the eyewriter wouldn't exist in its current form.
what should we make with kinect? is there anything we couldn't do before? how long until we tell students "to detect someone, first you need to threshold the depth image" or, "for a full 3d map of a space, you'll need about 4 kinects in the center of the room"? how long until the new posture is "hands forward" instead of "hands up"? "superman" instead of "surrender"? how long until we just wave a kinect around, get a complete 3d map of a space, feed it into our projection mapping toolkit, and start making interesting work instead of worrying about the mapping?
and finally, what kind of work is inevitable with 3d sensing? how long until there is a clear 3d interaction aesthetic? and we say "i've seen this before, i bet they did it with a kinect" ;)
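the "threshold the depth image" step really is about that simple. here's a minimal sketch, assuming numpy and a synthetic 16-bit depth frame in millimeters (no kinect or libfreenect needed; a real driver would hand you a similarly shaped array, so the blob position and cutoff below are made up for illustration):

```python
import numpy as np

# fake 480x640 depth frame: background wall at ~3000 mm
depth = np.full((480, 640), 3000, dtype=np.uint16)

# a "person" standing about 1.2 m from the sensor
depth[100:400, 250:400] = 1200

# to detect someone, keep only pixels closer than a cutoff
near_threshold = 2000  # mm; anything nearer counts as foreground
mask = depth < near_threshold

# bounding box of the foreground pixels: a crude detection
ys, xs = np.nonzero(mask)
print(mask.sum(), xs.min(), xs.max(), ys.min(), ys.max())
```

with a real depth feed you'd pick the cutoff per-scene (or track the nearest blob), but the whole "detection" is one comparison on the depth array, which is exactly what makes this feel different from rgb-only vision.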
3 comments:
"i've seen this before, i bet they did it a kinect"
Ha.
I agree, this feels monumental. It has been really exciting watching people work to develop these drivers in such a short period; I can't wait to see how it progresses and to eventually leverage this for gestural interfaces, sensing spaces, and autonomous robotics.
I think you're right about the superman gesture.
There are many computer vision projects born before kinect:
I bet this was not done with kinect:
http://nuve.jmartinho.net/?s=video