I love Vimeo. Along with Twitter, I find it to be one of my most valuable sources of creative inspiration. The constructive community is also far, far away from the usual FAIL/WTF/LOL/First/Fake types you regularly encounter on YouTube.
On my travels I’m constantly adding interesting videos to my favourites (or ‘likes’ as Vimeo calls them). There are so many incredibly inspiring artworks in there that I’ve decided to create a new channel – Tech Art – to showcase the best examples of creative code. It is a collection of pieces that fall into the categories of installation art, projection mapping, music/data/math visualisation, augmented/mixed reality, multi-touch, physical computing and anything else that utilises technology in a creative manner.
If this type of work floats your boat, then have a look here and don’t forget to subscribe if you have a Vimeo account.
Published June 9, 2009
I’ve been researching multitouch for the last month or so over at the NUI Group website. There is a wealth of information on how to build multitouch tables using several different approaches, all of which are fully documented on the forum and wiki. In addition, there are open source frameworks, libraries and demo applications. Using all this information and free software, a fully functioning multitouch table can be assembled for less than a grand.
Before committing to a full table, I decided to build a very basic version by following this superb tutorial:
Using only a webcam, a cardboard box, some tracing paper and a picture frame, I cobbled something together in about 10 minutes.
In terms of software, I decided to use tbeta, a.k.a. CCV (Community Core Vision). This picks up and analyses the shadows on the diffused surface using blob detection. The results are broadcast using the TUIO protocol and can be interpreted by platforms such as Flash, Processing, openFrameworks, C++, C#, Java etc. I was surprised how simple it was to set up. Within 30 minutes I was dragging/resizing pictures, finger painting and all sorts using the demos.
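To give a feel for what those TUIO broadcasts look like on the receiving end, here is a rough Python sketch of decoding a `/tuio/2Dcur` cursor message (TUIO is OSC-based; a real client would receive these as OSC packets over UDP, port 3333 being the usual default). The function names are illustrative, not part of CCV or any TUIO library.

```python
# Sketch of interpreting the argument list of a TUIO 1.1 "/tuio/2Dcur"
# OSC message, as sent by trackers like CCV. Illustrative only; a real
# client would use an OSC library to receive and decode the packets.

def parse_2dcur(args):
    """Interpret a /tuio/2Dcur message's arguments.

    'set' carries one cursor: session id, normalised x/y position
    (0..1), x/y velocity and motion acceleration.
    'alive' lists the session ids currently touching the surface.
    'fseq' carries the frame sequence number.
    """
    kind = args[0]
    if kind == "set":
        sid, x, y, vx, vy, accel = args[1:7]
        return {"type": "set", "id": sid, "x": x, "y": y,
                "vx": vx, "vy": vy, "accel": accel}
    if kind == "alive":
        return {"type": "alive", "ids": list(args[1:])}
    if kind == "fseq":
        return {"type": "fseq", "frame": args[1]}
    return {"type": "unknown", "args": list(args)}

# Example: one stationary finger at the centre of the surface.
cursor = parse_2dcur(("set", 42, 0.5, 0.5, 0.0, 0.0, 0.0))
```

The normalised 0..1 coordinates are what make the protocol platform-agnostic: each client scales them to its own screen or stage size.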
So, time to start creating a few toys then.
*** UPDATE ***
Just to be absolutely clear, I did not create this tutorial. It was made by a very talented guy called Seth Sandler. I wish I had. He’s had over 1.3 million hits on YouTube.
I’ve been playing around with the Arduino electronics prototyping platform recently and this is my first experiment. It’s a prototype for a future artwork that essentially acts as a primitive touchscreen device. I’ve used 2 PING))) Ultrasonic Range Sensors positioned at the top of my monitor. These measure the distance of any object placed in front of them. The Arduino board collects this data and passes it on to Flash via SerProxy.
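The arithmetic behind each sensor reading is simple: the PING))) reports the round-trip echo time of an ultrasonic pulse, and distance follows from the speed of sound, roughly 29 microseconds per centimetre (the figure used in Parallax’s own examples). A quick Python sketch of the conversion, for illustration only; my actual sketch was adapted from David Cuartielles’ Arduino code:

```python
# Convert a PING))) round-trip echo time to a distance. The pulse
# travels out and back, so halve the duration, then divide by the
# ~29 microseconds sound takes to cover one centimetre.

def echo_to_cm(duration_us):
    """PING))) echo duration (microseconds) -> distance in centimetres."""
    return duration_us / 29.0 / 2.0

# A hand held ~30 cm from the sensor echoes back in roughly 1740 us:
print(round(echo_to_cm(1740)))  # 30
```

On the Arduino side the duration comes straight from timing the sensor’s echo pulse; the board then just streams the resulting distances down the serial port for Flash to pick up.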
Click here to see the video on Vimeo
Click here to see the video on YouTube
For this project I used a modified version of David Cuartielles’ Arduino sketch and SerProxy as a Serial-to-Network Proxy Server. Here is my serproxy.cfg code.
# Config file for serproxy
# See serproxy's README file for documentation
# Transform newlines coming from the serial port into nils
# true (e.g. if using Flash) or false
# Comm ports used
# Default settings
# Idle time out in seconds
# Port 3 settings (ttyS2)
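Only the comment lines of my config survive above; for reference, a typical serproxy.cfg used with Flash and an Arduino looks something like the following. The values shown are the common defaults from the standard Flash/Arduino tutorials, not my exact settings; adjust the comm port list and match the baud rate to the Serial.begin() rate in your Arduino sketch.

```
# Config file for serproxy
# See serproxy's README file for documentation

# Transform newlines coming from the serial port into nils
# true (e.g. if using Flash) or false
newlines_to_nils=true

# Comm ports used
comm_ports=1,2,3,4

# Default settings
comm_baud=9600
comm_databits=8
comm_stopbits=1
comm_parity=none

# Idle time out in seconds
timeout=300

# Port 3 settings (ttyS2)
net_port3=5333
```

The net_port setting is the TCP port Flash connects its Socket to, while the comm settings describe the serial link to the board itself.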
I spent quite a lot of time trying to use Firmata and AS3Glue, but found that Firmata didn’t work with the PING))) Range Sensor. I’m fairly sure that AS3Glue requires the board to be running Firmata, so, as mentioned above, I settled for Arduino code from David Cuartielles. I wrote my own ActionScript to deal with the incoming data using the Socket and ByteArray classes. You can find an early rough version of this code here.
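The core of that incoming-data handling is message framing: with newlines_to_nils enabled, SerProxy terminates each serial line with a null byte, so the client buffers socket data and splits complete messages off at every null. Here is a Python sketch of that buffering logic (class and method names are mine, purely illustrative; the real code does the same thing with Flash’s Socket and ByteArray):

```python
# Sketch of null-delimited message framing, as used when reading
# SerProxy output. Readings can arrive split across TCP packets, so
# incomplete trailing bytes are kept in the buffer until the
# terminating null byte shows up.

class NullDelimitedReader:
    def __init__(self):
        self._buffer = b""

    def feed(self, data):
        """Append raw socket bytes; return any complete messages."""
        self._buffer += data
        # Everything before the last null is complete; the remainder
        # (possibly empty) is an unfinished message.
        *messages, self._buffer = self._buffer.split(b"\x00")
        return [m.decode("ascii") for m in messages]

# Two sensor readings arriving split across packets:
reader = NullDelimitedReader()
print(reader.feed(b"312 2"))              # [] (incomplete, buffered)
print(reader.feed(b"87\x00299 290\x00"))  # ['312 287', '299 290']
```

Without this buffering step, a reading that straddles two packets gets mangled, which is exactly the sort of bug that makes raw serial-to-socket code fiddly to get right.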
Images of the setup can now be seen here.