Bipolar is the result of a short experimental journey into visualising sound using computer vision. The initial idea was to capture a mesh of my face in real time and warp it using the sound-buffer data coming in from the microphone as I spoke. I initially explored ofxFaceTracker but had trouble segmenting the mesh, so I moved to the Kinect camera. I had a rough idea of how the final result might look, but it turned out quite differently.
As this intense, spiky effect began to take shape, I realised it would be perfect for the chaotic and dark sound of dubstep. Thankfully, I knew just the guy to help. I met the DJ and producer Sam Pool, AKA SPL, at the Fractal ’11 event in Colombia. He kindly offered to contribute some music to any future projects, so I checked out his offerings on SoundCloud and found the perfect track in Lootin ’92 by 12th Planet and SPL. This, of course, meant I would have to perform to the music. Apologies in advance for any offence caused by my “dancing” 🙂
This was built using openFrameworks and Theo Watson’s ofxKinect addon, which now offers excellent depth-to-RGB calibration. I build a mesh from the Kinect depth data and calculate all the face and vertex normals. Every second vertex is then extruded in the direction of its normal, using values taken from the microphone.
The project is still at the prototype stage and needs some refactoring and optimisation. Once it is looking a little better I will release the code.