Archive for the 'Kinect' Category

Cell at Audi City Beijing

I recently travelled to China to install Cell at a new media arts exhibition held at Audi City Beijing. Cell is an installation I made in collaboration with Keiichi Matsuda in 2011 that is a provocation, a comment on the commodification of identity and a vision of how we might present ourselves in the coming years (more here).

It was always our intention to change this installation over time to implement new technologies and adapt to different contexts. In this instance we decided to give visitors the opportunity to contribute to the piece by submitting the tags themselves. This was achieved via a web app that would present the user with one of twenty questions, such as “Where did you meet your first love?” or “What is something you couldn’t live without?”. The answer is submitted and added to the collection of tags. Whereas the original piece allowed users to adopt the role of fictional characters, the result of this version was a crowd-sourced cloud of words and phrases that formed a collective identity over the course of the week-long exhibition.

Cell_1

Cell_2

There were several challenges this time round. The display consisted of 2 “PowerWalls” that together formed an 8×4 grid of plasma screens – an overall size of 11×3m. We went with a very powerful custom PC (made by Gareth Griffiths of Uberact) as we needed to significantly increase the tag count and split the image over the 2 walls (using a DualHead2Go). We also needed the extra power because there were 5 Kinects (all running from separate PCs), which allowed for up to 10 simultaneous users and meant far more calculations than usual. Cell is an open source project and the code for the new iteration is available here. The piece requires openFrameworks v0.8.0 and Visual Studio Express 2012.
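
Splitting the image over the two walls needed no special handling in the software itself: the DualHead2Go presents the pair as a single wide display, so the application simply opens one window at the combined resolution and the splitter takes care of the rest. A minimal sketch of the main.cpp, assuming the standard project template and a purely illustrative resolution:

    // main.cpp – a minimal sketch, not the production setup. The DualHead2Go
    // exposes the two walls as one wide desktop, so the app just runs
    // fullscreen at the combined resolution (the value below is illustrative).
    #include "ofMain.h"
    #include "ofApp.h"   // standard ofApp from the openFrameworks project template

    int main() {
        ofSetupOpenGL(3840, 1080, OF_FULLSCREEN);   // assumed combined resolution
        ofRunApp(new ofApp());
    }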

Cell_3

I was pleasantly surprised to discover that my app Konstruct (made with Juliet Alliban) was also exhibited at the event. This section was part of the AppArtAwards exhibition and was organised by the Goethe-Institut China and ZKM.

Finally, huge thanks to Audi for holding the exhibition, to ADP Projects for helping to curate the event and acting as producers in Beijing, to Keith Watson for providing some space at Level39 for testing, to Juliet Alliban for helping with the setup and to Gareth Griffiths for building the PC.

Bipolar at Digital Shoreditch

Bipolar is an experiment in using the human form as a medium for sound visualisation. It is an audiovisual virtual mirror that warps the participant’s body as they wander through the space. A soundscape designed by Liam Paton is generated from the presence and motion of the participants. The data from this (in addition to sounds from the user and environment) is used to transform the body into a distorted portrait that fluctuates between states of chaos and order.

Bipolar

Bipolar

Bipolar

This piece has evolved from an experiment I made 18 months ago when exploring the possibilities for using the body as a canvas for visualising sound – have a look here for more information on the technology. Since then it has been exhibited at a number of events including Digital Shoreditch, The Wired Popup Store in Regent St, The New Sublime exhibition at Brighton Digital festival and The BIMA awards. There are plans to install it at several more spaces in the coming months.

Bipolar at Wired Popup Store

Bipolar at Wired Popup Store

Bipolar at Digital Shoreditch

Bipolar at Digital Shoreditch

In the time since the original experiment, Bipolar has gone through several changes and optimisations. The biggest addition is the interactive sound aspect, which was designed by Liam Paton, composer and co-founder of Silent Studios. The idea was to build a dark, abstract soundscape to complement the visuals and react to motion, location and distance. He built the software using Max/MSP and I was able to communicate with it from my openFrameworks app via OSC.
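
For anyone curious about the plumbing between the two applications, it is plain OSC. Below is a minimal sketch of the openFrameworks side; the port, address and argument layout are assumptions for illustration, not the values Liam and I actually settled on.

    // Minimal sketch: sending presence/motion data to a Max/MSP patch over OSC.
    // Port, address names and value ranges are assumptions for illustration.
    #include "ofMain.h"
    #include "ofxOsc.h"

    class SoundLink {
    public:
        void setup() {
            // Max/MSP patch assumed to be running locally, listening on port 8000.
            sender.setup("127.0.0.1", 8000);
        }

        // Called once per frame with values derived from the Kinect tracking.
        void send(bool userPresent, float motionAmount, float distanceMetres) {
            ofxOscMessage m;
            m.setAddress("/bipolar/user");             // hypothetical address
            m.addIntArg(userPresent ? 1 : 0);
            m.addFloatArg(ofClamp(motionAmount, 0, 1));
            m.addFloatArg(distanceMetres);
            sender.sendMessage(m);
        }

    private:
        ofxOscSender sender;
    };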

Visually, I wanted to retain the chaotic nature of the original but with a few refinements and optimisations. The main issue with the original version was that the extrusions appeared fairly random. Each spike is created by extruding a vertex in the direction of its normal, but the normals weren’t very smooth. This was down to the way the depth data from the Kinect is presented. To get around this I implemented a custom smoothing algorithm that runs on the GPU (the vertex normals are also calculated by building a normal map on the GPU), which allowed me to create a much more pleasing, highly optimised kind of organised chaos.

Kinect Processing
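
For those interested in the GPU pass, the sketch below shows the general idea rather than the production shader: the raw depth is blurred to suppress the Kinect’s quantisation steps, and a normal is derived from the smoothed depth gradient and written into a normal map that the extrusion step can sample. It assumes ofDisableArbTex() has been called so textures use normalised coordinates; the kernel size and packing are illustrative only.

    // A minimal sketch of deriving smoothed normals from a Kinect depth texture
    // on the GPU. Assumes ofDisableArbTex() so textures use GL_TEXTURE_2D
    // coordinates, and a float depth texture in millimetres.
    #include "ofMain.h"

    #define STRINGIFY(A) #A

    static const string kNormalFrag = STRINGIFY(
        uniform sampler2D depthTex;   // depth stored in a float texture
        uniform vec2 texel;           // 1.0 / texture size

        // Box-blur the depth around a pixel to hide the Kinect quantisation steps.
        float smoothedDepth(vec2 p) {
            float sum = 0.0;
            for (int x = -2; x <= 2; x++) {
                for (int y = -2; y <= 2; y++) {
                    sum += texture2D(depthTex, p + vec2(float(x), float(y)) * texel).r;
                }
            }
            return sum / 25.0;
        }

        void main() {
            vec2 p = gl_TexCoord[0].xy;
            // Central differences on the smoothed depth give the surface gradient.
            float dx = smoothedDepth(p + vec2(texel.x, 0.0)) - smoothedDepth(p - vec2(texel.x, 0.0));
            float dy = smoothedDepth(p + vec2(0.0, texel.y)) - smoothedDepth(p - vec2(0.0, texel.y));
            vec3 n = normalize(vec3(-dx, -dy, 2.0));
            // Pack the normal into 0..1 so it can be stored in the normal map FBO.
            gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
        }
    );

    class NormalMapPass {
    public:
        void setup(int w, int h) {
            fbo.allocate(w, h, GL_RGB32F);
            shader.setupShaderFromSource(GL_FRAGMENT_SHADER, kNormalFrag);
            shader.linkProgram();
        }

        // Render the normal map for the current depth texture.
        void update(ofTexture& depthTex) {
            fbo.begin();
            shader.begin();
            shader.setUniformTexture("depthTex", depthTex, 0);
            shader.setUniform2f("texel", 1.0f / depthTex.getWidth(), 1.0f / depthTex.getHeight());
            depthTex.draw(0, 0);
            shader.end();
            fbo.end();
        }

        ofFbo fbo;        // holds the normal map, sampled when extruding vertices
        ofShader shader;
    };

Keeping the smoothing and the normal calculation in texture space means the cost stays the same regardless of how many vertices end up being extruded.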

Another addition was some fake ambient occlusion. The original piece could seem a little flat in places, so this effect was added to create what look like shadows surrounding the spikes. I achieved this by darkening the colour of certain vertices surrounding the extruded vertex. The results should be visible in the image below.

Bipolar shading
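
In sketch form, the effect boils down to darkening the vertex colours in a small grid neighbourhood around each spike, scaled by how far that vertex was pushed out. The radius and falloff below are assumptions; this only illustrates the idea, not the production code.

    // A minimal sketch of the fake ambient occlusion: darken the vertex colours
    // in a small neighbourhood around each extruded vertex. Radius and falloff
    // values are assumptions for illustration.
    #include "ofMain.h"

    // The mesh is assumed to be a regular grid built from the depth map, one
    // vertex per depth pixel laid out row by row, with a colour per vertex.
    void darkenAroundSpike(ofMesh& mesh, int gridW, int gridH,
                           int spikeX, int spikeY, float extrusion) {
        const int radius = 3;                                   // assumed radius
        for (int y = -radius; y <= radius; y++) {
            for (int x = -radius; x <= radius; x++) {
                int gx = spikeX + x;
                int gy = spikeY + y;
                if (gx < 0 || gy < 0 || gx >= gridW || gy >= gridH) continue;
                float dist = sqrtf(float(x * x + y * y));
                if (dist > radius) continue;
                // Darken more for closer neighbours and larger extrusions.
                float shade = ofMap(dist, 0, radius, 0.6f, 0.0f) * ofClamp(extrusion, 0, 1);
                int idx = gy * gridW + gx;
                ofFloatColor c = mesh.getColor(idx);
                mesh.setColor(idx, c * (1.0f - shade));
            }
        }
    }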

At the moment all of the mesh processing is tightly interwoven with the application. I intend to release an addon in the coming weeks that will include most of this functionality along with some simple hole filling.

Vevo Presents: The Maccabees (in the dark)

This live session with The Maccabees is presented by Vevo for the Magners ‘Made in the Dark’ campaign. The brainchild of directors Jamie Roberts and Will Hanke, the performance combines live-action footage (shot with an Alexa on a technocrane) with an animated sequence by Jamie Child and James Ballard. In addition to this, the scene was also shot in 3D, which is where I came in. To achieve this we built a rig containing 10 Kinect cameras, each attached to a MacBook Pro: seven faced outward towards the crowd and three faced inward towards the band.

Three applications were built to achieve this, all using openFrameworks. The client application used ofxKinect to record the point clouds. The millimetre data for each pixel of the depth map was transcoded into 320×240 TIFF images and exported to the hard drive at roughly 32 fps. A server application was used to monitor and control the 10 clients using OSC. Among other tasks, this starts/stops the recording, synchronises the timecode and displays the status, fps and a live preview of the depth map.
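
A stripped-down sketch of the client-side recording step is below: the raw millimetre depth is downsampled to 320×240, kept as 16-bit values and written out as one TIFF per frame. The file naming, downsampling and OSC control layer are simplified assumptions here.

    // A minimal sketch of the client-side depth recorder: raw millimetre depth
    // is downsampled to 320x240 and written as 16-bit TIFFs, one per frame.
    // Naming scheme and timecode handling are simplified assumptions.
    #include "ofMain.h"
    #include "ofxKinect.h"

    class DepthRecorder {
    public:
        void setup() {
            kinect.init(false, false);   // infrared off, RGB video off – depth only
            kinect.open();
            frame = 0;
            recording = false;           // toggled by OSC messages from the server app
        }

        void update() {
            kinect.update();
            if (!kinect.isFrameNew() || !recording) return;

            ofShortPixels out;
            out.allocate(320, 240, 1);
            // Sample every other depth pixel; values are distances in millimetres.
            for (int y = 0; y < 240; y++) {
                for (int x = 0; x < 320; x++) {
                    out[y * 320 + x] = (unsigned short) kinect.getDistanceAt(x * 2, y * 2);
                }
            }
            // One 16-bit single-channel TIFF per frame, written into bin/data.
            ofSaveImage(out, ofToString(frame, 6, '0') + ".tif");
            frame++;
        }

        ofxKinect kinect;
        bool recording;
        int frame;
    };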

Once the recording had taken place, a separate ‘mesh builder’ app created 3D files from this data. Using this software, the TIFFs are imported and transformed back into their original point cloud structure. A variety of calibration methods are used to rotate, position and warp the point clouds to rebuild the scene and turn it into two meshes, one for the band and another for the crowd. A large sequence of 3D files (.obj) was exported and handed to the post-production team to create the animated sequence in Maya and After Effects. This app also reformats the recorded TIFF and .obj files so that there are only 25 per second and they sit in an easily manageable directory structure.
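
The core of the mesh builder is the reverse operation: each 16-bit TIFF is read back and the millimetre depths are re-projected into 3D with approximate Kinect intrinsics. A simplified sketch, with assumed focal length and principal point values scaled for the 320×240 frames:

    // A minimal sketch of turning a recorded 320x240 depth TIFF back into a
    // point cloud. The focal length / principal point values are approximate
    // Kinect intrinsics scaled to half resolution, used here for illustration.
    #include "ofMain.h"

    ofMesh depthFrameToPointCloud(const string& tiffPath) {
        ofShortPixels depth;
        ofLoadImage(depth, tiffPath);                 // 16-bit millimetre values

        const float fx = 285.6f, fy = 285.6f;         // assumed focal lengths (half-res)
        const float cx = 160.0f, cy = 120.0f;         // assumed principal point

        ofMesh cloud;
        cloud.setMode(OF_PRIMITIVE_POINTS);
        for (int y = 0; y < depth.getHeight(); y++) {
            for (int x = 0; x < depth.getWidth(); x++) {
                unsigned short mm = depth[y * depth.getWidth() + x];
                if (mm == 0) continue;                // 0 = no reading from the sensor
                float z = mm;                         // millimetres
                // Standard pinhole back-projection (axis conventions left out).
                cloud.addVertex(ofVec3f((x - cx) * z / fx,
                                        (y - cy) * z / fy,
                                        z));
            }
        }
        return cloud;
    }

In the real app these clouds are then rotated, positioned and warped per camera before being meshed and written out as .obj files.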

The whole project was one long learning experience consisting of many hurdles. Networking, controlling and monitoring so many client machines was a challenge, as was dealing with and formatting such a huge amount of files.

One of the greatest challenges was calibrating and combining the band point clouds. I thought it would be possible to implement a single collection of algorithms to format each one and then simply rotate and position them. This wasn’t the case: each camera’s recording was in fact slightly different. I had to implement a sort of cube-warping system to alter each point cloud individually so they would all fit together.
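
For the curious, the warp amounts to a trilinear blend of per-corner offsets: each camera’s cloud gets a bounding cube with eight adjustable corner offsets, and every point is displaced by the interpolation of those offsets at its position within the cube. The sketch below illustrates the maths only and is not the production calibration tool.

    // A minimal sketch of the cube-warping idea: eight corner offsets are
    // blended trilinearly across a bounding box so each camera's cloud can be
    // nudged into place. Corner values would be tweaked by hand per camera.
    #include "ofMain.h"

    struct CubeWarp {
        ofVec3f boxMin, boxMax;     // bounding cube of the point cloud
        ofVec3f offset[8];          // adjustable offset at each cube corner

        // Corner index from (x,y,z) bits: 0 = min corner, 7 = max corner.
        ofVec3f warp(const ofVec3f& p) const {
            ofVec3f t = (p - boxMin) / (boxMax - boxMin);   // normalised position
            t.x = ofClamp(t.x, 0, 1);
            t.y = ofClamp(t.y, 0, 1);
            t.z = ofClamp(t.z, 0, 1);

            // Trilinear interpolation of the eight corner offsets.
            ofVec3f d(0, 0, 0);
            for (int i = 0; i < 8; i++) {
                float wx = (i & 1) ? t.x : 1.0f - t.x;
                float wy = (i & 2) ? t.y : 1.0f - t.y;
                float wz = (i & 4) ? t.z : 1.0f - t.z;
                d += offset[i] * (wx * wy * wz);
            }
            return p + d;
        }
    };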

I was aware that Kinect point clouds become less accurate at distances beyond 2-3 metres so I implemented a few custom smoothing techniques. These were eventually dropped in favour of the raw chaotic Kinect aesthetic, which we agreed to embrace from the early stages of the project. The idea from the start was to make an abstract representation of the band members.

The shoot took place at 3 Mills Studios. Jamie and Will had managed to bring together a huge team of talented people and some incredible technology with a relatively small budget. In addition to the Kinect rig, the technocrane and the Alexa camera, there was also a programmable LED rig built and controlled by Squib. It was a pleasure to be part of the team and see it all come together.

Here’s some behind the scenes footage.

Cell in Shanghai

I was recently invited to Shanghai for a week to set up Cell with Keiichi Matsuda. It was for an art/music/film/food event organised by Emotion China. They had a 5×3 LCD wall erected specifically to display the piece, which made a huge difference compared with the usual rear-projection configuration.

Here are a few photos:



More photos of the trip here.

Traces

Traces

Traces is an interactive installation that was commissioned by The Public in West Bromwich for their Art of Motion exhibition and produced by Nexus Interactive Arts. The piece encourages the user to adopt the roles of both performer and composer. It is an immersive, abstract mirror that offers an alternative approach to using the body to communicate emotion. Kinect cameras and custom software are used to capture and process movement and form. This data is translated into abstract generative graphics that flow and fill the space, offering transient glimpses of the body. An immersive generative soundscape by David Kamp reacts to the user’s presence and motion. Traces relies on continuous movement in space in order to create shadows of activity. Once the performer stops moving, the piece disintegrates over time and returns to darkness.

The Public is an incredible five-storey gallery, community centre and creative hub in West Bromwich that is concerned with showcasing the work of interactive artists and local people. I’d been in talks with the curator Graham Peet for the last year with a view to potentially contributing. A few months ago, he commissioned me to build a new piece for the “Art of Motion” exhibition. I thought this would be an ideal opportunity to work with Nexus Interactive Arts. They were interested in the piece and agreed to produce it. The producer, Beccy McCray, introduced me to the Berlin-based sound designer and composer David Kamp, who did an excellent job with the generative soundscape.

Traces exhibition entrance

My aim was to build an installation that suited not only the theme of the show but the themes of play, discovery and creativity that already permeate the gallery spaces of The Public.

Traces

Traces was built with openFrameworks and Sadam Fujioka’s ofxKinectNui addon. This allowed me to use Windows Kinects (kindly donated by Microsoft) and the new Microsoft Kinect SDK 1.0.

The show runs from 30th May until 9th September. I would highly recommend that anyone with an interest in interactive art takes a trip to West Bromwich to visit The Public. In addition to the exhibition there are many other excellent pieces.

Here’s an excellent write up and interview on the Creators Project.

Traces

Bipolar

Bipolar is the result of a short experimental journey into visualising sound using computer vision. The initial idea was to capture a mesh of my face in realtime and warp it using the sound buffer data coming in from the microphone as I spoke. Initially I explored ofxFaceTracker but had trouble segmenting the mesh, so I moved to the Kinect camera. I had a rough idea of how the final result might look, but it turned out quite differently.

As this intense, spiky effect began to take shape I realised it would be perfect for the chaotic and dark sound of dubstep. Thankfully I knew just the guy to help here. I met the DJ and producer Sam Pool AKA SPL at the Fractal ’11 event in Colombia. He kindly offered to contribute some music to any future projects, so I checked out his offerings on SoundCloud and found the perfect track in Lootin ’92 by 12th Planet and SPL. This, of course, meant I would have to perform to the music. Apologies in advance for any offence caused by my “dancing” :)

This was built using openFrameworks and Theo Watson’s ofxKinect addon, which now offers excellent depth-to-RGB calibration. I’m building a mesh from this data and calculating all the face and vertex normals. Every second vertex is then extruded in the direction of its normal using values taken from the microphone.
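
In sketch form the extrusion step looks something like this: take an amplitude value from the incoming audio buffer and push every second vertex out along its (precomputed) normal by an amount scaled by that value. The buffer size and scaling factor below are assumptions.

    // A minimal sketch of the audio-driven extrusion: an RMS level is taken
    // from the microphone buffer and every second vertex is pushed out along
    // its normal. Scaling values are assumptions; normals are assumed set.
    #include "ofMain.h"

    class AudioExtruder : public ofBaseApp {
    public:
        void setup() {
            ofSoundStreamSetup(0, 1, this, 44100, 256, 4);   // mono mic input
            level = 0;
        }

        // openFrameworks audio input callback.
        void audioIn(float* input, int bufferSize, int nChannels) {
            float sum = 0;
            for (int i = 0; i < bufferSize; i++) sum += input[i] * input[i];
            level = sqrtf(sum / bufferSize);                 // RMS of this buffer
        }

        // Push every second vertex of a copy of the base Kinect mesh out along
        // its normal. Working on a copy keeps the spikes following the audio
        // level rather than accumulating frame after frame.
        ofMesh extrude(const ofMesh& base) {
            ofMesh mesh = base;
            float amount = level * 400.0f;                   // assumed scale
            int n = (int) mesh.getNumVertices();
            for (int i = 0; i < n; i += 2) {
                mesh.setVertex(i, mesh.getVertex(i) + mesh.getNormal(i) * amount);
            }
            return mesh;
        }

        float level;
    };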

The project is still at the prototype stage and needs some refactoring and optimisation. Once it is looking a little better I will release the code.

Cell

Cell

Cell at the Alpha-Ville festival

Cell is an interactive installation commissioned for the Alpha-Ville festival, a collaboration between myself and Keiichi Matsuda. It plays with the notion of the commodification of identity by mirroring the visitors in the form of randomly assigned personalities mined from online profiles. It aims to get the visitors thinking about the way in which we use social media to fabricate our second selves, and how these constructed personae define and enmesh us. As users enter the space they are assigned a random identity. Over time, tags floating in the cloud begin to move towards and stick to the users until they are represented entirely as a tangled web of data, seemingly bringing together our physical and digital selves.

I first got in touch with the organisers of the festival, Estela Oliva and Carmen Salas, around May with a view to contributing. They asked if I knew of Keiichi Matsuda, and whether I would be interested in a collaboration. Coincidentally we had met up a month before and had discussed the idea of joining forces, as our areas of research are very similar. We come from different fields – he from architecture and film-making, me from new media art and interaction design – and this turned out to be a perfect combination. We shared the concept and design; Keiichi focussed on the fabrication, planning the space and putting together the documentary, while I happily wrote the software. Even with these distributed roles we found we were often offering suggestions and help to each other throughout the course of the project.

The wall

The concept wall

Microsoft have supported the project from the early stages. Keiichi and I were both speaking at an event in June when we met Paul Foster, who was promoting the MS Kinect for Windows SDK. We discussed our project, which would be using the Kinect camera, and he was interested in helping out. He introduced us to William Coleman, and since then they have supplied all the equipment and funded the studio space (thanks to Tim Williams and Tom Hogan at Lumacuostics for putting us up and for all the advice).

In addition to this, Microsoft also introduced us to Simon Hamilton Ritchie, who runs the Brighton-based agency Matchbox Mobile. These guys contributed a great deal to the project, most importantly ofxMSKinect, an openFrameworks addon for the official Kinect SDK. One of the main advantages of using this over the hacked drivers is the automatic user recognition: we no longer need to pull that annoying calibration stance, which can be a big barrier in a piece such as Cell. In addition to depth/skeleton tracking, the potential for utilising the voice recognition capabilities is an exciting prospect for the interactive arts community. This will be integrated into ofxMSKinect in the coming months.

Multiple Skeletons

Skeletal data from 4 Kinect cameras

So, on to the setup. Halfway through the project we realised that we would only be able to track two skeletons using a single Kinect camera. While this is fine for gaming, for a large-scale interactive experience it would not be enough. So instead of one camera we decided to go with four! We organised four Dell XPS 15 laptops, each connected to a Kinect camera. The skeletal data from each client is fed to an Alienware M17x laptop over a local area network (with help from Matchbox), giving us the potential to track the skeletal data of up to eight users in a space of around 5m x 4m. The software on the Alienware server then calculates and renders the scene, which is rear projected onto a large screen using a BenQ SP840 projector.
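
At the network level, each client simply streams its joint positions to the server as OSC messages tagged with a camera ID, and the server keeps the latest skeletons from all four machines in one collection. A simplified sketch of the server side follows; the port, address and argument layout are assumptions rather than the actual protocol we used.

    // A minimal sketch of the server side: skeleton joints arrive from the four
    // client laptops as OSC messages and are collected per camera/user. The
    // address and argument layout here are assumptions for illustration.
    #include "ofMain.h"
    #include "ofxOsc.h"
    #include <map>
    #include <vector>

    class SkeletonHub {
    public:
        void setup() {
            receiver.setup(9000);   // all clients send to this port (assumed)
        }

        void update() {
            while (receiver.hasWaitingMessages()) {
                ofxOscMessage m;
                receiver.getNextMessage(&m);
                if (m.getAddress() != "/skeleton/joint") continue;   // hypothetical address

                int cameraId = m.getArgAsInt32(0);   // which of the 4 Kinect clients
                int userId   = m.getArgAsInt32(1);   // skeleton index on that client
                int jointId  = m.getArgAsInt32(2);
                ofVec3f pos(m.getArgAsFloat(3), m.getArgAsFloat(4), m.getArgAsFloat(5));

                // Key skeletons by (camera, user) so up to 8 people can be tracked.
                vector<ofVec3f>& joints = skeletons[cameraId * 10 + userId];
                if ((int) joints.size() <= jointId) joints.resize(jointId + 1);
                joints[jointId] = pos;
            }
        }

        ofxOscReceiver receiver;
        map<int, vector<ofVec3f> > skeletons;   // (camera, user) -> joint positions
    };

Each camera’s joints then still need rotating and translating into the shared room coordinate system before the skeletons can be merged and passed to the renderer.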

The screen posed a bit of a challenge. We could either rent one for a ridiculous price or build our own and have complete freedom over the design. That freedom was important to us, so Keiichi put his woodwork skills to the test and made a 4.2m x 1.8m screen that can be reduced to 1.5m. Quite an achievement for a rear-projection screen with no supporting beams! We used ROSCO grey screen material, which was perfect for our requirements.

Setting up

Keiichi and Iannish preparing the screen at the Alpha-Ville festival

We were very pleased with the reaction to Cell. The feedback from the festival goers was really positive. It was important to us that the participants were both interested in the concept and taken by the experience. Many that we spoke to seemed to engage with the piece on both levels.

If you would like any more information please visit the Cell website. If you would like to contact us regarding this piece, please email – info [at] installcell.com

I’d like to thank the following for their help in realising this piece (in order of appearance):

Carmen Salas, Estela Oliva, Paul Foster, Will Coleman, Simon Hamilton Ritchie, Theo Watson, Kyle McDonald, Arturo Castro, Tim Williams, Tom Hogan, Claire Holdsworth, Vincent Oliver and Iannish Posooa.

Kinect Serendipity

I’ve been working on a new installation using the Xbox Kinect camera which, incidentally, has been a great excuse to start getting to grips with OpenGL. I had some lovely unexpected glitchy results earlier today that made for some wonderful imagery. Check it out:

Kinect Serendipity 1

Kinect Serendipity 2

Kinect Serendipity 3
