A few months ago I met up with Peter Gregson, who we chatted to yesterday and who is giving the keynote at Reasons to be Appy next week. Peter was very excited about a project he was knee-deep in at the time called ‘The Listening Machine’. ‘The Listening Machine’ is essentially a six-month-long piece of music that is dynamically generated (using pre-recorded live instruments) from the conversations of 500 people on Twitter. That is, the music shifts and changes based on elements including volume of tweets, sentiment and topic area. It launched last week with some nice coverage from the likes of the Huffington Post, the Wall Street Journal and Scientific American, and makes for some rather lovely background music.
At first I thought this was another example of art for art’s sake, pushing technology and the arts to their limits to see what can be achieved (which I applaud, as it’s usually these projects that inspire other people to do more awesome stuff). It was only when I was chatting to someone about the project and they referred to it as ‘data auralisation’ that it really got me thinking (thanks Tassos for the inspiration!).
Data visualisation has become the buzzword of the late 2000s, with infographics becoming almost de rigueur for anything Web 2.0 and Dave McCandless being heralded as the best thing since sliced bread. Whilst I still love the odd infographic and practically worship the ground Mr McCandless walks on, I’m waiting to be excited by the next big thing.
With data visualisation, you can package data in digestible formats for just about anyone to understand, appreciate and work with. It makes it easy to data-snack without having to bury your head in stats for days at a time, but it’s still something you need to actively look at. With something like The Listening Machine, on the other hand, sentiment, volume of tweets and topics can be monitored as background noise. Replace the source and what you have is an aural dashboard that’s as unobtrusive as possible… something you can tune in and out of, and train your ear to listen out for warning signs.
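To make the aural dashboard idea a bit more concrete, here’s a minimal sketch in plain Python: map each metric reading (say, tweets per minute) onto a pitch, then render the series as a sequence of tones in a WAV file. To be clear, this is my own toy illustration, not how The Listening Machine actually works — the function names, the 0–100 metric range and the 220–880 Hz pitch band are all assumptions I’ve made up for the example.

```python
import math
import struct
import wave

RATE = 8000  # low sample rate keeps the sketch small

def value_to_freq(value, lo=0.0, hi=100.0, f_min=220.0, f_max=880.0):
    """Map a metric reading onto a pitch: quiet data sits low, spikes sit high."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    # Exponential mapping so equal metric steps sound like equal musical steps.
    return f_min * (f_max / f_min) ** t

def render(values, note_secs=0.25):
    """Turn a series of readings (e.g. tweets per minute) into one tone each."""
    samples = []
    for v in values:
        freq = value_to_freq(v)
        for n in range(int(RATE * note_secs)):
            samples.append(0.5 * math.sin(2 * math.pi * freq * n / RATE))
    return samples

def save(samples, path="dashboard.wav"):
    """Write the float samples out as a 16-bit mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))

# Five readings become five notes, rising and falling with the data.
save(render([5, 12, 48, 95, 30]))
```

Swap the hard-coded list for a live feed and leave it playing, and a sudden spike in the data announces itself as a jump in pitch without you having to look at anything.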
This concept of interpreting data as sound, as opposed to visuals (like We Feel Fine), has got me rather excited of late. I’ve been working on a few dirty hacks myself that will hopefully see the light of day soon, but I’m having just as much fun playing with other people’s data auralisations, like MTA.me (aka Conductor), which takes the New York Subway trains and turns them into string instruments. What can you pull together with live data and sound?