Why I can’t embrace Augmented Reality

Over the last few years we’ve seen some jaw-dropping Augmented Reality demos. Here’s a YouTube playlist of some of my favourites:

I love watching these and enjoy imagining myself in a situation where one of these AR apps can save a life, save the world, or just get me to the nearest train station before I have to get a taxi home. I genuinely watch those videos and think, “I could use that. I could be in that very same situation, and when I am, I’d pull out my AR app and the world would once again be a better place.”

I’d love to service my own BMW, and with AR apps like that I don’t feel that’s such a ridiculous idea. However, it’s less likely that I’d surf the streets on my phone trying to find a bar that sold Stella Artois. And therein lies my problem.

My problem with AR experiences is that I don’t want to use them in public. And you will no doubt be quick to point out that that’s a pretty major problem. I’d use the BMW app in the privacy of my own garage, but I’m not going to do it in public. As a colleague of mine put it recently, rather more politely than I could ever manage: “it makes you look like a right Charlie”. Have you ever seen someone stop whilst walking down the street and hold their phone in front of them as if they’re looking through it into another dimension? I haven’t, and I walk the streets a lot.

Is it because it takes us outside of our comfort zone? Even the swathes of Hoxton hipsters who are so quick to jump on the latest technology bandwagon manage to exercise self-control when it comes to taking out their smartphones and browsing the streets as if the device were a window to another world. Maybe it’s a public admission of a gap in our knowledge – who wants to admit they’re actually a tourist? Maybe it’s for security reasons – “The tourist guide told us to be wary of street crime!” Maybe the interaction with AR is so conspicuous that we don’t want to expose our obviousness to others?

Why do we have Augmented Reality?

As I often say, technologists aren’t great at coming up with new ideas of their own; they look at science fiction and think ‘that’s pretty cool, I’m going to build it’. We did it with hoverboards, the Jetsons’ car and Star Trek communicators. So I blame Star Trek. I blame concept cars and their head-up displays. I blame that scene from Star Wars when R2-D2 let slip that Obi-Wan was the only hope. I’ll go back further and blame the Ancient Greeks and the Oracle at Delphi. Anything that removes us in any way from the reality we actually live in can be added to this list.

Over the last few years, AR visionary Keiichi Matsuda has released his vision of future AR in video form. One example is in the playlist above, but here’s another, which requires you to put on some 3D glasses to fully appreciate the interface that may one day surround us.

Augmented City 3D from Keiichi Matsuda on Vimeo.

Ethics. Who owns your logo? Who owns the space around us?

When you look at an environment through the screen of your mobile phone, is that a real space you’re looking at? What is to stop a vigilante from taking the original image and misrepresenting it through Augmented Reality? This happened to BP, as you can see in the video here. This app lets users see the BP Deepwater Horizon oil spill whenever they see a BP logo. The user simply launches the app and aims their phone’s camera at a BP logo, and on screen the logo is transformed into a broken pipe with oil pluming upward. I can’t comment on the legalities of this, but it is worrying that it could happen to anyone.

Making AR discreet in our lives

Advances in display technology

Vuzix eyewear

At the moment we’re still relying on our phone screens to display the additional information that we can add to our environments. Recent advances in display technology would make this a more subtle experience. Glasses with digital displays on their lenses are slowly becoming reality as we’re able to shrink the technology to an acceptable size. Contact lenses were once simply an alternative to glasses, but we are now seeing some useful technology being added to them. We are a long way from having head-up displays in our lenses, and many companies are working towards this end goal, removing the need for bulky eyewear.

Futurologist Michio Kaku says ‘In the next 20 years, the internet will be in our contact lens. We will simply blink and we will be online, and when I look at you I will see your biography right next to you because my contact lens will know exactly who you are. You will always know who you are talking to and if you speak to me in Chinese, no problem. I will see Chinese subtitles translated into English under your picture.’ Slightly scary stuff. I believe him.

On a more realistic note, Vuzix are making waves with their eyewear (pictured above) as long as you’re comfortable with the price.

Advances in input devices

The Kinect-as-a-walking-aid demo from MIX11 this year is a great example. The hyperbole of this was lost on many people there (I overheard conversations between people complaining about how awfully the visually impaired were being treated). If anyone ever goes to their local Specsavers and is told to gaffer-tape a Kinect unit to their head instead of being prescribed glasses, then I would see that as equally a success and a failure. The sartorialist in me would snigger; the functionalist in me would sigh.

Kinect is the next opportunity we have to interface with these alternate realities. We no longer have to hold up a device to browse our world; we can instead interact with it on a more natural level, through voice and gesture. Take the example of the Kinect-powered World Wide Telescope – this was the first application that I really felt was intuitive and obvious; it was natural to use and navigate. It works. Over the next few months I fully expect to see more shining examples of Kinect-controlled AR as the creative developer community takes advantage of the recently released Kinect SDK for Windows.

Is there an issue of AR standards?

AR experiences on mobile devices currently live outside the browser. There is no standard that browser developers can adopt and build in as part of their software. There are many SDKs that developers can use to create these experiences, and with the release of the Mango software update for Windows Phone 7 it is now possible to create AR applications across all mobile operating systems, so these experiences can now be enjoyed by more people on more devices. However, I’d like to see AR implemented as part of the web browsing experience, so I don’t need to jump out to specific applications for information.

I’ll believe it, when I don’t see it

I believe that I walk the streets like a vampire ninja, invisible and casting no shadow. I am unnoticeable and silent in the way I move. I embrace all technology, both useful and useless, but I feel uncomfortable with anything that makes me stand out in a crowd – and currently that’s most forms of AR.

Looking to the future, as these sensory devices become less obtrusive in our natural environment, and as we are able to provide input and receive output through discreet technologies, we will start to fulfil the vision of the early pioneers of AR. As our intentional interaction becomes less obvious to the outside world, these augmentations will become more popular and more prolific. We’ll be able to subtly request and receive information on everything we see, with the added benefit that we can choose to know as much or as little about our current environment as we like.

For now, I’m sorry, but I can’t. I am technology’s proudest user, but right now this just doesn’t work. The idea is genius. The execution, given the technology available to us, is also genius. But until the illusion of AR is truly magical and invisible, I can’t play this AR game.

Published by Spooner

Creative Technologist at Microsoft in the UK working in the Developer & Platform Evangelism group, he is at the forefront of emerging technologies being developed across Microsoft and champions their deployment to developers and digital agencies. His work is focused around mobile, the web and Natural User Interfaces.

10 Comments So Far – what do you think?

  1. Thom

    I hadn’t really thought about why this hasn’t taken off but you raise some good points.
    If I saw someone walking down the street with a phone held out in front of them I’d presume they were taking a photo or video. It’s not really socially acceptable to film people walking down the street, you could get some strange looks, or even your phone snatched out of your hand!

    I can see it being used a lot more by kids on their Nintendo 3DS consoles, it seems more acceptable when being used on a gaming device. Maybe we have to wait until that generation are all grown up before it goes mainstream?

  2. Benjamin Howarth

    I am now going to be buying a pair of those glasses… they have a C# SDK. :-D

  3. Pingback:Why I can’t embrace Augmented Reality | Andrew Spooner. Bristol, England.

  4. Henry

    I couldn’t help but feel that, although the author has raised some genuine issues or limitations with AR technology, those issues are fairly benign. It’s a new technology and there are teething issues. I just think that the author is overstating the obvious somewhat. I would happily walk around with an iPhone in front of me running an AR app. Of course, a less intrusive interface would be better, but an iPhone would be sufficient for now.
    I have no doubt that mobile AR navigation systems will take off massively in the coming months. An iPhone sat on your dash is certainly no more intrusive than a TomTom.
    The military has invested a lot in AR technology. HUDs in aircraft have been using AR for a long time. Now, infantry soldiers are increasingly being deployed with man-wearable AR tech, such as the Vuzix glasses which the author mentions.
    The lack of standards is a typical trait of a technology in its embryonic stage. This will certainly pass, and I don’t expect it to be a showstopper for consumers. A typical consumer wouldn’t necessarily know the difference anyway.

  5. Tim

    If a technology is sufficiently useful then I suspect people will ignore the fact that it looks naff. Think original analogue mobile phone bricks or Bluetooth headsets (admittedly mostly men in white vans).

  6. Iftikhar Ahmad

    From what I have been able to understand about AR, you need a point of reference, like the 3DS cards, to start AR, and you use the 3DS screen to interact with it. The information or data, if updated, will be saved onto the console, so you will have the updated information with you all the time; if you need to share that info with someone, you need to give them your console, and they can use your card or their own to access it.

    What I want to know is: can’t we reverse it? Instead of making the card your point of reference, can’t you make the console the point of reference and, with a memory option combined with a wireless module, save your info onto the card? That way you just carry your card and use someone else’s console to access the updated information, anytime, anywhere.

  7. Pingback:Enter the HoloDesk: See, touch and explore the future of interaction | MSDN Blogs

  8. Pingback:Enter the HoloDesk: See, touch and explore the future of interaction - MSDN UK Team blog - Site Home - MSDN Blogs

  9. Pingback:See, touch and explore the future of interaction - Microsoft Enterprise Insights Blog - Site Home - MSDN Blogs

  10. Pingback:See, touch and explore the future of interaction | MSDN Blogs
