Category Archives: Augmented Reality

A/B: Participatory Navigation with Augmented Reality

Imagine navigating the city with an augmented reality app, but where the choice of route is determined by a crowd and the decision floats in front of you like the hallucinations of a broken cyborg. A/B was an experiment in participatory voting, live streaming and augmented reality by Harald Haraldsson. Created for the digital art exhibition 9to5.tv, the project allowed an online audience to guide Haraldsson around Chinatown in New York for 42 minutes. This was achieved through a web interface presenting the livestream from a Google Pixel smartphone.

The smartphone was running Haraldsson’s own augmented reality app implemented with the Unity game engine and Google’s ARCore SDK. At key points Haraldsson could use the app to prompt viewers to vote on the direction he should take, either A or B. ARCore enabled the A/B indicators to be spatially referenced to his urban surroundings in 3D so that they appeared to be floating in the city. Various visual effects and distortions were also overlaid or spatially referenced to the scene.
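Haraldsson's implementation isn't public, but the voting logic at the centre of the piece is easy to sketch. Below is a minimal, hypothetical Swift tally for a single A/B round; the actual app was built in Unity with ARCore, so every name here is an assumption for illustration only.

```swift
// Hypothetical sketch of one A/B voting round (not Haraldsson's code).
struct VotingRound {
    private var counts = ["A": 0, "B": 0]

    // Record one vote from the web audience; anything other than "A" or "B" is ignored.
    mutating func record(vote: String) {
        guard counts[vote] != nil else { return }
        counts[vote]! += 1
    }

    // The direction to show on the floating AR indicator; nil means a tie.
    var winner: String? {
        let a = counts["A"]!, b = counts["B"]!
        return a == b ? nil : (a > b ? "A" : "B")
    }
}

var round = VotingRound()
["A", "B", "A", "A"].forEach { round.record(vote: $0) }
print(round.winner ?? "tie") // prints "A"
```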

More images and video, including a recording of the full 45-minute walk, can be found on the A/B project page here.

Thanks to Creative Applications for the link.


Microsoft’s Vision for Mixed and Mixing Realities

A couple of days ago the RoadtoVR website posted about Microsoft's patent for a wand-like controller which appeared in the concept video above. I thought it was worth re-posting the video here as it provides a good indication of what a mixed reality future might look like. In particular it considers a future where augmented and virtual reality systems are used side by side. Where some companies firmly backed one platform or the other (VR in the case of Oculus and the HTC Vive, AR in the case of Meta), more established companies like Microsoft and Google have the resources and brand penetration to back both. Whether Apple follows suit or commits everything to AR following the recent release of ARKit remains to be seen. As such it is interesting to compare the kind of mixed reality ecosystems they want to create. It's then up to developers and consumers to determine which hardware, and by extension which vision, they are most inclined to back.

There are many challenges to overcome before this kind of mixed reality interaction becomes possible. The situated use of AR by the character Penny, and the use of VR for telepresence by Samir, are particularly well motivated. But what are the characters Samir and Chi actually going to see in this interaction? Will it make a difference if they don't experience each other's presence to the same degree? And how is Samir's position going to be referenced relative to Penny's? Many technical hurdles remain, and compromises will need to be made. For companies like Microsoft and Google the challenge is convincing developers and consumers that the hardware ecosystem they provide today is sufficiently close to that vision of a mixed reality future…and, crucially, all at the right price.

ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA's urban data visualisation platform, ViLo. Today we're looking at an integration with Apple's ARKit created by CASA research assistant Valerio Signorelli.

Using Apple's ARKit we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query information about buildings by tapping on them, analyse sunlight and shadows in real time, and watch the boundary between the virtual and physical blur as bouncy balls simulated in the digital environment interact with the structure of the user's physical environment.
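ViLo itself is built in Unity, but in native terms the tap-to-query interaction reduces to a standard SceneKit hit test against the nodes of the placed model. Here is a hedged sketch; the node naming and the idea of storing building metadata on nodes are assumptions, not the project's actual code.

```swift
import ARKit
import SceneKit
import UIKit

// Hypothetical tap-to-query: return the name of the building geometry
// under a screen tap, assuming each building is an SCNNode whose `name`
// carries its identifier.
func buildingName(atScreenPoint point: CGPoint, in sceneView: ARSCNView) -> String? {
    // SceneKit hit test: returns scene geometry under the tap, nearest first.
    let hits = sceneView.hitTest(point, options: [SCNHitTestOption.firstFoundOnly: true])
    return hits.first?.node.name
}
```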

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit requires an Apple device with an A9 or A10 processor. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference like the corners of tables and chairs, or the points where the floor meets the walls. These points can be used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad are also captured. Using a technique called Visual Inertial Odometry, the point data and motion data are combined, enabling points to be tracked even when they aren't within the view of the camera. Effectively, a virtual room and virtual camera are constructed on the device which reference and synchronise with the relative positions of their physical counterparts.
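In native Swift the plane detection described above looks roughly like this (ViLo drives ARKit through Unity, so this is the equivalent idea rather than the project's code):

```swift
import ARKit
import SceneKit
import UIKit

class PlaneDetectionController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        // World tracking fuses camera feature points with motion data
        // (Visual Inertial Odometry). In iOS 11 only horizontal planes
        // are detected, matching the description above.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // ARKit calls this whenever it adds an anchor for a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected plane of extent \(plane.extent.x) x \(plane.extent.z) m")
    }
}
```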

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad's WiFi connection we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad. Even without a WiFi connection, offline data sets related to the built environment are still available for visualisation. What's particularly impressive with ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.
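The placement step maps onto ARKit's iOS 11-era hit-testing API. A minimal sketch, again in native Swift rather than ViLo's Unity code; `modelNode` and the tap-handler wiring are assumptions:

```swift
import ARKit
import SceneKit
import UIKit

// Hypothetical placement: drop a model where a screen tap intersects a
// plane ARKit has already detected. Because positions are expressed in
// the shared world coordinate system, the node stays put as the user
// moves, which is why ViLo retains its position in the room.
func place(modelNode: SCNNode, atScreenPoint point: CGPoint, in sceneView: ARSCNView) {
    // Intersect a ray from the tap with existing detected planes.
    guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }
    modelNode.simdTransform = hit.worldTransform
    sceneView.scene.rootNode.addChildNode(modelNode)
}
```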

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

The Human Race: Real-Time Rendering and Augmented Reality in the Movies and Beyond

Back at GDC 2017 Epic Games presented a revolutionary pipeline for rendering visual effects in real time using their Unreal Engine. Developed in partnership with visual effects studio The Mill, the outcome of the project was a short promotional video for Chevrolet called The Human Race (above). While the film's visual effects are stunning, the underlying innovation isn't immediately apparent. The following film by The Mill's Rama Allen nicely summarises the process.

Behind the visual effects The Mill have an adjustable car rig called The Blackbird. Mounted on the car is a 360-degree camera rig which uses The Mill's Cyclops system to stitch the video output from the different cameras together and transmit it to Unreal Engine. Using positioning data from The Blackbird and QR-like tracking markers on the outside of the vehicle as a spatial reference, Unreal Engine then overlays computer generated imagery in real time. Because all of this is done in real time, a viewer can interactively reconfigure the virtual model of the car superimposed on The Blackbird rig while they are watching.
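At the heart of this setup is a simple pose handoff: the virtual camera rendering the CG car must sit exactly where the tracked rig sits. As a hedged sketch (the real pipeline is The Mill's proprietary Cyclops system feeding Unreal Engine, and the function name here is invented), the virtual camera's view matrix is just the inverse of the rig's tracked world-space pose:

```swift
import simd

// Hypothetical sketch of the pose handoff behind the compositing step.
// Given the Blackbird rig's tracked pose in world space, the virtual
// camera that renders the CG car uses the inverse of that pose as its
// view matrix, keeping virtual and physical viewpoints locked together.
func viewMatrix(forRigPose rigPoseInWorld: simd_float4x4) -> simd_float4x4 {
    return rigPoseInWorld.inverse
}
```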

For the film industry this means that CGI and visual effects can be tested on location. For audiences it might mean that aspects of scenes within the final film become customisable. Perhaps the viewer can choose the protagonist's car. Perhaps the implications are wider. If you can instantly revisualise a car or a character in the film, why not an entire environment? With the emergence of more powerful augmented reality technologies, will there be a point at which this becomes a viable way to interact with and consume urban space?

The videos The Human Race and The Human Race – Behind The Scenes via Rama Allen and The Mill.