Tag Archives: Augmented Reality

A/B: Participatory Navigation with Augmented Reality

Imagine navigating the city with an augmented reality app where the choice of route is determined by a crowd and the decision floats in front of you like the hallucinations of a broken cyborg. A/B was an experiment in participatory voting, live streaming and augmented reality by Harald Haraldsson. Created for the digital art exhibition 9to5.tv, the project allowed an online audience to guide Haraldsson around Chinatown in New York for 42 minutes. This was achieved through a web interface presenting the livestream from a Google Pixel smartphone running Android.

The smartphone was running Haraldsson’s own augmented reality app implemented with the Unity game engine and Google’s ARCore SDK. At key points Haraldsson could use the app to prompt viewers to vote on the direction he should take, either A or B. ARCore enabled the A/B indicators to be spatially referenced to his urban surroundings in 3D so that they appeared to be floating in the city. Various visual effects and distortions were also overlaid or spatially referenced to the scene.
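Purely as an illustration of the participatory element, the short Swift sketch below shows one way the A or B votes streamed from the web audience might be tallied into a winning direction before the result is displayed in the scene. It is a hypothetical sketch written for this post, not Haraldsson’s implementation, which was built in Unity with ARCore.

```swift
// Hypothetical sketch only: tally A/B votes from the online audience and pick
// a winning direction. Not Haraldsson's code (his app was Unity + ARCore).
enum Direction: String {
    case a = "A"
    case b = "B"
}

func winningDirection(from votes: [Direction]) -> Direction? {
    guard !votes.isEmpty else { return nil }       // no votes cast for this prompt
    let aCount = votes.filter { $0 == .a }.count   // votes for route A
    let bCount = votes.count - aCount              // the rest voted for route B
    return aCount >= bCount ? .a : .b              // ties fall back to A here
}

// Example: five viewers respond to a prompt
let votes: [Direction] = [.a, .b, .a, .a, .b]
if let winner = winningDirection(from: votes) {
    print("Audience chose route \(winner.rawValue)")   // "Audience chose route A"
}
```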

More images and video, including a recording of the full 45-minute walk, can be found on the A/B project page here.

Thanks to Creative Applications for the link.


Microsoft’s Vision for Mixed and Mixing Realities

A couple of days ago the RoadtoVR website posted about Microsoft’s patent for a wand-like controller which appeared in the concept video above. I thought it was worth re-posting the video here as it provides a good indication of what a mixed reality future might look like. In particular it considers a future where augmented and virtual reality systems are used side by side. Where some companies firmly backed one platform or the other, VR in the case of Oculus and the HTC Vive, AR in the case of Meta, more established companies like Microsoft and Google have the resources and brand penetration to back both. Whether Apple follows suit or commits everything to AR following the recent release of ARKit remains to be seen. As such it is interesting to compare the kind of mixed reality ecosystems they want to create. It’s then up to developers and consumers to determine which hardware, and by extension which vision, they are most inclined to back.

There are many challenges to overcome before this kind of mixed reality interaction becomes possible. The situated use of AR by the character Penny, and the use of VR for telepresence by Samir, are particularly well motivated. But what are the characters Samir and Chi actually going to see in this interaction? Will it make a difference if they don’t experience each other’s presence to the same degree? And how is Samir’s position going to be referenced relative to Penny’s? Many technical challenges remain, and compromises will need to be made. For companies like Microsoft and Google, the challenge is convincing developers and consumers that the hardware ecosystem they are providing today is sufficiently close to that vision of a mixed reality future…and crucially all at the right price.

ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA’s urban data visualisation platform, ViLo. Today we’re looking at an integration with Apple’s ARKit that has been created by CASA research assistant Valerio Signorelli.

Using Apple’s ARKit we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query building information by tapping on individual buildings, analyse sunlight and shadows in real time, and watch the boundary between the virtual and the physical blur as bouncy balls simulated in the digital environment interact with the structure of the user’s physical environment.
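On the data side, TfL publishes open feeds that a demo like this could draw on. The Swift sketch below fetches the public BikePoint feed over HTTPS; it is an assumption about how such a feed might be consumed on the device, not ViLo’s actual data pipeline (which runs in Unity).

```swift
import Foundation

// Illustrative sketch only (not ViLo's own pipeline): request TfL's open
// BikePoint feed, which returns a JSON array of bike-share docking stations
// with coordinates and occupancy properties a visualisation layer could use.
let url = URL(string: "https://api.tfl.gov.uk/BikePoint")!

let task = URLSession.shared.dataTask(with: url) { data, _, error in
    guard let data = data, error == nil else {
        print("BikePoint request failed: \(error?.localizedDescription ?? "unknown error")")
        return
    }
    if let json = try? JSONSerialization.jsonObject(with: data),
       let stations = json as? [[String: Any]] {
        print("Received \(stations.count) docking stations")   // ready to bind to 3D markers
    }
}
task.resume()
```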

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit requires an Apple device with an A9 or A10 processor in order to work. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference like the corners of tables and chairs, or the points where the floor meets the walls. These points can be used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad is also captured. Using a technique called Visual Inertial Odometry, the point data and motion data are combined, enabling points to be tracked even when they aren’t within the view of the camera. Effectively a virtual room and virtual camera are constructed on the device, and these reference and synchronise with the relative positions of their physical counterparts.
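For readers who want to see what this looks like in code, here is a minimal native ARKit setup in Swift: it runs a world-tracking session (the visual-inertial odometry described above) with horizontal plane detection enabled and logs each plane anchor as it is found. It is a generic sketch rather than the ViLo integration itself, which was built in Unity.

```swift
import UIKit
import SceneKit
import ARKit

// Minimal ARKit sketch (not the ViLo code): run a world-tracking session and
// listen for the horizontal planes ARKit detects in the user's surroundings.
class MinimalARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // World tracking fuses camera feature points with the device's motion
        // sensors (visual-inertial odometry) to estimate the camera pose.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // iOS 11 ARKit detects horizontal planes
        sceneView.session.run(configuration)
    }

    // Called whenever ARKit adds an anchor; plane anchors describe detected surfaces.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected horizontal plane with extent \(plane.extent)")
    }
}
```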

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad’s WiFi receiver we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad. Even without a WiFi connection, offline data sets related to the built environment are still available for visualisation. What’s particularly impressive with ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.
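Returning to the placement step, anchoring a model like ViLo at a tapped point amounts to hit-testing the tap against the planes ARKit has detected and adding an anchor at the resulting pose, roughly as in the hedged Swift fragment below (again a generic ARKit illustration rather than the actual Unity implementation).

```swift
import UIKit
import ARKit

// Illustrative helper (not the ViLo implementation): hit-test a tapped screen
// point against planes ARKit has already detected and pin an anchor there.
// Content attached to that anchor keeps its position as the user walks around.
func placeModelAnchor(at point: CGPoint, in sceneView: ARSCNView) {
    let results = sceneView.hitTest(point, types: .existingPlaneUsingExtent)
    guard let hit = results.first else { return }          // no detected plane under the tap
    let anchor = ARAnchor(transform: hit.worldTransform)   // pose on the detected plane
    sceneView.session.add(anchor: anchor)                  // parent the model to this anchor
}
```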

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

The Human Race: Real-Time Rendering and Augmented Reality in the Movies and Beyond

Back at GDC 2017 Epic Games presented a revolutionary pipeline for rendering visual effects in real time using their Unreal Engine. Developed in partnership with visual effects studio The Mill, the outcome of the project was a short promotional video for Chevrolet called The Human Race (above). While the film’s visual effects are stunning, the underlying innovation isn’t immediately apparent. The following film by The Mill’s Rama Allen nicely summarises the process.

Behind the visual effects The Mill have an adjustable car rig called The Blackbird. Mounted on the car is a 360 degree camera rig which uses The Mill’s Cyclops system to stitch the video output from the different cameras together and transmit it to Unreal Engine. Using positioning data from The Blackbird and QR-like tracking markers on the outside of the vehicle as a spatial reference, the Unreal Engine then overlays computer generated imagery in real time. Because all of this is being done in real time, a viewer can interactively reconfigure the virtual model of the car that has been superimposed on The Blackbird rig while they are watching.
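Conceptually, the real-time overlay boils down to pushing the tracked poses of the camera rig and the Blackbird onto a virtual camera and a virtual car every frame, so the CGI stays registered to the live footage. The Swift/SceneKit fragment below sketches that idea; it is only a conceptual stand-in, since The Mill’s actual pipeline runs inside Unreal Engine.

```swift
import SceneKit
import simd

// Conceptual sketch only (The Mill's pipeline runs in Unreal Engine): keep a
// virtual camera and a virtual car aligned with the poses recovered from the
// rig's positioning data and the tracking markers on the Blackbird.
let scene = SCNScene()

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)

let carNode = SCNNode()   // stand-in for the CGI car superimposed on the Blackbird
scene.rootNode.addChildNode(carNode)

// Called once per rendered frame with the latest tracked poses.
func updateOverlay(cameraPose: simd_float4x4, carPose: simd_float4x4) {
    cameraNode.simdTransform = cameraPose
    carNode.simdTransform = carPose
}
```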

For the film industry this means that CGI and visual effects can be tested on location. For audiences it might mean that aspects of scenes within the final film become customisable. Perhaps the viewer can choose the protagonist’s car. Perhaps the implications are wider. If you can instantly revisualise a car or a character in the film, why not an entire environment? With the emergence of more powerful augmented reality technologies, will there be a point at which this becomes a viable way to interact with and consume urban space?

The videos The Human Race and The Human Race – Behind The Scenes via Rama Allen and The Mill.

Microsoft HoloLens: Hands On!

It’s taken a while, but I finally had my first hands-on look at Microsoft HoloLens last night. The demonstration was given as part of the London Unity Usergroup (LUUG) meetup, during a talk by Jerome Maurey-Delaunay of Neutral Digital about their initial experiences of building demos for the device with Unity. Neutral are a design and software consultancy whose portfolio of projects includes work with cultural institutions such as the Tate and V&A, engineering and aviation firms like Airbus, and architectural firms such as Zaha Hadid Architects, whom they are currently assisting to develop virtual reality visualisation workflows.

During the break following the presentation I had my first chance to try the device out for myself. One of the great features of HoloLens is that it incorporates video capture straight out of the box. Although clips weren’t taken on the night, these videos from the Neutral Digital Twitter stream provide a good indication of my experience when I tested it:

After using VR headsets like the Oculus Rift and HTC Vive, the first thing you notice about the HoloLens is how unencumbered you feel. Where VR headsets enclose the user’s face to block out ambient light and heighten immersion in a virtual environment, the HoloLens is open, affording the user unhindered awareness of their surrounding [augmented] environment over which the virtual objects or ‘holograms’ are projected. The second thing you notice is that the HoloLens runs without a tether. Once applications have been transferred to the device it can be unplugged, leaving the user free to move about without worrying about tripping up or garroting themselves.

Being able to see my surroundings also meant that I could easily talk face to face with Jerome and see the gestures he wanted me to perform in order to operate the device and manipulate the virtual objects it projected. Tapping forefinger and thumb visualised the otherwise invisible virtual mesh that the HoloLens draws as a reference to anchor holograms to the user’s environment. A projected aircraft could then be walked around and visualised from any angle. Alternatively, holding forefinger and thumb together while moving my hand would rotate the object in that direction instead.

Don’t be fooled by the simplicity of these demos. The ability of HoloLens to project animated and interactive holograms that feel anchored to the user’s environment is impressive. I found the headset comfortable and appreciated being able to see my surroundings and interact easily with the people around me. At the same time I wouldn’t say that I felt immersed in the experience in the sense discussed with reference to virtual reality. The ability to interact through natural gestures helped involve my attention in the virtual objects I was seeing, but the actual field of view available for projection is not as wide as the video captures from the device might suggest.

As it stands I wouldn’t mistake Microsoft’s holograms for ‘real’ objects, but then I’m not convinced that this is what we should be aiming for with AR. While one of the prime virtues of virtual reality technologies like Oculus and Vive is their ability to provide a sense of ‘being there’, I see the strength of augmented reality technologies elsewhere in their potential for visualising complex information at the point of engagement, decision or action.

Kind thanks to Neutral Digital for sharing their videos via Twitter. Thanks also to the London Unity Usergroup meetup for arranging the talks and demo.

Microsoft HoloLens for Architecture and Civil Engineering

Where virtual reality is fantastic for visualising and immersing a user in a scene at a human scale, I’ve always felt that augmented reality provides a greater range of options for visualising data at scale. In a bid to bring HoloLens to the wider world of architecture and engineering, Microsoft have recently initiated an exciting partnership with the American company Trimble. Trimble primarily provide hardware, software and services for locational data to a range of industries including land survey, construction, transportation, telecommunications, utilities, and asset tracking and management. I think this partnership offers a fantastic opportunity to demonstrate the potential of HoloLens and augmented reality to work on projects of a larger scale.


The most obvious value it provides is helping professionals visualise their data in 3D. In this video Microsoft and Trimble demonstrate how a virtual hologram could be integrated with a traditional physical model to collaboratively visualise and quickly iterate through alternative proposals. The chosen solution can then be visualised at the human scale to verify the proposal. Beyond this, the video hints at the power of HoloLens to provide crucial data wherever and whenever it is needed during construction, while simultaneously providing the ability to record changes and decisions on the spot. In this way HoloLens and other technologies like it could prove indispensable for urban planning, construction and asset management for the life of a development. Beyond the marketing, though, it will be fascinating to test whether the device can live up to its users’ expectations.

Google Project Tango


Back in the summer Virtual Architectures signed up to go on the waiting list for Google’s Project Tango development kit. The current 7″ development kits are powered by the NVIDIA Tegra K1 processor and have 4GB of RAM, 128GB of storage, a motion tracking camera, integrated depth sensing, WiFi, BTLE, and 4G LTE for wireless and mobile connectivity. Due to other exciting developments for Virtual Architectures we haven’t been able to take up the offer at this time. However, it’s such an exciting project that we can’t resist sharing the details from the Project Tango website:

What is Project Tango?

As we walk through our daily lives, we use visual cues to navigate and understand the world around us. We observe the size and shape of objects and rooms, and we learn their position and layout almost effortlessly over time. This awareness of space and motion is fundamental to the way we interact with our environment and each other. We are physical beings that live in a 3D world. Yet, our mobile devices assume that the physical world ends at the boundaries of the screen.

The goal of Project Tango is to give mobile devices a human-scale understanding of space and motion.

– Johnny Lee and the ATAP-Project Tango Team

3D motion and depth sensing

Project Tango devices contain customized hardware and software designed to track the full 3D motion of the device, while simultaneously creating a map of the environment. These sensors allow the device to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.

What could I do with it?

What if you could capture the dimensions of your home simply by walking around with your phone before you went furniture shopping? What if directions to a new location didn’t stop at the street address? What if you never again found yourself lost in a new building? What if the visually-impaired could navigate unassisted in unfamiliar indoor places? What if you could search for a product and see where the exact shelf is located in a super-store?

Imagine playing hide-and-seek in your house with your favorite game character, or transforming the hallways into a tree-lined path. Imagine competing against a friend for control over territories in your home with your own miniature army, or hiding secret virtual treasures in physical places around the world?

The Project Tango development kit provides excellent opportunities for new developments in architectural visualisation, augmented reality and games. It is also exciting to know that there is an integration with the Unity game engine. We look forward to seeing what developers come up with.