Monthly Archives: July 2017

ViLo: The Virtual London Platform by CASA in VR

This is the third post this week on CASA’s urban data visualisation platform, ViLo. Today we are looking at the virtual reality integration with the HTC Vive:

Using virtual reality technologies such as the HTC Vive we can create data-rich virtual environments in which users can freely interact with digital representations of urban spaces. In this demonstration we invite users to enter a virtual representation of the ArcelorMittal Orbit, a landmark tower located in the Queen Elizabeth Olympic Park. Using CASA’s Virtual London Platform ViLo it is possible to recursively embed 3D models of the surrounding district within that scene. These models can be digitally coupled to the actual locations they represent through the incorporation of real-time data feeds. In this way events occurring in the actual environment, such as the arrival and departure of buses and trains, are immediately represented within the virtual environment.

Virtual reality is a technology which typically uses a head-mounted display to immerse the user in a three-dimensional, computer-generated environment, commonly referred to as a ‘virtual environment’. In this case the virtual environment is a recreation of the viewing gallery at the top of the ArcelorMittal Orbit tower, situated at the Queen Elizabeth Olympic Park in East London. CASA’s ViLo platform is then used to embed further interactive 3D models and data visualisations within that virtual environment.

Using the HTC Vive’s room-scale tracking the user can walk freely between exhibits. Alternatively they can teleport between them by pointing at a spot on the floor and clicking with one of the Vive hand controllers. The other controller is used for interacting with the exhibits, either by pointing and clicking with the trigger button, or by placing the controller over objects and using the grip buttons on its side to hold them.
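For anyone curious how this kind of point-and-click teleportation can be wired up in Unity, the engine ViLo is built in, here is a minimal sketch. It raycasts from the hand controller’s transform to the floor and moves the camera rig to the hit point; the input check and object names are illustrative assumptions rather than ViLo’s actual implementation, which would read the trigger through the SteamVR input system.

```csharp
using UnityEngine;

// Minimal sketch of point-and-click teleportation in a generic Unity
// setup: `controller` is the tracked hand controller's Transform and
// `cameraRig` is the root of the tracked play area. The button check
// is a placeholder for the real controller trigger binding.
public class PointAndClickTeleport : MonoBehaviour
{
    public Transform controller;   // tracked Vive controller (assumed)
    public Transform cameraRig;    // play-area root to move (assumed)
    public LayerMask floorLayer;   // layer containing teleportable floor
    public float maxDistance = 30f;

    void Update()
    {
        // Placeholder input; a real project would use the SteamVR input system.
        if (!Input.GetButtonDown("Fire1")) return;

        Ray ray = new Ray(controller.position, controller.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, maxDistance, floorLayer))
        {
            // Keep the rig's height and move it horizontally to the hit point.
            Vector3 target = hit.point;
            target.y = cameraRig.position.y;
            cameraRig.position = target;
        }
    }
}
```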

In the video we see how the virtual environment can be used to present a range of different media. Visitors can watch 360-degree videos and high-quality architectural visualisations, but they can also interact more actively with the 3D models featured in that content, using virtual tools like the cross-sectional plane seen in the video.

The ViLo platform provides further flexibility by enabling us to import interactive models of entire urban environments. The Queen Elizabeth Olympic Park is visualised with different layers of data provided by live feeds from Transport for London’s bus, tube and bike hire APIs. Different layers are selected and removed by placing 3D icons on a panel. Virtual reality affords the user the ability to choose their own viewpoint on the data simply by moving their head. Other contextual information, like images from Flickr or articles from Wikipedia, can also be imported.
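As a rough illustration of the icon-based layer switching, the sketch below toggles a data layer when an icon’s collider enters the panel’s trigger volume. The tag and field names are assumptions for illustration; the post doesn’t detail how ViLo implements this mechanic.

```csharp
using UnityEngine;

// Sketch of the "place an icon on the panel to toggle a data layer"
// mechanic, assuming each data layer (buses, tubes, bikes) lives under
// its own GameObject and each draggable icon carries this component.
public class DataLayerIcon : MonoBehaviour
{
    public GameObject dataLayer;   // e.g. the bus visualisation layer (assumed)

    void OnTriggerEnter(Collider other)
    {
        // "Panel" is an assumed tag on the selection panel's trigger collider.
        if (other.CompareTag("Panel"))
            dataLayer.SetActive(true);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Panel"))
            dataLayer.SetActive(false);
    }
}
```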

A further feature is the ability to quickly swap between models of different locations. In the final section of the video another model of the Queen Elizabeth Olympic Park is instantly replaced by a model of the area of the Thames in Central London between St Paul’s Cathedral and the Tate Modern gallery. The same tools can be used to manipulate either model. Analysis of building footprint size and building use data is combined with real-time visibility analysis depicting viewsheds from any point the user designates. Wikipedia and Flickr are queried dynamically to provide additional information and context for particular buildings by simply pointing and clicking. In this way many different aspects of urban environments can be digitally reconstructed within the virtual environment, either in miniature or at 1:1 scale.
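To give a sense of how such dynamic lookups can work, here is a hedged sketch that fetches a short description for a selected building from Wikipedia’s public page-summary endpoint using Unity’s UnityWebRequest. The endpoint is Wikipedia’s standard REST route; the class and method names are illustrative, and the post doesn’t specify how ViLo performs its own queries.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of fetching contextual text for a selected building from
// Wikipedia's public REST API. How ViLo itself performs these lookups
// is not documented in the post, so treat this as an illustration only.
public class WikipediaLookup : MonoBehaviour
{
    public IEnumerator FetchSummary(string pageTitle)
    {
        string url = "https://en.wikipedia.org/api/rest_v1/page/summary/" +
                     UnityWebRequest.EscapeURL(pageTitle);

        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (!string.IsNullOrEmpty(request.error))
            {
                Debug.LogWarning("Wikipedia lookup failed: " + request.error);
                yield break;
            }

            // The response is JSON containing an "extract" field; a JSON
            // parser would pull that out for display on an in-world panel.
            Debug.Log(request.downloadHandler.text);
        }
    }
}
```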

Where the ARKit-powered version of ViLo we looked at yesterday provides portability, the virtual reality experience facilitated by the HTC Vive integration can incorporate a much wider variety of data with a far richer level of interaction. Pure data visualisation tasks may not benefit greatly from the immersion or presence provided by virtual reality. However, as we see with new creative applications like Google’s Tilt Brush and Blocks, virtual reality really shines in cases where natural and precise interaction is required to manipulate virtual objects. Virtual environments also provide useful sites for users who can’t be in the same physical location at the same time. Networked telepresence can enable professionals in different cities to work together synchronously. Alternatively, virtual environments can provide forums for public engagement where potential users can drop in at their convenience. Leveraging an urban data visualisation platform like CASA’s ViLo, virtual environments can become useful sites for experimenting with and communicating built environment interventions.

Many thanks to CASA Research Assistants Lyzette Zeno Cortes and Valerio Signorelli for their work on the ViLo virtual reality integration discussed here. Tweet @ValeSignorelli for more information about the HTC Vive integration.

For further details about ViLo see Monday’s post ViLo: The Virtual London Platform by CASA for Desktop and yesterday’s post ViLo: The Virtual London Platform by CASA with ARKit.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.


ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA’s urban data visualisation platform, ViLo. Today we’re looking at an integration with Apple’s ARKit that has been created by CASA research assistant Valerio Signorelli.

Using Apple’s ARKit we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query buildings for information by tapping on them, analyse sunlight and shadows in real time, and watch the boundary between the virtual and the physical blur as bouncy balls simulated in the digital environment interact with the structure of the user’s physical environment.

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit requires an Apple device with an A9 or A10 processor. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference, like the corners of tables and chairs or the points where the floor meets the walls. These points are used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad are also captured. Using a technique called Visual Inertial Odometry, the point data and motion data are combined, enabling points to be tracked even when they aren’t within the camera’s view. Effectively, a virtual room and a virtual camera are constructed on the device, referencing and synchronising with the relative positions of their physical counterparts.
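For readers who want to try something similar, the sketch below places a model on a detected horizontal plane when the user taps the screen. Note that it uses Unity’s current AR Foundation wrapper around ARKit rather than the 2017 ARKit plugin the demo was built with, so the API names differ from whatever ViLo uses; treat it as an illustration of the hit-testing idea.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch of placing a model on an ARKit-detected horizontal plane via
// Unity's AR Foundation (a later wrapper than the plugin used in 2017).
public class PlaceModelOnPlane : MonoBehaviour
{
    public ARRaycastManager raycastManager; // provided by the AR session rig
    public GameObject modelPrefab;          // e.g. the Olympic Park model (assumed)

    static readonly List<ARRaycastResult> hits = new List<ARRaycastResult>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast from the screen touch against detected planes.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose pose = hits[0].pose;
            Instantiate(modelPrefab, pose.position, pose.rotation);
        }
    }
}
```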

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad’s WiFi receiver we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad. Even without a WiFi connection, offline data sets related to the built environment are still available for visualisation. What’s particularly impressive about ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA for Desktop

 

This is the first of three posts introducing CASA’s interactive urban data visualisation platform, ViLo. The platform enables visualisation of both real-time and offline spatio-temporal data sets in a digital, three-dimensional representation of the urban environment. I’ve been fortunate to work alongside the team and learn from them as the project has developed. Initially conceived as a desktop application, ViLo is now being extended by CASA with integrations for a range of different interaction devices, including virtual reality with the HTC Vive and Google Daydream, augmented reality with Google’s Project Tango and Apple’s ARKit, and so-called mixed reality with Microsoft’s HoloLens. Underlying each of these experiments is the ViLo platform.

ViLo is an interactive urban data visualisation platform developed by The Bartlett Centre for Advanced Spatial Analysis (CASA) at UCL in collaboration with the Future Cities Catapult (FCC). It uses both OpenStreetMap data and the MapBox API to create the digital environment, enabling us to visualise the precise locations of buildings, trees and various other urban amenities on a high-resolution digital terrain model. The buildings, which are generated at runtime from OpenStreetMap data, retain their original identifiers so that they can be queried for semantic descriptions of their properties. ViLo can also visualise custom spatio-temporal data sets provided by the user in various file formats. Custom 3D models can be provided for landmarks, and it is possible to switch from the OpenStreetMap-generated geometries to a more detailed CityGML model of the district at LoD2.
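As an illustration of the general idea of runtime building generation, the sketch below extrudes a footprint polygon into simple walls while keeping the OSM identifier attached to the GameObject so it can be queried later. It omits roofs, materials and triangulation of complex footprints, and is not ViLo’s actual generator.

```csharp
using UnityEngine;

// Minimal sketch of turning an OpenStreetMap building footprint into
// runtime geometry while keeping the OSM identifier for later queries.
// Only the walls are extruded here; ViLo's own generator is certainly
// more complete, so treat this as illustrative.
public class OsmBuilding : MonoBehaviour
{
    public long osmId;   // original OSM way id, kept so the building can be queried

    public void Build(Vector2[] footprint, float height)
    {
        var vertices = new Vector3[footprint.Length * 2];
        for (int i = 0; i < footprint.Length; i++)
        {
            vertices[i] = new Vector3(footprint[i].x, 0f, footprint[i].y);
            vertices[i + footprint.Length] = new Vector3(footprint[i].x, height, footprint[i].y);
        }

        // Two triangles per wall segment, wrapping around the footprint.
        var triangles = new int[footprint.Length * 6];
        for (int i = 0; i < footprint.Length; i++)
        {
            int next = (i + 1) % footprint.Length;
            int t = i * 6;
            triangles[t]     = i;
            triangles[t + 1] = i + footprint.Length;
            triangles[t + 2] = next;
            triangles[t + 3] = next;
            triangles[t + 4] = i + footprint.Length;
            triangles[t + 5] = next + footprint.Length;
        }

        var mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        gameObject.AddComponent<MeshRenderer>();   // material assignment omitted
    }
}
```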

Dynamic data sets stored in CSV format can also be visualised alongside real-time feeds. A specific emphasis has been placed on the visualisation of mobility data sets. Using Transport for London’s APIs, ViLo can retrieve and visualise the locations of bike sharing docks and the availability of bikes, along with the entire bus and tube networks: the locations of bus stops and tube stations, and the positions of buses and trains updated in real time.
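By way of illustration, the sketch below polls TfL’s public Unified API for live bike dock data from inside Unity. The /BikePoint endpoint is TfL’s documented route for bike sharing docks, but the polling interval, the parsing, and the mapping of results to markers in the scene are assumptions rather than details taken from ViLo.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of polling Transport for London's Unified API for live bike
// availability. Interval and downstream handling are assumptions.
public class TflBikeFeed : MonoBehaviour
{
    const string BikePointUrl = "https://api.tfl.gov.uk/BikePoint";
    public float pollIntervalSeconds = 60f;

    IEnumerator Start()
    {
        while (true)
        {
            using (UnityWebRequest request = UnityWebRequest.Get(BikePointUrl))
            {
                yield return request.SendWebRequest();

                if (string.IsNullOrEmpty(request.error))
                {
                    // The response is a JSON array of docks with coordinates and
                    // bike counts; a JSON parser would map each dock to a marker
                    // in the 3D scene.
                    Debug.Log("Received " + request.downloadHandler.text.Length + " bytes of bike data");
                }
            }
            yield return new WaitForSeconds(pollIntervalSeconds);
        }
    }
}
```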

The ViLo platform also integrates real-time weather information from Wunderground’s API, a three-dimensional visualisation of Flickr photos relating to points of interest, and a walking route planner for predefined locations using the MapBox API.

An innovative aspect of the ViLo project is the possibility of conducting real-time urban analysis using the various data sets loaded into the digital environment. At the current stage it is possible to conduct two-dimensional and three-dimensional visibility analysis (intervisibility; area and perimeter of the visible surfaces; maximum, minimum and average distance; compactness, convexity and concavity).
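To make the distance measures concrete, the sketch below samples a viewshed by casting rays in a fan around a chosen viewpoint and summarising the visible distances. It covers only the minimum, maximum and average distance; the area, perimeter, compactness and convexity measures ViLo reports would need the visible surface geometry itself, so this is an illustration rather than the platform’s actual method.

```csharp
using UnityEngine;

// Rough sketch of a real-time viewshed measure: cast rays in a fan
// around a chosen viewpoint and summarise the visible distances.
public static class ViewshedSampler
{
    public static void SampleDistances(Vector3 viewpoint, int rayCount, float maxRange,
                                       out float min, out float max, out float average)
    {
        min = float.MaxValue;
        max = 0f;
        float sum = 0f;

        for (int i = 0; i < rayCount; i++)
        {
            float angle = i * 360f / rayCount;
            Vector3 dir = Quaternion.Euler(0f, angle, 0f) * Vector3.forward;

            RaycastHit hit;
            float distance = Physics.Raycast(viewpoint, dir, out hit, maxRange)
                ? hit.distance   // ray blocked by a building or terrain
                : maxRange;      // unobstructed up to the analysis range

            min = Mathf.Min(min, distance);
            max = Mathf.Max(max, distance);
            sum += distance;
        }

        average = sum / rayCount;
    }
}
```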

While originally conceived as part of an effort to visualise London in 3D, the ViLo platform can be used to visualise any urban area across the globe. The first version of the platform demonstrated here focuses on the Queen Elizabeth Olympic Park in East London, a new district purpose-built to host the Summer Olympics and Paralympics in 2012.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

From a purely aesthetic point of view the design of the desktop application reminds me strongly of the image from the Guide for visitors to Ise Shrine, Japan, 1948–54, that visualisation expert Edward Tufte discussed particularly favourably in his book Envisioning Information (1990). Our current efforts at ‘escaping flatland’ are a continuation of previous work undertaken by Andy Hudson-Smith and Mike Batty at CASA in the early 2000s to create a Virtual London.

One of the main advances is our increased ability to integrate real-time data so that the digital representation can be more fully coupled to the actual environment and reflect change as it happens. We also benefit from advances in 3D visualisation and real-time rendering afforded by video game engines such as Unity. As a result the ViLo platform provides a good demonstration of our current capability to observe dynamic processes as they occur in real time at an urban scale.

The Human Race: Real-Time Rendering and Augmented Reality in the Movies and Beyond

Earlier this year at GDC 2017 Epic Games presented a revolutionary pipeline for rendering visual effects in real time using their Unreal Engine. Developed in partnership with visual effects studio The Mill, the outcome of the project was a short promotional video for Chevrolet called The Human Race (above). While the film’s visual effects are stunning, the underlying innovation isn’t immediately apparent. The following film by The Mill’s Rama Allen nicely summarises the process.

Behind the visual effects The Mill have an adjustable car rig called The Blackbird. Mounted on the car is a 360-degree camera rig which uses The Mill’s Cyclops system to stitch the video output from the different cameras together and transmit it to Unreal Engine. Using positioning data from The Blackbird and QR-like tracking markers on the outside of the vehicle as a spatial reference, Unreal Engine then overlays computer-generated imagery in real time. Because all of this is done in real time, a viewer can interactively reconfigure the virtual model of the car superimposed on The Blackbird rig while they are watching.

For the film industry this means that CGI and visual effects can be tested on location. For audiences it might mean that aspects of scenes within the final film become customisable. Perhaps the viewer can choose the protagonist’s car. Perhaps the implications are wider. If you can instantly revisualise a car or a character in a film, why not an entire environment? With the emergence of more powerful augmented reality technologies, will there be a point at which this becomes a viable way to interact with and consume urban space?

The videos The Human Race and The Human Race – Behind The Scenes via Rama Allen and The Mill.

Habitat: Drama and Adventure in Early Online Worlds

Today while trawling the web I stumbled on this promo for an early MMO from 1986 called Habitat. Produced by Lucasfilm Games in collaboration with online service provider Quantum Link, the game provided real-time interaction between Commodore 64 users via dial-up modem. The advert repeatedly assures potential players of the possibilities for fast-paced ‘drama and adventure’, but it’s hard to get that sense from the in-game footage. The appeal of the game likely had more to do with the novelty of synchronous interaction between players in a persistent and graphically represented online environment. Up to that point online MUDs had tended to be text-based.

I love the way the players’ interactions are presented in the video. Combined with the jaunty music and voice-over they seem jarringly innocent. At the same time, the implications of their interactions weirdly presage the tensions and less comfortable aspects of online interaction today. While stealing another avatar’s head is totally fair game [IT TOTALLY IS!], the suggestion that an armed robbery in the game might be averted by a trip to the sauna… from the perspective of a critical reading, there’s a lot going on there. I believe today’s augmented audiences are a little less naive.

Bang!

The game and its advert are wonderful artefacts from the perspective of media archeology, not only insofar as the game is a precursor to today’s MMOs, but also as it relates to the wider context of early developments in online communities, virtual environments and social media. The Museum of Art and Digital Entertainment (MADE) have obtained the original source code from Lucasfilm Games in an attempt to preserve and restore the game in working condition. The code is available on GitHub here.
