Category Archives: 3D Virtual City Models

ViLo and the Future of Planning

Following our recent posts on CASA’s digital urban visualisation platform ViLo, the Future Cities Catapult, who collaborated with CASA on the project, have released a video discussing it in further detail. Commencing in 2014, the project aimed to identify which combinations of urban data might be most valuable to urban planners, site operators and citizens. CASA research associates Lyzette Zeno Cortes and Valerio Signorelli discuss how it was created using the Unity game engine in order to understand its potential for visualising information in real-time.

Ben Edmonds from the London Legacy Development Corporation, which runs the Queen Elizabeth Olympic Park where ViLo has been tested, discusses how the platform was used to gather environmental data and qualitative feedback from park visitors in order to understand and improve their experience of the park. Including real-time information on transport links, environmental factors and public use of the park builds up an overview of the whole area so that it can be run more effectively.

There is also an expectation that the 3D model can be extended beyond the Olympic Park and implemented London-wide. This fits into a wider ambition for City Information Modelling (CIM). As Stefan Webb from the Future Cities Catapult describes it, this is the idea that a 3D model containing sufficient data can enable us to forecast the impact of future developments and changes on the functioning of both the physical and social infrastructure of the city.


Nature Smart Cities: Visualising IoT bat monitor data with ViLo

Over the past few weeks I’ve been collaborating with researchers at the Intel Collaborative Research Institute (ICRI) for Urban IoT to integrate data from bat monitors on the Queen Elizabeth Olympic Park into CASA’s digital visualisation platform, ViLo. At present we visualise the geographic location of each bat monitor with a pin that includes an image showing the locational context of the sensor and a flag indicating the total number of bat calls it recorded the previous evening. A summary box in the user interface shows the total number of bat monitors in the vicinity and the total number of bat calls recorded the previous evening. Animated bats are also displayed above the pins to help users quickly identify which bat monitors have results from the previous evening to look at.

The data being visualised here comes from custom-made ‘Echo Box’ bat monitors that have been specifically designed by ICRI researchers to detect bat calls from ambient sound. They have been created as part of a project called Nature Smart Cities, which intends to develop the world’s first open source system for monitoring bats using Internet of Things (IoT) technology. IoT refers to the idea that all sorts of objects can be made to communicate and share useful information via the internet. Typically IoT devices incorporate some sort of sensor that can process and transmit information about the environment and/or actuators that respond to data by effecting changes within the environment. Examples of IoT devices in a domestic setting would be Philips Hue lighting, which can be controlled remotely using a smartphone app, or Amazon’s Echo, which can respond to voice commands to cue up music from Spotify, control your Hue lighting or other IoT devices, and of course order items from Amazon. Billed as a ‘“Shazam” for bats’, the system is how the ICRI hope to show the value of IoT technology for sensing and conserving urban wildlife populations, in this case bats.

Each Echo Box sensor uses an ultrasonic microphone to record a 3 second sample of audio every 6 seconds. The audio is then processed and transformed into an image called a spectrogram, a kind of fingerprint for sound that shows how the amplitude of the signal is distributed across frequencies over time. Bat calls can be clearly identified by their high frequencies. Algorithms then analyse the spectrogram and compare it to those of known bat calls in order to identify which type of bat was most likely to have made the call.
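As a rough illustration of this step, the sketch below (Python, using scipy) turns a clip into a spectrogram restricted to bat frequencies and scores it against a reference call. The 192 kHz sample rate, window length and the simple correlation-based match are my own assumptions for the sake of the example, not the Echo Box implementation.

```python
# A minimal sketch of the spectrogram step described above, NOT the Echo Box
# pipeline itself. Sample rate, window size and the template match are assumed.
import numpy as np
from scipy import signal

SAMPLE_RATE = 192_000          # assumed ultrasonic sampling rate (Hz)
CLIP_SECONDS = 3               # each Echo Box clip is roughly 3 seconds long

def clip_to_spectrogram(audio: np.ndarray) -> np.ndarray:
    """Turn a mono audio clip into a spectrogram restricted to bat frequencies."""
    freqs, times, sxx = signal.spectrogram(audio, fs=SAMPLE_RATE, nperseg=1024)
    # Discard energy below 20 kHz: frequencies in the range of human speech
    # are never analysed or stored.
    return sxx[freqs >= 20_000, :]

def match_score(spec: np.ndarray, template: np.ndarray) -> float:
    """Crude normalised correlation between a spectrogram and a known call."""
    a = (spec - spec.mean()) / (spec.std() + 1e-9)
    b = (template - template.mean()) / (template.std() + 1e-9)
    n = min(a.shape[1], b.shape[1])
    return float((a[:, :n] * b[:, :n]).mean())

# Example: a synthetic noise clip scored against a synthetic reference template.
clip = np.random.randn(SAMPLE_RATE * CLIP_SECONDS)
template = clip_to_spectrogram(np.random.randn(SAMPLE_RATE * CLIP_SECONDS))
print(f"similarity: {match_score(clip_to_spectrogram(clip), template):.3f}")
```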

The really clever part from a technical perspective is that all of this processing is done on the device using one of Intel’s Edison chips. Rather than transmitting large amounts of audio back to a centralised system for storage and analysis, Intel are employing ‘edge processing’, processing on the device at the edge of the network, to massively reduce the amount of data that needs to be sent over the network to their central data repository. Once the spectrogram has been produced the original sound files are immediately deleted as they are no longer required. Combined with the fact that sounds within the range of human speech and below 20 kHz are ignored by the algorithms that process the data, this ensures that the privacy of passers-by is protected.

This is a fascinating project and it has been great having access to such an unusual data set. Further work here can focus on visualising previous evenings’ data as a time series to better understand patterns of bat activity over the course of the study. We also hope to investigate the use of sonification by incorporating recordings of typical bat calls for each species in order to create a soundscape that complements the visualisation and engages with the core sonic aspect of the study.

Kind thanks to Sarah Gallacher and the Intel Collaborative Research Institute for providing access to the data. Thanks also to the Queen Elizabeth Olympic Park for enabling this research. For more information about bats on the Queen Elizabeth Olympic Park check out the project website: Nature Smart Cities.

ViLo: The Virtual London Platform by CASA in VR

This is the third post of the week looking at CASA’s urban data visualisation platform ViLo. Today we are looking at the virtual reality integration with HTC Vive:

Using Virtual Reality technologies such as the HTC Vive we can create data-rich virtual environments in which users can freely interact with digital representations of urban spaces. In this demonstration we invite users to enter a virtual representation of the ArcelorMittal Orbit, a landmark tower located in the Queen Elizabeth Olympic Park. Using CASA’s Virtual London Platform, ViLo, it is possible to recursively embed 3D models of the surrounding district within that scene. These models can be digitally coupled to the actual locations they represent through the incorporation of real-time data feeds. In this way events occurring in the actual environment, the arrival and departure of buses and trains for example, are immediately represented within the virtual environment in real-time.

Virtual Reality is a technology which typically uses a head-mounted display to immerse the user in a three-dimensional, computer-generated environment, commonly referred to as a ‘virtual environment’. In this case the virtual environment is a recreation of the viewing gallery at the top of the ArcelorMittal Orbit tower, situated at the Queen Elizabeth Olympic Park in East London. CASA’s ViLo platform is then used to embed further interactive 3D models and data visualisations within that virtual environment.

Using the HTC Vive’s room-scale tracking the user can freely walk between exhibits, or teleport between them by pointing and clicking at a spot on the floor with one of the Vive hand controllers. The other controller is used for interacting with the exhibits, either by pointing and clicking with the trigger button, or by placing the controller over objects and using the grip buttons on its side to hold them.

In the video we see how the virtual environment can be used to present a range of different media. Visitors can watch 360 degree videos and high quality architectural visualisations, but they can also interact with the 3D models featured in that content more actively using virtual tools like the cross-sectional plane seen in the video.

The ViLo platform provides further flexibility by enabling us to import interactive models of entire urban environments. The Queen Elizabeth Olympic Park is visualised with different layers of data provided by live feeds from Transport for London’s bus, tube, and bike hire APIs. Different layers are selected and removed by placing 3D icons on a panel. Virtual reality affords the user the ability to choose their own viewpoint on the data simply by moving their head. Other contextual information, such as images from Flickr or articles from Wikipedia, can also be imported.
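The kind of live feed behind these transport layers is openly accessible. The sketch below (Python) polls arrival predictions for a single stop from Transport for London’s unified API; the StopPoint id and field names reflect my reading of the public API and should be treated as assumptions, and ViLo’s own ingestion code is not shown here.

```python
# Hedged sketch: polling TfL's unified API for live arrivals, the kind of feed
# ViLo layers onto the park model. The stop id below is illustrative.
import requests

STOP_POINT_ID = "940GZZLUSTD"  # assumed Naptan id for a tube station near the park

def fetch_arrivals(stop_id: str) -> list[dict]:
    """Return upcoming arrivals for a stop, sorted by time to station."""
    url = f"https://api.tfl.gov.uk/StopPoint/{stop_id}/Arrivals"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return sorted(response.json(), key=lambda a: a["timeToStation"])

for arrival in fetch_arrivals(STOP_POINT_ID)[:5]:
    print(f'{arrival["lineName"]:>12} -> {arrival["destinationName"]:<30} '
          f'in {arrival["timeToStation"] // 60} min')
```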

A further feature is the ability to quickly swap between models of different locations. In the final section of the video the model of the Queen Elizabeth Olympic Park is immediately replaced by a model of the stretch of the Thames in Central London between St Paul’s Cathedral and the Tate Modern gallery. The same tools can be used to manipulate either model. Analysis of building footprint size and building use data is combined with real-time visibility analysis depicting viewsheds from any point the user designates. Wikipedia and Flickr are queried dynamically to provide additional information and context for particular buildings by simply pointing and clicking. In this way many different aspects of urban environments can be digitally reconstructed within the virtual environment, either in miniature or at 1:1 scale.

Where the ARKit-powered version of ViLo we looked at yesterday provided portability, the virtual reality experience facilitated by the HTC Vive integration can incorporate a much wider variety of data with a far richer level of interaction. Pure data visualisation tasks may not benefit greatly from the immersion or presence provided by virtual reality. However, as we see with new creative applications like Google’s Tilt Brush and Blocks, virtual reality really shines in cases where natural and precise interaction is required for the manipulation of virtual objects. Virtual environments also provide useful meeting places for users who can’t be in the same physical location at the same time. Networked telepresence can enable professionals in different cities to work together synchronously, while virtual environments can also provide forums for public engagement where potential users can drop in at their convenience. Leveraging an urban data visualisation platform like CASA’s ViLo, virtual environments can become useful sites for experimentation and for the communication of built environment interventions.

Many thanks to CASA Research Assistants Lyzette Zeno Cortes and Valerio Signorelli for their work on the ViLo virtual reality integration discussed here. Tweet @ValeSignorelli for more information about the HTC Vive integration.

For further details about ViLo see Monday’s post ViLo: The Virtual London Platform by CASA for Desktop and yesterday’s post ViLo: The Virtual London Platform by CASA with ARKit.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA’s urban data visualisation platform, ViLo. Today we’re looking at an integration with Apple’s ARKit that has been created by CASA research assistant Valerio Signorelli.

Using ARKit by Apple we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query buildings for information by tapping on them, analyse sunlight and shadows in real-time, and watch the boundary between the virtual and physical blur as bouncy balls simulated in the digital environment interact with the structure of the user’s physical environment.

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit requires an Apple device with an A9 or A10 processor. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference, like the corners of tables and chairs, or the points where the floor meets the walls. These points are used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad are also captured. Using a technique called Visual Inertial Odometry, the point data and motion data are combined, enabling points to be tracked even when they aren’t within the camera’s view. Effectively, a virtual room and a virtual camera are constructed on the device, referencing and synchronising with the relative positions of their physical counterparts.
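ARKit performs this reconstruction on-device and its internals aren’t public, but the flavour of horizontal plane detection can be sketched as a search for a height shared by many tracked feature points. The toy example below (Python with numpy) is purely conceptual; the thresholds and the RANSAC-style loop are my own assumptions, not ARKit’s algorithm.

```python
# Conceptual sketch only: finding a dominant horizontal plane in a cloud of
# tracked feature points, in the spirit of the plane detection described above.
import numpy as np

def find_horizontal_plane(points: np.ndarray, iterations: int = 200,
                          tolerance: float = 0.02) -> float | None:
    """Return the height (y) of the best-supported horizontal plane, if any."""
    best_height, best_support = None, 0
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        candidate = points[rng.integers(len(points)), 1]   # sample one point's height
        support = np.sum(np.abs(points[:, 1] - candidate) < tolerance)
        if support > best_support:
            best_height, best_support = candidate, support
    # Only report a plane if a reasonable share of points supports it.
    return float(best_height) if best_support > 0.2 * len(points) else None

# Synthetic test: noisy points on a floor at y = 0 plus random clutter.
floor = np.column_stack([np.random.uniform(-2, 2, 300),
                         np.random.normal(0.0, 0.005, 300),
                         np.random.uniform(-2, 2, 300)])
clutter = np.random.uniform(-2, 2, (100, 3))
print(find_horizontal_plane(np.vstack([floor, clutter])))
```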

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad’s WiFi connection we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad. Even without a WiFi connection, offline data sets related to the built environment are still available for visualisation. What’s particularly impressive with ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA for Desktop

 

This is the first of three posts introducing CASA’s interactive urban data visualisation platform, ViLo. I’ve been fortunate to work alongside the team and learn from them as the project has developed. Initially conceived as a desktop application, ViLo is now being integrated with a range of different interaction devices, including virtual reality with the HTC Vive and Google Daydream, augmented reality with Google’s Project Tango and Apple’s ARKit, and so-called mixed reality with Microsoft’s HoloLens. The ViLo platform underlies each of these experiments.

ViLo is an interactive urban data visualisation platform developed by The Bartlett Centre for Advanced Spatial Analysis (CASA) at UCL in collaboration with the Future Cities Catapult (FCC). It enables visualisation of both real-time and offline spatio-temporal data sets in a digital, three-dimensional representation of the urban environment. The digital environment itself is created using OpenStreetMap data and the Mapbox API, allowing the precise locations of buildings, trees and various other urban amenities to be visualised on a high resolution digital terrain model. The buildings, which are generated at runtime from OpenStreetMap data, retain their original identifiers so that they can be queried for semantic descriptions of their properties. ViLo can also visualise custom spatio-temporal data sets provided by the user in various file formats. Custom 3D models can be supplied for landmarks, and it is possible to switch from the OpenStreetMap-generated geometries to a more detailed CityGML (LoD2) model of the district.
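For a sense of what generating buildings at runtime from OpenStreetMap data involves, the sketch below (Python, using the public Overpass API) fetches building footprints around the park together with the OSM identifiers that make later semantic queries possible. The coordinates and query radius are approximate assumptions, and this is not ViLo’s own pipeline.

```python
# Hedged sketch: pulling building footprints (with their OSM ids) from the
# Overpass API. The centre point is an assumed approximation of the park.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"
QUERY = """
[out:json][timeout:25];
way["building"](around:500, 51.5432, -0.0166);
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": QUERY}, timeout=30)
response.raise_for_status()

for way in response.json()["elements"]:
    footprint = [(node["lat"], node["lon"]) for node in way.get("geometry", [])]
    # The OSM way id is retained, so each building can later be queried for
    # semantic attributes such as its name or building type.
    name = way.get("tags", {}).get("name", "unnamed")
    print(way["id"], name, len(footprint), "vertices")
```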

Dynamic data sets stored in CSV format can also be visualised alongside real-time feeds. A particular emphasis has been placed on the visualisation of mobility data. Using Transport for London’s APIs, ViLo can retrieve and visualise the locations of bike sharing docks and the availability of bikes, as well as the entire bus and tube networks, including the locations of bus stops and tube stations and the positions of buses and trains updated in real-time.
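The bike-hire layer, for example, can be fed from TfL’s open BikePoint endpoint. The snippet below is a hedged sketch: the property names match my understanding of the public API, and ViLo’s actual data handling is not shown.

```python
# Illustrative sketch of the bike-hire feed: TfL's BikePoint endpoint returns
# every dock with its current bike availability.
import requests

docks = requests.get("https://api.tfl.gov.uk/BikePoint", timeout=30).json()

def prop(dock: dict, key: str) -> str:
    """Pull a named value out of a BikePoint's additionalProperties list."""
    return next(p["value"] for p in dock["additionalProperties"] if p["key"] == key)

for dock in docks[:5]:
    print(f'{dock["commonName"]:<45} bikes={prop(dock, "NbBikes"):>3} '
          f'empty={prop(dock, "NbEmptyDocks"):>3}  ({dock["lat"]:.4f}, {dock["lon"]:.4f})')
```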

The ViLo platform also integrates real-time weather information from Wunderground’s API, a three-dimensional visualisation of Flickr photos relating to points of interest, and a walking route planner for predefined locations using the Mapbox API.

An innovative aspect of the ViLo project is the possibility of conducting real-time urban analysis using the various data sets loaded into the digital environment. At the current stage it is possible to conduct two-dimensional and three-dimensional visibility analysis (intervisibility; area and perimeter of the visible surfaces; maximum, minimum and average distance; compactness, convexity and concavity).
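To make those visibility metrics concrete, here is a much simplified two-dimensional isovist sketch (Python with shapely, which is an assumption on my part; ViLo’s analysis runs in real-time inside Unity and is considerably more sophisticated). Rays are cast from a viewpoint, clipped against building footprints, and the resulting polygon yields area, perimeter, distances, compactness and convexity.

```python
# Simplified 2D isovist: ray casting against footprints, then summary metrics.
import math
from shapely.geometry import LineString, Point, Polygon
from shapely.ops import unary_union

def isovist_metrics(viewpoint: Point, obstacles: list[Polygon],
                    max_range: float = 200.0, n_rays: int = 360) -> dict:
    blockers = unary_union(obstacles)
    hits = []
    for i in range(n_rays):
        angle = 2 * math.pi * i / n_rays
        ray = LineString([viewpoint,
                          (viewpoint.x + max_range * math.cos(angle),
                           viewpoint.y + max_range * math.sin(angle))])
        visible = ray.difference(blockers)
        if visible.geom_type == "MultiLineString":
            # Keep only the segment still attached to the viewpoint.
            visible = min(visible.geoms, key=lambda g: g.distance(viewpoint))
        hits.append(visible.length)
    isovist = Polygon([(viewpoint.x + d * math.cos(2 * math.pi * i / n_rays),
                        viewpoint.y + d * math.sin(2 * math.pi * i / n_rays))
                       for i, d in enumerate(hits)])
    return {
        "area": isovist.area,
        "perimeter": isovist.length,
        "max_distance": max(hits),
        "min_distance": min(hits),
        "mean_distance": sum(hits) / len(hits),
        # Compactness: 1.0 for a circle, lower for more jagged isovists.
        "compactness": 4 * math.pi * isovist.area / isovist.length ** 2,
        "convexity": isovist.area / isovist.convex_hull.area,
    }

building = Polygon([(10, -5), (20, -5), (20, 5), (10, 5)])
print(isovist_metrics(Point(0, 0), [building]))
```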

While originally conceived as part of an effort to visualise London in 3D, the ViLo platform can be used to visualise any urban area across the globe. The first version of the platform demonstrated here focuses on the Queen Elizabeth Olympic Park in East London, a new district purpose-built to host the London Summer Olympics and Paralympics in 2012.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

From a purely aesthetic point of view the design of the desktop application strongly reminds me of the image from the Guide for Visitors to Ise Shrine, Japan, 1948–54 that visualisation expert Edward Tufte discussed particularly favourably in his book Envisioning Information (1990). Our current efforts at ‘escaping flatland’ are a continuation of work undertaken by Andy Hudson-Smith and Mike Batty at CASA in the early 2000s to create a Virtual London.

One of the main advances is our increased ability to integrate real-time data, so that the digital representation can be more fully coupled to the actual environment and reflect change as it happens. We also benefit from advances in 3D visualisation and real-time rendering afforded by video game engines such as Unity. As a result the ViLo platform provides a good demonstration of our current capability to observe dynamic processes as they occur, in real-time and at an urban scale.

Open3D: Crowd-Sourced Distributed Curation of City Models

Open3D is a project by the Smart Geometry Processing Group in UCL’s Computer Science department. The project aims to provide tools for the crowd-sourcing of large-scale 3D urban models. It achieves this by giving users access to a basic 3D data set and providing an editor enabling them to amend the model and add further detail.

The model that users start with is created by vertically extruding 2D building footprints derived from OpenStreetMap or Ordnance Survey map data. Access to the resulting 3D model is provided through a viewer based on the Cesium JavaScript library for rendering virtual globes in a web browser. The interface allows users to select particular buildings to work on. As changes are made to the model with the Open3D editor they are parameterised behind the scenes. This means that each change becomes a variable in an underlying set of repeatable rules that form templates representing common objects, such as different types of window or door. These templates can then be shared between users and reapplied to other similar buildings within the model, facilitating collaboration between multiple users and speeding up model creation.
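The extruded starting model is conceptually simple. The sketch below (Python; the footprint, height and OBJ output are invented for illustration and are not Open3D’s actual code) shows how a 2D footprint becomes a basic prism mesh.

```python
# Toy illustration of footprint extrusion: a 2D outline becomes a prism mesh.
def extrude_footprint(footprint: list[tuple[float, float]], height: float):
    """Return (vertices, faces) for a prism made from a 2D footprint."""
    n = len(footprint)
    vertices = [(x, y, 0.0) for x, y in footprint] + [(x, y, height) for x, y in footprint]
    faces = []
    for i in range(n):                      # one quad per wall, split into triangles
        j = (i + 1) % n
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    faces.append(tuple(range(n)))           # floor (assumes a convex footprint)
    faces.append(tuple(range(2 * n - 1, n - 1, -1)))  # roof, reversed winding
    return vertices, faces

# A 10 m x 6 m footprint extruded to 12 m, written out as a tiny OBJ file.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 12.0)
with open("building.obj", "w") as obj:
    for v in verts:
        obj.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for f in faces:
        obj.write("f " + " ".join(str(i + 1) for i in f) + "\n")
```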

Crowd-sourcing 3D urban models is not new. As we saw in an earlier post on 3D Imagery in Google Earth, Google’s acquisition of SketchUp in 2006 enabled enthusiasts to model and texture 3D buildings directly from satellite imagery. These models could then be uploaded to the 3D Warehouse, where they were curated by Google, who chose the best models for inclusion in their platform. Despite the enthusiasm of the user community there were limits to the speed of progress and the coverage that could be achieved. In 2012 Google sold SketchUp to the engineering company Trimble after adopting a more automated process relying on a combination of photogrammetry and computer vision techniques. We recently saw similar techniques being used by ETH Zurich in our last post on their VarCity project.

In this context the Open3D approach, which relies heavily on human intervention, may seem outdated. However, while the kinds of textured surface models created by automated photogrammetry look very good from a distance, closer inspection reveals all sorts of issues. The challenges involved in creating 3D models through photogrammetry include: (i) gaining sufficient coverage of the object; (ii) the need to use images taken at different times in order to achieve that coverage; (iii) having images of sufficient resolution to obtain the required level of detail; and (iv) the indiscriminate nature of the captured images, in the sense that they include everything within the camera’s field of view, whether or not it is intended for inclusion in the final model. Without manual editing or further processing this can result in noisy surfaces, with hollow, blob-like structures standing in for moving or poorly defined structures and objects. The unofficial Google Earth Blog has done a great job of documenting such anomalies within the Google platform over the years, including ghostly images and hollow objects, improbably deep rivers, drowned cities, problems with overhanging trees and buildings, and blobby people.

The VarCity project sought to address these issues by developing new algorithms and combining techniques to improve the quality of the surface meshes generated using aerial photogrammetry. For example, vehicle-mounted cameras were used in combination with tourist photographs to provide higher resolution data at street level. In this way the ETH Zurich team were able to considerably improve the level of detail and reduce noise in the building facades. Despite this, the results of the VarCity project still have limitations. With regard to their use in first-person virtual reality applications, for example, it could be argued that a more precisely modelled environment would better support a sense of presence and immersion for the user. While such a data set would be more artificial, by virtue of the greater human intervention involved in its production, it would also appear less jarringly coarse and feel more seamlessly realistic.

In their own ways both VarCity and Open3D seek to reduce the time and effort required to produce 3D urban models. VarCity uses a combination of methods and increasingly sophisticated algorithms to reduce noise in the automated reconstruction of urban environments. Open3D, on the other hand, starts with a relatively clean data set and provides tools to enhance productivity, leveraging the human intelligence of users and their familiarity with the environment they are modelling to maintain a high level of quality. Hence, while the current output of Open3D may appear quite rudimentary compared to VarCity, it would improve through the efforts of the system’s potential users.

Unlike the VarCity project, in which crowd-sourcing was effectively achieved by proxy through the secondary exploitation of tourist photos gathered via social media, Open3D seeks to engage a community of users through direct and voluntary citizen participation. In this regard Open3D faces a considerable challenge. In order to work, the project needs to find an enthusiastic user group and engage them by providing highly accessible and enjoyable tools and features that lower the barrier to participation. To that end the Open3D team are collaborating with UCL’s Interaction Centre (UCLIC), who will focus on usability testing and adding new features. There is clearly an appetite for the online creation of 3D content, as the success of new platforms like Sketchfab shows. Whether there is sufficient enthusiasm for the bottom-up creation of 3D urban data sets without the influence of a brand like Google remains to be seen.

For more information on Open3D check out the Smart Geometry Processing Group page or have a look at the accompanying paper here.

VarCity: 3D and Semantic Urban Modelling from Images

In this video we see the results of VarCity, a five-year research project at the Computer Vision Lab, ETH Zurich. The aim of the project was to automatically generate 3D city models from photos, such as those openly available online via social media.

The VarCity system uses computer vision algorithms to analyse and stitch together overlapping photographs. Point clouds are created from the matched points and used to generate a geometric mesh or surface model. Other algorithms identify and tag different types of urban objects, such as streets, buildings, roofs, windows and doors. These semantic labels can then be used to query the model and automatically derive meaningful information about buildings and streets, as the video describes. In this way the VarCity project demonstrates one way in which comprehensive 3D city models could effectively be crowd-sourced over time.

It is also interesting that VarCity uses computer vision to connect real-time video feeds and content from social media to actual locations, which is used to estimate local vehicle and pedestrian traffic. As the video suggests, there may be limitations to this method for determining urban dynamics across the city, as it depends on access to a suitably large number of camera feeds. It also has implications for privacy and surveillance, which the VarCity team address by showing representative simulated views in place of actual scenes. As such, the 3D modelling of urban regions can no longer be viewed as a neutral and purely technical enterprise.

The wider project covers four main areas of research:

  • Automatic city-scale 3D reconstruction
  • Automatic semantic understanding of the 3D city
  • Automatic analysis of dynamics within the city
  • Automatic multimedia production

A fuller breakdown of the VarCity project can be viewed in the video below.

The work on automatic 3D reconstruction is particularly interesting. A major difficulty with 3D city models has been the amount of manual effort required to create and update them through traditional 3D modelling workflows. One solution has been to procedurally generate such models using software such as ESRI’s CityEngine, in which preset rules are used to randomly determine values for parameters like the height of buildings, the pitch of roofs, and the types of walls and doors. This is a great technique for generating fictional cities for movies and video games, but it has never been fully successful for modelling actually existing urban environments. This is because procedurally generated models are only as good as their inputs: both the complexity of the rules used to generate the geometry, and the representational accuracy of assets such as street furniture models and building textures, if these are to be applied.
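To illustrate the idea of rule-driven randomness, the toy sketch below (Python) generates buildings from preset parameter ranges. The rules and values are invented for the example and are not CityEngine’s.

```python
# Toy rule-based procedural generation: preset ranges drive random choices of
# height, roof pitch and facade material for each plot.
import random

RULES = {
    "residential": {"floors": (2, 5), "roof_pitch": (25, 45), "materials": ["brick"]},
    "commercial":  {"floors": (4, 20), "roof_pitch": (0, 5),  "materials": ["glass", "concrete"]},
}

def generate_building(plot_id: int, zone: str) -> dict:
    rule = RULES[zone]
    floors = random.randint(*rule["floors"])
    return {
        "plot": plot_id,
        "zone": zone,
        "floors": floors,
        "height_m": floors * 3.2,                      # assumed storey height
        "roof_pitch_deg": random.uniform(*rule["roof_pitch"]),
        "material": random.choice(rule["materials"]),
    }

for i in range(3):
    print(generate_building(i, random.choice(list(RULES))))
```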

Procedural generation also involves an element of randomness, which requires the application of constraints such as the age of buildings in specific areas to determine which types of street furniture and textures should be applied. Newer districts may be more likely to feature concrete and glass, whereas much older districts will likely consist of buildings made of brick. The more homogeneous an area is in terms of age and design, the easier it is to generate procedurally, especially if it is laid out in a grid. Even so, there is always a need for manual adjustment, which takes considerable effort and may involve ground truthing. Using such methods for particularly heterogeneous cities like London is problematic, especially if regular updates are required to capture changes as they occur.

For my own part I’m currently looking at the processing of point cloud information, so it will be fascinating to read the VarCity team’s research papers, available here.