Tag Archives: Visualisation

Roames: Virtual Asset Management With Unity

Roames is a Unity-based 3D data visualisation platform created by the Dutch company Fugro for asset management. In particular it has been used to visualise LiDAR point clouds as the basis for the management and maintenance of power networks. In the video below, Glen Ross-Sampson and Peter O’Loughlin from the Fugro Roames team discuss the challenges involved in creating a platform for geospatial data in Unity.

Behind the scenes, Amazon Web Services (AWS) provides scalable computation for processing large amounts of point cloud data. Algorithms are run on AWS clusters to classify and extract different types of features from the point cloud, including power lines, poles, vegetation and buildings. Further business rules can then be applied and visualised to help users make decisions. In this case they helped Ergon Energy in Australia assess the risk of damage to overhead power cables caused by growing vegetation. The benefit of this kind of ‘virtual asset management’ is that it allows clients like Ergon to make assessments about the assets they manage without having to send crews to inspect every site. By prioritising the sites most at risk they can expect to make significant savings.

Roames was the outcome of a five-year project. Unity was chosen as the visualisation client because commercial GIS software didn’t provide the performance the team required. They also wanted to be able to customise the interface and experiment with simulation. Using Unity enabled the team to prototype without having to build low-level functionality from scratch.

The system allows the user to explore the scene in real-time. Data are streamed into the scene and unloaded dynamically with the aid of memory and hard-disk caches. Changing levels of detail (LOD) support zooming the view from space all the way down to ground level. As the user zooms in, points are replaced by a voxel representation. All of this is backed by Amazon S3 cloud storage.

As well as discussing the motivation behind Roames and its technical stack, the talk does a great job of covering some common problems and solutions in working with large spatial data sets in Unity.

Technical notes:

Regionation – Map data in Roames is structured according to the Tile Map Service (TMS) specification developed by the Open Source Geospatial Foundation (OSGeo) and served via a REST API endpoint. Tiles of different LODs are provided based on proximity to the camera. This also ensures that when the camera is tilted, lower levels of detail are used for objects that are further away.
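As a rough illustration of the tiling scheme (not Roames’ actual code, which isn’t public), the sketch below converts a longitude/latitude and zoom level to TMS tile indices in the Web Mercator projection. Note that TMS counts tile rows from the south, the reverse of the XYZ ‘slippy map’ convention:

```csharp
using System;

// Sketch of TMS tile addressing in the Web Mercator projection, for
// illustration only. TMS counts tile rows from the bottom (south),
// unlike the XYZ "slippy map" scheme.
static class TmsTiling
{
    // Convert longitude/latitude (degrees) to TMS tile indices at a zoom level.
    public static void LonLatToTile(double lon, double lat, int zoom,
                                    out int tileX, out int tileY)
    {
        int n = 1 << zoom; // number of tiles along each axis at this zoom
        tileX = (int)((lon + 180.0) / 360.0 * n);

        double latRad = lat * Math.PI / 180.0;
        int xyzY = (int)((1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI) / 2.0 * n);
        tileY = n - 1 - xyzY; // flip the row index for TMS
    }
}
```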

Floating Point Precision – Unity uses 32-bit single-precision floating point numbers to store the positions of assets, giving an accuracy of roughly 7 significant figures. Roames needs to map data across the whole globe to millimetre precision, yet at the scale of the globe accuracy to the nearest metre alone requires 8 significant figures. The resulting spatial uncertainty is visible on screen as spatial jitter. This was resolved by storing the positions of objects with 64-bit double precision and using a floating origin: the position of the main camera is reset to the Unity world origin (0,0,0) each frame, and the other objects are moved relative to that position rather than moving the camera.
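A minimal sketch of the floating origin technique, assuming world geometry is grouped under a handful of root transforms (compare the ‘terminal nodes’ described next):

```csharp
using UnityEngine;

// Minimal floating-origin sketch. Each frame the camera's offset is
// subtracted from every world root and the camera is returned to (0,0,0),
// so rendering always happens close to the origin, where 32-bit floats
// are most precise. True global positions would be tracked separately
// in 64-bit doubles.
public class FloatingOrigin : MonoBehaviour
{
    public Transform[] worldRoots; // roots that all scene geometry hangs under

    void LateUpdate()
    {
        Vector3 offset = Camera.main.transform.position;
        if (offset == Vector3.zero) return;

        for (int i = 0; i < worldRoots.Length; i++)
            worldRoots[i].position -= offset;

        Camera.main.transform.position = Vector3.zero;
    }
}
```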

Manipulating large numbers of objects – Manipulating the positions of thousands of objects is computationally expensive and reduces the frame rate. The Roames team used a number of evenly distributed empty game objects, or ‘terminal nodes’, as references that other objects could be parented to. This meant that instead of updating the positions of all objects in the scene, they only had to update those of the terminal nodes.
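The sketch below illustrates one way such terminal nodes might work: a sparse grid of empty transforms is created on demand and loaded objects are parented to the nearest one, so a world shift only has to touch the node transforms. The grid spacing and lookup scheme here are assumptions for illustration, not Roames’ implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: parent loaded objects to the nearest 'terminal node' so that
// only the node transforms need updating when the world shifts.
public class TerminalNodeGrid : MonoBehaviour
{
    public float spacing = 1000f; // assumed node spacing in metres

    private readonly Dictionary<Vector2Int, Transform> nodes =
        new Dictionary<Vector2Int, Transform>();

    public void Attach(Transform obj)
    {
        Vector2Int key = new Vector2Int(
            Mathf.FloorToInt(obj.position.x / spacing),
            Mathf.FloorToInt(obj.position.z / spacing));

        Transform node;
        if (!nodes.TryGetValue(key, out node))
        {
            node = new GameObject("TerminalNode " + key).transform;
            node.SetParent(transform, false);
            node.position = new Vector3(key.x * spacing, 0f, key.y * spacing);
            nodes[key] = node;
        }
        obj.SetParent(node, true); // keep the object's world position
    }
}
```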

Memory Management – As objects are loaded and removed from the scene, there are spikes in activity caused by lags in Unity’s automated memory management or ‘garbage collection’: the process by which unused memory is freed for reuse. These issues were resolved by reusing existing objects to avoid allocating more memory, and by making those objects static where possible. Use of for loops or enumerators was recommended over foreach loops, which allocate memory internally. Reducing the amount of string manipulation is also recommended.
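As a small illustration of that advice, the sketch below allocates its working buffer once and iterates with a plain for loop; the class and buffer size are assumptions rather than Roames’ code:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Allocation-avoidance sketch: the working buffer is allocated once and
// reused every frame, and iteration uses a plain for loop rather than
// constructs that allocate temporary objects.
public class StreamingUpdater : MonoBehaviour
{
    private readonly List<Vector3> buffer = new List<Vector3>(65000);

    void Update()
    {
        buffer.Clear(); // keeps the list's capacity; allocates nothing
        // ... refill 'buffer' from the streaming system here ...

        for (int i = 0; i < buffer.Count; i++)
        {
            // process buffer[i] without creating temporaries or strings
        }
    }
}
```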

Scratch Arrays – Roames introduced their own ‘Scratch Array’ pattern to reuse commonly sized arrays.
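Roames’ implementation isn’t public, but a minimal sketch of the pattern might cache arrays by length and hand them out repeatedly instead of reallocating:

```csharp
using System.Collections.Generic;

// Minimal 'scratch array' sketch: arrays of commonly requested sizes are
// cached and reused instead of being reallocated each time.
public static class ScratchArray<T>
{
    private static readonly Dictionary<int, T[]> cache = new Dictionary<int, T[]>();

    public static T[] Get(int length)
    {
        T[] array;
        if (!cache.TryGetValue(length, out array))
        {
            array = new T[length];
            cache[length] = array;
        }
        return array; // contents are stale; callers must overwrite before use
    }
}
```

A caller would request `ScratchArray<float>.Get(65000)` each frame and overwrite its contents, trading a fresh allocation for the discipline of treating the array as uninitialised scratch space.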

Binary Formats – Rather than use KML, a verbose text-based XML format, Roames uses the binary PLY format, which performs much better. This reduced file sizes and improved both load times and garbage collection allocations.

In order to display the points efficiently they are batched into single meshes of 65,000 vertices, just under Unity’s 65,535-vertex limit for a mesh with 16-bit indices. They also lower the density of their point clouds prior to loading.
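As a sketch of what such batching might involve (illustrative, not Roames’ actual loader), the snippet below packs a set of points into one mesh drawn with Unity’s point topology:

```csharp
using UnityEngine;

// Sketch: pack up to ~65,000 points into a single mesh rendered with
// point topology, staying under Unity's per-mesh vertex limit.
public static class PointBatcher
{
    public static Mesh BuildBatch(Vector3[] points, Color[] colors)
    {
        Mesh mesh = new Mesh();
        mesh.vertices = points; // one batch: at most ~65,000 points
        mesh.colors = colors;   // per-point colour, same length as points

        int[] indices = new int[points.Length];
        for (int i = 0; i < indices.Length; i++) indices[i] = i;
        mesh.SetIndices(indices, MeshTopology.Points, 0);
        return mesh;
    }
}
```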

The Core Engine and the other aspects of the product, like the user interface, were separated to make the project easier to manage. This enabled the 14 developers on the project to work more efficiently. It also meant that other custom tools could be developed quickly in isolation from the main project.

The team’s goals going forward are to get the product working across the web, to open the Core API to developers, and to start using Unity physics for simulations.

ViLo and the Future of Planning

Following our recent posts on CASA’s digital urban visualisation platform ViLo, the Future Cities Catapult, who collaborated with CASA on the project, have released a video discussing it in further detail. The project commenced in 2014 with the aim of identifying which combinations of urban data might be most valuable to urban planners, site operators and citizens. CASA research associates Lyzette Zeno Cortes and Valerio Signorelli discuss how ViLo was created using the Unity game engine in order to understand its potential for visualising information in real-time.

Ben Edmonds from the London Legacy Development Corporation, which runs the Queen Elizabeth Olympic Park where ViLo has been tested, discusses how the platform was used to gather environmental data and qualitative data from park visitors in order to help understand and improve their experience of the park. Including real-time information on transportation links, environmental factors and public usage of the park helps to build up an overview of the whole area so that it can be run more effectively.

Beyond this there is an expectation that use of the 3D model can be extended beyond the Olympic Park and implemented London-wide. This fits into a wider expectation for City Information Modelling (CIM): as Stefan Webb from the Future Cities Catapult describes it, the idea that a 3D model containing sufficient data can enable us to forecast the impact of future developments and changes on the functioning of both the physical and social infrastructure of the city.

Nature Smart Cities: Visualising IoT bat monitor data with ViLo

Over the past few weeks I’ve been collaborating with researchers at the Intel Collaborative Research Institute (ICRI) for Urban IoT to integrate data from bat monitors on the Queen Elizabeth Olympic Park into CASA’s digital visualisation platform, ViLo. At present we are visualising the geographic location of each bat monitor with a pin that includes an image showing the locational context of the sensor and a flag indicating the total number of bat calls recorded by that sensor on the previous evening. A summary box in the user interface indicates the total number of bat monitors in the vicinity and the total number of bat calls recorded the previous evening. Animated bats are also displayed above pins to help users quickly identify which bat monitors have results from the previous evening to look at.

The data being visualised here comes from custom-made ‘Echo Box’ bat monitors that have been specifically designed by ICRI researchers to detect bat calls from ambient sound. They have been created as part of a project called Nature Smart Cities, which intends to develop the world’s first open source system for monitoring bats using Internet of Things (IoT) technology. IoT refers to the idea that all sorts of objects can be made to communicate and share useful information via the internet. Typically IoT devices incorporate some sort of sensor that can process and transmit information about the environment, and/or actuators that respond to data by effecting changes within the environment. Examples of IoT devices in a domestic setting would be Philips Hue lighting, which can be controlled remotely using a smartphone app, or Amazon’s Echo, which can respond to voice commands in order to do things like cue up music from Spotify, control your Hue lighting or other IoT devices, and of course order items from Amazon. Billed as a ‘“Shazam” for bats’, the ICRI are hoping to use IoT technology to show the value of similar technologies for sensing and conserving urban wildlife populations, in this case bats.

Each Echo Box sensor uses an ultrasonic microphone to record a 3-second sample of audio every 6 seconds. The audio is then processed and transformed into an image called a spectrogram. This is a bit like a fingerprint for sound, showing the amplitude of sound across different frequencies over time. Bat calls can be clearly identified due to their high frequencies. Computer algorithms then analyse the spectrogram, comparing it to those of known bat calls in order to identify which type of bat was most likely to have made the call.

The really clever part from a technical perspective is that all of this processing can be done on the device using one of Intel’s Edison chips. Rather than having large amounts of audio transmitted back to a centralised system for storage and analysis, Intel are employing ‘edge processing’, processing on the device at the edge of the network, to massively reduce the amount of data that needs to be sent back to their central data repository. Once the spectrogram has been produced the original sound files are immediately deleted, as they are no longer required. Combined with the fact that sounds within the range of human speech, and everything else below 20kHz, are ignored by the algorithms that process the data, this ensures that the privacy of passersby is protected.

This is a fascinating project and it has been great having access to such an unusual data set. Further work can focus on visualising previous evenings’ data as a time series to better understand patterns of bat activity over the course of the study. We also hope to investigate the use of sonification, incorporating recordings of typical bat calls for each species in order to create a soundscape that complements the visualisation and engages with the core sonic aspect of the study.

Kind thanks to Sarah Gallacher and the Intel Collaborative Research Institute for providing access to the data. Thanks also to the Queen Elizabeth Olympic Park for enabling this research. For more information about bats on the Queen Elizabeth Olympic Park check out the project website: Nature Smart Cities.

ViLo: The Virtual London Platform by CASA in VR

This is the third post of the week looking at CASA’s urban data visualisation platform ViLo. Today we are looking at the virtual reality integration with HTC Vive:

Using virtual reality technologies such as the HTC Vive we can create data-rich virtual environments in which users can freely interact with digital representations of urban spaces. In this demonstration we invite users to enter a virtual representation of the ArcelorMittal Orbit, a landmark tower located in the Queen Elizabeth Olympic Park. Using CASA’s Virtual London Platform ViLo, it is possible to recursively embed 3D models of the surrounding district within that scene. These models can be digitally coupled to the actual locations they represent through the incorporation of real-time data feeds. In this way events occurring in the actual environment, the arrival and departure of buses and trains for example, are immediately represented within the virtual environment.

Virtual reality is a technology which typically uses a head-mounted display to immerse the user in a three-dimensional, computer-generated environment, commonly referred to as a ‘virtual environment’. In this case the virtual environment is a recreation of the viewing gallery at the top of the ArcelorMittal Orbit tower, situated at the Queen Elizabeth Olympic Park in East London. CASA’s ViLo platform is then used to embed further interactive 3D models and data visualisations within that virtual environment.

Using the HTC Vive’s room-scale tracking, the user can freely walk between exhibits. Alternatively they can teleport between them by pointing and clicking at a spot on the floor with one of the Vive hand controllers. The other hand controller is used for interacting with the exhibits, either by pointing and clicking with the trigger button, or by placing the controller over objects and using the grip buttons on its side to hold them.

In the video we see how the virtual environment can be used to present a range of different media. Visitors can watch 360 degree videos and high quality architectural visualisations, but they can also interact with the 3D models featured in that content more actively using virtual tools like the cross-sectional plane seen in the video.

The ViLo platform provides further flexibility by enabling us to import interactive models of entire urban environments. The Queen Elizabeth Olympic Park is visualised with different layers of data provided by live feeds from Transport for London’s bus, tube, and bike hire APIs. Different layers are selected and removed by placing 3D icons on a panel. Virtual reality affords the user the ability to choose their own viewpoint on the data by simply moving their head. Other contextual information like images from Flickr or articles from Wikipedia can also be imported.

A further feature is the ability to quickly swap between models of different locations. In the final section of the video the model of the Queen Elizabeth Olympic Park is instantly replaced by a model of the stretch of the Thames in Central London between St Paul’s Cathedral and the Tate Modern gallery. The same tools can be used to manipulate either model. Analysis of building footprint size and building use is combined with real-time visibility analysis depicting viewsheds from any point the user designates. Wikipedia and Flickr are queried dynamically to provide additional information and context for particular buildings by simply pointing and clicking. In this way many different aspects of urban environments can be digitally reconstructed within the virtual environment, either in miniature or at 1:1 scale.

Where the ARKit-powered version of ViLo we looked at yesterday provided portability, the virtual reality experience facilitated by the HTC Vive integration can incorporate a much wider variety of data with a far richer level of interaction. Pure data visualisation tasks may not benefit greatly from the immersion or presence provided by virtual reality. However, as we see with new creative applications like Google’s Tilt Brush and Blocks, virtual reality really shines in cases where natural and precise interaction is required to manipulate virtual objects. Virtual environments also provide useful sites for users who can’t be in the same physical location at the same time. Networked telepresence can be used to enable professionals in different cities to work together synchronously. Alternatively virtual environments can provide forums for public engagement where potential users can drop in at their convenience. Leveraging an urban data visualisation platform like CASA’s ViLo, virtual environments can become useful sites for experimentation and communication of built environment interventions.

Many thanks to CASA Research Assistants Lyzette Zeno Cortes and Valerio Signorelli for their work on the ViLo virtual reality integration discussed here. Tweet @ValeSignorelli for more information about the HTC Vive integration.

For further details about ViLo see Monday’s post ViLo: The Virtual London Platform by CASA for Desktop and yesterday’s post ViLo: The Virtual London Platform by CASA with ARKit.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA’s urban data visualisation platform, ViLo. Today we’re looking at an integration with Apple’s ARKit that has been created by CASA research assistant Valerio Signorelli.

Using Apple’s ARKit we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query buildings for information by tapping on them, analyse sunlight and shadows in real-time, and watch the boundary between the virtual and the physical blur as bouncy balls simulated in the digital environment interact with the structure of the user’s physical environment.

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit requires an Apple device with an A9 or A10 processor. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference, like the corners of tables and chairs or the points where the floor meets the walls. These points are used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad are also captured. Using a technique called Visual Inertial Odometry, the point data and motion data are combined, enabling points to be tracked even when they aren’t within the view of the camera. Effectively, a virtual room and virtual camera are constructed on the device, referencing and synchronising with the relative positions of their physical counterparts.

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad’s WiFi receiver we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad. Even without a WiFi connection, offline data sets related to the built environment are still available for visualisation. What’s particularly impressive with ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA for Desktop

This is the first of three posts introducing CASA’s interactive urban data visualisation platform, ViLo. The platform enables visualisation of both real-time and offline spatio-temporal data sets in a digital, three-dimensional representation of the urban environment. I’ve been fortunate to work alongside the team and learn from them as the project has developed. Initially conceived as a desktop application, ViLo is now being integrated by CASA with a range of different interaction devices, including virtual reality with the HTC Vive and Google Daydream, augmented reality with Google’s Project Tango and Apple’s ARKit, and so-called mixed reality with Microsoft’s HoloLens. The ViLo platform underlies each of these experiments.

ViLo is an interactive urban data visualisation platform developed by The Bartlett Centre for Advanced Spatial Analysis (CASA) at UCL in collaboration with the Future Cities Catapult (FCC). It uses both OpenStreetMap data and the MapBox API to create the digital environment, visualising the precise locations of buildings, trees and various other urban amenities on a high-resolution digital terrain model. The buildings, which are generated at runtime from OpenStreetMap data, retain their original identifiers so that they can be queried for semantic descriptions of their properties. ViLo can also visualise custom spatio-temporal data sets provided by the user in various file formats. Custom 3D models can be provided for landmarks, and it is possible to switch from the OpenStreetMap-generated geometries to a more detailed CityGML model of the district at LoD2.

Dynamic data sets stored in CSV format can also be visualised alongside real-time feeds. A specific emphasis has been placed on the visualisation of mobility data sets. Using Transport for London’s APIs, ViLo can retrieve and visualise the locations of bike sharing docks and the availability of bikes, as well as the entire bus and tube networks: the locations of bus stops and tube stations, plus the positions of buses and trains updated in real-time.
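As a rough sketch of how such a feed might be polled from inside Unity: TfL’s public BikePoint endpoint is real, but ViLo’s actual data pipeline is not public, so the parsing and visualisation steps are omitted here.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: poll TfL's public bike-dock feed once on startup and log the
// raw JSON response. A real client would parse the JSON and refresh
// periodically.
public class TflBikeFeed : MonoBehaviour
{
    private const string Url = "https://api.tfl.gov.uk/BikePoint";

    IEnumerator Start()
    {
        using (UnityWebRequest request = UnityWebRequest.Get(Url))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
                Debug.LogError(request.error);
            else
                Debug.Log(request.downloadHandler.text); // JSON array of dock statuses
        }
    }
}
```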

The ViLo platform also integrates real-time weather information from Wunderground’s API, a three-dimensional visualisation of Flickr photos relating to points of interest, and a walking route planner for predefined locations using the MapBox API.

An innovative aspect of the ViLo project is the possibility of conducting real time urban analysis using the various data sets loaded into the digital environment. At the current stage it is possible to conduct two-dimensional and three-dimensional visibility analysis (intervisibility; area and perimeter of the visible surfaces; maximum, minimum and average distance; compactness, convexity and concavity).

While originally conceived as part of an effort to visualise London in 3D, the ViLo platform can be used to visualise any urban area across the globe. The first version of the platform demonstrated here focuses on visualising the Queen Elizabeth Olympic Park in East London, a new district that was purpose-built to host the London Summer Olympics and Paralympics in 2012.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

From a purely aesthetic point of view, the design of the desktop application strongly reminds me of the image from the Guide for visitors to Ise Shrine, Japan, 1948–54 that visualisation expert Edward Tufte discussed particularly favourably in his book Envisioning Information (1990). Our current efforts at ‘escaping flatland’ are a continuation of previous work undertaken by Andy Hudson-Smith and Mike Batty at CASA in the early 2000s to create a Virtual London.

One of the main advances is our increased ability to integrate real-time data in such a way that the digital representation can be more fully coupled to the actual environment in order to reflect change. We also benefit from advances in 3D visualisation and real-time rendering afforded by the use of video game engines such as Unity. As a result the ViLo platform provides a good demonstration of our current capabilities in relation to the observation of dynamic processes as they occur in real-time at an urban scale.

Urban X-Rays: Wi-Fi for Spatial Scanning

Many of us in cities increasingly depend on Wi-Fi connectivity for communication as we go about our everyday lives. However, beyond providing for our mobile and wireless communication needs, the intentional or directed use of Wi-Fi also opens up new possibilities for urban sensing.

In this video Professor Yasamin Mostofi from the University of California discusses research into scanning, or ‘X-raying’, built structures using a combination of drones and Wi-Fi transceivers. By transmitting a Wi-Fi signal from a drone on one side of a structure, and using a drone on the opposite side to receive and measure the strength of that signal, it is possible to build up a 3D image of the structure and its contents. This methodology has great potential in areas like structural monitoring for the built environment, archaeological surveying, and even emergency response, as outlined on the 3D Through-Wall Imaging project page.

Particularly with regard to emergency response, one can easily imagine the value of being able to identify people trapped or hiding within a structure. Indeed, Mostofi’s group have also researched the potential these techniques provide for monitoring humans in their Head Counting with WiFi project, as demonstrated in the next video.

What is striking is that this technique enables individuals to be counted without themselves needing a Wi-Fi enabled device. Several potential uses are proposed which are particularly relevant to urban environments:

For instance, heating and cooling of a building can be better optimized based on learning the concentration of the people over the building. Emergency evacuation can also benefit from an estimation of the level of occupancy. Finally, stores can benefit from counting the number of shoppers for better business planning.

Given that WiFi networks are available in many buildings, we envision that they can provide a new way for occupancy estimation, in addition to cameras and other sensing mechanisms. In particular, its potential for counting behind walls can be a nice complement to existing vision-based methods.

I’m fascinated by the way experiments like this reveal the hidden potentials already latent within many of our cities. The roll out of citywide Wi-Fi infrastructure provides the material support for an otherwise invisible electromagnetic environment designers Dunne & Raby have called ‘Hertzian Space’. By finding new ways to sense the dynamics of this space, cities can tap in to these resources and exploit new potentialities, hopefully for the benefit of both the city and its inhabitants.

Thanks to Geo Awesomeness for posting the drone story here.