Category Archives: Visualisation

Roames: Virtual Asset Management With Unity

Roames is a Unity-based 3D data visualisation platform created by the Dutch company Fugro for asset management. In particular it has been used to visualise LiDAR point clouds as the basis for the management and maintenance of power networks. In the video below, Glen Ross-Sampson and Peter O’Loughlin from the Fugro Roames team discuss the challenges involved in creating a platform for geospatial data in Unity.

Behind the scenes, Amazon Web Services (AWS) provides scalable computation for processing large amounts of point cloud data. Algorithms are run on the AWS clusters to classify and extract different types of features from the point cloud, including power lines, poles, vegetation and buildings. Further business rules can then be applied and visualised to help users make decisions. In one case these helped Ergon Energy in Australia assess the risk of damage to overhead power cables caused by growing vegetation. The benefit of this kind of ‘virtual asset management’ is that it allows clients like Ergon to make assessments about the assets they manage without having to send crews to inspect every site. By prioritising the sites most at risk they can expect to make significant savings.

Roames was the outcome of a five-year project. Unity was chosen as the visualisation client because commercial GIS software didn’t provide the performance the team required. They also wanted to be able to customise the interface and experiment with simulation. Using Unity enabled the team to prototype without having to build low-level functionality from scratch.

The system allows the user to explore the scene in real time. Data are streamed into the scene and unloaded dynamically with the aid of memory and hard-disk caches. Level-of-detail (LOD) switching supports zooming the view from space all the way down to ground level; as the user zooms in, points are replaced by a voxel representation. All of this is backed by Amazon S3 cloud storage.
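The streaming approach can be sketched as a two-tier cache. This is a minimal Python illustration, not Roames code: the `TileCache` class, its capacity, and the `fetch` stand-in for a network request are all hypothetical, and the real system runs in C# against Amazon S3.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class TileCache:
    """Two-tier cache: a small in-memory LRU backed by a hard-disk cache.
    On a miss in both tiers, `fetch` (a stand-in for a network request)
    loads the tile, and both tiers are filled on the way back."""

    def __init__(self, fetch, mem_capacity=4, disk_dir=None):
        self.fetch = fetch
        self.mem = OrderedDict()              # tile key -> tile, LRU order
        self.mem_capacity = mem_capacity
        self.disk_dir = disk_dir or tempfile.mkdtemp()

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, "_".join(map(str, key)) + ".tile")

    def get(self, key):
        if key in self.mem:                   # 1. memory hit
            self.mem.move_to_end(key)
            return self.mem[key]
        path = self._disk_path(key)
        if os.path.exists(path):              # 2. disk hit
            with open(path, "rb") as f:
                tile = pickle.load(f)
        else:                                 # 3. miss: fetch and persist
            tile = self.fetch(key)
            with open(path, "wb") as f:
                pickle.dump(tile, f)
        self.mem[key] = tile                  # promote to the memory tier
        if len(self.mem) > self.mem_capacity:
            self.mem.popitem(last=False)      # evict least recently used
        return tile
```

Requesting the same tile twice returns the cached copy without another fetch, and tiles evicted from memory survive on disk.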

As well as discussing the motivation behind Roames and its technical stack, the talk does a great job of covering some common problems and solutions encountered when working with large spatial data sets in Unity.

Technical notes:

Regionation – Map data in Roames is structured according to the Tile Map Service (TMS) specification developed by the Open Source Geospatial Foundation (OSGeo) and served via a REST API endpoint. Tiles of different LOD are provided based on proximity to the camera. This also applies when the camera is tilted, ensuring lower levels of detail are used for objects that are further away.
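The tile addressing behind this can be illustrated with the standard global-mercator formula. A minimal sketch, not Roames code; note that TMS counts tile rows from the south, the opposite of the common ‘XYZ’ convention:

```python
import math

def tms_tile(lon, lat, zoom):
    """Return the (x, y) TMS tile index containing a WGS84 lon/lat
    at a given zoom level, under the global-mercator tiling scheme."""
    n = 2 ** zoom                                    # tiles per axis
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    # Row counted from the top (north), as in 'XYZ' tiles...
    y_xyz = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r))
                 / math.pi) / 2.0 * n)
    # ...then flipped, because TMS counts rows from the bottom (south).
    return x, n - 1 - y_xyz
```

Proximity-based LOD then amounts to requesting a higher `zoom` for tiles near the camera and a lower one for tiles further away.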

Floating Point Precision – Unity uses 32-bit single-precision floating point numbers to store the positions of assets, which gives roughly 7 significant figures of accuracy. Roames needs to map data across the whole globe to millimetre precision, but on the scale of the globe, accuracy to the nearest metre alone requires 8 significant figures. The resulting spatial uncertainty shows up as visible onscreen jitter. This was resolved by storing the positions of objects in 64-bit double precision and using a floating origin: the position of the main camera is set to the Unity world origin (0,0,0) each frame and the other objects are moved relative to that position, rather than moving the camera.
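The precision problem is easy to reproduce. The sketch below, in Python rather than Unity C#, round-trips a globe-scale coordinate through single precision to show the error, then shows how camera-relative (floating-origin) coordinates avoid it:

```python
import struct

def to_float32(x):
    """Round-trip a Python float (64-bit) through 32-bit single
    precision, i.e. the precision of a Unity Transform position."""
    return struct.unpack("f", struct.pack("f", x))[0]

# A point roughly one Earth radius from the origin, given to the millimetre.
position = 6378137.123

# At this magnitude adjacent float32 values are 0.5 m apart, so the stored
# position can be off by up to a quarter of a metre: visible jitter.
print("absolute error:", abs(to_float32(position) - position))

# Floating origin: keep the camera at (0, 0, 0) and store positions
# relative to it, so coordinates stay small and single precision suffices.
camera = 6378137.0
relative = position - camera
print("relative error:", abs(to_float32(relative) - relative))
```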

Manipulating large numbers of objects – Manipulating the positions of thousands of objects is computationally expensive and reduces the frame rate. The Roames team used a number of evenly distributed empty game objects, or ‘terminal nodes’, as references that other objects could be parented to. This meant that instead of updating the positions of all objects in the scene they only had to update those of the terminal nodes.
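The idea can be sketched outside Unity as a simple parent transform: each object stores a position local to its node, so a world shift only touches the nodes. A minimal Python illustration (the class and function names are mine, not the team’s):

```python
class TerminalNode:
    """An empty parent transform; children store positions relative
    to it, as parented objects do in Unity."""
    def __init__(self, x, y, z):
        self.position = [x, y, z]
        self.children = []            # local (x, y, z) offsets

    def world_positions(self):
        px, py, pz = self.position
        return [(px + x, py + y, pz + z) for x, y, z in self.children]

def shift_world(nodes, dx, dy, dz):
    """Floating-origin shift: move every terminal node instead of every
    object. Cost is O(number of nodes), not O(number of objects)."""
    for node in nodes:
        node.position[0] += dx
        node.position[1] += dy
        node.position[2] += dz
```

Thousands of children move for the price of updating a handful of node positions.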

Memory Management – As objects are loaded and removed from the scene, there are spikes in activity caused by lags in Unity’s automated memory management or ‘garbage collection’, the process by which unused memory is freed for reuse. These issues were resolved by reusing existing objects to avoid allocating more memory, and by making those objects static where possible. Using for loops or enumerators was recommended over foreach loops, which allocate memory internally. Reducing the amount of string manipulation is also recommended.
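The reuse advice amounts to object pooling. A generic sketch of the pattern in Python (Unity’s Mono garbage collector differs from Python’s memory management, but the principle of keeping allocations off the hot path is the same):

```python
class ObjectPool:
    """Reuse objects instead of allocating new ones, so the garbage
    collector never sees short-lived instances from the hot path."""
    def __init__(self, factory):
        self.factory = factory   # creates a new object when the pool is empty
        self.free = []

    def acquire(self):
        return self.free.pop() if self.free else self.factory()

    def release(self, obj):
        self.free.append(obj)    # hand back for the next user to reuse
```

Once the pool is warm, acquire/release cycles allocate nothing at all.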

Scratch Arrays – Roames introduced their own ‘Scratch Array’ pattern to reuse commonly sized arrays.
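The talk doesn’t spell the pattern out; one plausible reading is a static cache of preallocated arrays keyed by size, sketched here in Python (the name and structure are my interpretation, not the team’s implementation):

```python
class ScratchArrays:
    """Hand out preallocated arrays keyed by length, so repeated requests
    for a commonly sized buffer reuse the same allocation. Contents are
    scratch: callers must not assume the array arrives zeroed."""
    _cache = {}

    @classmethod
    def get(cls, size):
        if size not in cls._cache:
            cls._cache[size] = [0.0] * size   # allocate once per size
        return cls._cache[size]
```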

Binary Formats – Rather than use KML, a verbose text-based XML format, Roames uses the binary PLY format, which performs much better: it reduced file sizes, improved load times and cut garbage collection allocations.

In order to display the points efficiently they are batched into single meshes of 65,000 vertices, just under Unity’s 65,535-vertex limit for a single mesh. They also lower the density of their point clouds prior to loading.
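The batching step is simple chunking. A sketch in Python for illustration (the real code builds Unity meshes from each batch):

```python
def batch_points(points, max_vertices=65000):
    """Split a point cloud into batches small enough for a single Unity
    mesh (65,000 points, just under the 65,535-vertex mesh limit)."""
    return [points[i:i + max_vertices]
            for i in range(0, len(points), max_vertices)]
```

A 150,000-point cloud becomes three meshes instead of 150,000 individual objects, which is what makes the rendering tractable.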

The Core Engine and the other aspects of the product, like the user interface, were kept separate to make the project easier to manage. This enabled the 14 developers on the project to work more efficiently, and meant that other custom tools could be developed quickly in isolation from the main project.

The team’s goals going forward are to get the product working across the web, to open the Core API to developers, and to start using Unity physics for simulations.


Nature Smart Cities: Visualising IoT bat monitor data with ViLo

Over the past few weeks I’ve been collaborating with researchers at the Intel Collaborative Research Institute (ICRI) for Urban IoT to integrate data from bat monitors on the Queen Elizabeth Olympic Park into CASA’s digital visualisation platform, ViLo. At present we are visualising the geographic location of each bat monitor with a pin that includes an image showing the locational context of each sensor and a flag indicating the total number of bat calls recorded by that sensor on the previous evening. A summary box in the user interface indicates the total number of bat monitors in the vicinity and the total number of bat calls recorded the previous evening. Animated bats are also displayed above pins to help users quickly identify which bat monitors have results from the previous evening to look at.

The data being visualised here comes from custom-made ‘Echo Box’ bat monitors that have been specifically designed by ICRI researchers to detect bat calls from ambient sound. They have been created as part of a project called Nature Smart Cities, which intends to develop the world’s first open source system for monitoring bats using Internet of Things (IoT) technology. IoT refers to the idea that all sorts of objects can be made to communicate and share useful information via the internet. Typically IoT devices incorporate some sort of sensor that can process and transmit information about the environment and/or actuators that respond to data by effecting changes within the environment. Examples of IoT devices in a domestic setting would be Philips Hue Lighting, which can be controlled remotely using a smartphone app, or Amazon’s Echo, which can respond to voice commands in order to do things like cue up music from Spotify, control your Hue lighting or other IoT devices, and of course order items from Amazon. Billed as a ‘”shazam” for bats’, the ICRI are hoping to use IoT technology to show the value of similar technologies for sensing and conserving urban wildlife populations, in this case bats.

Each Echo Box sensor uses an ultrasonic microphone to record a 3 second sample of audio every 6 seconds. The audio is then processed and transformed into an image called a spectrogram. This is a bit like a fingerprint for sound, showing the amplitude of sounds across different frequencies over time. Bat calls can be clearly identified due to their high frequencies. Computer algorithms then analyse the spectrogram, comparing it to those of known bat calls in order to identify which type of bat was most likely to have made the call.
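The step from an audio frame to a spectrogram column can be sketched with a plain discrete Fourier transform. This is illustrative only: the 192 kHz sample rate is my assumption, and the real Echo Box presumably uses an optimised FFT and a trained classifier rather than a simple peak pick.

```python
import cmath
import math

SAMPLE_RATE = 192_000   # assumed; must exceed twice the highest bat frequency

def magnitude_spectrum(frame):
    """Magnitudes of the discrete Fourier transform of one audio frame.
    A spectrogram is just these spectra stacked for successive frames."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def peak_frequency(frame):
    """Frequency (Hz) of the strongest bin -- a crude stand-in for
    matching the spectrogram against known bat calls."""
    spectrum = magnitude_spectrum(frame)
    k = max(range(len(spectrum)), key=spectrum.__getitem__)
    return k * SAMPLE_RATE / len(frame)

# A synthetic 45 kHz 'bat call', well above the ~20 kHz speech cut-off.
frame = [math.sin(2 * math.pi * 45_000 * t / SAMPLE_RATE)
         for t in range(256)]
print(peak_frequency(frame))   # bins are 750 Hz wide at this frame size
```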

The really clever part from a technical perspective is that all of this processing can be done on the device using one of Intel’s Edison chips. Rather than having large amounts of audio transmitted back to a centralised system for storage and analysis, Intel are employing ‘edge processing’, processing on the device at the edge of the network, to massively reduce the amount of data that needs to be sent over the network back to their central data repository. Once the spectrogram has been produced the original sound files are immediately deleted, as they are no longer required. Combined with the fact that sounds within the range of human speech and below 20kHz are ignored by the algorithms that process the data, this ensures that the privacy of passersby is protected.

This is a fascinating project and it has been great having access to such an unusual data set. Further work here can focus on visualising previous evenings’ data as a time series to better understand patterns of bat activity over the course of the study. We also hope to investigate the use of sonification, incorporating recordings of typical bat calls for each species in order to create a soundscape that complements the visualisation and engages with the core sonic aspect of the study.

Kind thanks to Sarah Gallacher and the Intel Collaborative Research Institute for providing access to the data. Thanks also to the Queen Elizabeth Olympic Park for enabling this research. For more information about bats on the Queen Elizabeth Olympic Park check out the project website: Nature Smart Cities.

NXT BLD: Emerging Design Technology for the Built Environment

NXT BLD is a new conference in London specifically aimed at the discussion of emerging technologies and their applications in the fields of architecture, engineering and construction. Organised by AEC Magazine, the first event was held in the British Museum on the 28th of June 2017. Videos of the event presentations have been released and provide some useful insight into the ways in which technologies like VR are being used within industry. I found the following talk by Dan Harper, managing director of CityScape Digital, particularly useful:

In the video Dan discusses the motivation for their use of VR. Focused on architectural visualisation, the company often found that the high-quality renderings they produced quickly became outdated because render times could not keep pace with the iterative nature of the design process. They found that the real-time rendering capabilities of game engines, in their case Unreal, helped them iterate images more quickly. Encountering similar challenges with the production of 3D models, they realised that having clients inspect the 3D model could serve not only for communication but also as a spatial decision-making tool. Supported by 3D data, real-time rendering and VR, which provides a one-to-one scale experience of the space, value can be added and costs saved by placing a group of decision makers within the space they are discussing, rather than relying on the personal impressions each would draw from their own subjective imagining based on 2D plans and architectural renderings.

Innovation of the design process with VR not only makes it less expensive but also makes the product more valuable. With reference to similar uses of VR in the car industry, Dan identifies opportunities for ‘personalisation’, ‘build to order’, ‘collaboration’, experiential ‘focus grouping’, ‘efficient construction’ and ‘driving margins at point of sale’. Case studies include the Sky Broadcasting Campus at Osterley, the Battersea Power Station redevelopment and the Earls Court masterplan. These use cases demonstrate that return on investment is increased through reuse of the 3D models and assets in successive stages of the project, from concept design, investor briefings and stakeholder consultation right through to marketing.

Videos of the other presentations from the day can be found on the NXT BLD website here.

Urban X-Rays: Wi-Fi for Spatial Scanning

Many of us in cities increasingly depend on Wi-Fi connectivity for communication as we go about our everyday lives. However, beyond providing for our mobile and wireless communication needs, the intentional or directed use of Wi-Fi also provides new possibilities for urban sensing.

In this video Professor Yasamin Mostofi from the University of California discusses research into the scanning, or X-raying, of built structures using a combination of drones and Wi-Fi transceivers. By transmitting a Wi-Fi signal from a drone on one side of a structure, and using a drone on the opposite side to receive and measure the strength of that signal, it is possible to build up a 3D image of the structure and its contents. This methodology has great potential in areas like structural monitoring for the built environment, archaeological surveying, and even emergency response, as outlined on the 3D Through-Wall Imaging project page.
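The core of the imaging idea is a forward model: the loss measured on each transmitter-to-receiver link is roughly the summed attenuation of the cells its line of sight crosses, and enough crossing links make the per-cell attenuations recoverable as an image. A toy 2D version of that forward model (my simplification, not the group’s actual method, which uses far more sophisticated propagation modelling and compressive sensing):

```python
def cells_crossed(grid_size, tx, ty, rx, ry, samples=200):
    """Approximate the set of grid cells the straight TX->RX line
    passes through, by sampling points along the line."""
    cells = set()
    for i in range(samples + 1):
        t = i / samples
        x = tx + (rx - tx) * t
        y = ty + (ry - ty) * t
        cells.add((min(int(x), grid_size - 1), min(int(y), grid_size - 1)))
    return cells

def attenuation_db(grid, tx, ty, rx, ry):
    """Forward model: the loss on one link is the summed attenuation of
    every cell the line crosses (empty cells contribute zero)."""
    return sum(grid[cx][cy]
               for cx, cy in cells_crossed(len(grid), tx, ty, rx, ry))
```

Measuring this loss over many link geometries yields a linear system in the per-cell values, which is what the reconstruction step inverts to produce the image.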

Particularly with regard to emergency response, one can easily imagine the value of being able to identify people trapped or hiding within a structure. Indeed, Mostofi’s group has also researched the potential these techniques provide for monitoring humans in their Head Counting with WiFi project, as demonstrated in the next video.

What is striking is that this technique enables individuals to be counted without themselves needing a Wi-Fi enabled device. Several potential uses are proposed which are particularly relevant to urban environments:

For instance, heating and cooling of a building can be better optimized based on learning the concentration of the people over the building. Emergency evacuation can also benefit from an estimation of the level of occupancy. Finally, stores can benefit from counting the number of shoppers for better business planning.

Given that WiFi networks are available in many buildings, we envision that they can provide a new way for occupancy estimation, in addition to cameras and other sensing mechanisms. In particular, its potential for counting behind walls can be a nice complement to existing vision-based methods.

I’m fascinated by the way experiments like this reveal the hidden potentials already latent within many of our cities. The roll-out of citywide Wi-Fi infrastructure provides the material support for an otherwise invisible electromagnetic environment designers Dunne & Raby have called ‘Hertzian Space’. By finding new ways to sense the dynamics of this space, cities can tap into these resources and exploit new potentialities, hopefully for the benefit of both the city and its inhabitants.

Thanks to Geo Awesomeness for posting the drone story here.