Tag Archives: 3D

ViLo and the Future of Planning

Following our recent posts on CASA’s digital urban visualisation platform ViLo, the Future Cities Catapult, who collaborated with CASA on the project, have released a video discussing it in further detail. The project began in 2014 with the aim of identifying which combinations of urban data might be most valuable to urban planners, site operators and citizens. CASA research associates Lyzette Zeno Cortes and Valerio Signorelli discuss how the platform was built with the Unity game engine in order to explore its potential for visualising information in real-time.

Ben Edmonds from the London Legacy Development Corporation, which runs the Queen Elizabeth Olympic Park where ViLo has been tested, discusses how the platform was used to gather environmental data and qualitative feedback from park visitors in order to help understand and improve their experience of the park. Combining real-time information on transport links, environmental factors and public use of the park builds up an overview of the whole area so that it can be run more effectively.

Beyond this there is an expectation that use of the 3D model can be extended beyond the Olympic Park and implemented London-wide. This fits into a wider ambition for City Information Modelling (CIM). As Stefan Webb from the Future Cities Catapult describes it, this is the idea that a 3D model containing sufficient data can enable us to forecast the impact of future developments and changes on the functioning of both the physical and social infrastructure of the city.


ViLo: The Virtual London Platform by CASA in VR

This is the third post of the week looking at CASA’s urban data visualisation platform ViLo. Today we are looking at the virtual reality integration with HTC Vive:

Using virtual reality technologies such as the HTC Vive we can create data-rich virtual environments in which users can freely interact with digital representations of urban spaces. In this demonstration we invite users to enter a virtual representation of the ArcelorMittal Orbit, a landmark tower located in the Queen Elizabeth Olympic Park. Using CASA’s Virtual London Platform ViLo it is possible to recursively embed 3D models of the surrounding district within that scene. These models can be digitally coupled to the actual locations they represent through the incorporation of real-time data feeds. In this way events occurring in the actual environment, such as the arrival and departure of buses and trains, are immediately represented within the virtual environment.
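
To give a rough sense of how that coupling might be wired up in Unity, the sketch below polls Transport for London’s arrivals endpoint for a single stop and logs the predictions it receives, which a marker in the scene could then respond to. This is a minimal illustration rather than ViLo’s actual implementation: the stop ID is hypothetical, the JSON parsing is left as a stub, and the `req.result` check assumes a recent Unity version.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative sketch only: polls TfL arrival predictions for one stop and
// would reposition a marker when new predictions arrive. Not ViLo's actual code.
public class ArrivalsPoller : MonoBehaviour
{
    // Hypothetical StopPoint id used for the example.
    public string stopPointId = "490008660N";
    public Transform busMarker;           // marker object placed in the scene
    public float pollIntervalSeconds = 30f;

    IEnumerator Start()
    {
        while (true)
        {
            string url = $"https://api.tfl.gov.uk/StopPoint/{stopPointId}/Arrivals";
            using (UnityWebRequest req = UnityWebRequest.Get(url))
            {
                yield return req.SendWebRequest();
                if (req.result == UnityWebRequest.Result.Success)
                {
                    // Parse the JSON here and move or animate busMarker accordingly.
                    Debug.Log(req.downloadHandler.text);
                }
            }
            yield return new WaitForSeconds(pollIntervalSeconds);
        }
    }
}
```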

Virtual reality is a technology which typically uses a head-mounted display to immerse the user in a three-dimensional, computer-generated environment, commonly referred to as a ‘virtual environment’. In this case the virtual environment is a recreation of the viewing gallery at the top of the ArcelorMittal Orbit tower, situated in the Queen Elizabeth Olympic Park in East London. CASA’s ViLo platform is then used to embed further interactive 3D models and data visualisations within that virtual environment.

Using the HTC Vive’s room-scale tracking the user can walk freely between exhibits. Alternatively they can teleport between them by pointing at a spot on the floor with one of the Vive hand controllers and clicking. The other controller is used for interacting with the exhibits, either by pointing and clicking with the trigger button, or by placing the controller over objects and using the grip buttons on its side to hold them.
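
The teleport mechanic itself is conceptually simple: cast a ray from the controller to the floor and move the camera rig to the hit point. Below is a minimal sketch of that idea; the input handling is SDK-specific (SteamVR at the time, the XR Interaction Toolkit today), so the trigger check here is just a placeholder rather than the wiring used in ViLo.

```csharp
using UnityEngine;

// Illustrative teleport sketch: ray from the hand controller to the floor,
// then move the camera rig to the hit point. Input handling is SDK-specific
// and stubbed out here.
public class PointAndClickTeleport : MonoBehaviour
{
    public Transform cameraRig;       // root of the tracked play area
    public Transform handController;  // tracked controller transform
    public LayerMask floorMask;       // layer containing teleportable floor

    void Update()
    {
        if (!TriggerPressed()) return;

        Ray ray = new Ray(handController.position, handController.forward);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 50f, floorMask))
        {
            // Move the rig origin to the hit point, keeping its current height.
            Vector3 target = hit.point;
            target.y = cameraRig.position.y;
            cameraRig.position = target;
        }
    }

    // Placeholder: replace with the trigger query of whichever VR SDK is used.
    bool TriggerPressed()
    {
        return Input.GetButtonDown("Fire1");
    }
}
```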

In the video we see how the virtual environment can be used to present a range of different media. Visitors can watch 360-degree videos and high-quality architectural visualisations, but they can also interact with the 3D models featured in that content more actively, using virtual tools like the cross-sectional plane seen in the video.

The ViLo platform provides further flexibility by enabling us to import interactive models of entire urban environments. The Queen Elizabeth Olympic Park is visualised with different layers of data provided by live feeds from Transport for London’s bus, tube and bike hire APIs. Layers are selected and removed by placing 3D icons on a panel. Virtual reality lets the user choose their own viewpoint on the data by simply moving their head. Other contextual information, such as images from Flickr or articles from Wikipedia, can also be imported.

A further feature is the ability to quickly swap between models of different locations. In the final section of the video the model of the Queen Elizabeth Olympic Park is instantly replaced by a model of the stretch of the Thames in Central London between St Paul’s Cathedral and the Tate Modern gallery. The same tools can be used to manipulate either model. Analysis of building footprint size and building use data is combined with real-time visibility analysis depicting viewsheds from any point the user designates. Wikipedia and Flickr are queried dynamically to provide additional information and context for particular buildings by simply pointing and clicking. In this way many different aspects of urban environments can be digitally reconstructed within the virtual environment, either in miniature or at 1:1 scale.

Where the ARKit version of ViLo we looked at yesterday provided portability, the virtual reality experience facilitated by the HTC Vive integration can incorporate a much wider variety of data with a far richer level of interaction. Pure data visualisation tasks may not benefit greatly from the immersion or presence provided by virtual reality. However, as we see with new creative applications like Google’s Tilt Brush and Blocks, virtual reality really shines where natural and precise manipulation of virtual objects is required. Virtual environments also provide useful meeting places for users who can’t be in the same physical location at the same time. Networked telepresence can enable professionals in different cities to work together synchronously, while virtual environments can also provide forums for public engagement where potential users can drop in at their convenience. Leveraging an urban data visualisation platform like CASA’s ViLo, virtual environments can become useful sites for experimenting with and communicating built environment interventions.

Many thanks to CASA Research Assistants Lyzette Zeno Cortes and Valerio Signorelli for their work on the ViLo virtual reality integration discussed here. Tweet @ValeSignorelli for more information about the HTC Vive integration.

For further details about ViLo see Monday’s post ViLo: The Virtual London Platform by CASA for Desktop and yesterday’s post ViLo: The Virtual London Platform by CASA with ARKit.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA’s urban data visualisation platform, ViLo. Today we’re looking at an integration with Apple’s ARKit created by CASA research assistant Valerio Signorelli.

Using Apple’s ARKit we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query buildings for information by tapping on them, analyse sunlight and shadows in real-time, and watch the boundary between the virtual and the physical blur as bouncy balls simulated in the digital environment interact with the structure of the user’s physical environment.

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit requires an Apple device with an A9 or A10 processor. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference, such as the corners of tables and chairs or the points where the floor meets the walls. These points are used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad are also captured. Using a technique called Visual Inertial Odometry the point data and motion data are combined, enabling points to be tracked even when they aren’t within the view of the camera. Effectively a virtual room and a virtual camera are constructed on the device which reference and stay synchronised with the relative positions of their physical counterparts.
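
For anyone wanting to experiment with the same tap-to-place behaviour, the sketch below shows roughly how it can be done today with Unity’s AR Foundation package, which wraps ARKit’s plane detection. This is an illustrative reconstruction rather than the code used in the demo, which was built with the original 2017 Unity ARKit plugin.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Illustrative sketch using AR Foundation (not the original 2017 ARKit plugin):
// tap the screen to place the city model on a detected horizontal plane.
public class TapToPlaceModel : MonoBehaviour
{
    public ARRaycastManager raycastManager; // on the AR Session Origin
    public GameObject cityModelPrefab;      // e.g. the Olympic Park model

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    GameObject placedModel;

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast the tap against the planes ARKit has detected.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            if (placedModel == null)
                placedModel = Instantiate(cityModelPrefab, hitPose.position, hitPose.rotation);
            else
                placedModel.transform.SetPositionAndRotation(hitPose.position, hitPose.rotation);
        }
    }
}
```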

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad’s Wi-Fi connection we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad, and even without a Wi-Fi connection offline data sets relating to the built environment are still available for visualisation. What is particularly impressive about ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA for Desktop

 

This is the first of three posts introducing CASA’s interactive urban data visualisation platform, ViLo. The platform enables visualisation of both real-time and offline spatio-temporal data sets in a digital, three-dimensional representation of the urban environment. I’ve been fortunate to work alongside the team and learn from them as the project has developed. Initially conceived as a desktop application, ViLo is now being extended by CASA with integrations for a range of different interaction devices, including virtual reality with the HTC Vive and Google Daydream, augmented reality with Google’s Project Tango and Apple’s ARKit, and so-called mixed reality with Microsoft’s HoloLens. The ViLo platform underlies and forms the basis for each of these experiments.

ViLo is an interactive urban data visualisation platform developed by The Bartlett Centre for Advanced Spatial Analysis (CASA) at UCL in collaboration with the Future Cities Catapult (FCC). The platform uses OpenStreetMap data and the MapBox API to create the digital environment, visualising the precise locations of buildings, trees and various other urban amenities on a high resolution digital terrain model. The buildings, which are generated at runtime from OpenStreetMap data, retain their original identifiers so that they can be queried for semantic descriptions of their properties. ViLo can also visualise custom spatio-temporal data sets provided by the user in various file formats. Custom 3D models can be supplied for landmarks, and it is possible to switch from the OpenStreetMap generated geometries to a more detailed CityGML model of the district in LoD2.
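
Because the buildings keep their OpenStreetMap identifiers, their semantic properties can in principle be looked up from any OSM-aware service. As an illustration, the sketch below fetches a way’s tags from the public Overpass API given its ID; this shows the general idea rather than how ViLo itself resolves building descriptions internally, and the way ID is a placeholder.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative sketch: look up the tags of an OpenStreetMap way (e.g. a building
// generated at runtime that kept its OSM id) via the public Overpass API.
// This is not necessarily how ViLo resolves building semantics internally.
public class OsmBuildingQuery : MonoBehaviour
{
    public long osmWayId = 123456789; // hypothetical way id kept on the building

    public IEnumerator FetchTags()
    {
        string query = $"[out:json];way({osmWayId});out tags;";
        string url = "https://overpass-api.de/api/interpreter?data=" + UnityWebRequest.EscapeURL(query);
        using (UnityWebRequest req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
            {
                // The JSON response contains the way's tags: building type, name, levels, etc.
                Debug.Log(req.downloadHandler.text);
            }
        }
    }
}
```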

Dynamic data sets stored in CSV format can also be visualised alongside real-time feeds, with a particular emphasis placed on mobility data. Using Transport for London’s APIs, ViLo can retrieve and visualise the locations of bike sharing docks and the availability of bikes, as well as the entire bus and tube networks, including the locations of bus stops and tube stations and the positions of buses and trains updated in real-time.
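
To illustrate how a simple CSV data set might be brought into a scene like this, the sketch below reads rows of latitude/longitude records and converts them to local Unity coordinates using a basic equirectangular approximation around an origin in the park. The file format, origin coordinates and projection are my own assumptions for the example; ViLo’s own importer and projection may well differ.

```csharp
using System;
using System.IO;
using UnityEngine;

// Illustrative sketch: read a CSV of point records (time,lat,lon,value) and
// convert each row to local Unity coordinates with a simple equirectangular
// approximation around a chosen origin. ViLo's own projection may differ.
public class CsvPointLoader : MonoBehaviour
{
    public double originLat = 51.5432;    // approximate Olympic Park latitude
    public double originLon = -0.0135;    // approximate Olympic Park longitude
    public GameObject pointPrefab;

    const double EarthRadius = 6378137.0; // metres

    public void LoadCsv(string path)
    {
        foreach (string line in File.ReadLines(path))
        {
            string[] parts = line.Split(',');
            if (parts.Length < 4 || parts[0] == "time") continue; // skip header/bad rows

            double lat = double.Parse(parts[1]);
            double lon = double.Parse(parts[2]);

            // Metres east/north of the origin (small-area approximation).
            double x = (lon - originLon) * Mathf.Deg2Rad * EarthRadius * Math.Cos(originLat * Mathf.Deg2Rad);
            double z = (lat - originLat) * Mathf.Deg2Rad * EarthRadius;

            Instantiate(pointPrefab, new Vector3((float)x, 0f, (float)z), Quaternion.identity, transform);
        }
    }
}
```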

The ViLo platform also integrates real-time weather information from Wunderground’s API, a three-dimensional visualisation of Flickr photos relating to points of interest, and a walking route planner for predefined locations using the MapBox API.

An innovative aspect of the ViLo project is the possibility of conducting real-time urban analysis using the various data sets loaded into the digital environment. At the current stage it is possible to conduct two-dimensional and three-dimensional visibility analysis (intervisibility; area and perimeter of the visible surfaces; maximum, minimum and average distance; compactness, convexity and concavity).
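
As a rough illustration of the 2D case, a visibility analysis of this kind can be approximated by sweeping rays from a viewpoint and measuring how far each one travels before hitting geometry. The sketch below derives area and distance statistics from such a sweep; ViLo’s actual analysis is considerably richer, so treat this only as a sketch of the principle.

```csharp
using UnityEngine;

// Illustrative 2D isovist sketch: sweep rays horizontally from a viewpoint,
// record the visible distance in each direction, and derive simple metrics
// (area via a triangle fan, average and maximum distance). ViLo's actual
// visibility analysis is richer (3D, perimeter, compactness, convexity).
public class IsovistSketch : MonoBehaviour
{
    public int rayCount = 360;
    public float maxRange = 500f;

    public void Compute(Vector3 viewpoint)
    {
        float[] distances = new float[rayCount];
        float area = 0f, sum = 0f, max = 0f;
        float step = 360f / rayCount;

        for (int i = 0; i < rayCount; i++)
        {
            Vector3 dir = Quaternion.Euler(0f, i * step, 0f) * Vector3.forward;
            RaycastHit hit;
            distances[i] = Physics.Raycast(viewpoint, dir, out hit, maxRange) ? hit.distance : maxRange;
            sum += distances[i];
            if (distances[i] > max) max = distances[i];
        }

        // Approximate isovist area as a fan of thin triangles between adjacent rays.
        float theta = step * Mathf.Deg2Rad;
        for (int i = 0; i < rayCount; i++)
        {
            float next = distances[(i + 1) % rayCount];
            area += 0.5f * distances[i] * next * Mathf.Sin(theta);
        }

        Debug.Log($"Isovist area ≈ {area:F1} m², mean distance {sum / rayCount:F1} m, max {max:F1} m");
    }
}
```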

While originally conceived as part of an effort to visualise London in 3D, the ViLo platform can be used to visualise any urban area across the globe. The first version of the platform demonstrated here focuses on the Queen Elizabeth Olympic Park in East London, a new district purpose-built to host the 2012 Summer Olympic and Paralympic Games.

Credits

The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

From a purely aesthetic point of view the design of the desktop application reminds me strongly of the image from the Guide for visitors to Ise Shrine, Japan, 1948–54, which visualisation expert Edward Tufte discussed particularly favourably in his book Envisioning Information (1990). Our current efforts at ‘escaping flatland’ are a continuation of earlier work undertaken by Andy Hudson-Smith and Mike Batty at CASA in the early 2000s to create a Virtual London.

One of the main advances is our increased ability to integrate real-time data in such a way that the digital representation can be more fully coupled to the actual environment in order to reflect change. We also benefit from advances in 3D visualisation and real-time rendering afforded by the use of video game engines such as Unity. As a result the ViLo platform provides a good demonstration of our current capabilities in relation to the observation of dynamic processes as they occur in real-time at an urban scale.

Urban X-Rays: Wi-Fi for Spatial Scanning

Many of us in cities increasingly depend on Wi-Fi connectivity for communication as we go about our everyday lives. However, beyond meeting our mobile and wireless communication needs, the intentional or directed use of Wi-Fi also provides new possibilities for urban sensing.

In this video Professor Yasamin Mostofi from the University of California discusses research into scanning, or x-raying, built structures using a combination of drones and Wi-Fi transceivers. By transmitting a Wi-Fi signal from a drone on one side of a structure, and using a drone on the opposite side to receive and measure the strength of that signal, it is possible to build up a 3D image of the structure and its contents. This methodology has great potential in areas like structural monitoring of the built environment, archaeological surveying, and even emergency response, as outlined on the 3D Through-Wall Imaging project page.

Particularly with regard to emergency response, one can easily imagine the value of being able to identify people trapped or hiding within a structure. Indeed Mostofi’s group have also researched the potential these techniques offer for monitoring humans in their Head Counting with WiFi project, as demonstrated in the next video.

What is striking is that this technique enables individuals to be counted without themselves needing a Wi-Fi enabled device. Several potential uses are proposed which are particularly relevant to urban environments:

For instance, heating and cooling of a building can be better optimized based on learning the concentration of the people over the building. Emergency evacuation can also benefit from an estimation of the level of occupancy. Finally, stores can benefit from counting the number of shoppers for better business planning.

Given that WiFi networks are available in many buildings, we envision that they can provide a new way for occupancy estimation, in addition to cameras and other sensing mechanisms. In particular, its potential for counting behind walls can be a nice complement to existing vision-based methods.

I’m fascinated by the way experiments like this reveal the hidden potentials already latent within many of our cities. The roll-out of citywide Wi-Fi infrastructure provides the material support for an otherwise invisible electromagnetic environment that the designers Dunne & Raby have called ‘Hertzian Space’. By finding new ways to sense the dynamics of this space, cities can tap into these resources and exploit new potentialities, hopefully for the benefit of both the city and its inhabitants.

Thanks to Geo Awesomeness for posting the drone story here.

The Art & Science of 3D Cities at the Transport Systems Catapult

Back in March I attended a day-long workshop at the Transport Systems Catapult (TSC) in Milton Keynes on the subject of ‘The Barriers to Building 3D Synthetic Environments’. The aim of the workshop was to bring together key SMEs and academics to collaboratively identify challenges and discuss solutions for the creation of virtual environments suitable for simulating and testing transport scenarios.

Alongside presentations from the Transport Systems, Future Cities and Satellite Applications catapults, a number of SMEs presented on topics as diverse as LiDAR data capture, GNSS positioning, 3D GIS and the use of GIS data in game engines. For my purposes the following talk on ‘The Art & Science of 3D Cities’ by Elliot Hartley of Garsdale Design was particularly interesting and raised a number of great points:

One of the key challenges for the generation and use of 3D data discussed by Elliot derives from the heightened expectations created by the depiction of 3D urban environments in films, video games and Google Earth. The truth is that creating these kinds of environments requires considerable investment in both time and money. Elliot’s talk poses key questions for stakeholders embarking on a 3D project:

  • Why do you want a 3D model?
  • Do you actually need a 3D model?
  • What kind of 3D model do you want?
  • What 3D model do you actually need?
    • Small areas with lots of detail?
    • Large areas with little detail?
  • How much time and/or money do you have?
  • Will you want to publish the model?
  • What hardware and software do you have?
  • What’s the consequence of getting the model wrong?

While the primary focus for the day was the practical and technical challenges of creating 3D environments, the further implication of Elliot’s discussion is that the use of 3D data and the creation of virtual environments can no longer be considered a purely technical activity with neutral products and outputs. For me the last question in particular foregrounded the stakes involved in moving beyond visualisation toward the growing use of 3D data in various forms of analysis. Thanks to Elliot for the stimulating talk.

After the presentations we had a tour of the TSC facilities and then broke into working groups to discuss a number of themes. A report and summary are expected to be published by the TSC soon.

A Brief History of Google Maps…and a not so Brief Video

In this long but useful presentation from 2012, Google Maps vice president Brian McClendon and colleagues provide a detailed overview of the platform’s evolution. Some of the key points are summarised below.

In the mid 90s Silicon Graphics developed the ‘Space-to-Your-Face’ demo to demonstrate the power of their Onyx Infinite Reality CGI workstation. In the demo the view zooms from orbit to the Matterhorn via Lake Geneva, using a combination of satellite imagery, aerial imagery and terrain data. This is included in the Silicon Graphics showreel from 1996, which can be viewed on YouTube here.

In 2001 the company Keyhole was founded as a startup providing mapping for the travel and real estate industries on the basis of a subscription model. After achieving wider recognition through use by CNN during the invasion of Iraq in 2003, the company was subsequently acquired by Google in 2004.

At the same time Google were working on Google Maps, which used a combination of client-side processing via AJAX and pre-rendered map tiles to enable its highly interactive, smooth-scrolling ‘slippy map’ system. Now that network bandwidth and processing power have increased, Google Maps tiles are no longer pre-rendered and are instead generated on demand.
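
The slippy map approach relies on a now-standard tile addressing scheme: at zoom level z the Web Mercator world is divided into a 2^z by 2^z grid of (typically 256-pixel) tiles, and the client simply requests the tiles covering the current viewport. The sketch below shows the usual latitude/longitude to tile index conversion; it illustrates the general scheme rather than Google’s internal implementation.

```csharp
using System;

// Standard Web Mercator "slippy map" tile addressing: at zoom z the world is a
// 2^z x 2^z grid of tiles. Shown to illustrate the tiling scheme in general,
// not Google's internal implementation.
public static class SlippyTiles
{
    public static (int x, int y) LatLonToTile(double lat, double lon, int zoom)
    {
        double n = Math.Pow(2, zoom);
        int x = (int)Math.Floor((lon + 180.0) / 360.0 * n);
        double latRad = lat * Math.PI / 180.0;
        int y = (int)Math.Floor((1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI) / 2.0 * n);
        return (x, y);
    }
}

// Example: SlippyTiles.LatLonToTile(51.5432, -0.0135, 15) returns the indices of
// the tile covering the Queen Elizabeth Olympic Park at zoom level 15.
```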

Between 2005 and 2008 Google Maps licensed further data to obtain a full world map with more comprehensive coverage. At the same time Google were also working to acquire high resolution imagery.

Street View started in five US cities in 2007 but had expanded to 3,000 cities in 39 countries by 2012. In 2008 Google released Map Maker to capture data in places where basic mapping data and Street View were absent.

Google’s Ground Truth project now enables them to generate their own maps from raw data by combining satellite and aerial imagery with road data and information captured via Street View. This data is processed with an internally developed application called ‘Atlas’. With the aid of advanced computer vision techniques they are able to detect and correct errors and extract further contextual information from the raw imagery, helping to make their maps more complete and accurate. This includes details as specific as the names of streets and businesses appearing on signs.

Corrections are also crowd-sourced from users with the aid of their ‘Report Maps Issue’ feature. Staff at Google are then able to verify the issue with Street View, edit the map and publish the corrections within minutes.

The presentation moves on to further discussions on ‘Google Maps For Good’ and their work with NGOs (19:20), ‘Google Maps for Mobile’ and the provision of offline map availability (27:35), the evolution of the equipment used to capture Street View (31:30), and finally the evolution of their 3D technology (37:40). The final discussion in particular reiterates the content in my post yesterday from a slightly different perspective.

What I found particularly interesting in this video was the continued manual intervention via Atlas but also the extent to which they are able to gather contextual information from Street View imagery.