Author Archives: Virtual Architectures

ViLo and the Future of Planning

Following our recent posts on CASA’s digital urban visualisation platform ViLo, the Future Cities Catapult, who collaborated with CASA on the project, have released a video discussing it in further detail. Commencing in 2014, the project aimed to identify which combinations of urban data might be most valuable to urban planners, site operators and citizens. CASA research associates Lyzette Zeno Cortes and Valerio Signorelli discuss how the platform was created using the Unity game engine in order to understand its potential for visualising information in real time.

Ben Edmonds from the London Legacy Development Corporation, who run the Queen Elizabeth Olympic Park where ViLo has been tested, discusses how the platform was used to gather environmental data and qualitative data from park visitors in order to help understand and improve their experience of the park. Including real-time information on transport links, environmental factors and public usage of the park helps to build up an overview of the whole area so that it can be run more effectively.

Beyond this there is an expectation that use of the 3D model can be extended beyond the Olympic Park and implemented London-wide. This fits into a wider expectation for City Information Modelling (CIM). As Stefan Webb from the Future Cities Catapult describes it, this is the idea that a 3D model containing sufficient data can enable us to forecast the impact of future developments and changes on the functioning of both the physical and social infrastructure of the city.


Digital Literacy in the context of Smart Cities

September 8th is UNESCO’s International Literacy Day. This year the theme is ‘Literacy in a digital world’:

At record speed, digital technologies are fundamentally changing the way people live, work, learn and socialise everywhere. They are giving new possibilities to people to improve all areas of their lives including access to information; knowledge management; networking; social services; industrial production, and mode of work. However, those who lack access to digital technologies and the knowledge, skills and competencies required to navigate them, can end up marginalised in increasingly digitally driven societies. Literacy is one such essential skill.

Just as knowledge, skills and competencies evolve in the digital world, so does what it means to be literate. In order to close the literacy skills gap and reduce inequalities, this year’s International Literacy Day will highlight the challenges and opportunities in promoting literacy in the digital world, a world where, despite progress, at least 750 million adults and 264 million out-of-school children still lack basic literacy skills.

International Literacy Day is celebrated annually worldwide and brings together governments, multi- and bilateral organizations, NGOs, private sectors, communities, teachers, learners and experts in the field. It is an occasion to mark achievements and reflect on ways to counter remaining challenges for the promotion of literacy as an integral part of lifelong learning within and beyond the 2030 Education Agenda.

Over the past few days I’ve been preparing for a conference talk this weekend, and it has become clear to me that digital literacy is of key importance in helping individuals to engage with urban technologies and exercise digital agency. It is through digital literacy that people living in cities will be able to understand and make informed decisions about the use and impact of emerging technologies. Smartphones, the Internet of Things, driverless cars, drones, artificial intelligence and automation can be very daunting, and their implications unclear.

A common response to the perceived imposition of digital technologies is to try to disconnect. It is up to the individual to determine to what extent they engage with such technologies, but ignoring them altogether is no solution. At the very least we have to provide those who are sufficiently capable with the opportunity to inform themselves, enabling them to assess the advantages and disadvantages of different technologies more effectively. We need to move away from the kind of binary thinking that leads to an all-or-nothing approach to technology. Fostering digital literacy is key to helping individuals and communities negotiate lives that are increasingly mediated by digital technologies.

At the conference on Monday afternoon I’ll be presenting my paper ‘Opening Urban Mirror Worlds: Possibilities for Participation in Digital Urban Dataspaces’. In this talk I’ll discuss some of the ways in which technologies like virtual and augmented reality might be used to give people access to urban data. I’ll also be part of a panel discussing ‘Engagement in the Smart City’. Further details can be found on the conference website: Whose Right To The Smart City.

Microsoft’s Vision for Mixed and Mixing Realities

A couple of days ago the RoadtoVR website posted about Microsoft’s patent for a wand-like controller which appeared in the concept video above. I thought it was worth re-posting the video here as it provides a good indication of what a mixed reality future might look like. In particular it considers a future where augmented and virtual reality systems are used side by side. Where some companies have firmly backed one platform or the other (VR in the case of Oculus and the HTC Vive, AR in the case of Meta), more established companies like Microsoft and Google have the resources and brand penetration to back both. Whether Apple follows suit or commits everything to AR following the recent release of ARKit remains to be seen. As such it is interesting to compare the kinds of mixed reality ecosystems these companies want to create. It’s then up to developers and consumers to determine which hardware, and by extension which vision, they are most inclined to back.

There are many challenges to overcome before this kind of mixed reality interaction becomes possible. The situated use of AR by the character Penny, and the use of VR for telepresence by Samir, are particularly well motivated. But what are the characters Samir and Chi actually going to see in this interaction? Will it make a difference if they don’t experience each other’s presence to the same degree? And how is Samir’s position going to be referenced relative to Penny’s? Many technical challenges still need to be overcome, and compromises will need to be made. For companies like Microsoft and Google the challenge is in convincing developers and consumers that the hardware ecosystem they are providing is sufficiently close to that vision of a mixed reality future today…and, crucially, all at the right price.

Nature Smart Cities: Visualising IoT bat monitor data with ViLo

Over the past few weeks I’ve been collaborating with researchers at the Intel Collaborative Research Institute (ICRI) for Urban IoT to integrate data from bat monitors on the Queen Elizabeth Olympic Park into CASA’s digital visualisation platform, ViLo. At present we are visualising the geographic location of each bat monitor with a pin that includes an image showing the locational context of each sensor and a flag indicating the total number of bat calls recorded by that sensor the previous evening. A summary box in the user interface indicates the total number of bat monitors in the vicinity and the total number of bat calls recorded the previous evening. Animated bats are also displayed above the pins to help users quickly identify which bat monitors have results from the previous evening.

The data being visualised here comes from custom-made ‘Echo Box’ bat monitors that have been specifically designed by ICRI researchers to detect bat calls from ambient sound. They have been created as part of a project called Nature Smart Cities, which intends to develop the world’s first open source system for monitoring bats using Internet of Things (IoT) technology. IoT refers to the idea that all sorts of objects can be made to communicate and share useful information via the internet. Typically IoT devices incorporate some sort of sensor that can process and transmit information about the environment and/or actuators that respond to data by effecting changes within the environment. Examples of IoT devices in a domestic setting would be Philips Hue lighting, which can be controlled remotely using a smartphone app, or Amazon’s Echo, which can respond to voice commands in order to do things like cue up music from Spotify, control your Hue lighting or other IoT devices, and of course order items from Amazon. Billed as a ‘“shazam” for bats’, the ICRI are hoping to use IoT technology to show the value of similar technologies for sensing and conserving urban wildlife populations, in this case bats.

Each Echo Box sensor uses an ultrasonic microphone to record a 3 second sample of audio every 6 seconds. The audio is then processed and transformed into an image called a spectrogram. This is a bit like a fingerprint for sound, showing the amplitude of sounds across different frequencies. Bat calls can be clearly identified due to their high frequencies. Computer algorithms then analyse the spectrogram, comparing it to those of known bat calls in order to identify which type of bat most likely made the call.
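The spectrogram step described above can be sketched in a few lines of Python. This is an illustration of the general technique, not the Echo Box’s actual code: a synthetic 45 kHz tone stands in for a bat call, and the sample rate is an assumed value chosen to cover ultrasonic frequencies.

```python
import numpy as np
from scipy import signal

SAMPLE_RATE = 192_000        # assumed rate, high enough for ultrasonic calls
DURATION = 3.0               # the 3-second sample window mentioned above

# Synthesise a stand-in "bat call": a pure 45 kHz tone.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
audio = 0.5 * np.sin(2 * np.pi * 45_000 * t)

# Spectrogram: amplitude across frequency bins over time.
freqs, times, spec = signal.spectrogram(audio, fs=SAMPLE_RATE, nperseg=1024)

# The dominant frequency sits well above 20 kHz, flagging a likely bat call.
peak_freq = freqs[np.argmax(spec.mean(axis=1))]
print(peak_freq)
```

A real classifier would go on to compare the whole spectrogram image against reference spectrograms for known species, rather than just picking the peak frequency.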

The really clever part from a technical perspective is that all of this processing can be done on the device using one of Intel’s Edison chips. Rather than having large amounts of audio transmitted back to a centralised system for storage and analysis, Intel are employing ‘edge processing’, processing on the device at the edge of the network, to massively reduce the amount of data that needs to be sent over the network back to their central data repository. Once the spectrogram has been produced the original sound files are immediately deleted as they are no longer required. Combined with the fact that sounds within the range of human speech and below 20kHz are ignored by the algorithms that process the data, this ensures that the privacy of passersby is protected.
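The privacy safeguard described above can be sketched as follows. This is assumed logic, not ICRI’s implementation: frequency bins within human hearing and speech range are discarded before anything leaves the device, and the raw audio is deleted once the spectrogram exists.

```python
import numpy as np
from scipy import signal

SAMPLE_RATE = 192_000                      # assumed ultrasonic sample rate
rng = np.random.default_rng(0)
audio = rng.standard_normal(SAMPLE_RATE * 3)   # stand-in 3 s recording

freqs, times, spec = signal.spectrogram(audio, fs=SAMPLE_RATE, nperseg=1024)

# Zero every frequency bin at or below 20 kHz so speech cannot be recovered
# from the spectrogram that is transmitted over the network.
spec[freqs <= 20_000, :] = 0.0

# Delete the raw recording immediately; only the masked spectrogram remains.
del audio
```

Because only the masked spectrogram is retained, neither the device nor the central repository ever holds recoverable speech.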

This is a fascinating project and it has been great having access to such an unusual data set. Further work can focus on visualising data from previous evenings as a time series to better understand patterns of bat activity over the course of the study. We also hope to investigate the use of sonification, incorporating recordings of typical bat calls for each species in order to create a soundscape that complements the visualisation and engages with the study’s core sonic aspect.

Kind thanks to Sarah Gallacher and the Intel Collaborative Research Institute for providing access to the data. Thanks also to the Queen Elizabeth Olympic Park for enabling this research. For more information about bats on the Queen Elizabeth Olympic Park check out the project website: Nature Smart Cities.

NXT BLD: Emerging Design Technology for the Built Environment

NXT BLD is a new conference in London specifically aimed at discussion of emerging technologies and their applications in the fields of architecture, engineering and construction. Organised by AEC Magazine, the first event was held at the British Museum on the 28th of June 2017. Videos of the event presentations have been released and provide some useful insight into the ways in which technologies like VR are being used within industry. I found the following talk by Dan Harper, managing director of CityScape Digital, particularly useful:

In the video Dan discusses the motivation for their use of VR. Focused on architectural visualisation, the company often found that the high quality renderings they were producing quickly became outdated because render times were not keeping pace with the iterative nature of the design process. They found that the real-time rendering capabilities of game engines, in their case Unreal, helped them iterate images more quickly. Encountering similar challenges with the production of 3D models, they realised that having clients inspect the 3D model could serve not only for communication but also as a spatial decision-making tool. Supported by 3D data, real-time rendering and VR, which provides a one-to-one scale experience of the space, value can be added and costs saved by placing a group of decision makers within the space they are discussing rather than relying on the personal impressions each would draw from their own subjective imagining based on 2D plans and architectural renderings.

Innovating the design process with VR not only makes it less expensive but also makes the product more valuable. With reference to similar uses of VR in the car industry, Dan identifies opportunities for ‘personalisation’, ‘build to order’, ‘collaboration’, experiential ‘focus grouping’, ‘efficient construction’ and ‘driving margins at point of sale’. Case studies include the Sky Broadcasting Campus at Osterley, the Battersea Power Station redevelopment and the Earls Court masterplan. These use cases demonstrate that return on investment is increased through reuse of 3D models and assets in successive stages of a project, from concept design, investor briefings and stakeholder consultation right through to marketing.

Videos of the other presentations from the day can be found on the NXT BLD website here.

ViLo: The Virtual London Platform by CASA in VR

This is the third post of the week looking at CASA’s urban data visualisation platform ViLo. Today we are looking at the virtual reality integration with HTC Vive:

Using Virtual Reality technologies such as the HTC Vive we can create data rich virtual environments in which users can freely interact with digital representations of urban spaces. In this demonstration we invite users to enter a virtual representation of the ArcelorMittal Orbit tower, a landmark tower located in the Queen Elizabeth Olympic Park. Using CASA’s Virtual London Platform ViLo it is possible to recursively embed 3D models of the surrounding district within that scene. These models can be digitally coupled to the actual locations they represent through the incorporation of real-time data feeds. In this way events occurring in the actual environment, the arrival and departure of buses and trains for example, are immediately represented within the virtual environment in real-time.

Virtual Reality is a technology which typically uses a head mounted display to immerse the user in a three dimensional, computer generated environment, regularly referred to as a ‘virtual environment’. In this case the virtual environment is a recreation of the viewing gallery at the top of the ArcelorMittal Orbit tower, situated at the Queen Elizabeth Olympic Park in East London. CASA’s ViLo platform is then used to embed further interactive 3D models and data visualisations within that virtual environment.

Using the HTC Vive’s room-scale tracking the user can freely walk between exhibits. Alternatively they can teleport between them by pointing and clicking at a spot on the floor with one of the Vive hand controllers. The other hand controller is used for interacting with the exhibits, either by pointing and clicking with the trigger button, or by placing the controller over objects and using the grip buttons on its side to hold them.

In the video we see how the virtual environment can be used to present a range of different media. Visitors can watch 360 degree videos and high quality architectural visualisations, but they can also interact with the 3D models featured in that content more actively using virtual tools like the cross-sectional plane seen in the video.

The ViLo platform provides further flexibility by enabling us to import interactive models of entire urban environments. The Queen Elizabeth Olympic Park is visualised with different layers of data provided by live feeds from Transport for London’s bus, tube, and bike hire APIs. Different layers are selected and removed here by the placing of 3D icons on a panel. Virtual reality affords the user the ability to choose their own view point on the data by simply moving their head. Other contextual information like images from Flickr or articles from Wikipedia can also be imported.
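The live data layers described above are driven by feeds such as TfL’s bike hire API. As a hedged sketch of how one such feed might be turned into a map layer, the snippet below parses a hypothetical, abbreviated BikePoint-style JSON response into pin-ready tuples; the field names (`commonName`, `additionalProperties`, `NbBikes`) follow TfL’s unified API conventions but should be treated as assumptions here.

```python
import json

# Hypothetical abbreviated response, standing in for a live call to
# TfL's BikePoint endpoint (https://api.tfl.gov.uk/BikePoint).
sample_response = json.dumps([
    {"commonName": "Olympic Park Stn", "lat": 51.5432, "lon": -0.0135,
     "additionalProperties": [{"key": "NbBikes", "value": "7"}]},
    {"commonName": "Stratford High St", "lat": 51.5369, "lon": -0.0085,
     "additionalProperties": [{"key": "NbBikes", "value": "0"}]},
])

def to_layer(raw):
    """Turn a BikePoint-style response into (name, lat, lon, bikes)
    tuples ready to be mapped onto 3D pins in a scene."""
    layer = []
    for station in json.loads(raw):
        bikes = next(int(p["value"])
                     for p in station["additionalProperties"]
                     if p["key"] == "NbBikes")
        layer.append((station["commonName"], station["lat"],
                      station["lon"], bikes))
    return layer

print(to_layer(sample_response))
```

In a platform like ViLo this kind of parsing would run on a polling loop, with each refreshed layer re-skinning the 3D pins in the scene.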

A further feature is the ability to quickly swap between models of different locations. In the final section of the video the model of the Queen Elizabeth Olympic Park is immediately replaced by a model of the area of the Thames in Central London between St Paul’s Cathedral and the Tate Modern gallery. The same tools can be used to manipulate either model. Analysis of building footprint size and building use data is combined with real-time visibility analysis depicting viewsheds from any point the user designates. Wikipedia and Flickr are queried dynamically to provide additional information and context for particular buildings by simply pointing and clicking. In this way many different aspects of urban environments can be digitally reconstructed within the virtual environment, either in miniature or at 1:1 scale.

Where the ARKit-powered version of ViLo we looked at yesterday provided portability, the virtual reality experience facilitated by the HTC Vive integration can incorporate a much wider variety of data with a far richer level of interaction. Pure data visualisation tasks may not benefit greatly from the immersion or presence provided by virtual reality. However, as we see with new creative applications like Google’s Tilt Brush and Blocks, virtual reality really shines in cases where natural and precise interaction is required to manipulate virtual objects. Virtual environments also provide useful sites for users who can’t be in the same physical location at the same time. Networked telepresence can be used to enable professionals in different cities to work together synchronously. Alternatively, virtual environments can provide forums for public engagement where potential users can drop in at their convenience. Leveraging an urban data visualisation platform like CASA’s ViLo, virtual environments can become useful sites for experimentation and the communication of built environment interventions.

Many thanks to CASA Research Assistants Lyzette Zeno Cortes and Valerio Signorelli for their work on the ViLo virtual reality integration discussed here. Tweet @ValeSignorelli for more information about the HTC Vive integration.

For further details about ViLo see Monday’s post ViLo: The Virtual London Platform by CASA for Desktop and yesterday’s post ViLo: The Virtual London Platform by CASA with ARKit.


The Bartlett Centre for Advanced Spatial Analysis (CASA)

Project Supervisor – Professor Andrew Hudson-Smith
Backend Development – Gareth Simons
Design and Visualisation – Lyzette Zeno Cortes
VR, AR and Mixed Reality Interaction – Valerio Signorelli / Kostas Cheliotis / Oliver Dawkins
Additional Coding – Jascha Grübel

Developed in collaboration with The Future Cities Catapult (FCC)

Thanks to the London Legacy Development Corporation and Queen Elizabeth Olympic Park for their cooperation with the project.

ViLo: The Virtual London Platform by CASA with ARKit

Yesterday I posted about CASA’s urban data visualisation platform, ViLo. Today we’re looking at an integration with Apple’s ARKit that has been created by CASA research assistant Valerio Signorelli.

Using ARKit by Apple we can place and scale a digital model of the Queen Elizabeth Olympic Park, visualise real-time bike sharing and tube data from TfL, query buildings for information by tapping on them, analyse sunlight and shadows in real-time, and watch the boundary between the virtual and physical blur as bouncy balls simulated in the digital environment interact with the structure of the user’s physical environment.

The demo was created in Unity and deployed to an Apple iPad Pro running iOS 11. ARKit needs an Apple device with an A9 or A10 processor in order to work. In the video posted above you can see ARKit in action. As the camera observes the space around the user, computer vision techniques are employed to identify specific points of reference like the corners of tables and chairs, or the points where the floor meets the walls. These points can be used to generate a virtual 3D representation of the physical space on the device, currently constructed of horizontally oriented planes. As the user moves around, data about the position and orientation of the iPad are also captured. Using a technique called Visual Inertial Odometry, the point data and motion data are combined, enabling points to be tracked even when they aren’t within the view of the camera. Effectively a virtual room and virtual camera are constructed on the device which reference and synchronise with the relative positions of their physical counterparts.
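The idea behind that tracking can be illustrated with a deliberately simplified sketch (a toy model, not ARKit’s algorithm, and ignoring rotation and sensor drift): once a feature point is anchored in world space, inertial motion data alone keeps its position relative to the device known, even while it is out of view.

```python
import numpy as np

# A feature point (say, a table corner) anchored in world coordinates
# after being detected by the camera.
anchor_world = np.array([2.0, 0.0, 5.0])

# Device starts at the world origin.
camera_pos = np.array([0.0, 0.0, 0.0])

# Inertial data reports how the device moved between frames,
# even if the camera can no longer see the anchor.
motion_delta = np.array([0.5, 0.0, 1.0])
camera_pos = camera_pos + motion_delta

# The anchor's position relative to the device remains known.
relative = anchor_world - camera_pos
print(relative)
```

Real VIO additionally fuses camera re-observations with the inertial estimate to correct the drift that pure dead reckoning accumulates.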

Once ARKit has created its virtual representation of the room, ViLo can be placed within the space and will retain its position there. Using the iPad’s WiFi receiver we can then stream in real-time data just as we did with the desktop version. The advantage of the ARKit integration is that you can now take ViLo wherever you can take the iPad. Even without a WiFi connection, offline data sets related to the built environment are still available for visualisation. What’s particularly impressive with ARKit running on the iPad is the way it achieves several of the benefits provided by the Microsoft HoloLens on a consumer device. Definitely one to watch! Many thanks to Valerio for sharing his work. Tweet @ValeSignorelli for more information about the ARKit integration.

For further details about ViLo see yesterday’s post ViLo: The Virtual London Platform by CASA for Desktop. Check in tomorrow for details of ViLo in virtual reality using HTC Vive.

