
Living With A Digital Twin: CASA Research into IoT technologies at Here East on the Olympic Park

Last week, as part of the final project for my PhD, I completed the installation of a network of eighteen environment-sensing devices at UCL’s Here East campus on the Queen Elizabeth Olympic Park.

The custom-built devices have been donated to this project by the Intel Collaborative Research Institute (ICRI). For the next four months each device will be measuring temperature, humidity, air pressure and ambient light levels at different locations throughout the Here East campus on a minute-by-minute basis.

Each of the sensor devices is connected to the internet and participates in the Internet of Things (IoT) by transmitting the data it collects to a cloud-based platform that aggregates it for further analysis. That data will simultaneously be visualised in real time in a dynamic 3D model or ‘Digital Twin’ of the Here East campus. In this way changes in the state of the building’s internal environment will be mirrored, in the instant they occur, by corresponding changes in the site’s 3D digital twin.
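The post does not name the cloud platform or messaging protocol in use, but as a minimal sketch of the publishing side, assuming an MQTT broker and a simple JSON payload (the broker URL, topic scheme and field names below are my own invention), each device might report its minute-by-minute readings like this:

```typescript
// Illustrative sketch only: the broker, topic scheme and payload format
// are assumptions, not the study's actual stack.
import mqtt from "mqtt";

interface SensorReading {
  deviceId: string;    // one of the eighteen devices at Here East
  timestamp: string;   // ISO 8601
  temperature: number; // °C
  humidity: number;    // % relative humidity
  pressure: number;    // hPa
  light: number;       // ambient light level, lux
}

// Placeholder reads; on a real device these would call the sensor drivers.
const readTemperature = (): number => 21.5;
const readHumidity = (): number => 48.2;
const readPressure = (): number => 1013.1;
const readLight = (): number => 320;

const client = mqtt.connect("mqtt://broker.example.org");

client.on("connect", () => {
  // Publish one reading per minute, matching the minute-by-minute cadence.
  setInterval(() => {
    const reading: SensorReading = {
      deviceId: "here-east-07",
      timestamp: new Date().toISOString(),
      temperature: readTemperature(),
      humidity: readHumidity(),
      pressure: readPressure(),
      light: readLight(),
    };
    client.publish(
      `hereeast/sensors/${reading.deviceId}`,
      JSON.stringify(reading)
    );
  }, 60_000);
});
```

On the receiving side, the platform aggregates these messages and pushes each change of state on to the 3D view, so the twin updates as the readings arrive.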

The technology has direct application for building and facility managers who want the ability to monitor the environmental conditions of the sites they operate in real time. In this project we attempt to take the technology further and make it more participatory by opening up the digital twin system to other building occupants.

To this end the digital twin at Here East is being augmented with openly available data relating to the site’s wider physical and social context. In addition to live data feeds from the internal sensors, the digital twin will also incorporate information on external environmental conditions and interactions via social media. As the study proceeds, further feeds of information can be added as required.
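As an illustration of how one of these external feeds might be folded in, here is a minimal sketch assuming a hypothetical open-data weather endpoint; the URL and response shape below are invented for the example:

```typescript
// Hypothetical external feed: the endpoint URL and response shape are
// assumptions for illustration, not a real service used by the project.
interface InternalReading {
  deviceId: string;
  temperature: number; // °C
}

interface ExternalConditions {
  outsideTemperature: number; // °C
  windSpeed: number;          // m/s
}

async function fetchExternalConditions(): Promise<ExternalConditions> {
  const res = await fetch("https://data.example.org/olympic-park/weather/latest");
  if (!res.ok) throw new Error(`external feed unavailable: ${res.status}`);
  return (await res.json()) as ExternalConditions;
}

// Compose a single twin state so the indoor readings can be seen in the
// context of conditions outside the building.
async function composeTwinState(internal: InternalReading[]) {
  const external = await fetchExternalConditions();
  return { internal, external, updatedAt: new Date().toISOString() };
}
```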

In the coming weeks the digital twin will be made available online. Visitors to the site will also be able to interact with the sensors more directly using their mobile phones with the aid of beacon technology installed in each of the sensor devices. Efforts are also being made to open the data to interested researchers.

The objectives of the project are:

  • To operationalise the use of IoT and Digital Twin technologies in the built environment
  • To understand how building occupants and visitors interact and engage with IoT
  • To explore and assess methods for visualising and interacting with sensor data and IoT systems in real time

If you wish to read more about the project a paper I presented at the GISRUK 2018 conference is available for download here.

Authors: Oliver Dawkins, Adam Dennett and Andy Hudson-Smith, all from the Bartlett Centre for Advanced Spatial Analysis, University College London, WC1E 6BT.

Note: This blog post has been cross-posted on the CASA website news pages here.


A/B: Participatory Navigation with Augmented Reality

Imagine navigating the city with an augmented reality app, but where the choice of route is determined by a crowd and the decision floats in front of you like the hallucinations of a broken cyborg. A/B was an experiment in participatory voting, live streaming and augmented reality by Harald Haraldsson. Created for the digital art exhibition 9to5.tv, the project allowed an online audience to guide Haraldsson around Chinatown in New York for 42 minutes. This was achieved through a web interface presenting the livestream from a Google Pixel smartphone.

The smartphone was running Haraldsson’s own augmented reality app implemented with the Unity game engine and Google’s ARCore SDK. At key points Haraldsson could use the app to prompt viewers to vote on the direction he should take, either A or B. ARCore enabled the A/B indicators to be spatially referenced to his urban surroundings in 3D so that they appeared to be floating in the city. Various visual effects and distortions were also overlaid or spatially referenced to the scene.
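The plumbing behind the piece isn’t documented, but as a rough sketch of the participatory mechanism, a WebSocket server could tally the audience’s A and B votes over a fixed window and broadcast the winner for the AR app to anchor into the street scene. The message format and the 30-second window below are assumptions:

```typescript
// Sketch of a participatory A/B tally; not Haraldsson's actual backend.
import { WebSocketServer } from "ws";

type Choice = "A" | "B";
const votes: Record<Choice, number> = { A: 0, B: 0 };

const wss = new WebSocketServer({ port: 8080 });

// Each viewer sends a bare "A" or "B" while a vote is open.
wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    const choice = data.toString().trim().toUpperCase();
    if (choice === "A" || choice === "B") votes[choice] += 1;
  });
});

// Close the vote after a fixed window, broadcast the winning direction,
// then reset the tally for the next junction.
function closeVote(windowMs: number): void {
  setTimeout(() => {
    const winner: Choice = votes.A >= votes.B ? "A" : "B";
    const result = JSON.stringify({ winner, votes });
    for (const viewer of wss.clients) {
      viewer.send(result);
    }
    votes.A = 0;
    votes.B = 0;
  }, windowMs);
}

closeVote(30_000); // e.g. a 30-second voting window per decision point
```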

More images and video, including a recording of the full 42-minute session, can be found on the A/B project page here.

Thanks to Creative Applications for the link.

Open3D: Crowd-Sourced Distributed Curation of City Models

Open3D is a project by the Smart Geometry Processing Group in UCL’s Computer Science department. The project aims to provide tools for the crowd-sourcing of large-scale 3D urban models. It achieves this by giving users access to a basic 3D data set and providing an editor enabling them to amend the model and add further detail.

The model that users start with is created by vertically extruding 2D building footprints derived from OpenStreetMap or Ordnance Survey map data. Access to the resulting 3D model is provided through a viewer based on the Cesium JavaScript library for rendering virtual globes in a web browser. The interface allows users to select particular buildings to work on. As changes are made to the model with the Open3D editor they are parameterised behind the scenes. This means that each change becomes a variable in an underlying set of repeatable rules that form templates representing common objects such as different types of window or door. These templates can then be shared between users and reapplied to other similar buildings within the model. This helps facilitate collaboration between multiple users and speeds up model creation.
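As a minimal sketch of that starting model, assuming the footprints arrive as GeoJSON polygons with an optional height attribute (the file name and property name are assumptions for the example), the extrusion can be done client-side in a Cesium viewer:

```typescript
// Sketch: extruding 2D footprints into simple blocks with CesiumJS.
// The file name and `height` property are assumptions for illustration.
import * as Cesium from "cesium";

const viewer = new Cesium.Viewer("cesiumContainer");

// Load building footprints (e.g. derived from OpenStreetMap) as GeoJSON.
const dataSource = await Cesium.GeoJsonDataSource.load("footprints.geojson");
viewer.dataSources.add(dataSource);

for (const entity of dataSource.entities.values) {
  if (!entity.polygon) continue;
  // Extrude each footprint vertically, falling back to a nominal
  // 10 m block where no height attribute is present.
  const props = entity.properties?.getValue(Cesium.JulianDate.now());
  const height = props?.height ?? 10;
  entity.polygon.extrudedHeight = new Cesium.ConstantProperty(height);
}

viewer.zoomTo(dataSource);
```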

Crowd-sourcing 3D urban models is not new. As we saw in an earlier post on 3D Imagery in Google Earth, Google’s acquisition of SketchUp in 2006 enabled enthusiasts to model and texture 3D buildings directly from satellite imagery. These models could then be uploaded to the 3D Warehouse where they were curated by Google, who chose the best models for inclusion in their platform. Despite the enthusiasm of the user community there were limits to the speed of progress and the amount of coverage that could be achieved. In 2012 Google sold SketchUp to engineering company Trimble after adopting a more automated process relying on a combination of photogrammetry and computer vision techniques. We recently saw similar techniques being used by ETH Zurich in our last post on their project VarCity.

In this context the Open3D approach, which relies heavily on human intervention, may seem outdated. However, while the kinds of textured surface models that are created using automated photogrammetry look very good from a distance, closer inspection reveals all sorts of issues. The challenges involved in creating 3D models through photogrammetry include: (i) gaining sufficient coverage of the object; (ii) the need to use images taken at different times in order to achieve sufficient coverage; (iii) having images of sufficient resolution to obtain the required level of detail; (iv) the indiscriminate nature of the captured images, in the sense that they include everything within the camera’s field of view, regardless of whether it is intended for inclusion in the final model or not. Without manual editing or further processing this can result in noisy surfaces, with hollow, blob-like forms standing in for mobile or poorly defined structures and objects. The unofficial Google Earth Blog has done a great job of documenting such anomalies within the Google platform over the years. These include ghostly images and hollow objects, improbably deep rivers, drowned cities, problems with overhanging trees and buildings, and blobby people.

The VarCity project sought to address these issues by developing new algorithms and combining techniques to improve the quality of the surface meshes generated using aerial photogrammetry. For example, vehicle-mounted cameras were used in combination with tourist photographs to provide higher resolution data at street level. In this way the ETH Zurich team were able to improve the level of detail and considerably reduce noise in the building facades. Despite this the results of the VarCity project still have limitations. For example, with regard to their use in first-person virtual reality applications it could be argued that a more precisely modelled environment might better support a sense of presence and immersion for the user. While such a data set would be more artificial by virtue of the craft involved in its production, it would also appear less jarringly coarse and feel more seamlessly realistic.

In their own ways both VarCity and Open3D seek to reduce the time and effort required in the production of 3D urban models. VarCity uses a combination of methods and increasingly sophisticated algorithms to help reduce noise in the automated reconstruction of urban environments. Open3D, on the other hand, starts with a relatively clean data set and provides tools to enhance productivity while leveraging the human intelligence of users and their familiarity with the environment they are modelling to maintain a high level of quality. Hence, while the current output of Open3D may appear quite rudimentary compared to VarCity, it would improve through the effort of the system’s potential users.

Unlike the VarCity project, in which crowd-sourcing was effectively achieved by proxy through the secondary exploitation of tourist photos gathered via social media, Open3D seeks to engage a community of users through direct and voluntary citizen participation. In this regard Open3D faces a considerable challenge. In order to work the project needs to find an enthusiastic user group and engage them by providing highly accessible and enjoyable tools and features that lower the barrier to participation. To that end the Open3D team are collaborating with UCL’s Interaction Centre (UCLIC), who will focus on usability testing and adding new features. There is clearly an appetite for the online creation of 3D content, as is evident in the success of platforms like Sketchfab. Whether there is still sufficient enthusiasm for the bottom-up creation of 3D urban data sets without the influence of a brand like Google remains to be seen.

For more information on Open3D check out the Smart Geometry Processing Group page or have a look at the accompanying paper here.