Monthly Archives: June 2017

Point Cloud Gaming: Scanner Sombre in VR

Scanner Sombre is an exploration game which places the player in the depths of a pitch-black cave system with nothing to guide them except an experimental headset and a LiDAR-like sensor enabling them to see in the dark. I first saw Scanner Sombre back in April at the EGX Rezzed computer game trade fair. I immediately fell in love with the beautiful visual style, which renders the environment as a point cloud. The visual style links cleverly to the central game mechanic: the points representing the contours of the cave walls only appear through the player’s use of the scanning device, providing an eerily partial view of the environment.

Following the initial release for desktop PC in April, the game’s makers Introversion Software have just released a new VR version, now available on Steam for both Oculus Rift and HTC Vive. Having played the two I’d argue that players really have to try Scanner Sombre in VR to get the most out of the experience. Producer Mark Morris and designer Chris Delay touch on this in the following video, which discusses the process of transferring the desktop game to VR and the differences between the two. They also provide a very frank discussion of the factors contributing to the game’s poor sales relative to the runaway success of their earlier title Prison Architect.

One area that Mark and Chris discuss at length is narrative. The difficulty, as they describe it, lay in providing the player with sufficient motivation to play, and in the pressure they felt to fit a narrative to the experience part way through development. At the same time they are uncertain that a more developed narrative would have added anything. I’d tend to agree. The unusual visual style and game mechanic have a niche feel which some players will love and some will hate. I love the VR version of the game, but I can see how others might feel it is more of an extended demo.

While Scanner Sombre has not met the designers’ expectations for sales, I’ve found it a really enjoyable and atmospheric experience, particularly with the heightened sense of immersion provided by VR. If you’re interested in giving it a go you can currently pick it up for less than a fiver on Steam here.


Urban X-Rays: Wi-Fi for Spatial Scanning

Many of us in cities increasingly depend on Wi-Fi connectivity for communication as we go about our everyday lives. However, beyond providing for our mobile and wireless communication needs, the intentional or directed use of Wi-Fi also provides new possibilities for urban sensing.

In this video Professor Yasamin Mostofi from the University of California discusses research into the scanning or x-raying of built structures using a combination of drones and Wi-Fi transceivers. By transmitting a Wi-Fi signal from a drone on one side of a structure, and using a drone on the opposite side to receive and measure the strength of that signal, it is possible to build up a 3D image of the structure and its contents. This methodology has great potential in areas like structural monitoring for the built environment, archaeological surveying, and even emergency response, as outlined on the 3D Through-Wall Imaging project page.
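The imaging principle can be sketched as a simple tomography problem. The following is an illustration of the general idea only, not the group’s actual algorithm: treat each drone-to-drone measurement as the total signal attenuation along a straight line through a discretised scene, then solve the resulting linear system for the attenuation of each grid cell.

```python
import numpy as np

# Toy sketch of Wi-Fi tomographic imaging: each drone-to-drone measurement
# records the total signal attenuation along a straight line through the
# scene. Discretising the scene into a grid turns imaging into a linear
# inverse problem: losses = weights @ attenuation_per_cell.

def line_weights(grid_size, start, end, samples=200):
    """Approximate the length of a ray inside each grid cell by sampling."""
    w = np.zeros(grid_size * grid_size)
    ts = np.linspace(0.0, 1.0, samples)
    pts = np.outer(1 - ts, start) + np.outer(ts, end)
    seg = np.linalg.norm(np.asarray(end) - np.asarray(start)) / samples
    for x, y in pts:
        i, j = int(y), int(x)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            w[i * grid_size + j] += seg
    return w

def reconstruct(grid_size, rays, losses):
    """Least-squares estimate of per-cell attenuation from path-loss data."""
    A = np.vstack([line_weights(grid_size, s, e) for s, e in rays])
    x, *_ = np.linalg.lstsq(A, np.asarray(losses), rcond=None)
    return x.reshape(grid_size, grid_size)
```

With enough ray crossings, a strongly attenuating object shows up as a peak in the reconstructed grid, which is the basic mechanism behind seeing through walls with signal strength alone.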

Particularly with regard to emergency response, one can easily imagine the value of being able to identify people trapped or hiding within a structure. Indeed, Mostofi’s group has also researched the potential these techniques provide for monitoring humans in their Head Counting with WiFi project, as demonstrated in the next video.

What is striking is that this technique enables individuals to be counted without themselves needing a Wi-Fi enabled device. Several potential uses are proposed which are particularly relevant to urban environments:

For instance, heating and cooling of a building can be better optimized based on learning the concentration of the people over the building. Emergency evacuation can also benefit from an estimation of the level of occupancy. Finally, stores can benefit from counting the number of shoppers for better business planning.

Given that WiFi networks are available in many buildings, we envision that they can provide a new way for occupancy estimation, in addition to cameras and other sensing mechanisms. In particular, its potential for counting behind walls can be a nice complement to existing vision-based methods.
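The research itself uses far richer probabilistic models, but the underlying intuition is simple: occupants both attenuate the signal (lowering mean received strength) and scatter it as they move (raising its variance). A toy sketch of that intuition, with entirely invented thresholds and an assumed baseline reading for an empty room:

```python
import statistics

# Toy illustration of device-free occupancy sensing: people lower the mean
# RSSI (attenuation) and raise its variance (motion-induced scattering).
# The baseline value and the additive score are invented for this example;
# the actual research uses proper probabilistic models.

def occupancy_hint(rssi_samples, empty_mean_dbm=-40.0):
    """Crude occupancy score from a window of RSSI readings (in dBm)."""
    attenuation = empty_mean_dbm - statistics.fmean(rssi_samples)
    variability = statistics.pstdev(rssi_samples)
    return attenuation + variability  # higher score -> likely more occupants
```

A window of readings from a busy room scores higher than one from an empty room, which is all a scheme like this needs in order to distinguish coarse occupancy levels for heating, evacuation, or footfall estimates.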

I’m fascinated by the way experiments like this reveal the hidden potentials already latent within many of our cities. The roll out of citywide Wi-Fi infrastructure provides the material support for an otherwise invisible electromagnetic environment designers Dunne & Raby have called ‘Hertzian Space’. By finding new ways to sense the dynamics of this space, cities can tap in to these resources and exploit new potentialities, hopefully for the benefit of both the city and its inhabitants.

Thanks to Geo Awesomeness for posting the drone story here.

Open3D: Crowd-Sourced Distributed Curation of City Models

Open3D is a project by the Smart Geometry Processing Group in UCL’s Computer Science department. The project aims to provide tools for the crowd-sourcing of large-scale 3D urban models. It achieves this by giving users access to a basic 3D data set and providing an editor enabling them to amend the model and add further detail.

The model that users start with is created by vertically extruding 2D building footprints derived from OpenStreetMap or Ordnance Survey map data. Access to the resulting 3D model is provided through a viewer based on Cesium, a JavaScript library for rendering virtual globes in a web browser. The interface allows users to select particular buildings to work on. As changes are made to the model with the Open3D editor they are parameterised behind the scenes. This means that each change becomes a variable in an underlying set of repeatable rules that form templates representing common objects such as different types of window or door. These templates can then be shared between users and reapplied to other similar buildings within the model. This helps facilitate collaboration between multiple users and speeds up model creation.
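The footprint-extrusion step that produces the base model is straightforward to picture in code. This is a minimal sketch of the general technique, not Open3D’s actual implementation: take a 2D polygon and lift it into a prism with two caps and one quad per wall edge.

```python
# Minimal sketch of extruding a 2D building footprint into a 3D prism
# (illustrative only; not the Open3D project's code).

def extrude_footprint(footprint, height):
    """Turn a 2D polygon [(x, y), ...] into vertices and faces of a prism."""
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = bottom + top
    faces = [list(range(n)),            # bottom cap
             list(range(n, 2 * n))]     # top cap
    for i in range(n):                  # one quad wall per footprint edge
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces
```

A square footprint extruded this way yields eight vertices and six faces: the two caps plus four walls. Everything beyond this blocky starting point, such as windows, doors, and roof shapes, is what the parameterised templates are meant to capture.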

Crowd-sourcing 3D urban models is not new. As we saw in an earlier post on 3D Imagery in Google Earth, Google’s acquisition of SketchUp in 2006 enabled enthusiasts to model and texture 3D buildings directly from satellite imagery. These models could then be uploaded to the 3D Warehouse, where they were curated by Google, who chose the best models for inclusion in their platform. Despite the enthusiasm of the user community there were limits to the speed of progress and the amount of coverage that could be achieved. In 2012 Google sold SketchUp to engineering company Trimble after adopting a more automated process relying on a combination of photogrammetry and computer vision techniques. We recently saw similar techniques being used by ETH Zurich in our last post on their project VarCity.

In this context the Open3D approach, which relies heavily on human intervention, may seem outdated. However, while the kinds of textured surface models that are created using automated photogrammetry look very good from a distance, closer inspection reveals all sorts of issues. The challenges involved in creating 3D models through photogrammetry include: (i) gaining sufficient coverage of the object; (ii) the need to use images taken at different times in order to achieve sufficient coverage; (iii) having images of sufficient resolution to obtain the required level of detail; (iv) the indiscriminate nature of the captured images, in the sense that they include everything within the camera’s field of view, regardless of whether it is intended for inclusion in the final model or not. Without manual editing or further processing this can result in noisy surfaces, with hollow, blob-like forms for mobile or poorly defined structures and objects. The unofficial Google Earth Blog has done a great job of documenting such anomalies within the Google platform over the years. These include ghostly images and hollow objects, improbably deep rivers, drowned cities, problems with overhanging trees and buildings, and blobby people.

The VarCity project sought to address these issues by developing new algorithms and combining techniques to improve the quality of the surface meshes generated using aerial photogrammetry. For example, vehicle-mounted cameras were used in combination with tourist photographs to provide higher resolution data at street level. In this way the ETH Zurich team were able to considerably improve the level of detail and reduce noise in the building facades. Despite this, the results of the VarCity project still have limitations. For example, with regard to their use in first-person virtual reality applications, it could be argued that a more precisely modelled environment might better support a sense of presence and immersion for the user. While such a data set would be more artificial, by virtue of the manual modelling involved in its production, it would also appear less jarringly coarse and feel more seamlessly realistic.

In their own ways both VarCity and Open3D seek to reduce the time and effort required in the production of 3D urban models. VarCity uses a combination of methods and increasingly sophisticated algorithms to help reduce noise in the automated reconstruction of urban environments. Open3D, on the other hand, starts with a relatively clean data set and provides tools to enhance productivity, leveraging the human intelligence of users and their familiarity with the environment they are modelling to maintain a high level of quality. Hence, while the current output of Open3D may appear quite rudimentary compared to VarCity, it should improve through the efforts of the system’s potential users.

Unlike the VarCity project, in which crowd-sourcing was effectively achieved by proxy through the secondary exploitation of tourist photos gathered via social media, Open3D seeks to engage a community of users through direct and voluntary citizen participation. In this regard Open3D faces a considerable challenge. In order to work, the project needs to find an enthusiastic user group and engage them by providing highly accessible and enjoyable tools and features that lower the bar to participation. To that end the Open3D team are collaborating with the UCL Interaction Centre (UCLIC), who will focus on usability testing and adding new features. There is definitely an appetite for the online creation of 3D content, as is evident in the success of new platforms like Sketchfab. Whether there is sufficient enthusiasm for the bottom-up creation of 3D urban data sets without the influence of a brand like Google remains to be seen.

For more information on Open3D check out the Smart Geometry Processing Group page or have a look at the accompanying paper here.