Category Archives: Point Clouds

Point Cloud Gaming: Scanner Sombre in VR

Scanner Sombre is an exploration game that places the player in the depths of a pitch-black cave system with nothing to guide them except an experimental headset and a LiDAR-like sensor that lets them see in the dark. I first saw Scanner Sombre back in April at the EGX Rezzed computer game trade fair. I immediately fell in love with its beautiful visual style, which renders the environment as a point cloud. The visual style links cleverly to the central game mechanic: the points representing the contours of the cave walls only appear through the player’s use of the scanning device, providing an eerily partial view of the environment.

Following the initial release for desktop PC in April, the game’s makers Introversion Software have just released a new VR version, now available on Steam for both Oculus Rift and HTC Vive. Having played both, I’d argue that players really have to try Scanner Sombre in VR to get the most out of the experience. Producer Mark Morris and designer Chris Delay touch on this in the following video, which discusses the process of transferring the desktop game to VR and the differences between the two versions. They also provide a very frank discussion of the factors contributing to the game’s poor sales relative to the runaway success of their earlier title Prison Architect.

One area that Mark and Chris discuss at length is narrative. The difficulty, as they describe it, lay in giving the player sufficient motivation to play, and in the pressure they felt to fit a narrative to the experience part way through development. At the same time they are uncertain whether a more developed narrative would have added anything. I’d tend to agree. The unusual visual style and game mechanic have a niche feel which some players will love and some will hate. I love the VR version of the game, but I can see how others might feel it is more of an extended demo.

While Scanner Sombre has not met its designers’ expectations for sales, I’ve found it a really enjoyable and atmospheric experience, particularly with the heightened sense of immersion provided by VR. If you’re interested in giving it a go you can currently pick it up for less than a fiver on Steam here.

VarCity: 3D and Semantic Urban Modelling from Images

In this video we see the results of the five-year VarCity research project at the Computer Vision Lab, ETH Zurich. The aim of the project was to automatically generate 3D city models from photos, such as those openly available online via social media.

The VarCity system uses computer vision algorithms to analyse and stitch together overlapping photographs. Point clouds are created from the matched points and then used to generate a geometric mesh, or surface model. Other algorithms identify and tag different types of urban objects, such as streets, buildings, roofs, windows and doors. These semantic labels can then be used to query the model and automatically extract meaningful information about buildings and streets, as the video describes. In this way the VarCity project demonstrates one way in which comprehensive 3D city models could effectively be crowdsourced over time.
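
To make the point-cloud-to-mesh step concrete, here is a minimal sketch using the open-source Open3D library rather than VarCity’s own pipeline; the input file name is hypothetical and stands in for the point cloud produced by an earlier image-matching (structure-from-motion) stage.

# A minimal sketch of surface reconstruction from a point cloud using the
# open-source Open3D library. This is NOT the VarCity pipeline, just an
# illustration of the general technique; "city_points.ply" is a hypothetical
# file produced by an earlier photogrammetry stage.
import open3d as o3d

# Load the point cloud generated from overlapping photographs
pcd = o3d.io.read_point_cloud("city_points.ply")

# Surface reconstruction needs per-point normals, estimated from neighbours
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

# Poisson reconstruction fits a watertight surface to the oriented points;
# a higher depth gives finer geometry at greater memory cost
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("city_mesh.ply", mesh)

Semantic labelling of the resulting geometry is a separate machine-learning step and is not shown here.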

It is also interesting that VarCity uses computer vision to connect real-time video feeds and content from social media to actual locations, for example to estimate local vehicle and pedestrian traffic. As the video suggests, there may be limits to this method for determining urban dynamics across the whole city, as it depends on access to a suitably large number of camera feeds. This also has implications for privacy and surveillance, which the VarCity team address by showing representative simulated views in place of actual scenes. As such, the 3D modelling of urban regions can no longer be viewed as a neutral and purely technical enterprise.

The wider project covers four main areas of research:

  • Automatic city-scale 3D reconstruction
  • Automatic semantic understanding of the 3D city
  • Automatic analysis of dynamics within the city
  • Automatic multimedia production

A fuller breakdown of the VarCity project can be viewed in the video below.

The work on automatic 3D reconstruction is particularly interesting. A major difficulty with 3D city models has been the amount of manual effort required to create and update them through traditional 3D modelling workflows. One solution has been to procedurally generate such models using software such as ESRI’s CityEngine, in which preset rules are used to randomly determine the values of parameters like the height of buildings, the pitch of roofs, and the types of walls and doors. This is a great technique for generating fictional cities for movies and video games, but it has never been fully successful for modelling actually existing urban environments. The outputs of procedural models are only as good as their inputs: the complexity of the rules used to generate the geometry, but also the representational accuracy of assets such as street furniture models and building textures where these are applied.

Procedural generation also involves an element of randomness, which requires the application of constraints, such as the age of buildings in a specific area, to determine which types of street furniture and textures should be applied: newer districts are more likely to feature concrete and glass, whereas much older districts will largely consist of brick buildings. The more homogeneous an area is in terms of age and design, the easier it is to procedurally generate, especially if it is laid out in a grid (see the sketch below). Even so, there is always a need for manual adjustment, which takes considerable effort and may involve ground truthing. Using such methods for particularly heterogeneous cities like London is problematic, especially if regular updates are required to capture changes as they occur.
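
As a toy illustration of the rule-plus-constraint idea, the Python sketch below randomly picks building parameters within preset ranges, constrained by a district’s age. It is not CityEngine’s CGA grammar, and every name in it is hypothetical.

# A toy sketch of rule-based procedural generation in the spirit described
# above. All names are hypothetical; this simply shows how preset rules plus
# randomness, constrained by district age, might pick building parameters.
import random

MATERIALS_BY_ERA = {
    "victorian": ["brick", "stone"],
    "modern": ["concrete", "glass"],
}

def generate_building(district_era):
    """Choose parameters at random within rule-given ranges,
    constrained by the age of the district."""
    victorian = district_era == "victorian"
    return {
        # Older districts get low-rise buildings, newer ones tall blocks
        "height_m": random.uniform(6, 12) if victorian else random.uniform(20, 80),
        # Pitched roofs for older stock, flat roofs for modern blocks
        "roof_pitch_deg": random.uniform(30, 50) if victorian else 0.0,
        "material": random.choice(MATERIALS_BY_ERA[district_era]),
    }

# A homogeneous, grid-style block is easy to generate because every lot
# shares the same era constraint; mixed districts need far more rules
block = [generate_building("modern") for _ in range(10)]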

For my own part I’m currently looking at the processing of point cloud data, so it will be fascinating to read the VarCity team’s research papers, available here.

Pointerra: Points in the Cloud

Pointerra New York

Pointerra are an Australian geospatial start-up offering point cloud and LiDAR data as a service. Their platform, deployed on Amazon Web Services, enables online visualisation of massive point clouds in 3D via a standard browser.

The U.S. Geological Survey point cloud of New York visualised above contains a massive 3.1 billion points. These can be navigated in 3D, viewed with or without a base map, and coloured by intensity, classification or height, as depicted here. Quality settings can be adjusted to speed up render times; even on the highest setting the point cloud updates in a matter of seconds on our rig.
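
For readers who want to experiment with this kind of height-based colouring locally, here is a small sketch using the open-source Open3D library and a matplotlib colour ramp. It is not Pointerra’s own API, and the input file name is hypothetical.

# A sketch of colouring a point cloud by height (elevation), done locally
# with open-source tools rather than Pointerra's platform. "tile.ply" is a
# hypothetical input file; LAS/LAZ tiles would need converting first.
import numpy as np
import open3d as o3d
from matplotlib import cm

pcd = o3d.io.read_point_cloud("tile.ply")
pts = np.asarray(pcd.points)

# Normalise elevation (z) to [0, 1] and map it through a colour ramp
z = pts[:, 2]
t = (z - z.min()) / (z.max() - z.min())
pcd.colors = o3d.utility.Vector3dVector(cm.viridis(t)[:, :3])  # drop alpha

o3d.visualization.draw_geometries([pcd])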

Pointerra St Pauls

This second Pointerra example, St Paul’s Cathedral in London, is visualised with RGB values. Being able to view point clouds like this on the web is great. With their plans to be “the Getty Images of 3D data”, as reported by The Australian, it will be interesting to see how the platform develops and what features get added over time. The full platform isn’t live yet, but you can try it here today.