Tag Archives: 3D

Urban X-Rays: Wi-Fi for Spatial Scanning

Many of us in cities increasingly depend on Wi-Fi connectivity for communication as we go about our everyday lives. However, beyond meeting our mobile and wireless communication needs, the intentional or directed use of Wi-Fi also opens up new possibilities for urban sensing.

In this video Professor Yasamin Mostofi from the University of California, Santa Barbara discusses research into the scanning or x-raying of built structures using a combination of drones and Wi-Fi transceivers. By transmitting a Wi-Fi signal from a drone on one side of a structure, and using a drone on the opposite side to receive and measure the strength of that signal, it is possible to build up a 3D image of the structure and its contents. This methodology has great potential in areas like structural monitoring for the built environment, archaeological surveying, and even emergency response, as outlined on the 3D Through-Wall Imaging project page.
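
To give a flavour of how such measurements can become an image: if each Wi-Fi reading is treated as the extra signal loss accumulated along the straight line between transmitter and receiver, then many crossing readings define a linear system that can be inverted for a per-cell attenuation map. The Python sketch below is only a toy illustration of that idea, with a grid size, noise level and basic least-squares solver that are entirely my own simplifying assumptions; the published research relies on far more sophisticated signal processing and modelling.

    import numpy as np

    GRID = 20        # reconstruct a GRID x GRID attenuation map (size chosen arbitrarily)
    CELL = 1.0       # cell size in metres (assumed)

    def ray_lengths(tx, rx, samples=200):
        """Approximate how far the straight TX->RX ray travels inside each
        grid cell by stepping along the segment (crude, but fine for a toy)."""
        lengths = np.zeros(GRID * GRID)
        tx, rx = np.asarray(tx, float), np.asarray(rx, float)
        step = np.linalg.norm(rx - tx) / samples
        for t in np.linspace(0.0, 1.0, samples, endpoint=False):
            px, py = tx + t * (rx - tx)
            i, j = int(px // CELL), int(py // CELL)
            if 0 <= i < GRID and 0 <= j < GRID:
                lengths[i * GRID + j] += step
        return lengths

    # Simulated "ground truth": empty space containing one attenuating slab (a wall).
    truth = np.zeros((GRID, GRID))
    truth[8:12, 5:15] = 0.5   # dB of extra loss per metre, invented for illustration

    # Gather noisy path-loss measurements from TX/RX pairs on opposite sides of
    # the area, mimicking two drones sweeping either side of a structure.
    rng = np.random.default_rng(0)
    rows, losses = [], []
    for _ in range(600):
        tx = (rng.uniform(0, GRID), 0.0)          # transmitter on one side
        rx = (rng.uniform(0, GRID), float(GRID))  # receiver on the opposite side
        a = ray_lengths(tx, rx)
        rows.append(a)
        losses.append(a @ truth.ravel() + rng.normal(0.0, 0.05))

    A, b = np.vstack(rows), np.asarray(losses)
    # Regularised least squares stands in for the far more sophisticated
    # reconstruction used in the actual research.
    recon = np.linalg.solve(A.T @ A + 1e-2 * np.eye(GRID * GRID), A.T @ b)
    print("mean reconstruction error:", np.abs(recon.reshape(GRID, GRID) - truth).mean())

The regularisation term matters here because rays crossing from only two sides leave the system poorly conditioned; it is a stand-in for the richer measurement geometries and priors used in the real experiments.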

Particularly with regard to emergency response, one can easily imagine the value of being able to identify people trapped or hiding within a structure. Indeed Mostofi’s group has also researched the potential these techniques offer for monitoring humans in their Head Counting with WiFi project, as demonstrated in the next video.

What is striking is that this technique enables individuals to be counted without themselves needing a Wi-Fi enabled device. Several potential uses are proposed which are particularly relevant to urban environments:

For instance, heating and cooling of a building can be better optimized based on learning the concentration of the people over the building. Emergency evacuation can also benefit from an estimation of the level of occupancy. Finally, stores can benefit from counting the number of shoppers for better business planning.

Given that WiFi networks are available in many buildings, we envision that they can provide a new way for occupancy estimation, in addition to cameras and other sensing mechanisms. In particular, its potential for counting behind walls can be a nice complement to existing vision-based methods.

I’m fascinated by the way experiments like this reveal the hidden potentials already latent within many of our cities. The roll-out of citywide Wi-Fi infrastructure provides the material support for an otherwise invisible electromagnetic environment that the designers Dunne & Raby have called ‘Hertzian Space’. By finding new ways to sense the dynamics of this space, cities can tap into these resources and exploit new potentialities, hopefully for the benefit of both the city and its inhabitants.

Thanks to Geo Awesomeness for posting the drone story here.

The Art & Science of 3D Cities at the Transport Systems Catapult

Back in March I attended a day-long workshop at the Transport Systems Catapult (TSC) in Milton Keynes on the subject of ‘The Barriers to Building 3D Synthetic Environments’. The aim of the workshop was to bring together key SMEs and academics to collaboratively identify challenges and discuss solutions for the creation of virtual environments suitable for simulating and testing transport scenarios.

Alongside presentations from the Transport Systems, Future Cities and Satellite Applications catapults, a number of SMEs also presented on topics as diverse as LiDAR data capture, GNSS positioning, 3D GIS and the use of GIS data in game engines. For my purposes the following talk on ‘The Art & Science of 3D Cities’ by Elliot Hartley of Garsdale Design was particularly interesting and raised a number of great points:

One of the key challenges for the generation and use of 3D data discussed by Elliot derives from the heightened expectations created by the depiction of 3D urban environments in films, video games and Google Earth. The truth is that the creation of these kinds of environments requires considerable investment of both time and money. Elliot’s talk poses key questions for stakeholders embarking on a 3D project:

  • Why do you want a 3D model?
  • Do you actually need a 3D model?
  • What kind of 3D model do you want?
  • What 3D model do you actually need?
    • Small areas with lots of detail?
    • Large areas with little detail?
  • How much time and/or money do you have?
  • Will you want to publish the model?
  • What hardware and software do you have?
  • What’s the consequence of getting the model wrong?

While the primary focus of the day was the practical and technical challenges of creating 3D environments, the further implication of Elliot’s discussion is that the use of 3D data and the creation of virtual environments can no longer be considered a purely technical activity with neutral products and outputs. For me the last question in particular foregrounded the stakes involved in moving beyond visualisation toward the growing use of 3D data in various forms of analysis. Thanks to Elliot for the stimulating talk.

After the presentations we had a tour of the TSC facilities and then broke into working groups to discuss a number of themes. A report and summary is expected to be published by the TSC soon.

A Brief History of Google Maps…and a not so Brief Video

In this long but useful presentation from 2012, Google Maps vice president Brian McClendon and colleagues provide a detailed overview of the platform’s evolution. Some of the key points are summarised below.

In the mid-90s Silicon Graphics developed the ‘Space-to-Your-Face’ demo to demonstrate the power of their Onyx InfiniteReality graphics workstation. In the demo the view zooms from orbit down to the Matterhorn via Lake Geneva, using a combination of satellite imagery, aerial imagery and terrain data. This is included in the Silicon Graphics showreel from 1996, which can be viewed on YouTube here.

In 2001 the company Keyhole was founded as a startup providing mapping for the travel and real estate industries on a subscription model. After achieving wider recognition through its use by CNN during the invasion of Iraq in 2003, the company was acquired by Google in 2004.

At the same time Google were working on the creation of Google Maps, which used a combination of client-side processing via AJAX and pre-rendered map tiles to enable its highly interactive, smooth-scrolling ‘slippy map’ system. However, now that network bandwidth and processing power have increased, Google Maps tiles are no longer pre-rendered and are instead generated on demand.
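
For anyone curious about the arithmetic behind those tiles: the scheme Google Maps popularised divides the Web Mercator projection into a pyramid of 256×256-pixel tiles, doubling the number of tiles along each axis at every zoom level. A minimal Python sketch of the standard tile-index calculation (the same formula used by most slippy-map implementations) looks like this:

    import math

    def latlon_to_tile(lat_deg, lon_deg, zoom):
        """Convert a WGS84 latitude/longitude to slippy-map tile indices
        (x, y) at the given zoom level in the Web Mercator tiling scheme."""
        n = 2 ** zoom  # number of tiles along each axis at this zoom level
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
        return x, y

    # e.g. the tile covering central London at zoom level 12
    print(latlon_to_tile(51.5074, -0.1278, 12))

Serving a pre-rendered pyramid indexed this way is what made the original smooth panning possible: the browser only ever requests the handful of small images adjacent to the current view.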

Between 2005 and 2008 Google Maps licensed further data to obtain a full world map with more comprehensive coverage. At the same time Google were also working to acquire high resolution imagery.

Street View started in five US cities in 2007 but had expanded to 3,000 cities in 39 countries by 2012. In 2008 Google released Map Maker to capture data in places where basic mapping data and Street View coverage were absent.

Google’s Ground Truth project now enables them to generate their own maps from raw data by combining satellite and aerial imagery with road data and information captured via Street View. This data is processed with an internally developed application called ‘Atlas’. With the aid of advanced computer vision techniques they are able to detect and correct errors and extract further contextual information from the raw imagery, helping to make their maps more complete and accurate. This includes details as specific as the names of streets and businesses appearing on signs.
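
Atlas itself is proprietary and the details of Google’s vision pipeline are not public, but the general idea of pulling sign text out of street-level photographs can be loosely illustrated with an off-the-shelf OCR library. In the sketch below the image file name is hypothetical and the approach is only a rough stand-in for the far more capable systems described in the talk:

    from PIL import Image   # pip install pillow
    import pytesseract      # pip install pytesseract (requires the Tesseract engine)

    # Hypothetical street-level photograph containing a street-name sign.
    image = Image.open("street_scene.jpg")

    # Run OCR over the whole frame and keep any non-empty lines of text as
    # candidate sign contents (street names, business names, etc.).
    text = pytesseract.image_to_string(image)
    candidates = [line.strip() for line in text.splitlines() if line.strip()]
    print(candidates)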

Corrections are also crowd-sourced from users with the aid of their ‘Report Maps Issue’ feature. Staff at Google are then able to verify the issue with Street View, edit the map and publish the corrections within minutes.

The presentation moves on to further discussions on ‘Google Maps For Good’ and their work with NGOs (19:20), ‘Google Maps for Mobile’ and the provision of offline map availability (27:35), the evolution of the equipment used to capture Street View (31:30), and finally the evolution of their 3D technology (37:40). The final discussion in particular reiterates the content in my post yesterday from a slightly different perspective.

What I found particularly interesting in this video was not only the continued manual intervention via Atlas but also the extent to which they are able to gather contextual information from Street View imagery.