A Brief History of Google Maps… and a Not-So-Brief Video

In this long but useful presentation from 2012, Google Maps vice president Brian McClendon and colleagues provide a detailed overview of the platform's evolution. Some of the key points are summarised below.

In the mid 1990s Silicon Graphics developed the ‘Space-to-Your-Face’ demo to demonstrate the power of their Onyx InfiniteReality graphics workstation. In the demo the view zooms from orbit down to the Matterhorn via Lake Geneva, using a combination of satellite imagery, aerial imagery and terrain data. It is included in the Silicon Graphics showreel from 1996, which can be viewed on YouTube here.

In 2001 the company Keyhole was founded as a startup providing mapping for the travel and real estate industries on a subscription basis. After achieving wider recognition through its use by CNN during the 2003 invasion of Iraq, the company was acquired by Google in 2004.

At the same time Google were working on the creation of Google Maps, which used a combination of client-side processing via AJAX and pre-rendered map tiles to enable its highly interactive, smooth-scrolling ‘slippy map’ system. However, now that network bandwidth and processing power have increased, Google Maps tiles are no longer pre-rendered and are instead generated on demand.
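Google have not published the details of their original tile scheme, but the now-standard Web Mercator ‘XYZ’ tiling gives a feel for how a slippy map turns a coordinate and zoom level into a tile request. The function below is a minimal sketch of that general scheme, not Google's own implementation.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert a WGS84 coordinate to Web Mercator (XYZ) tile indices.

    At zoom level z the world is divided into 2**z by 2**z tiles, so each
    additional zoom level quadruples the number of tiles required.
    """
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# The tile a client would request for Trafalgar Square at zoom level 15
print(latlon_to_tile(51.5080, -0.1281, 15))
```

The quadrupling of tile counts at each zoom level helps explain why serving tiles on demand becomes attractive once servers are fast enough: pre-rendering the entire world at high zoom levels is an enormous amount of work for tiles that may never be viewed.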

Between 2005 and 2008 Google Maps licensed further data to obtain a full world map with more comprehensive coverage. At the same time Google were also working to acquire high-resolution imagery.

Street View started in five US cities in 2007 but had expanded to 3,000 cities in 39 countries by 2012. In 2008 Google released Map Maker to crowd-source data in places where basic mapping data and Street View coverage were absent.

Google’s Ground Truth project now enables them to generate their own maps from raw data by combining satellite and aerial imagery with road data and information captured via Street View. This data is processed with an application called ‘Atlas’ that Google developed internally. With the aid of advanced computer vision techniques they are able to detect and correct errors and extract further contextual information from the raw imagery, helping to make their maps more complete and accurate. This includes details as specific as the names of streets and businesses appearing on signs.
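Google's Atlas application and vision pipeline are internal, so the snippet below is purely illustrative: it shows how an off-the-shelf OCR engine (the open-source Tesseract, via pytesseract) can pull candidate text such as street or business names from a street-level photo. The file name sign.jpg is a hypothetical example, not anything from Google's pipeline.

```python
# Illustrative only: extract candidate text (e.g. street or business names)
# from a street-level photo using the open-source Tesseract OCR engine.
from PIL import Image
import pytesseract

image = Image.open("sign.jpg")          # hypothetical example image
text = pytesseract.image_to_string(image)

# Keep only plausible name-like lines for human review
candidates = [line.strip() for line in text.splitlines() if len(line.strip()) > 3]
print(candidates)
```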

Corrections are also crowd-sourced from users with the aid of their ‘Report Maps Issue’ feature. Staff at Google are then able to verify the issue with Street View, edit the map and publish the corrections within minutes.

The presentation moves on to further discussions of ‘Google Maps For Good’ and their work with NGOs (19:20), ‘Google Maps for Mobile’ and the provision of offline map availability (27:35), the evolution of the equipment used to capture Street View (31:30), and finally the evolution of their 3D technology (37:40). The final discussion in particular reiterates the content of yesterday's post from a slightly different perspective.

What I found particularly interesting in this video was both the continued manual intervention via Atlas and the extent to which they are able to gather contextual information from Street View imagery.

3D Imagery in Google Earth

Since 2006 Google Earth has included textured 3D building models for urban areas. Initially these were crowd-sourced from enthusiastic members of the user community who modelled them by hand with the aid of SketchUp (sold to Trimble in 2012) or the simpler Google Building Maker (retired in 2013). As the video above shows, from 2012 onward Google have instead been using aerial imagery captured at a 45-degree angle and employing photogrammetry to automate the generation of 3D building and landscape models. In the following video from the Nat and Friends YouTube channel, Google employees help explain the process.

As explained in the video, Google Earth's digital representation of the world is created with the aid of different types of imagery. For the global view, 2D satellite imagery is captured from above and wrapped around Google Earth's virtual globe. The 3D data that appears when users zoom in to the globe is captured via aircraft.

Each aircraft carries five cameras. One faces directly downward while the others are aimed to the front, back, left and right of the plane at a 45-degree angle. By flying in a stripe-like pattern and taking consecutive photos with multiple cameras, the aircraft captures each location it passes from multiple directions. However, the need to obtain cloud-free images means that multiple flights are required, so the images captured for any single location may be taken days apart. The captured imagery is colour corrected to account for different lighting conditions, and for some areas even finer details like cars are removed.

The process of photogrammetry as employed by Google works by combining the different images of a location to generate a 3D geometric surface mesh. Computer vision techniques are used to identify common features within the different images so that they can be aligned. A GPS receiver on the aircraft also records the position from which each photograph was taken, enabling the calculation of the distance between the camera on the plane and any given feature within the photograph. This facilitates the creation of depth maps, which can be stitched together using the common features identified earlier to form a combined geometric surface mesh. The process is completed by texturing the mesh with the original aerial imagery. For regularly shaped objects like buildings this can be done very accurately with the aid of edge detection algorithms, which identify the edges of buildings in the imagery and help align them with the edges of features in the mesh. For organic structures this is more challenging.
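Google's photogrammetry pipeline is proprietary, but the feature-matching step described above can be illustrated with a general-purpose computer vision library such as OpenCV: detect keypoints in two overlapping aerial photos and match them so the images can be aligned before depth estimation. The file names below are hypothetical, and this is only a sketch of the general technique.

```python
# Sketch of the feature-matching step used to align overlapping images
# prior to depth estimation. Not Google's pipeline; file names are examples.
import cv2

img_a = cv2.imread("aerial_pass_1.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("aerial_pass_2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)               # keypoint detector/descriptor
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force matching with cross-checking keeps only mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# The strongest correspondences can then be used to estimate the relative
# camera geometry and, ultimately, per-pixel depth maps.
print(f"{len(matches)} candidate correspondences found")
```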

Google Earth includes imagery for many different levels of detail, or zoom. According to the video the number of images required is staggering, on the order of tens of millions. While the zoomed-out global view in Google Earth is only fully updated once every few years, the aerial imagery for particular urban areas may be updated in under a year. Gathered over time, this imagery enables users to observe changes, and this can be leveraged for analysis with the aid of Google's Earth Engine.
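As a small illustration of the kind of change-over-time analysis Earth Engine supports, the snippet below counts how many Landsat 8 scenes cover a single point over one year. It assumes an authenticated Earth Engine account, and the collection id, coordinates and dates are only examples.

```python
# Minimal Earth Engine sketch: count Landsat 8 scenes over central London
# in 2015. Assumes the Earth Engine Python API is installed and authenticated.
import ee

ee.Initialize()
point = ee.Geometry.Point([-0.1276, 51.5072])        # longitude, latitude
scenes = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
          .filterBounds(point)
          .filterDate("2015-01-01", "2016-01-01"))
print(scenes.size().getInfo(), "scenes found")
```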

Pointerra: Points in the Cloud

Pointerra New York

Pointerra are an Australian geospatial start-up offering point cloud and LiDAR data as a service. Their platform, which is deployed on Amazon Web Services, enables online visualisation of massive point clouds in 3D via a standard browser.

The U.S. Geological Survey point cloud of New York visualised above has a massive 3.1 billion points. These can be navigated in 3D, viewed with or without a base map, and visualised by intensity, classification or height, as depicted here. Quality settings can be adjusted to speed up render times. Even on the highest setting the point cloud updates in a matter of seconds on our rig.
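Pointerra's backend isn't public, but the attributes it can colour points by (height, intensity and classification) are standard fields in LAS/LAZ LiDAR files. The sketch below uses the open-source laspy library to read such a file and summarise it by those attributes; tile.las is a hypothetical file name.

```python
# Illustrative only: read a LAS point cloud and summarise the attributes a
# viewer like Pointerra can colour by. 'tile.las' is a hypothetical file.
import laspy
import numpy as np

las = laspy.read("tile.las")
print(f"{len(las.points):,} points")

# Height bands, e.g. for colouring points by elevation
counts, edges = np.histogram(las.z, bins=10)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:8.1f} - {hi:8.1f} m : {int(c):,} points")

# Classification codes (ground, building, vegetation, ...) as defined by ASPRS
codes, freqs = np.unique(np.asarray(las.classification), return_counts=True)
print(dict(zip(codes.tolist(), freqs.tolist())))
```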

Pointerra St Pauls

This second Pointerra example, of St Paul's Cathedral in London, is visualised with RGB values. Being able to view point clouds of this scale in a web browser is great. With their plans to be “the Getty Images of 3D data”, as reported by The Australian, it will be interesting to see how the platform develops and what features get added over time. The platform hasn't fully launched yet, but you can try it here today.

Drones for Participatory Planning with Flora Roumpani

In this recent video from AMD's RADEON Creator series, fellow CASA PhD candidate Flora Roumpani discusses her involvement in the UCL Development Planning Unit's ReMap Lima project.

The project sought to map the favelas on the outskirts of Peru's capital, Lima, in order to help the communities living there better understand the planning challenges they face and participate more effectively in the informal local planning processes, which tend to rely on short-term, ad hoc responses that risk creating new problems for every one solved.

The project involved flying drones over Lima and turning the captured data and imagery into digital maps and 3D models that could be used for further analysis and communication. By creating fly-through visualisations and 3D-printed models that could be shared with the favela communities, Flora helped them to better understand and respond to the problems they were facing.

Read more about Flora's involvement here or check out the ReMap Lima project blog here.

Exploring London’s Underworld in VR

In collaboration with academic and urban explorer Bradley Garrett, the Guardian have released this new interactive virtual reality tour of London's sewers. The tour has been specially created for Google Daydream. However, there is also an interactive demo for your desktop web browser. For more information check out the Guardian VR page.

CleanSpace: Mapping Air Pollution in London

Today I received a personal air quality sensor, the CleanSpace sensor tag. The device is a carbon monoxide (CO) sensor designed to be carried by the user and paired with the CleanSpace Android or iOS app via Bluetooth. While the sensor takes readings, the app provides real-time feedback to the user on local air quality. It also pushes the anonymised sensor readings to a cloud server, which aggregates them to create a map of air quality in London.
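CleanSpace haven't published how their cloud service aggregates readings, but the general idea of turning many GPS-tagged CO readings into a map can be sketched by averaging readings per grid cell. Everything below, including the field names and cell size, is a hypothetical illustration rather than the actual backend.

```python
# Hypothetical sketch: average GPS-tagged CO readings into coarse grid cells
# so they can be mapped. Not CleanSpace's actual backend or data model.
from collections import defaultdict

def aggregate(readings, cell_size=0.005):
    """Average CO readings (ppm) per lat/lon grid cell of ~0.005 degrees."""
    cells = defaultdict(list)
    for r in readings:
        key = (round(r["lat"] / cell_size) * cell_size,
               round(r["lon"] / cell_size) * cell_size)
        cells[key].append(r["co_ppm"])
    return {cell: sum(values) / len(values) for cell, values in cells.items()}

sample = [
    {"lat": 51.5246, "lon": -0.1340, "co_ppm": 1.2},
    {"lat": 51.5249, "lon": -0.1337, "co_ppm": 0.9},
    {"lat": 51.5615, "lon": -0.0702, "co_ppm": 2.1},
]
print(aggregate(sample))
```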

As well as providing data for analytics, the app is intended to encourage behaviour change. It does this by rewarding users with ‘CleanMiles’ for every journey made on foot or by bike. These CleanMiles can then be exchanged for rewards with CleanSpace partner companies and retailers.

Another interesting aspect of the project is that the sensor tag is powered using Drayson Technologies' Freevolt. This enables the device to harvest radio frequency (RF) energy from wireless and broadcast networks, including 3G, 4G and WiFi. In theory this means the device can operate continually without needing its batteries recharged because it draws energy directly from its environment. In this way the CleanSpace tag provides a perfect test bed for Drayson's method of powering low-energy IoT devices.

The project kicked off with a campaign on Crowdfunder last autumn which raised £103,136 in 28 days. The campaign was initiated shortly after the announcement of results from a study at King's College which found that nearly 9,500 deaths per year could be attributed to air pollution. Two pollutants in particular were found to be responsible: fine PM2.5 particles from vehicle exhaust, along with toxic nitrogen dioxide (NO2) gas released through the combustion of diesel fuel on city streets. While the CleanSpace tag does not measure PM2.5 or NO2 directly, it is believed that recorded levels of CO can provide a suitable surrogate for other forms of air pollution given their shared source in vehicle fuel emissions.

While the UK government are under pressure to clean up air pollution from the top down, Lord Drayson, who leads the CleanSpace project, argues that there is also a need for a complementary response from the bottom up:

“I think the effect of air pollution is still relatively underappreciated and there is work to do in raising awareness of the impact it has.”

“Yes, the government has a role to play, but this isn’t solely a government issue to tackle. The best way to achieve change, and for legislation and regulation to work, is for it to grow from and reflect the beliefs and behaviours of the general public as a whole.”

I'm looking forward to seeing what the device reveals about my own exposure to air pollution on my daily commute. It'll also be interesting to see how my contribution fits in with the broader map being built up by the CleanSpace user community. After collecting some data I'm keen to compare the app's output with the data collected by the London Air Quality Network based at King's College.

I'm a card-carrying walker. At the same time I'm struck by the paradox that every CleanMile walked or cycled is essentially a dirty mile for the user. I can see the device and app appealing massively to those who already walk and cycle and want to contribute to raising awareness of air pollution. However, with the sensor retailing at £49.99, the CleanMile rewards will have to be sufficiently compelling to encourage a wider base of new users to participate, especially if the project is expected to have a genuine impact on the way they commute. Of course, it has to start somewhere! It's an exciting challenge, so I'm looking forward to seeing how it goes.

Microsoft HoloLens: Hands On!

It's taken a while but I finally had my first hands-on look at Microsoft HoloLens last night. The demonstration was given as part of the London Unity Usergroup (LUUG) meetup, during a talk by Jerome Maurey-Delaunay of Neutral Digital about their initial experiences of building demos for the device with Unity. Neutral are a design and software consultancy whose portfolio includes work with cultural institutions such as the Tate and the V&A, engineering and aviation firms like Airbus, and architectural practices such as Zaha Hadid Architects, whom they are currently helping to develop virtual reality visualisation workflows.

During the break following the presentation I had my first chance to try the device out for myself. One of the great features of HoloLens is that it incorporates video capture straight out of the box. Although clips weren't taken on the night, these videos from the Neutral Digital Twitter stream provide a good indication of my experience when I tested it:

After using VR headsets like the Oculus Rift and HTC Vive, the first thing you notice about the HoloLens is how unencumbered you feel. Where VR headsets enclose the user's face to block out ambient light and heighten immersion in a virtual environment, the HoloLens is open, affording the user unhindered awareness of their surrounding [augmented] environment over which the virtual objects or ‘holograms’ are projected. The second thing you notice is that the HoloLens runs without a tether. Once applications have been transferred to the device it can be unplugged, leaving the user free to move about without worrying about tripping up or garrotting themselves.

Being able to see my surroundings also meant that I could easily talk face to face with Jerome and see the gestures he wanted me to perform in order to operate the device and manipulate the virtual objects it projected. Tapping forefinger and thumb together visualised the otherwise invisible virtual mesh that the HoloLens draws as a reference to anchor holograms to the user's environment. A projected aircraft could then be walked around and viewed from any angle. Alternatively, holding forefinger and thumb together while moving my hand would rotate the object in that direction instead.

Don't be fooled by the simplicity of these demos. The ability of HoloLens to project animated and interactive holograms that feel anchored to the user's environment is impressive. I found the headset comfortable and appreciated being able to see my surroundings and interact easily with the people around me. At the same time, I wouldn't say I felt immersed in the experience in the sense discussed with reference to virtual reality. The ability to interact through natural gestures helped focus my attention on the virtual objects I was seeing, but the actual field of view available for projection is not as wide as the video captures from the device might suggest.

As it stands I wouldn't mistake Microsoft's holograms for ‘real’ objects, but then I'm not convinced that this is what we should be aiming for with AR. While one of the prime virtues of virtual reality technologies like Oculus and Vive is their ability to provide a sense of ‘being there’, I see the strength of augmented reality technologies as lying elsewhere: in their potential for visualising complex information at the point of engagement, decision or action.

Kind thanks to Neutral Digital for sharing their videos via Twitter. Thanks also to the London Unity Usergroup meetup for arranging the talks and demo.