Tag Archives: Google

A Brief History of Google Maps…and a not so Brief Video

In this long but useful presentation from 2012, Google Maps vice president Brian McClendon and colleagues provide a detailed overview of the platform's evolution. Some of the key points are summarised below.

In the mid 90s Silicon Graphics developed the ‘Space-to-Your-Face’ demo to demonstrate the power of their Onyx Infinite Reality CGI workstation. In the demo the view zooms from orbit to the Matterhorn via Lake Geneva, using a combination of satellite imagery, aerial imagery and terrain data. This is included in the Silicon Graphics showreel from 1996 which can be viewed on YouTube here.

In 2001 the startup Keyhole was founded to provide subscription-based mapping for the travel and real estate industries. After achieving wider recognition through CNN's use of its software during the invasion of Iraq in 2003, the company was acquired by Google in 2004.

At the same time Google were working on the creation of Google Maps, which used a combination of client-side processing via AJAX and pre-rendered map tiles to enable its highly interactive, smooth-scrolling slippy map system. However, now that network bandwidth and processing power have increased, Google Map tiles are no longer pre-rendered and are instead provided on demand.

Between 2005 and 2008 Google Maps licensed further data to obtain a full world map with more comprehensive coverage. At the same time Google were also working to acquire high resolution imagery.

Street View started in five US cities in 2007 but had expanded to 3,000 cities in 39 countries by 2012. In 2008 Google released Map Maker to capture data in areas where basic map coverage and Street View were absent.

Google’s Ground Truth project now enables them to generate their own maps from raw data by combining satellite and aerial imagery with road data and information captured via Street View. This data is processed with an internally developed application called ‘Atlas’. With the aid of advanced computer vision techniques they are able to detect and correct errors and extract further contextual information from the raw imagery, which helps make their maps more complete and accurate. This includes details as specific as the names of streets and businesses appearing on signs.

Corrections are also crowd-sourced from users with the aid of their ‘Report Maps Issue’ feature. Staff at Google are then able to verify the issue with Street View, edit the map and publish the corrections within minutes.

The presentation moves on to further discussions on ‘Google Maps For Good’ and their work with NGOs (19:20), ‘Google Maps for Mobile’ and the provision of offline map availability (27:35), the evolution of the equipment used to capture Street View (31:30), and finally the evolution of their 3D technology (37:40). The final discussion in particular reiterates the content in my post yesterday from a slightly different perspective.

What I found particularly interesting in this video was the continued manual intervention via Atlas but also the extent to which they are able to gather contextual information from Street View imagery.


Google Project Tango

Back in the summer Virtual Architectures signed up to go on the waiting list for Google’s Project Tango development kit. The current 7″ development kits are powered by the NVIDIA Tegra K1 processor and have 4GB of RAM, 128GB of storage, a motion tracking camera, integrated depth sensing, and WiFi, BTLE and 4G LTE for wireless and mobile connectivity. Due to other exciting developments for Virtual Architectures we haven’t been able to take up the offer at this time. However, it’s such an exciting project we can’t resist sharing the details from the Project Tango website:

What is Project Tango?

As we walk through our daily lives, we use visual cues to navigate and understand the world around us. We observe the size and shape of objects and rooms, and we learn their position and layout almost effortlessly over time. This awareness of space and motion is fundamental to the way we interact with our environment and each other. We are physical beings that live in a 3D world. Yet, our mobile devices assume that the physical world ends at the boundaries of the screen.

The goal of Project Tango is to give mobile devices a human-scale understanding of space and motion.

– Johnny Lee and the ATAP-Project Tango Team

3D motion and depth sensing

Project Tango devices contain customized hardware and software designed to track the full 3D motion of the device, while simultaneously creating a map of the environment. These sensors allow the device to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.
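The key idea in the passage above is that each depth measurement arrives in the device's own frame of reference, and the tracked pose is what lets the device stitch measurements from different moments into one model. As a simplified, hypothetical sketch (a real system fuses full 6-DoF poses from the IMU and motion-tracking camera; here the pose is reduced to a yaw angle plus a position), points from successive frames can be transformed into a shared world frame like so:

```python
import math

def to_world(point_cam, yaw, position):
    """Rotate a camera-frame point by the device's yaw, then translate by the
    device's world position, yielding world-frame coordinates."""
    x, y, z = point_cam
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + position[0],
            s * x + c * y + position[1],
            z + position[2])

# Accumulate depth samples from two (made-up) frames into one world-frame cloud.
frames = [
    {"yaw": 0.0,         "pos": (0.0, 0.0, 0.0), "points": [(1.0, 0.0, 0.5)]},
    {"yaw": math.pi / 2, "pos": (0.0, 1.0, 0.0), "points": [(1.0, 0.0, 0.5)]},
]
cloud = []
for frame in frames:
    for p in frame["points"]:
        cloud.append(to_world(p, frame["yaw"], frame["pos"]))
```

The same camera-frame point lands in two different world positions because the device moved and turned between frames, which is exactly how separate depth snapshots accumulate into a single 3D model of the space.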

What could I do with it?

What if you could capture the dimensions of your home simply by walking around with your phone before you went furniture shopping? What if directions to a new location didn’t stop at the street address? What if you never again found yourself lost in a new building? What if the visually-impaired could navigate unassisted in unfamiliar indoor places? What if you could search for a product and see where the exact shelf is located in a super-store?

Imagine playing hide-and-seek in your house with your favorite game character, or transforming the hallways into a tree-lined path. Imagine competing against a friend for control over territories in your home with your own miniature army, or hiding secret virtual treasures in physical places around the world?

The Project Tango development kit provides excellent opportunities for new developments in architectural visualisation, Augmented Reality and games. It is also exciting to know that it integrates with the Unity game engine. We look forward to seeing what developers come up with.