David Amerland
Smart Machines and Machine Learning

Google’s Machine Learning Cracks World Recognition

How big is the world? From a human perspective, despite the fact that it is getting ‘smaller’ all the time thanks to increased connectivity and faster means of travel, the world remains massively big. Humans make it small only by approximating some things, glossing over others and creating abstractions that allow generalizations to emerge. For a machine, however, the answer to the question is rather more precise: the world is 26,000 squares big, each of which forms part of a grid.
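To make the grid idea concrete, here is a minimal sketch of how a point on Earth could be mapped to one of 26,000 numbered squares. The real system uses an adaptive grid (finer cells where more photos exist), so the uniform 130 × 200 layout below is purely illustrative, with row and column counts made up so the total matches the article's figure.

```python
# Illustrative only: map a latitude/longitude to a cell index in a
# simple uniform grid. The real grid is adaptive, not uniform; the
# 130 x 200 shape is a made-up choice giving exactly 26,000 cells.
def cell_index(lat, lon, rows=130, cols=200):
    """Return a single integer identifying the grid square for a point."""
    # Clamp coordinates into their valid ranges.
    lat = max(-90.0, min(90.0, lat))
    lon = max(-180.0, min(180.0, lon))
    # Scale each coordinate into a row/column, capping at the last cell.
    row = min(rows - 1, int((lat + 90.0) / 180.0 * rows))
    col = min(cols - 1, int((lon + 180.0) / 360.0 * cols))
    return row * cols + col

print(cell_index(48.8584, 2.2945))  # a point in central Paris -> 20101
```

Once every photo with a known location is assigned to a cell like this, "where was this taken?" stops being a search problem and becomes a classification problem: pick one of 26,000 labels.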

Big as that number may be, it is nothing for a neural net to examine, and that is why Google has been able to do the seemingly impossible: train a machine learning algorithm to look at a photograph, any photograph, and accurately guess where it was taken, even when the traditional contextual clues to a photograph’s location (landmarks, signs, flags or buildings) are missing. 

What may sound like magic (or might even feel a little creepy) is nothing more than a really smart way to use minimal computational power (at 377MB of memory the model could even run on a smartphone) to derive associative clues that lead to educated guesses governed by statistical probability. 
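"Educated guesses governed by statistical probability" can be sketched very simply: the model assigns a probability to every grid cell, and the location guess is just the most probable cell. The numbers below are made up for illustration.

```python
# Illustrative only: the model's output is a probability per grid cell;
# the "educated guess" is simply the cell with the highest probability.
def best_guess(cell_probs):
    """cell_probs: dict mapping cell index -> probability. Returns the
    most probable cell index."""
    return max(cell_probs, key=cell_probs.get)

probs = {714: 0.62, 26: 0.21, 1899: 0.17}  # made-up probabilities
print(best_guess(probs))  # prints 714
```

The probabilities themselves also carry useful information: a flat distribution means the photo could be almost anywhere, while a sharp peak means the visual clues point strongly at one place.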

Incremental Gains

This has been an incremental path for Google. In May 2015, Google researchers showed that the pictures we share are sufficiently data-dense for them to assemble time-lapse videos showing cityscapes and landscapes at different times of day. Then, in June 2015, we saw how the new Google Photos app excelled at mining the data found in pictures shared online, and around the same time the machine learning AI showed off its creative side as it applied itself to understanding the data it was being shown.

The thing about machine learning is that its improvement is exponential. It may spend a long time achieving only marginal gains but then, as it reaches a ‘breakthrough’ stage, it accelerates very quickly. 

This is the case here. 

By studying the images at hand from the ‘known’ universe, Google’s AI can estimate where any picture was taken, anywhere. 

Accuracy and Marketing

There are two things remaining to discuss right now: First, how accurate is it? Second, what is its impact? The two are linked. 

The accuracy is pretty impressive but still relatively low in percentage terms, with the current range running from 3.6% to 48% depending on what you want to know. A picture showing a man standing in front of a doorway holding a vase, for example, might make it easy for a machine to determine the country, but if you also ask “what street is this house on?” the likelihood of a correct answer will depend upon whether any other pictures from this photoset, or others very similar to it, have been publicly shared elsewhere. 
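That range from 3.6% to 48% reflects how tight a radius a guess must land inside: street-level correctness is a far harder target than continent-level. A minimal sketch of "correct within a radius", using great-circle distance (the city coordinates and radii below are illustrative, not the system's actual thresholds):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correct_within(guess, truth, radius_km):
    """A guess counts as correct if it lands within radius_km of the truth."""
    return haversine_km(*guess, *truth) <= radius_km

# A guess roughly 200 km off (Paris vs. Lille, approximate coordinates)
# passes at a country-scale radius but fails at a street-scale one.
guess, truth = (48.85, 2.29), (50.63, 3.06)
print(correct_within(guess, truth, 750))  # True  (country scale)
print(correct_within(guess, truth, 1))    # False (street scale)
```

The same guess, in other words, is simultaneously a hit and a miss depending on the question you ask of it, which is why a single accuracy number can never tell the whole story here.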

Obviously it will get better. 

This is good news for marketers (and for SEO). Imagine a picture captioned “this is the brand new table I made”, proudly posted by a craftsman on the web to promote his business. He knows nothing about EXIF data, geotags or SEO. All he knows, in fact, is that he loves making tables and wants to sell them to earn a living. In the ideal world of the very near future, posting that picture on a social media network, on his website (if he has one) or on his Google Plus profile would be all the action he has to take for the AI at Google’s core to pull all the relevant bits together, fast, and determine that this is a local craftsman we could benefit from when we say to our phone: “OK Google, local furniture makers.”

We are not there yet. Obviously. For every breakthrough in “Indexing the World’s Information” we also have fresh hurdles to overcome and new questions to answer. 

But for the moment, let’s just agree that this newfound ability of Google’s is awesome and that it raises fresh possibilities in terms of marketing and branding that did not exist before. It also makes it imperative that anyone with a website or a business understands how best to create data-density in their online footprint. (And here, SEO Help: 20 Semantic Search Steps to Help Your Business Grow is of direct, practical help.) 

Research Sources

Time-lapse Mining from Internet Photos
PlaNet – Photo Geolocation with Convolutional Neural Networks
Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

© 2017 David Amerland. All rights reserved