David Amerland
Google Translate Uses AI to Translate Languages it Didn't Know

Google’s AI Behaves Like A Star Trek Translator

In the 1967 Star Trek episode Arena, Captain Kirk is whisked off to a distant planet to battle a creature never before encountered by humans - a Gorn. Even during the somewhat uneven battle between Kirk and the, ahem, evil-looking reptilian creature, there is an instant translator that allows the two species, which have never met before, to talk to each other.

Now pause there for just one moment. This is not computer-aided translation in the traditional sense. A computerized device is used, for sure, but, for anyone thinking it through, that device is required to translate between one language and another it has never encountered before, so that a Gorn and a human can talk to each other.

Well, Google Translate did just that, teaching itself to translate between two languages it had not been taught and therefore didn’t know before. In my book on semantic search, Google Semantic Search, I described how semantic search basically acts like a Star Trek computer, using real-world ascribed attributes to generate entities it understands.

Attributes are important not just for recognizing familiar actions but also because a system can be taught to recognize action categories it has never seen before by transferring knowledge from known classes to unknown classes, with attributes acting as the bridge (this makes it possible, for example, to use machine learning to teach a computer to recognize an action film by transferring already-known attributes of action films). The technique is known as zero-shot learning, and zero-shot translation rests on the same principle - the one the Google Translate team used in their approach.
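To make the attribute-bridge idea concrete, here is a minimal, hypothetical sketch in Python. The classes, attribute names and numbers are all invented for illustration; the point is only that a category the system never trained on can still be recognized through attributes it shares with known categories.

```python
# Minimal sketch of attribute-based zero-shot recognition (hypothetical data).
# Known classes are described by attribute vectors; an unseen class can be
# recognized by comparing predicted attributes against its description,
# even though no training examples of it were ever shown.
import numpy as np

# Attribute descriptions: [has_explosions, has_car_chase, has_romance]
known_classes = {
    "action":  np.array([1.0, 1.0, 0.0]),
    "romance": np.array([0.0, 0.0, 1.0]),
}
unseen_classes = {
    # Never seen during training, but describable through the same attributes.
    "action-romance": np.array([1.0, 0.5, 1.0]),
}

def predict_attributes(clip_features):
    """Stand-in for a trained attribute predictor: here it simply passes the
    features through, as if each feature were already an attribute score."""
    return clip_features

def classify(clip_features, class_descriptions):
    """Assign the class whose attribute description is closest to the
    attributes predicted from the input."""
    attrs = predict_attributes(clip_features)
    return min(class_descriptions,
               key=lambda name: np.linalg.norm(attrs - class_descriptions[name]))

# A clip with explosions, some car chasing and a love story:
clip = np.array([0.9, 0.6, 0.8])
all_classes = {**known_classes, **unseen_classes}
print(classify(clip, all_classes))  # -> "action-romance", never seen in training
```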

To achieve it, Google switched Google Translate over to a neural network system called Google Neural Machine Translation (GNMT) back in September and then added an additional token into the mix. A token is an instance of a sequence of characters in some particular document that are grouped together as a useful semantic unit for processing. Tokenization is important because, beyond featuring in semantic search, it also comes with language-specific issues, which in turn rely on classifiers to resolve the question of how to identify a specific language.
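As a rough illustration of that additional token, here is a toy sketch. The `<2es>`-style token follows the convention described in Google's multilingual NMT paper, but the tokenizer below is a deliberately naive whitespace splitter, not the wordpiece model GNMT actually uses.

```python
# Sketch of the "additional token" idea: an artificial token naming the target
# language is prepended to the tokenized source sentence, so a single shared
# model learns to translate toward whatever language the token asks for.

def tokenize(sentence):
    """Naive whitespace tokenizer: splits a sentence into processing units."""
    return sentence.lower().split()

def prepare_input(sentence, target_lang):
    """Prepend the target-language token so one model can serve many language pairs."""
    return [f"<2{target_lang}>"] + tokenize(sentence)

print(prepare_input("Hello, how are you?", "es"))
# ['<2es>', 'hello,', 'how', 'are', 'you?']
print(prepare_input("Hello, how are you?", "ja"))
# ['<2ja>', 'hello,', 'how', 'are', 'you?']
```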

Zero-shot Translation at Google Translate  

All this is wildly exciting for three reasons:

First, this is the first time that zero-shot translation has been shown to work at the machine learning level.

Second, by mapping meaning from one language to another, GNMT must be mapping semantic representations (read: language patterns and attributes) instead of direct phrase-to-phrase translations.

Third, if GNMT is doing all that then it must also be using what linguists call an interlingua, i.e. structures whereby sentences with common meanings are represented within the system in similar ways even when the languages are different. (Indeed, a visualization by the Google researchers reveals 3D groupings of language clusters that are organized by meaning rather than by word mapping.)
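A toy illustration of that interlingua intuition: if sentences from different languages are embedded into one shared space, those that mean the same thing sit close together regardless of language. The vectors below are invented for the example; in GNMT they would come from the shared encoder.

```python
# Sentences with the same meaning cluster together in a shared representation
# space, whatever the language. Embeddings here are made up for illustration.
import numpy as np

embeddings = {
    ("en", "The cat sleeps"):       np.array([0.90, 0.10, 0.00]),
    ("es", "El gato duerme"):       np.array([0.88, 0.12, 0.02]),
    ("en", "Stock prices fell"):    np.array([0.05, 0.20, 0.95]),
    ("es", "Las acciones cayeron"): np.array([0.07, 0.18, 0.93]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Same meaning, different languages -> high similarity:
print(cosine(embeddings[("en", "The cat sleeps")],
             embeddings[("es", "El gato duerme")]))
# Different meaning, same language -> low similarity:
print(cosine(embeddings[("en", "The cat sleeps")],
             embeddings[("en", "Stock prices fell")]))
```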

For the immediate future this bodes really well for Google Translate, which will now be able to provide more translation options faster and improve quality as it goes along. But it also opens up a lot of possibilities for search, as the indices built from the current load of queries can be enriched further by a tokenized cross-referencing of terms, just as in GNMT.

What this will mean is better-quality, more accurate search results for ever more obscure search queries and an ever more responsive, data-enriched search irrespective of language.

We may also get one of those handy, hand-held instant translation gizmos like the one used by the Gorn, too (once we learn not to throw large rocks at each other, of course).

 

Sources

Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System

Tokenization in Information Retrieval

Recognizing Human Actions by Attributes (pdf)

Machine Translation, Linguistics and Interlingua (pdf)

Semantic Search of Schema Repositories (pdf)

Exploring Semantic Inter-Class Relationships (SIR) for Zero-Shot Action Recognition (pdf)

© 2017 David Amerland. All rights reserved