Without Alfred, Batman would be reduced to cooking his own meals, cleaning Wayne Manor and bandaging his own cuts and grazes. In fact, after a good day’s cleaning, washing and cooking he might not be able to function effectively as Batman at all.
The point is that in today’s complex world we’re each only as good as our personal support network. Semantic technology is about to give us the ultimate power: the ability to command technology that delivers accurate, contextual answers and frequently predicts our needs.
What this has to do with Batman and Alfred (his butler) will be revealed in good time.
At the time of writing there are three major contenders:
Google with Google Now (essentially a predictive search service with Voice Search integration)
Apple’s Siri (a voice-enabled search assistant)
Microsoft’s Cortana (a voice-enabled predictive search hybrid)
Of the three, the latest addition to the pack has the most street cred. Microsoft’s personal assistant is named after the popular 26th-century AI from HALO, Cortana (voiced by actress Jen Taylor). The name, originally picked as a pre-launch code name, was leaked and became the focus of a petition to retain it; it also gives the assistant instant appeal to Xbox gamers (and there are many) who live in the HALO universe (see the picture below for how the AI looks in the game).
With that, let’s get rid of the hype first. Online reviews rave about Cortana, citing it as the Siri and Google Now killer. Well, it’s neither. Street cred and gamer appeal aside, at the moment Cortana is in Beta. It is inconsistent in its results, buggy in its performance and certainly not on a par with its two rivals.
That does not make it bad. It is in Beta. Like anything that has to do with semantic search Cortana will need time to accumulate data, learn to classify it properly and serve it when appropriate.
Semantic Search, Again
There, I said it. Each of the three personal assistants is driven by semantic search. The end game of semantic search is to accurately match a search query’s intent with the information in its index and provide one highly satisfying answer (unless the end user has specifically asked for more than one choice).
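That end game can be caricatured in a few lines of code. The sketch below is a toy only (it is none of the vendors’ actual algorithms; the mini-index, the word-overlap scoring and every name in it are invented for illustration): it matches a query against a tiny hand-made intent index and returns a single best answer unless the user asks for more.

```python
# Toy sketch of semantic search's end game: one best answer per query.
# Not any vendor's algorithm; index and scoring are invented.
from collections import Counter

# Hypothetical mini-index mapping intents to canned answers.
INDEX = {
    "weather today": "It will be sunny with a high of 22°C.",
    "batman butler name": "Batman's butler is Alfred Pennyworth.",
    "nearest coffee shop": "There is a coffee shop 200 m away on Main St.",
}

def score(query: str, intent: str) -> int:
    """Crude intent match: count the words shared by query and intent."""
    q, i = Counter(query.lower().split()), Counter(intent.lower().split())
    return sum((q & i).values())

def answer(query: str, top_n: int = 1) -> list[str]:
    """Return one best answer unless the user explicitly asks for more."""
    ranked = sorted(INDEX, key=lambda intent: score(query, intent), reverse=True)
    return [INDEX[intent] for intent in ranked[:top_n]]

print(answer("name of the batman butler"))
# → ["Batman's butler is Alfred Pennyworth."]
```

A real semantic engine replaces the word-overlap score with models of meaning and context, but the contract is the same: intent in, one answer out.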
This can only be achieved when the end user is a willing participant in the construct between search indexing, search interface technology and the point where they supply their data signals. Apple gets round this issue by pretty much reading anything its users do inside its environment through Siri; Google asks users to participate as much as possible in order to “improve their search experience”; and Microsoft has gone down a hybrid route where the end user can choose not to submit some data while having no choice over the rest.
There are two points that are important here: First, each of the three giants is adopting a similar model. They create an ecosystem in which the end user can operate with ease and put search at the heart of it as a service rather than a product. Google’s Android is free, Apple has made updates to its iOS free and Microsoft announced at its Build conference that Windows OS is now free for any device under nine inches. An ecosystem guarantees end user loyalty as those who spend any time in one find it difficult and messy to transfer their data and activities to a new one or start from scratch.
The second point is where Batman’s butler comes in. Just as Alfred has to be privy to everything Bruce Wayne does for Batman to be effective, so semantic search can only succeed if the end user is willing to provide a lot of contextual signals. While historically we may never have had any real privacy (or even the right to it), the fact that we are now being asked to willingly surrender something we possess and others want sits uncomfortably in our psychological horizon.
Personal assistants overcome that hurdle because they do not feel like a product that requires data from us in order to “work better” but rather like a free service we access and program to serve us better (the helpful-butler thing). A large part of that ‘programming’ involves setting permissions for what kinds of social signals and personal data the personal assistant has access to.
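The kind of ‘programming’ described above can be pictured as a simple permissions map. This is purely illustrative (no real assistant exposes exactly this API; the signal names are made up), but it shows the shape of the choice the end user makes: grant some signals, withhold others, and the assistant checks before it reads.

```python
# Hypothetical sketch of per-signal permissions a user grants an assistant.
# The signal names and API are invented for illustration.
permissions = {
    "location": True,         # allow contextual, place-aware answers
    "calendar": True,         # allow reminders tied to appointments
    "contacts": False,        # withhold the address book
    "browsing_history": False,
}

def can_use(signal: str) -> bool:
    """The assistant consults the user's choices before reading a signal."""
    return permissions.get(signal, False)  # unknown signals default to no access

print(can_use("location"))  # → True
print(can_use("contacts"))  # → False
```

Defaulting unknown signals to “no access” mirrors the hybrid model described above: the user opts in signal by signal rather than surrendering everything at once.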
Search is Changing
In Google Semantic Search I wrote about how search is heading towards a screenless, keyless future where the main interface will be voice and visual. While that sounds like a radical change, it’s not. The really radical change to search is what’s happening under the hood. In order for us to confidently control a service with words alone, we have to be confident that it understands us and that we are controlling it properly.
All of this requires data. Our data. Given willingly, in order for search to become educated, intelligent, personalized and responsive.
Will Cortana prevail? Will Siri endure? Are we all going to be swallowed up in a Google universe? None of these questions ultimately matter. Semantic search in its feel, operation and results will only grow and develop.
Microsoft might succeed in killing Siri, and the Redmond giant’s approach of embedding search as a service in all its products may actually pay off, giving it a share of the market that will provide a little more competition for Google. What is really important here is that, by taking search across devices and products, every company with a real stake in it is doing the same thing: moving, by degrees, away from the “ten little blue links” of traditional desktop search and towards a search that is invisible, ever-present and customized to our needs.
You will notice that one company is absent from the fray: Yahoo. It has a heck of a lot of catching up to do, and the gap between it and the rest is widening.