In a world where everything is data, navigating to the right place, finding the right answer or matching the right pair (of anything) is always a search problem. Data only makes sense when it is networked, connected, indexed, analyzed, assessed, abstracted, categorized, organized and presented in relation to other data.
The process is an endless rinse-and-repeat cycle where the metadata surfaced becomes semantically dense enough to become data in its own right, allowing further metadata to be extracted from it.
Let’s get practical. Apply all the theoretical abstraction I’ve written above to the usual “Morning!” greeting between neighbors. The depth of the relational connection between them (are they good friends, or merely being civil to each other?) will reveal itself in the warmth and candour of that one, single word exchanged. Is one distracted, lost in thought? Depressed? Angry? Clipped tones, trailing endings, a pitch so low as to be barely audible or so high it sounds like a whine can all be used to analyze emotion. Is the sound harsh? Is the word spoken fast, almost like an expletive, or are the syllables long-drawn-out? The difference could spell out whether there is enmity or hidden aggression in the relationship, or whether it is a casual, social connection with no other overtones.
We’ve only used one word, and that’s before we begin to analyze whether gender comes into the interaction or whether a regional or national accent is at play.
This is exactly the kind of semantic analysis Google performs on speech in order to improve its understanding of spoken queries in search. Because speech is data, possessing it also allows the accumulation of knowledge: a sense of how speech is broken down into discrete units, analyzed for content, context and importance, and classified. This allows Google to reverse-engineer the process and create human-like speech with a computer that can now use inflexion, pitch, rhythm and speed to denote warmth, friendliness and openness.
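As a toy illustration (not Google’s actual pipeline), the prosodic cues described above can be imagined as features fed to a classifier. The feature names, thresholds and labels below are all invented for the sketch:

```python
# Hypothetical sketch: classifying the emotional tone of a spoken "Morning!"
# from crude prosodic features. Feature names and thresholds are invented
# for illustration; real speech analysis uses far richer acoustic models.

def classify_greeting(pitch_hz, duration_s, loudness_db):
    """Map three prosodic features to a coarse emotional label."""
    if duration_s < 0.3 and loudness_db > 70:
        return "clipped/hostile"      # fast and harsh, almost an expletive
    if pitch_hz < 100 and loudness_db < 40:
        return "distracted/low-mood"  # barely audible, low pitch
    if duration_s > 0.8:
        return "warm/friendly"        # long-drawn-out syllables
    return "neutral/civil"

print(classify_greeting(pitch_hz=220, duration_s=1.1, loudness_db=60))  # warm/friendly
```

The point is not the (entirely made-up) rules but the shape of the problem: a single word becomes a bundle of measurable features that can be analyzed and classified like any other data.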
There are several important takeaways here:
- In a data-centric world, search is everywhere, even if we do not actively call it search or have a sense of it as such.
- Everything that has an effect is information. Information is data. Data is subject to analysis and classification. That includes relatively ethereal things like emotion and intent.
- Once metadata accumulates, it becomes substantial enough to be subject to further analysis and classification; it becomes data in its own right, which gives rise to further metadata.
- The process of labelling, classification and refinement can continue ad infinitum, stopping only at the boundary where the cost of another iteration is no longer justified by its benefit.
- Data always has value. Its value is always contextual.
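The data-to-metadata cycle in the takeaways above can be sketched as an iterative loop that stops when the cost of another pass outweighs its benefit. Everything here — the toy “analysis,” the cost and benefit numbers — is invented purely to make the shape of the cycle concrete:

```python
# Hypothetical sketch of the data -> metadata -> data cycle described above.
# Each pass extracts "metadata" (here, just summary statistics), which then
# becomes the input data for the next pass. The cost/benefit cutoff is the
# boundary that stops infinite reiteration.

def extract_metadata(data):
    """Toy 'analysis': summarize a list of numbers into a smaller list."""
    return [min(data), sum(data) / len(data), max(data)]

def refine(data, benefit_per_pass=10.0, cost_per_pass=4.0):
    layers = [data]
    benefit, cost = benefit_per_pass, cost_per_pass
    while benefit > cost and len(layers[-1]) > 1:
        layers.append(extract_metadata(layers[-1]))  # metadata becomes data
        benefit *= 0.5   # each pass yields diminishing returns...
        cost *= 1.2      # ...while costing slightly more
    return layers

for layer in refine([3, 7, 1, 9, 4]):
    print(layer)
```

With these made-up numbers the loop runs twice and then stops: the third pass would cost more than it returns, which is exactly the boundary the takeaway describes.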
As Google’s machine learning gets better and better, its voice recognition and voice synthesis capabilities will improve exponentially. Machine learning is closely linked to exponential growth because of the way training data sets are sampled and the algorithms are then recalibrated. Exponential growth, as the graph below illustrates, has a latency period after which change accelerates dramatically. In practical terms this means that once machine learning gets past a tipping point, it begins to produce good results at an accelerated rate.
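A minimal numeric sketch of that latency-then-acceleration pattern is a logistic curve, one common way of modelling a slow start followed by a rapid climb past a tipping point. The midpoint and steepness parameters here are arbitrary:

```python
import math

# Logistic curve: one simple model of the latency-then-acceleration pattern
# described above. Parameters (midpoint, steepness) are arbitrary.

def capability(t, midpoint=10.0, steepness=0.8):
    """Fraction of 'full capability' reached at time t."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 21, 4):
    print(f"t={t:2d}  capability={capability(t):.3f}")
```

Early values barely move (the latency period), then change accelerates sharply around the midpoint — the tipping point — before levelling off.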
Getting to the Very Core of Reality
Marketing has never quite been about being real. It has always been seen as the means of creating a stimulus which is then satisfied by the product or service being marketed. But that is, to put it mildly, manipulation. It plays on desires, needs and fears to create a false sense of urgency that will lead to a purchase before the potential buyer has had the chance to research anything, think things through or change her mind.
Semantic search promised to change all of this by creating entities which are based on identity. This generates data that needs to be classified and validated.
Machine learning makes all of this faster and less costly, which means that more and more can be done without increasing operating costs.
Search queries posed in natural language can be processed and matched against real world concepts and objects without going through the traditional ‘translation’ phase where we try to think what specific search terms might possibly describe those objects. The search query “Red cylindrical object used to fight fire” returns, without any hesitation, “fire hydrant” on Voice Search.
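A toy illustration of that idea: resolving a descriptive query to an entity by matching attributes rather than by keyword overlap with the entity’s name. The miniature “knowledge base” and the scoring below are invented for the sketch:

```python
# Toy sketch: resolving a descriptive query to an entity by attribute overlap.
# Note that none of the entity *names* appear in the query; the match happens
# on attributes. The mini knowledge base and scoring are invented.

KNOWLEDGE_BASE = {
    "fire hydrant": {"red", "cylindrical", "object", "fight", "fire", "street"},
    "fire engine":  {"red", "vehicle", "fight", "fire", "ladder"},
    "postbox":      {"red", "cylindrical", "object", "mail", "street"},
}

def resolve(query):
    """Return the entity whose attributes best overlap the query words."""
    words = set(query.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda e: len(KNOWLEDGE_BASE[e] & words))

print(resolve("Red cylindrical object used to fight fire"))  # fire hydrant
```

Real semantic search works over vastly richer representations, but the principle is the same: the query describes a concept, and the engine matches the description to an entity without the user ever supplying the entity’s name as a search term.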
One of the clearest areas where this takes place is voice search and voice interaction. Without a keyboard to input a search query we get no drop-down autosuggestions from Google. Nor can we always remember what we searched for two queries earlier, so the very concept of search terms (or even keywords) becomes redundant.
The approach has two very significant effects:
- Natural language description frequently supplants exact search terms and even search methodology itself.
- It often does not feel like search. Google Now, Waze, Google Maps, YouTube, Gmail and Google Photos are all examples where search technology is active in the background.
The video below, on Google Voice and how it is put together, beautifully explains some of the concepts:
What it really means is that everything a business, a brand or a person does, online and offline, now really matters. This concept of “data density” was first broached in SEO Help, which was designed very specifically to address issues of identity, brand values and entity formation as part of a business’ or a brand’s day-to-day activities.
Because everything is data and everything is beginning to be understood and indexed, creating the semantically rich data density required to succeed in search has to be part of an incremental, sustained and sustainable process that weds brand identity and core values to brand marketing activities and brand voice. Of course, in a semantic web, from a presence point of view, everyone and everything is practically speaking a brand.