As marketers and search engine optimizers (SEOs), we’re not here just to develop and advance your existing sites; we’re also here to keep an eye on trends and explore how future advancements in search could affect what we do as an industry. With that said, there’s a trend gaining speed that might fundamentally change how we go about our work.
As we’ve discussed in previous voice search posts, virtual assistants (VAs) such as Google Home and Amazon Echo are already changing the game for what the future holds, even if they aren’t necessarily affecting current search habits yet. However, digging deeper into what drives virtual assistants (machine learning, a.k.a. artificial intelligence or AI), we see where change will need to occur.
Engineers are advancing machine learning daily, but what data are the machines feeding on to learn? As SEOs, that data comes from us, so it’s important that we provide the correct data for [search] machines to learn from.
It’s predicted that 33 million voice-first devices will be in use by the end of 2017, with 24 million shipping this year alone. A survey of nearly 1,000 users found that they rank voice search third, behind traditional search and app search. However, 61% stated they wanted their VA to provide a more direct answer rather than displaying a search engine results page (SERP). That’s a large enough audience that developers and engineers are scrambling to make machine learning more effective.
There’s a lot of data on position zero and how to go about “winning” the coveted featured snippet for a desired search. Achieving position zero or the first spot in the SERPs is no simple feat; just look at the average SERP displayed today. How often do we see a true, simple list of 10 blue links? Rarely. Pay-per-click (PPC) ads, local results, and map results are all vying for that top spot. So how does that work with voice search? What ends up being dictated?
“I’m feeling lucky” is a button that we all used to see on the Google search screen. It’s since been retired, but do you know what it did? If you entered your search query and clicked “I’m feeling lucky” instead of “search,” you were taken directly to the first result’s webpage, forgoing the SERP.
In essence, VAs are reading you the “I’m feeling lucky” result (in the form of a featured snippet), which more than likely is position zero or position one. By 2020, it is estimated that 50% of searches will be voice searches and 30% of browsing will be done without a screen. Without a screen. That means you’re speaking to a device across the room and trusting it to read back exactly what you want to hear, not like a robot reciting a list, but in a conversational manner.
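How does a page signal which parts are worth reading aloud? One emerging option is structured data. Below is a minimal sketch, assuming schema.org’s speakable property (support for it is still limited and evolving); the page name, URL, and CSS selectors are hypothetical placeholders, and the JSON-LD is generated in Python purely for illustration.

```python
import json

# A minimal sketch of schema.org "speakable" markup, which flags the
# parts of a page a voice assistant may read aloud. The CSS selectors
# below (.post-summary, .post-answer) are hypothetical placeholders
# for your own templates.
speakable_markup = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "How do featured snippets work?",
    "url": "https://www.example.com/featured-snippets",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": [".post-summary", ".post-answer"],
    },
}

# Emit the JSON-LD block to embed in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(speakable_markup, indent=2))
print("</script>")
```

The idea is simply to hand the assistant a pre-vetted, conversational chunk of the page rather than leaving it to guess.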
The ultimate goal of advancing virtual assistants is to improve the dialogue, or the banter, between the searcher and the assistant. A question is asked; an answer is read aloud. But then what?
“No, that’s not what I was looking for. Can you refine your answer for me?”
“Actually, I was thinking more along the lines of ________”
“Hmm, that doesn’t seem current. Can you narrow your answer down to information from just the last six months?”
“No, not the original version of the song, the current cover version that’s popular.”
As mentioned above and in previous posts, most of the controllable voice search results come from simple, human-voiced question queries, where the question has already been answered in existing posts/pages on websites. What, when, where, how, etc. are the quick wins, and that is what many SEO circles are currently seeing and strategizing over (see the markup sketch below). But what about the next round of development? How do we ensure that all of our great content is picked up by the 50% of searchers using voice and the 30% of searchers not using a screen in just a few short years? And how do we ensure the search results read aloud across the room are based on an interpretation of the question as it was intended? Does the machine learn the dictation patterns of the device’s regular users? Based on a recognized voice within a household, will the VA know who’s speaking and automatically apply search restrictions (e.g., age-specific parental controls)?
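To make those what/when/where/how wins explicit to the machine, one option is question-and-answer structured data. Here is a minimal sketch, assuming schema.org’s FAQPage type; the question text, answer text, and page are illustrative placeholders, not a guaranteed path to a featured snippet.

```python
import json

# A minimal sketch of schema.org FAQPage markup: pairing a plainly
# worded question with a concise answer hands the machine exactly the
# question/answer data it is trying to learn from. The question and
# answer below are hypothetical placeholders.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is position zero?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Position zero is the featured snippet that appears "
                    "above the first organic result on a SERP."
                ),
            },
        }
    ],
}

# Emit the JSON-LD block to embed alongside the on-page Q&A content.
print('<script type="application/ld+json">')
print(json.dumps(faq_markup, indent=2))
print("</script>")
```

The markup should mirror content that is actually visible on the page; it annotates the answer for the machine, it doesn’t replace it.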
Whatever your industry, the content leads (titles, descriptions, etc.) really need to be written in a human “voice.” The way you ask the VA is the way it will search, and if the VA sees that phrasing in its internal results, it will deliver the content to the searcher. Is that dark search? Is that position zero? Is it a simple transcription of the first result? That seems to be the case today, but it may not be tomorrow. As AI improves, so will SEOs, and marketing tools will advance in an effort to keep up. Search Analytics in tools like Google Search Console will become more important; a sketch of mining it for question-style queries follows below. Will there be a new category for not just an impression or a click, but a “VA dictation” as well?
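As one way of leaning on Search Analytics today, the Search Console API can already be filtered down to question-style queries. A minimal sketch, assuming the google-api-python-client library with a hypothetical service-account credential, site URL, and date range:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# A minimal sketch of pulling question-style queries from Google Search
# Console's Search Analytics API (webmasters v3). The credentials file,
# site URL, and date range are hypothetical placeholders.
SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("webmasters", "v3", credentials=creds)

# Filter the query report down to searches phrased like spoken questions.
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2017-01-01",
        "endDate": "2017-06-30",
        "dimensions": ["query"],
        "dimensionFilterGroups": [
            {
                "filters": [
                    {
                        "dimension": "query",
                        "operator": "contains",
                        "expression": "how",
                    }
                ]
            }
        ],
        "rowLimit": 25,
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"])
```

Swapping “how” for “what,” “when,” or “where” gives a quick inventory of the conversational queries a site already wins, which is a natural place for voice optimization to start.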
It’s up to us as SEOs to drive this learning process. As machines evolve, they will learn from the website data that is already out there. It’s up to us to ensure the machine is getting the best data: the most accurate, relevant, and current data possible. Stay on top of search trends. Do the work now. Implement the site data that feeds the machine learning. Win the virtual assistant search advancement. Keep it human.