“Ok Google… How Will Machine Learning Affect the Work of SEOs?”


As marketers and search engine optimizers (SEOs), we’re not here just to develop and advance your existing sites; we’re also here to keep an eye on trends and explore how future advancements in search could affect what we do as an industry. With that said, there’s a trend gaining speed that may fundamentally change how we go about our work.

As we’ve discussed in a previous post about voice search, the virtual assistant (VA), in the form of Google Home, Amazon Echo, etc., is already changing the game. However, digging deeper into what drives virtual assistants (machine learning), we can see where the real change will need to occur.

Engineers are advancing machine learning daily, but what data are machines feeding on to learn? When it comes to search, that data comes from us as SEOs, so it’s important that we provide the correct data for [search] machines to learn from.

It’s predicted that 33 million voice-first devices will be in use by the end of 2017, with 24 million devices shipping this year alone. A survey of nearly 1,000 users found that they rank voice search third, after traditional search and in-app search. However, 61% stated they wanted their VA to provide a more direct answer rather than displaying a search engine results page (SERP). That’s a large enough audience that developers and engineers are scrambling to make machine learning more effective.

A lot has been written about position zero and how to go about “winning” the coveted featured snippet for a desired search. Achieving position zero or the first spot in SERPs is no simple feat; just look at the average SERP displayed today. How often do we see a simple list of 10 blue links? Rarely. There are pay-per-click (PPC) ads, local results, and maps results all vying for that top spot. So, how does that work with voice search? What ends up being dictated?

Lucky Guesses?

“I’m feeling lucky” is a button we used to see on the Google search screen. It’s now been retired, but here’s how it worked: if you entered your search query and clicked the “I’m feeling lucky” button instead of “search,” you were brought directly to the first result’s webpage, bypassing the SERP.

In essence, VAs are reading you the “I’m feeling lucky” result (in the form of a featured snippet), which more than likely is position zero or position one. By 2020, it is estimated that 50% of search will be voice search, and 30% of browsing will be done without a screen. Without a screen. That means we’ll be speaking to a device across the room and trusting it to read back exactly what we want to hear. Based on the performance of today’s virtual assistants (*cough* Siri), this might seem like a tall order.

Try, try again…

The ultimate goal of advancing virtual assistants is to improve the dialogue, or the banter, between the searcher and the assistant. A question is asked; an answer is read aloud. But then what?

“No, that’s not what I was looking for. Can you refine your answer for me?”

“Actually, I was thinking more along the lines of ________.”

“Hmm, that doesn’t seem current. Can you narrow your answer down to information from just the last six months?”

“No, not the original version of the song, the current cover version that’s popular.”

As mentioned above and in our previous voice search post, most of the controllable voice search results come from simple, human-voiced question queries, where the question has already been answered in previous posts/pages on websites. What, when, where, how, etc. are the quick wins. This is what is currently being seen and strategized over in many SEO circles; however, what about that next round of development? How do we ensure that all of our great content is being picked up by the 50% of searchers using voice, and the 30% of searchers not using a screen, in just a few short years? And how do we ensure the search results that are read aloud across the room are based on an interpretation of the question as it was intended? Does the machine learn the speech patterns of the device’s regular users? Based on a recognized voice within a household, will the VA know who’s speaking and automatically apply search restrictions (e.g., age-specific parental controls)?
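As a rough illustration of spotting those “quick win” question queries, here is a minimal sketch in Python. The query list and the set of question words are invented for the example; in practice you might run something like this over a Search Console query export to find content worth optimizing for voice.

```python
# Hypothetical helper: flag search queries phrased as natural-language
# questions -- the "quick win" candidates for voice-search content.
QUESTION_WORDS = ("what", "when", "where", "who", "why", "how", "can", "does", "is")

def is_question_query(query: str) -> bool:
    """Return True if the query starts with a common question word."""
    words = query.strip().lower().split()
    return bool(words) and words[0] in QUESTION_WORDS

# Invented sample queries standing in for a real query export.
queries = [
    "best running shoes 2017",
    "how do i lace running shoes for wide feet",
    "what is a featured snippet",
    "running shoe store near me",
]

# Keep only the question-phrased queries.
question_queries = [q for q in queries if is_question_query(q)]
```

A simple prefix check like this is crude (it misses implied questions such as “running shoe store near me”), but it is a cheap first pass at finding the queries a VA is most likely to answer aloud.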

What it means for today

Whatever your industry, the content leads (titles, descriptions, etc.) really need to be written in a human “voice.” The way you ask the VA is the way it will search. If the VA finds that phrasing in its internal results, it will deliver the content to the searcher. Is that considered dark search? Is that position zero? Is it a simple transcription of the first result? That seems to be the case today, but it won’t be tomorrow. As machine learning improves, so must SEOs. Marketing tools will advance in an effort to keep up. Search analytics in tools like Google Search Console will become more important. Will there be a new category for not just an impression or click, but a “VA dictation” as well?

It’s up to us as SEOs to drive this learning process. As machines evolve, they will be learning based on the website data that is already out there. It’s up to us to ensure the machine is getting the best data – the most accurate, relevant, and current data possible. Stay on top of search trends. Implement the site data that feeds the machine learning. Win the virtual assistant search advancement by, ironically, keeping it human.
