Episode 47 / May 25, 2018

Listen now:

Welcome to The Redirect Podcast, where the BlackTruck team shares recent insights and takeaways from the world of search marketing.

In this week’s episode:

  • GDPR is officially in effect as of today. It’s important to make sure you’re compliant with how you collect data online. What do you need to know? (begins at 1:00).
  • We chat about performing technical site crawl analysis, tools to consider, and elements to examine and act on (begins at 3:33).
  • Google has updated its Google Trends tool. What kind of information can you find there, and how can you use it? (begins at 13:45).
  • A dive into robots.txt files—what they are, what they do, and why you should care (begins at 20:05).

Happy GDPR Day!

We’ve written about how GDPR impacts your business and warned you of the impending date of GDPR enforcement. Today’s the day, friends. GDPR is in effect. Even if you don’t think GDPR affects your business, it probably does.

Hopefully by now you have made the necessary changes to your privacy policies and internal processes to be compliant. If not, you need to at least show that you’re making an effort to comply.

In light of GDPR, we explore the data retention settings inside Google Analytics, what they could mean for your reporting, and how to take back some control. We also address this in a blog post on GDPR considerations for Google Analytics.

Technical SEO Audit Tips

From Patrick: I’ve been heavy on technical site audits lately, and my brain is focused on finding faults. Here are some sound areas to dig into, recently expounded upon at Search Engine Watch, that might be no-brainers to some but blind spots for others.

  1. Crawl report errors
  2. HTTP status code errors and redirects
  3. Sitemap status
  4. Load time issues
  5. Mobile-friendliness
  6. Keyword cannibalization
  7. Check robots.txt
  8. Google site search
  9. Duplicate metadata check
  10. Meta description length
  11. Site-wide duplicate content
  12. Broken links

For me, a site audit starts and almost ends with the crawl. As we discussed, there are several different crawl tools out there, and several things you can interpret once you have that beautiful spreadsheet of nerdery in front of you. Many of the 12 items on this list can be deduced from the crawl report, and that barely scratches the surface of the features available in professional crawling tools, both paid and free.
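As a rough illustration of how much you can pull from a single crawl export, here's a minimal Python sketch that flags a few of the issues above (status code errors, missing or overly long meta descriptions). The column names and file name are assumptions for the example; adjust them to whatever your crawl tool actually exports, and set the length threshold to whatever guidance you follow.

```python
import csv

def flag_crawl_issues(csv_path, max_description_length=160):
    """Scan a crawl export CSV for a few common technical issues.

    The column names ("Address", "Status Code", "Meta Description") are
    assumptions for this sketch; rename them to match your tool's export.
    """
    issues = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = row.get("Address", "")
            status = row.get("Status Code", "")
            description = (row.get("Meta Description") or "").strip()

            if status and not status.startswith("2"):
                issues.append((url, f"non-200 status code: {status}"))
            if not description:
                issues.append((url, "missing meta description"))
            elif len(description) > max_description_length:
                issues.append((url, f"meta description is {len(description)} characters"))
    return issues

if __name__ == "__main__":
    for url, problem in flag_crawl_issues("crawl-export.csv"):
        print(f"{url}\t{problem}")
```

Nothing here replaces the crawler's own reports; it's just a reminder that once the data is in a spreadsheet, most of the list above becomes a filtering exercise.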

Several items on this list have been discussed thoroughly on this podcast, most recently meta description length last week, which we revisit again this week. Additionally, Jason takes an in-depth look at robots.txt files and how they should be used.

Google Trends Redesign 

Google Trends was recently redesigned and now has additional features. This Google resource lets you see what’s trending on a national level, state level, or even regionally (if the data is there), as well as compare search terms—perhaps to see what’s used more nationally or in a specific region. The data is great for creating visuals, as Jason did in a post last year on natural disasters and search.

With the redesign comes the ability to compare search terms vs. interests and to break the data down further by subregion and city. As always, you can view topics and queries similar to your area of research.
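The episode focuses on the web interface, but if you ever want to pull a similar term-by-region comparison into your own reporting, the unofficial pytrends Python library (not a Google product, and not something we cover in the episode) can query Trends data programmatically. A quick sketch, with placeholder terms and geography:

```python
from pytrends.request import TrendReq

# Rough sketch using the unofficial pytrends library (pip install pytrends).
# The terms and geo below are placeholders; swap in your own research topics.
pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["pop", "soda"], timeframe="today 12-m", geo="US")

# Interest over time for the compared terms.
over_time = pytrends.interest_over_time()
print(over_time.tail())

# Break the comparison down by subregion (U.S. states in this case).
by_region = pytrends.interest_by_region(resolution="REGION")
print(by_region.sort_values("pop", ascending=False).head())
```

Both calls return pandas DataFrames, so the results drop straight into whatever charting you already use.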

The redesign also includes Google’s “Year in Search” roundups going back to 2001, offering an easy way to see how search (and culture) has changed over time.

Robots.txt Files

Robots.txt files: What are they? How are they used? Why should you care about them?

Robots.txt is a text file that you or your developer creates, which acts as a road map for search engine bots. It helps guide them through crawling your site and indicates whether they can access or index particular pages. For instance, you wouldn’t want the admin area of your CMS to be indexed, or the shopping cart pages on an e-commerce site.

To be found and used by bots, the file should sit in the top-level directory of your site. You’ll also want to reference your sitemap in the file by indicating its location on your site.

The robots.txt file can help keep duplicate pages from being indexed, as well as keep internal search results pages from showing up, since those can be seen as duplicate content on the site. You can also use the file to implement a crawl delay, throttling the bot activity your server sees so it doesn’t get overloaded, or to give certain search engines different crawl rules than others.
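To make that concrete, here’s an illustrative robots.txt. The paths and values are placeholders; what you actually block depends entirely on your CMS and site structure.

```
# Illustrative example only; the paths below are placeholders.
User-agent: *
# Block the CMS admin area and e-commerce cart pages
Disallow: /wp-admin/
Disallow: /cart/
# Block internal site search results pages
Disallow: /?s=
# Ask bots to wait 10 seconds between requests (Googlebot ignores Crawl-delay)
Crawl-delay: 10

# Point bots at the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

You can also add separate User-agent groups if you want to give a specific bot its own rules.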

Your CMS may generate a robots.txt file for you, and plugins can make it easy to customize. Find more info on robots.txt from Moz.
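If you edit the file and want to sanity-check how a given URL will be treated, Python’s built-in urllib.robotparser module evaluates a robots.txt the way a well-behaved bot would. The domain and paths below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Quick check of how a well-behaved bot would read your robots.txt.
# The domain and paths are placeholders; substitute your own.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

for path in ["/", "/wp-admin/", "/cart/checkout"]:
    allowed = parser.can_fetch("*", f"https://www.example.com{path}")
    print(f"{path}: {'allowed' if allowed else 'blocked'}")
```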

 

Thanks for tuning in! To catch future episodes of The Redirect Podcast, subscribe on SoundCloud, iTunes, or Stitcher.