Last week we started off October by tuning into BrightonSEO, the world's largest search conference. Two days of inspiring talks, divided into categories, allowed us to dive into several topics. Here is a short wrap-up of a few talks we attended.
Robots.txt files tell search engines which pages on your site can or can't be crawled, and they can have a major impact on search engine optimisation. It is more important than ever that the robots.txt file is used correctly. The talk focused on uncovering some interesting findings and common implementation mistakes. The surprising takeaway was that Google will ignore the User-agent: * rules entirely if your robots.txt has a specific User-agent: Googlebot section.
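To see this precedence rule in action, here is a minimal sketch using Python's standard-library robots.txt parser, which applies the same most-specific-group logic; the rules and URLs are hypothetical examples, not from the talk.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a generic group plus a Googlebot-specific group.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Googlebot matches its own group only, so the * rules do not apply to it:
print(parser.can_fetch("Googlebot", "https://example.com/private/page"))  # True
# Any other bot falls back to the * group and is blocked:
print(parser.can_fetch("OtherBot", "https://example.com/private/page"))   # False
```

In other words, a Googlebot-specific section replaces the generic rules for Googlebot rather than adding to them, so any disallows you want Googlebot to obey must be repeated inside its own section.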
This talk gave us an understanding of how Excel's Fuzzy Lookup works and how it can save a ton of time and manual work on tasks such as mapping 404 redirects, site migrations and keyword pruning. The key is to give it a go and practice.
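The same idea can be sketched in Python: the standard library's difflib similarity score stands in for Excel's Fuzzy Lookup score when mapping broken URLs to redirect candidates. The URL lists and the 0.6 threshold below are hypothetical.

```python
from difflib import SequenceMatcher

# Hypothetical data: 404ing URLs and live URLs to redirect them to.
broken = ["/blog/seo-tips-2019", "/product/red-shoe"]
live = ["/blog/seo-tips", "/products/red-shoes", "/about"]

def best_match(url, candidates, threshold=0.6):
    # Score every candidate against the broken URL and keep the best one,
    # but only if it clears the similarity threshold.
    scored = [(SequenceMatcher(None, url, c).ratio(), c) for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

for url in broken:
    print(url, "->", best_match(url, live))
```

As with Fuzzy Lookup in Excel, the matches still need a human sanity check, but scoring does the bulk of the manual comparison work.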
The speaker presented SEO tips drawn from what we are exposed to as SEOs: facts, tests, problems, articles we have read and information we have heard from other SEO professionals.
One of the takeaways: the internal headers added to Google Search Console crawls via the URL Inspection tool show whether Google has seen your latest changes to a page or is still using a previously crawled version.
To solve the problem of SEO recommendations that don't get implemented, the speaker surveyed 51 SEOs about their experiences of non-implementation and researched which biases could be affecting decision making in the SEO process. He then identified and shared common non-implementation scenarios and offered some tips on what to do about them.
Takeaway: Something we can all do better at is framing recommendations in a more positive light.
Paige walked us through the story of zero-click searches, the SERP landscape and how SERP feature trends have changed over time.
Key takeaway: Because the search landscape is constantly changing, you need to know your vertical and its specific needs and threats.
Recommendation for large sites: cloud-based crawlers such as Ryte, Oncrawl and Deepcrawl.
Factors affecting the number of URLs a spider CAN crawl:
Factors affecting the number of URLs a spider WANTS to crawl:
A log file contains a record of the page requests made to a website.
Log file uses:
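One such use, as a minimal sketch in Python: counting which URLs Googlebot requests most often, which hints at where crawl budget is being spent. It assumes the common combined log format used by Apache and Nginx; the file name access.log is hypothetical.

```python
import re
from collections import Counter

# Matches lines like:
# 66.249.66.1 - - [04/Oct/2019:10:00:00 +0000] "GET /page HTTP/1.1" 200 1234 "-" "Googlebot/2.1"
LINE = re.compile(
    r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

hits = Counter()
with open("access.log") as log:
    for line in log:
        match = LINE.search(line)
        # Keep only requests whose user agent claims to be Googlebot.
        if match and "Googlebot" in match.group("agent"):
            hits[match.group("path")] += 1

# The most-crawled URLs are the first place to look for wasted crawl budget.
for path, count in hits.most_common(10):
    print(count, path)
```

For production analysis you would also verify the bot by reverse DNS, since user-agent strings can be spoofed; the cloud crawlers mentioned above handle this at scale.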
Interested in learning more about SEO? Check out the industry-first SERP Features glossary by Paige Hobart.