4 elements of an SEO audit that tend to be forgotten

SEO
Jakub Zeid
Published: 24.12.2018
7 min read

SEO specialists who audit websites, stores, and portals know that every case they work on is different. Each has its own shortcomings, gaps, and other obstacles that stand in the way of successful optimization.

Standard problems, which recur on many sites and are easier to catch, usually stem from errors in the script or template of the site. However, it is more difficult to catch irregularities that are the result of individual actions by the client, webmaster or copywriter. Below you will find 4 elements you may have forgotten about the last time you audited or optimized your site.

1. 404 errors from external sources

When it comes to errors, 404 pages could hardly be missing from this list.

Of course, this is a basic element of any SEO audit, but very often the fixes recommended to the client are based solely on a crawl of the site performed with a dedicated tool (such as Screaming Frog).

Such a report is generated solely from the pages the robot can reach through links while traversing the site.

Beyond the 404 pages found this way, also look for addresses that existed on your site in the past, as well as pages elsewhere on the web that link to outdated or incorrect URLs within your site.

Tools to help you in this regard include:

  • Google Search Console, which in the “Index -> Status” tab lists the URLs with errors encountered by Googlebot,
  • Majestic, specifically the “Pages” tab, where you will find all addresses detected by Majestic – including those returning 3XX redirects or 4XX errors.

Sometimes you will find addresses with a large number of inbound links – you can take advantage of them by 301-redirecting them to the corresponding working address or by creating a new subpage at that address.
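If you export the linked URLs to a file, you can check their current status codes in bulk before deciding which ones deserve a redirect. Below is a minimal sketch in Python; the urls.txt file name and the use of the requests library are my assumptions, not part of Majestic or Search Console:

```python
# A minimal sketch: check the current HTTP status of URLs exported from
# a backlink tool (e.g., Majestic or Search Console) to a urls.txt file.
import requests

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # Do not follow redirects: we want the status a crawler sees first.
        response = requests.head(url, allow_redirects=False, timeout=10)
    except requests.RequestException as exc:
        print(f"request failed  {url}  ({exc})")
        continue
    # Flag dead pages (4xx, including 410) that may still attract links.
    if 400 <= response.status_code < 500:
        print(f"{response.status_code}  {url}")
```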

An example is an address in a subdomain of nike.com, to which a large number of links point and which returns a 410 error.

Another example is an address in a subdomain of allegro.pl.

Interestingly, http://moto.allegro.pl itself was redirected, but the specialists forgot about the variant with the “www” prefix.

2. Visibility of page elements – Googlebot vs. the ordinary user

Increasingly, web developers are using JavaScript to display content on pages in a more attractive and accessible way for users.

Sometimes, however, content that is visible to the user when the page opens is completely invisible to the search engine robot: not only is Googlebot unable to render it correctly, it is not present at all in the HTML response the robot receives.

In one online store, the template developers decided to load the product list on category subpages with JavaScript. The code responsible for displaying it ran only after the page had loaded, so the robot never saw the links to the product pages.

True, it found them in the sitemap and indexed them, but this JavaScript-based approach probably prevented them from achieving better rankings, since they received no “power” from internal linking.

To avoid this kind of mishap, always audit the site in a browser with JavaScript disabled. For Google Chrome, I recommend the JavaScript Toggle On and Off plug-in.

Which elements deserve special attention? The answer is “it depends”, because, as I pointed out at the beginning, every site is different, but you should definitely check the behavior of the following (a quick programmatic check follows the list):

  • all menu/navigation elements,
  • lists on category pages (products, entries, etc.),
  • sliders/carousels.
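You can also approximate what the robot receives without a browser at all: fetch the raw HTML and check whether the expected links appear in it. A minimal sketch in Python, where the category URL and the product-link fragment are hypothetical placeholders to adjust for the audited site:

```python
# A minimal sketch: fetch a category page the way a crawler does on first
# contact (plain HTTP, no JavaScript) and look for product links.
import requests

CATEGORY_URL = "https://example-shop.com/boots/"  # hypothetical
PRODUCT_LINK_FRAGMENT = "/product/"               # hypothetical

html = requests.get(CATEGORY_URL, timeout=10).text

if PRODUCT_LINK_FRAGMENT in html:
    print("Product links are present in the raw HTML.")
else:
    print("No product links in the raw HTML - probably injected by JavaScript.")
```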

3. Pagination tags – wrong address of the first page

Another mistake that tends to be overlooked by specialists preparing SEO audits is a wrong address of the first page indicated in pagination tags (see the incorrect example below). When dividing content into pages, Google recommends one of three options:

    1. Do nothing – the robot will supposedly figure things out and interpret your page content correctly by itself. Sometimes it will, sometimes it won’t, which is why I do not recommend this solution.
    2. Add a “View all” page, and on each paginated subpage add a canonical tag pointing to the page with all products.
    3. Use links in the head section or HTTP headers with the rel=”prev” and rel=”next” attributes, where each subpage of a category indicates the previous and the next page in order, making it easier for robots to navigate the site. In my opinion, this is the best solution: it makes things clear, and Googlebot should have no problems interpreting it.

You can find detailed recommendations from Google regarding pagination at the link:

https://developers.google.com/search/docs/advanced/ecommerce/pagination-and-incremental-page-loading

With the third option, however, an error often creeps in when indicating the address of the first page: a pagination parameter is appended to the URL automatically, which, in the case of the base category page, will either trigger a 301 redirect or, worse, create a duplicate.

Example:

Base address of category x (which is also the address of the first page):

https://domena.pl/nazwa_kategorii_x/

Address of the second page of the category:

https://domena.pl/nazwa_kategorii_x/strona/2/

Incorrect indication on the second page:

<link rel="prev" href="https://domena.pl/nazwa_kategorii_x/strona/1/">

The correct link on the second page should point to the first page like this:

<link rel="prev" href="https://domena.pl/nazwa_kategorii_x/">
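If the pagination tags are generated programmatically, the fix comes down to a special case for page 1. A minimal sketch in Python following the URL pattern from the example above:

```python
# A minimal sketch of building rel="prev"/rel="next" links for a paginated
# category, following the URL pattern from the example above.

def page_url(base_url: str, page: int) -> str:
    # Page 1 must be the bare category URL, never base_url + "strona/1/".
    if page <= 1:
        return base_url
    return f"{base_url}strona/{page}/"

def pagination_links(base_url: str, current: int, last: int) -> list[str]:
    # Build the <link> tags for the <head> of the current page.
    links = []
    if current > 1:
        links.append(f'<link rel="prev" href="{page_url(base_url, current - 1)}">')
    if current < last:
        links.append(f'<link rel="next" href="{page_url(base_url, current + 1)}">')
    return links

# On page 2, rel="prev" correctly points at the bare category address:
print("\n".join(pagination_links("https://domena.pl/nazwa_kategorii_x/", 2, 10)))
```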

Incorrectly entered pagination tags can be found, for example, in the deezee.pl store. On the second page of the “Boots” category, the <head> section indicates a wrong address for the first page, while the primary address of the first page is https://deezee.pl/botki.

An analogous error can occur in the pagination menu, in the link to the first page. An example is the ebutik.pl store, which correctly indicates the links to the previous and next pages in the <head> section; in the pagination menu, however, the link to the first page contains a parameter.

4. Migrating from a subdomain to a domain? Remember to redirect both http:// and https:// addresses

When you migrate from a subdomain to a domain (e.g., from sklep.domena.pl to domena.pl), you usually set up 301 redirects so that every address points to its counterpart. Sometimes, however, on sites without SSL, webmasters forget to redirect the addresses under the secure https:// protocol.

Since the https:// addresses were never in use, they are often not even checked. It is worth remembering that once an SSL certificate is installed, the old subdomain addresses under the https:// protocol may get indexed by Google and become duplicates of the correct addresses.
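A quick way to verify such a migration is to request both protocol variants of a few old subdomain addresses and confirm that each returns a 301 pointing to its counterpart in the main domain. A minimal sketch; the sample paths are hypothetical:

```python
# A minimal sketch: check that both http:// and https:// variants of old
# subdomain URLs 301-redirect to the main domain. Paths are hypothetical.
import requests

OLD_HOST = "sklep.domena.pl"
PATHS = ["/", "/kategoria/", "/produkt-1/"]  # sample URLs to spot-check

for scheme in ("http", "https"):
    for path in PATHS:
        url = f"{scheme}://{OLD_HOST}{path}"
        try:
            r = requests.head(url, allow_redirects=False, timeout=10)
            print(f"{url} -> {r.status_code} {r.headers.get('Location', '-')}")
        except requests.RequestException as exc:
            print(f"{url} -> request failed: {exc}")
```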

Elements of an SEO audit that tend to be forgotten – summary

There are undoubtedly more elements we can forget, but I’ve decided to leave out the most basic ones like “no H1 header” and “duplicate Title”.

It is always a good idea to crawl all audited pages with a solid tool that will visit every available subpage and check those basic aspects for us.

And what about you: which elements of SEO do you consider important yet often overlooked by specialists? Feel free to discuss in the comments.


Jakub Zeid

SEO Specialist at the Kraków-based agency DevaGroup. A graduate of Computer Science and Econometrics at the AGH University of Science and Technology.
