SEO specialists who audit websites, online stores or portals know that no two cases are identical. Each one brings its own problems, inconsistencies and other obstacles to efficient optimization.
Typical issues that appear across many websites are easier to spot and usually stem from script or template errors. It’s harder to identify errors arising from individual actions of the client, webmaster or copywriter. Below are four elements you may have forgotten about during your last website audit or optimization.
External 404 errors
Speaking of errors, 404 errors can’t be overlooked. Checking them is an essential part of any SEO audit, yet recommendations are often based solely on a website scan made with a dedicated tool (Screaming Frog is one of them). Such a report covers only the URLs the crawler discovers while navigating the website.
In addition to the 404 pages you find this way, look for URLs that existed on your website in the past. Search the web for outdated or incorrect URLs pointing to your site.
To name a few, have a look at these useful tools:
– Google Search Console; the “Index -> Status” tab will show errors spotted by Googlebot
– Majestic, specifically the “Pages” tab, where you’ll find all URLs discovered by Majestic – even those returning 3XX redirects or 4XX errors:

This URL returns a 404 error, yet 19 backlinks from two domains point to it. URLs with many backlinks like this turn up from time to time; you can make use of them by redirecting them to the right URL or by creating an additional page at that address. Another example is a URL in a subdomain of nike.com, which has a substantial number of backlinks pointing to it but returns a 410 error:

Yet another example, from the Polish part of the Internet, is a URL in an allegro.pl subdomain:

Interestingly, http://moto.allegro.pl was redirected, but the specialists forgot about the variant with the www prefix.
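If you collect such legacy URLs (for example, exported from Search Console or Majestic), a short script can check their status codes in bulk. Below is a minimal sketch in Python; the file name old_urls.txt and the use of the requests library are my assumptions for illustration, not part of any of the tools mentioned above:

# Sketch: check HTTP status codes of legacy URLs exported from a backlink tool.
# The file name "old_urls.txt" is hypothetical - one URL per line.
import requests

with open("old_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # Don't follow redirects, so 301/302 targets stay visible.
        # Switch to requests.get() if a server rejects HEAD requests.
        response = requests.head(url, allow_redirects=False, timeout=10)
        status = response.status_code
        location = response.headers.get("Location", "")
        if status in (404, 410):
            print(f"{status}  {url}  <- candidate for a 301 redirect or a new page")
        elif 300 <= status < 400:
            print(f"{status}  {url}  -> {location}")
    except requests.RequestException as exc:
        print(f"ERROR  {url}  ({exc})")

URLs that answer with 404 or 410 and still have a meaningful number of backlinks are the ones worth redirecting or rebuilding.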
Visibility of website elements – Googlebot vs. an ordinary user
Website creators increasingly use JavaScript to display content in a more eye-catching and user-friendly way.
Occasionally, however, content that is visible to the user after opening a page is completely invisible to the search engine crawler. Not only is Googlebot unable to render it properly, but such content isn’t even present in the HTML response the crawler receives.
In one online store, the developers who created the template decided to load the product lists on category pages with JavaScript. The code responsible for displaying them ran only after the page had loaded, so the crawler never came across the links to the product pages. It did find them in the sitemap and indexed them, but the use of JavaScript most likely kept them from ranking higher, since they weren’t supported by internal linking.
To avoid similar mistakes, always open the website you’re auditing in a browser with JavaScript disabled. If you use Google Chrome, give the JavaScript Toggle On and Off extension a go.
What should you pay attention to? Well, “it depends”, because as I’ve already mentioned, every website is different; nevertheless, check whether the following work properly (a quick automated check is sketched after this list):
– All the menu/navigation elements
– Lists on category pages (products, entries, etc.)
– Sliders/carousels.
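Beyond eyeballing the page with JavaScript disabled, you can verify whether a given element exists in the raw HTML at all. The sketch below is only an illustration under assumptions: it fetches a category page with Python’s requests library (which does not execute JavaScript) and counts links matching a product-URL pattern; the example URL and the "/product/" path fragment are made up and should be replaced with the audited site’s own values:

# Sketch: list links present in the raw HTML response (no JavaScript execution)
# and compare the count with what a browser shows. The URL and the "/product/"
# pattern are hypothetical placeholders.
import requests
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if "/product/" in href:  # adjust to the site's URL pattern
                self.links.append(href)

html = requests.get("https://example-store.pl/category/", timeout=10).text
collector = LinkCollector()
collector.feed(html)
print(f"Product links found in the raw HTML: {len(collector.links)}")

If the script reports zero links while the browser shows a full product list, the listing is almost certainly injected by JavaScript after the page loads, which is exactly the situation described above.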
Pagination tags and the first-page URL
Another error that SEO auditors tend to overlook is a wrong URL for the first page in pagination tags. When content is divided into pages, Google recommends one of the following three options:
- “Do nothing; the crawler will manage and correctly interpret the content of your website” – sometimes that’s true, sometimes it isn’t, which is why I don’t recommend this option.
- Add a “View all” page, link to it from every paginated page and add a canonical tag on each paginated page pointing to the page with all the products.
- Use link elements in the head section or HTTP headers with rel="prev" and rel="next" attributes, indicating the previous and the next page on every page of a given category; this makes it easier for crawlers to crawl the website. I find this the best solution because it’s clear and Googlebot shouldn’t have any problems interpreting it.
Follow this link to read Google’s guidelines on pagination:
https://developers.google.com/search/docs/advanced/ecommerce/pagination-and-incremental-page-loading
However, the third option carries the risk of presenting the first page incorrectly. By default, a pagination parameter is appended to the URL, which in the case of the main category page will either trigger a 301 redirect or, even worse, generate a duplicate.
An example:
The main URL of category x (being simultaneously the URL of the first page):
https://domain.pl/category_name_x/
The URL of the second category page:
https://domain.pl/category_name_x/page/2/
Incorrect markup on the second page:
<link rel="prev" href="https://domain.pl/category_name_x/page/1/">
To be correct, the link element on the second page should point to the following URL for the first page:
<link rel="prev" href="https://domain.pl/category_name_x/">
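The rule is simple to encode: the previous page is the category root when the current page is page 2, and /page/N-1/ otherwise. A minimal sketch below follows the URL pattern from the example; the function name and parameters are mine, not taken from any particular CMS:

# Sketch: build rel="prev"/rel="next" link tags for a paginated category.
# The URL pattern matches the example above; the function is hypothetical.
def pagination_links(category_url, page, last_page):
    tags = []
    if page > 1:
        # Page 1 is the category root itself, never .../page/1/
        prev_url = category_url if page == 2 else f"{category_url}page/{page - 1}/"
        tags.append(f'<link rel="prev" href="{prev_url}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{category_url}page/{page + 1}/">')
    return tags

print(pagination_links("https://domain.pl/category_name_x/", 2, 10))
# ['<link rel="prev" href="https://domain.pl/category_name_x/">',
#  '<link rel="next" href="https://domain.pl/category_name_x/page/3/">']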
Incorrect pagination tags can be found, for example, in the deezee.pl store, where the <head> section of the second page of the “boots” (Polish: “botki”) category looks as follows:

while the main URL of the first page is https://deezee.pl/boots.
An analogous error can appear in the first-page URL used in the pagination menu itself. An example is the ebutik.pl store, where the URLs to the previous and next pages in the <head> section are correct:

but the URL to the first page in the pagination menu contains the following parameter:

Are you migrating from a subdomain to a domain? Remember to redirect both http:// and https:// URLs
After a subdomain-to-domain migration (e.g. from store.domain.pl to domain.pl), a 301 redirect is usually set up so that all URLs lead to their corresponding pages. However, on websites without an SSL certificate, webmasters often forget to redirect the URLs with the secure https:// scheme as well.
Because https:// URLs were not commonly used until recently, they’re often overlooked and not even checked. Remember that if you later decide to install an SSL certificate, the old https:// URLs on the subdomain may get indexed in Google and become duplicates of the correct URLs.
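A quick way to verify such a migration is to request a few old subdomain URLs over both schemes and confirm that each returns a 301 pointing at the new domain. A minimal sketch, assuming a hypothetical store.domain.pl -> domain.pl migration and a handful of sample paths:

# Sketch: verify that old subdomain URLs redirect over both http:// and https://.
# The host name and paths are hypothetical examples.
import requests

old_host = "store.domain.pl"
sample_paths = ["/", "/category_name_x/", "/contact/"]

for scheme in ("http", "https"):
    for path in sample_paths:
        url = f"{scheme}://{old_host}{path}"
        try:
            response = requests.get(url, allow_redirects=False, timeout=10)
            target = response.headers.get("Location", "-")
            print(f"{response.status_code}  {url}  -> {target}")
        except requests.RequestException as exc:
            print(f"ERROR  {url}  ({exc})")

Anything other than a 301 pointing to the new domain, on either scheme, is a leftover that may get indexed and duplicate the correct URLs.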
Summary
There are certainly more things you may forget about, but I chose not to mention the most obvious ones, such as a missing H1 heading or duplicate titles.
It’s worth scanning every audited website with a good tool that goes through all the available pages and checks the basics. What are your experiences?
What do you think is an essential SEO aspect often overlooked by experts? Share your opinions in the comment section below!