What SEO issues does Netpeak Spider detect?


Broken Pages

Indicates unavailable URLs (e.g. due to a connection failure, exceeded response timeout, etc.) or URLs returning 4xx and higher HTTP status codes. To view a special report on broken links, press the 'Issue report' button over the main table.

Threat

Broken pages are the URLs unavailable for users and search engines (e.g. they were deleted, the server cannot process the request, etc.).

When landing on such addresses, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and decrease its rankings in search results. Additionally, search robots waste crawling resources on broken pages, so important pages may not receive enough of those resources and may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Delete links to broken pages or replace them with links to available addresses. Click the 'Issue report' button above the main table to see all links to broken pages.

If many URLs return the 429 or 5xx status codes or time out during crawling, they might be unavailable due to a high load on the site. In that case, pause the crawling, decrease the number of threads in the settings, set a delay between requests, or use a list of proxies, and then continue crawling. When crawling is complete, recrawl the unavailable URLs: select them in the table and use the Ctrl+R shortcut.
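
If you prefer to verify the recrawl results outside the program, the following is a minimal sketch (assuming Python with the 'requests' package installed; the URLs are placeholders) that rechecks a list of addresses with a delay between requests to keep the server load low.

import time
import requests

urls_to_recheck = [
    "https://example.com/page-1",
    "https://example.com/page-2",
]

for url in urls_to_recheck:
    try:
        response = requests.get(url, timeout=30)
        print(url, response.status_code)
    except requests.RequestException as error:
        print(url, "unavailable:", error)
    time.sleep(1)  # delay between requests to reduce the server load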


Useful Links


4xx Error Pages: Client Error

Indicates URLs returning a 4xx HTTP status code.


Threat

URLs returning a 4xx status code are included in the 'Broken pages' report and also singled out in the '4xx Error Pages: Client Error' report since they are quite widespread. This status code means that an error occurred while processing the request (e.g. the page does not exist on the site, it has been deleted, or the user has no rights to access it).

Landing on such addresses, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and decrease its rankings in search results. Additionally, search robots waste crawling resources on broken pages, so important pages may not receive enough of those resources and may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Delete links to URLs with 4xx response code or replace them with links to available pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.

If many URLs return the 429 status code during crawling, they might be unavailable due to a high load on the site. In that case, pause the crawling, decrease the number of threads in the settings, set a delay between requests, or use a list of proxies, and then continue crawling. When crawling is complete, recrawl the unavailable URLs: select them in the table and use the Ctrl+R shortcut.


Useful Links


5xx Error Pages: Server Error

Indicates URLs that return a 5xx HTTP status code.

Threat

URLs returning a 5xx status code are included in the 'Broken pages' report, and also singled out in the '5xx Error Pages: Server Error' report since they are quite widespread. This status code means that the server cannot process the request.

Landing on such addresses, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and decrease its rankings in search results. Also, if many URLs return a 5xx status code while a search robot crawls the website, the crawling speed can drop drastically, so important pages may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Determine why the URLs are unavailable: e.g. their status code may be set incorrectly. If that is the case, adjust URL settings so the pages return the 200 OK status code.

If many URLs return 5xx status codes during crawling, they might be unavailable due to a high load on the site. In that case, pause the crawling, decrease the number of threads in the settings, set a delay between requests, and then continue crawling. When crawling is complete, recrawl the unavailable URLs: select them in the table and use the Ctrl+R shortcut.


Useful Links


Links with Bad URL Format

Indicates pages that contain links with bad URL format. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Neither users nor search robots can open a URL with a bad format because the address is unavailable.

Following such non-working links, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and decrease its rankings in search results. As a result, the site may lose search traffic.


How to Fix

Usually, this issue occurs due to spelling mistakes (typos in a web protocol, incorrect slash symbol, etc.) or extra characters in link addresses.

To check which links have a bad URL format, click the 'Issue report' button over the main table. Fix these links so that they lead to available addresses, or delete them from the source code.
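
As a rough illustration of what 'bad URL format' means, the sketch below (Python standard library only; the example links are hypothetical) flags addresses with a mistyped protocol, a missing host, or invalid characters.

from urllib.parse import urlparse

links = [
    "https://example.com/page",
    "htps://example.com/page",      # typo in the web protocol
    "https:/example.com\\page",     # incorrect slash symbols
    "https://example.com/pa ge",    # extra space character
]

for link in links:
    parsed = urlparse(link)
    problems = []
    if parsed.scheme not in ("http", "https"):
        problems.append("unknown protocol")
    if not parsed.netloc:
        problems.append("missing host")
    if any(char in link for char in (" ", "\\", '"', "<", ">")):
        problems.append("invalid characters")
    print(link, "->", ", ".join(problems) or "looks fine")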


Useful Links


Duplicate Pages

Indicates compliant pages whose entire HTML code is identical to that of other pages. URLs in this report are grouped by the 'Page Hash' parameter.

Threat

Duplicate pages occur when the same page is available at different addresses: for instance, addresses with/without www, with different protocols (http/https), with/without the trailing '/' symbol, etc.

It's difficult for search engines to determine which address among duplicates to index and show in search results. As a result, less important pages may appear higher in search results. It may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.

Big sites may particularly suffer from duplicates: search robots might waste all crawling resources on them and there will be nothing left for important pages. As a result, many important pages may not get into the search index, and the site will lose traffic. If there are lots of duplicate pages, search engines might lower rankings of the whole site (for instance, this is how the Google Panda algorithm works).


How to Fix

Define the main URL among duplicates and set the 301 redirect to this URL. For useless URLs (e.g. /index.php and /index.html) it's also OK to set the 404 or 410 status code. This being said, remember not to use links to redirects and unavailable pages on a site.

If duplicates cannot be fixed using previous methods or these URLs have to be on a site (for instance, addresses with parameters for web analytics), specify the main URL for them using the <link rel="canonical"> tag or the 'Link: rel="canonical"' HTTP response header.
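
The 'Page Hash' grouping is based on hashing the full HTML of each page. Netpeak Spider's exact hashing is internal, but the following sketch (assuming Python with the 'requests' package; the URLs are placeholders) illustrates the general idea: pages whose HTML produces the same hash are duplicates.

import hashlib
from collections import defaultdict
import requests

urls = [
    "https://example.com/",
    "https://example.com/index.php",   # hypothetical duplicate of the homepage
]

groups = defaultdict(list)
for url in urls:
    html = requests.get(url, timeout=30).text
    page_hash = hashlib.md5(html.encode("utf-8")).hexdigest()
    groups[page_hash].append(url)

for page_hash, members in groups.items():
    if len(members) > 1:
        print("Duplicates:", members)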


Useful Links


Duplicate Text

Indicates all compliant pages that have the same text content in the <body> section. URLs in this report are grouped by the 'Text Hash' parameter.

Threat

Search engines may consider pages with identical text in the <body> section to be duplicates even if the content of the <head> section (e.g. <title>, <meta name='description'>) is different.

It's difficult for search engines to determine which address among duplicates to index and show in search results. As a result, less important pages may appear higher in search results. It may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.

Big sites may particularly suffer from duplicates: search robots might waste all crawling resources on them and there will be nothing left for important pages. As a result, many important pages may not get into the search index, and the site will lose traffic. If there are lots of duplicate pages, search engines might lower rankings of the whole site (for instance, this is how the Google Panda algorithm works).


How to Fix

Create unique content for all important pages. For less important pages set the 301 redirect to the main URL (recommended) or delete them (set the 404 or 410 status codes). This being said, remember not to use links to redirects and unavailable pages on a site.

If duplicates cannot be fixed using previous methods or these URLs have to be on a site (for instance, addresses with parameters for web analytics), specify the main URL for them using the <link rel="canonical"> tag or the 'Link: rel="canonical"' HTTP response header.


Useful Links


Contains Lorem Ipsum

Indicates pages with text that contains the 'Lorem ipsum' phrase.

Threat

'Lorem ipsum' is placeholder text commonly used in page layouts at the development stage. Sometimes it is not replaced after the site or individual pages are published. Since this text has no value for users, search engines may consider a page containing it low-quality, so its ranking in search results may decrease.

How to Fix

Replace the template text with the content relevant to the page.

Useful Links


Duplicate Titles

Indicates all compliant pages with duplicate <title> tag content. URLs in this report are grouped by the 'Title' parameter.

Threat

A title tag is an essential element of search engine optimization, and its content is often used in the headline of a search result. Title tags of different pages are duplicate if their content is identical. It usually happens when pages are not optimized yet, and title tag content is generated automatically using low-quality templates.

If the title tag of many pages is identical, search engines find it difficult to decide which page among the duplicates to show in search results. When pages containing duplicate title tags are shown next to each other in search results, a user will struggle to differentiate between them and decide which one to visit. That's why pages with duplicate title tags might compete with each other for search rankings or even not be shown in search results for important keywords at all.

If the title tag is duplicate and not relevant to the page content, a search engine might also create its own title for a search result which may be unattractive to users. As a result, the site may lose search traffic.


How to Fix

Create a unique (within the site) title tag for each page that concisely describes its content and contains target keywords. It should be brief and informative: the optimal length is from 40 to 70 characters, and the maximum is 140 characters.
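
To spot-check titles outside the program, here is a small sketch (assuming Python with the 'requests' and 'beautifulsoup4' packages; the URLs are placeholders) that collects <title> values, flags duplicates, and checks the recommended length.

from collections import defaultdict
import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/", "https://example.com/about"]  # placeholder URLs

titles = defaultdict(list)
for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    titles[title].append(url)
    if not 40 <= len(title) <= 70:
        print("Check title length:", url, len(title))

for title, pages in titles.items():
    if len(pages) > 1:
        print("Duplicate title:", repr(title), pages)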

Useful Links


Duplicate Descriptions

Indicates all compliant pages with duplicate <meta name="description" /> tag content. URLs in this report are grouped by the 'Description' parameter.

Threat

A meta description tag is an essential element of search engine optimization. Its content is often used as the description of a search result, so users could understand the essence of a page better.

Meta description tags of different pages are duplicate if their content is identical. It usually happens when pages are not optimized yet, and meta description content is generated automatically using low-quality templates.

When pages containing duplicate meta descriptions are shown next to each other in search results, a user will find it difficult to differentiate pages with identical descriptions. Furthermore, if the description is duplicate and does not correspond to the search query for a current page, search engines may change it themselves according to the page text, and it might be inappropriate and unattractive for users.

This may negatively affect click-through rate of the page in search, thus its traffic.


How to Fix

Create a unique (within the site) description for each page that concisely describes its content, contains target keywords, and induces users to visit the site. It should be brief and informative: the optimal length is from 100 to 160 characters.

Useful Links


Duplicate H1

Indicates all compliant pages with duplicate <h1> heading tags content. URLs in this report are grouped by the 'H1 Content' parameter.

Threat

An H1 heading is an important element of search engine optimization. It helps users understand the contents of a page while visiting the site.

H1 tags of different pages are duplicate if their content is identical. It usually happens when pages are not optimized yet, and H1 headings are generated automatically using low-quality templates.

If the H1 heading is duplicate, it may not fully reveal the essence of a page. In this case, users and search engines may consider the site low-quality. As a result, it may lose search traffic.


How to Fix

Create a unique (within the site) H1 for each page that concisely describes its content and contains target keywords. It should be brief and informative: the optimal length is from 3 to 7 words.

It's also recommended to have only one H1 on a page, not duplicating contents of the <title> tag.


Useful Links


Missing or Empty Title

Indicates all compliant pages without the <title> tag or with an empty one.

Threat

A title tag is an essential element of search engine optimization, and its content is often used in the headline of a search result.

If the title tag is empty or missing, search engines find it more difficult to understand the essence of a page and evaluate its relevance to the target keywords. In this case, a search engine might also create its own title for a search result which may be inappropriate and unattractive to users. As a result, the site may lose search traffic.


How to Fix

If there is no <title> tag in the source code of a page, check the website settings and enable adding this tag to website pages.

Create a unique (within the site) title for each page that concisely describes its content and contains target keywords. It should be brief and informative: the optimal length is from 40 to 70 characters, and the maximum is 140 characters.


Useful Links


Missing or Empty Description

Indicates all compliant pages without the <meta name="description" /> tag or with an empty one.

Threat

A meta description tag is an essential element of search engine optimization. Its content is often used as the description of a search result, so users could understand the essence of a page better.

If the description is missing on a page, search engines may change it themselves according to the page text, and it might be inappropriate and unattractive for users. As a result, the site may lose search traffic.


How to Fix

If there is no <meta name="description" /> tag in the source code of a page, check the website settings and enable adding this tag to website pages.

Create a unique (within the site) description for each page that concisely describes its content, contains target keywords, and induces users to visit the site. It should be brief and informative: the optimal length is from 100 to 160 characters.


Useful Links


Broken Redirect

Indicates addresses of the pages that redirect to unavailable URLs (e.g. due to a connection failure, timeout, etc.) or URLs returning 4xx and higher HTTP status codes.

Threat

A broken redirect is equivalent to a broken link. It leads to URLs unavailable for users and search engines (e.g. they were deleted, the server cannot process the request, etc.).

Landing on such addresses, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and decrease its rankings in search results. Additionally, search robots waste crawling resources on broken pages, so important pages may not receive enough of those resources and may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Delete internal links to redirects from your site or replace them with links to available pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.

After that, recheck the settings of every redirect. There might be mistakes in the final redirect URL, which is why it is unavailable. If the redirect is set correctly, the final URL may have been deleted; in this case, restore the unavailable address or make it return the 200 OK status code.
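
A quick way to recheck where a redirect ends up is to follow it and look at the final URL and status code. A minimal sketch (assuming Python with the 'requests' package; the URL is a placeholder):

import requests

redirecting_urls = ["https://example.com/old-page"]  # placeholder URL

for url in redirecting_urls:
    try:
        response = requests.get(url, timeout=30)  # redirects are followed by default
        print(url, "->", response.url, response.status_code)
    except requests.RequestException as error:
        print(url, "-> unavailable:", error)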


Useful Links


Redirects with Bad URL Format

Indicates addresses of the pages that return a redirect with bad URL format in HTTP response headers.

Threat

A redirect with a bad URL format is equivalent to a broken link. It leads to wrong URLs that are unavailable for users and search engines.

Landing on such URLs, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and decrease its rankings in search results. Additionally, search robots waste resources on broken pages, so important pages may not receive enough of those resources and may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Delete internal links to redirects or replace them with links to available pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.

After that, recheck settings of every redirect and correct mistakes in the final redirect URL.


Useful Links


Endless Redirect

Indicates page addresses ultimately redirecting to themselves and thereby generating an infinite redirect loop.

Threat

An endless redirect is equivalent to a broken link. The URL redirects to itself (directly or via a redirect chain), which leads to an endless redirect loop. Browsers and search engines perceive such a redirection as an unavailable address.

Landing on such URLs, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to broken pages, search engines may also consider it low-quality and drop its rankings in search results. Additionally, search robots waste crawling resources on broken pages, so important pages may not receive enough of those resources and may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Delete internal links to redirects or replace them with links to available pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.

After that, recheck settings of every redirect and remove redirects or redirect chains leading to the initial URL.


Useful Links


Max Redirections

Indicates addresses of the pages that redirect more than 4 times (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

A redirect chain occurs when several URLs alternately redirect the user. For example, a chain with two redirects looks like this: URL 1 → URL 2 → URL 3.

Long redirect chains cause slower page loading, so users may consider the site low-quality and leave it.

Also, long redirect chains make it difficult for search robots to crawl the site which may cause issues with adding pages to the search index. If the chain contains more than 5 redirects, Googlebot does not crawl the final URL at all. As a result, the site might get low rankings and lose search traffic.


How to Fix

Delete internal links to redirects or replace them with links to available addresses. Use the Shift+F1 shortcut to see all incoming links for such URLs.

Then, set up redirection straight to the final URL without chains: for example, instead of the chain URL 1 → URL 2 → URL 3, set a straight redirection URL 1 → URL 3 and URL 2 → URL 3. The 'Redirect chains' special report will show all such chains: just press 'Export' in the program menu → 'Issue reports' → 'Redirect Chains'.
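
To see every hop in a chain rather than just the final URL, you can follow redirects manually. A minimal sketch (assuming Python with the 'requests' package; the starting URL is a placeholder):

from urllib.parse import urljoin
import requests

url = "https://example.com/start"  # placeholder URL
hops = 0
while hops < 10:  # safety limit against endless redirect loops
    response = requests.get(url, timeout=30, allow_redirects=False)
    location = response.headers.get("Location")
    if 300 <= response.status_code < 400 and location:
        next_url = urljoin(url, location)
        print(f"{url} -> {next_url} ({response.status_code})")
        url = next_url
        hops += 1
    else:
        print("Final URL:", url, response.status_code)
        break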


Useful Links


Redirect Blocked by Robots.txt

Indicates addresses of the pages that return a redirect to a URL blocked by robots.txt. Note that the report will contain each URL from the redirect chain pointing to the blocked address. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

A redirect blocked by robots.txt occurs when the first URL is allowed for crawling, and the final address of the redirect or any other URL in the redirection chain is disallowed. In this case, the search robot follows the link, but cannot continue crawling, because it hits the blocked address.

Such redirects have a negative effect on site crawling and waste link weight. As a result, important pages may receive less link weight and lower priority for the search robot, so they will be crawled less frequently, get lower positions in search, and drive less traffic than they could.


How to Fix

Delete internal links to redirects or replace them with links to available addresses. Use the Shift+F1 shortcut to see all incoming links for such URLs.

After that, recheck the redirect settings: there may be an issue with the final redirect URL. If there are no issues, block the initial redirect URL in robots.txt so that the search robot does not crawl it.
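
Whether a particular URL is blocked by robots.txt can be checked with the robots.txt parser from the Python standard library. A minimal sketch (the site and the redirect target are placeholders):

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # placeholder site
parser.read()

final_redirect_url = "https://example.com/blocked-page"  # placeholder redirect target
if not parser.can_fetch("*", final_redirect_url):
    print("Redirect target is blocked by robots.txt:", final_redirect_url)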


Useful Links


Non-Compliant Canonical URL

Indicates pages with a link to non-compliant URL in the <link rel="canonical" /> tag or HTTP response header 'Link: rel="canonical"'.

Threat

When a page is available via several addresses or a site has multiple pages with similar content, search engines recommend specifying which URL is the main one and has to be shown in search using the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP header. This way, search robots will not waste their resources on crawling duplicate or unimportant pages.

If a canonical URL points to a non-compliant page, search engines will ignore it. As a result, unimportant pages might get higher rankings in search. It may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.

Note that canonical is a hint for search engines, not a strong directive. Also, it should be used only for pages with very similar content. Otherwise, it may be ignored.


How to Fix

Replace non-compliant canonical URLs with addresses of the compliant pages.

Useful Links


Canonical Chain

Indicates pages starting a canonical chain (when the canonical URL points to a page that, in turn, specifies another URL in <link rel="canonical" /> or the 'Link: rel="canonical"' HTTP response header) or taking part in it. To view detailed information, open the additional 'Canonicals' table in the 'Database' menu.

Threat

A canonical URL that takes part in a canonical chain (e.g. URL 1 → URL 2 → URL 3) and points to a canonicalized URL may not be taken into account by a search robot during website crawling. This may cause duplicates on the website, and therefore traffic loss.

How to Fix

In the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP response header, set addresses that do not point to other URLs in this tag or header.

To detect URLs taking part in canonical chains, press 'Export' in the program menu → 'Issue reports' → 'Canonical chains' and export the report.
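
A canonical chain can also be traced manually by following rel="canonical" values from page to page. The sketch below (assuming Python with the 'requests' and 'beautifulsoup4' packages; it reads only the tag and ignores the HTTP header for brevity; the URL is a placeholder) reports a chain longer than one step.

from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def get_canonical(url):
    # Returns the canonical URL declared in the <link rel="canonical"> tag, if any.
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    for link in soup.find_all("link"):
        if "canonical" in (link.get("rel") or []) and link.get("href"):
            return urljoin(url, link["href"])
    return None

url = "https://example.com/page-a"  # placeholder URL
chain = [url]
while True:
    canonical = get_canonical(chain[-1])
    if not canonical or canonical in chain:
        break
    chain.append(canonical)

if len(chain) > 2:
    print("Canonical chain:", " -> ".join(chain))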


Useful Links


Broken Images

Indicates unavailable images (e.g. due to a connection failure or a timeout), as well as the ones returning 4xx and higher HTTP status codes. Note that image checking must be enabled on the 'General' tab of crawling settings to detect this issue. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Broken images might occur when image files are deleted or unavailable (e.g. an image URL returns the 5xx status code or other server errors).

These images are not displayed on a page, and users see blank spaces instead, so they might consider the site low-quality and leave it. Also, it's a negative website quality signal for search engines which can result in lower rankings and traffic loss.


How to Fix

Replace broken images with available ones or delete them from the source code. To detect pages with broken images, press the 'Issue report' button over the main table.

Also, images can be unavailable due to a high load on a website during crawling. In this case, reduce the load (decrease the number of threads and set a delay between requests in crawling settings), finish the crawling, and recrawl all unavailable URLs (select them in the table and press the Ctrl+R shortcut).


Useful Links


Links to Localhost

Shows pages with external links to the 127.0.0.1 address or localhost. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Localhost is a name reserved for the host of the current computer. It corresponds to the IP address 127.0.0.1. Links to this host are used during website development and are then replaced with the main host.

Links to localhost addresses may remain after the site was published from a local environment. They are unavailable for users and search engines. Landing on such addresses, users see an error page instead of the desired content, so they may consider the site low-quality and leave it.

When the site has lots of links to unavailable addresses, search engines may also consider it low-quality and decrease its rankings in search results. Additionally, search robots waste resources on unavailable addresses, so important pages may not receive enough of those resources and may not get into the search index. As a result, the site may lose search traffic.


How to Fix

Replace links to localhost with links to available addresses. To view such links, press the 'Issue report' button over the main table.

Useful Links


PageRank: Dead End

Indicates HTML pages that were marked by the internal PageRank algorithm as 'dead ends'. These are pages that have incoming links but no outgoing ones, or whose outgoing links are blocked by crawling instructions.

Threat

Internal linking is the most efficient method to show search engines the relative importance of website pages (link weight). If a page has no outgoing links or they are blocked for search robots, it cannot pass link weight to other pages of the site. As a result, because of 'dead end' pages, useful ones may get less link weight, lower rankings, and therefore less traffic.

How to Fix

Replace or delete links pointing to unavailable URLs (e.g. due to connection failure, exceeded response timeout, etc.), or the ones returning 4xx and higher HTTP status codes.

Reduce the number of links to internal website pages with the 'nofollow' directive in Meta Robots and X-Robots-Tag. Use the Shift+F1 shortcut to see all links pointing to 'dead ends'.


Useful Links


Missing Internal Links

Indicates HTML pages that have incoming links, but do not contain internal outgoing links.

Threat

Internal linking is the most efficient method to show search engines the relative importance of website pages (link weight). If the page has no outgoing links, it cannot pass link weight to other website pages. This issue mostly happens when pages are not optimized yet and are generated using low-quality templates.

As a result, because of such 'dead end' pages, useful ones may get less link weight, lower rankings, and therefore less traffic.


How to Fix

Add internal outgoing links to all key pages of your site.

It's important to make sure that these HTML documents are complete content pages. Delete all incomplete pages or the ones created by mistake (use the 404 or 410 status codes) and all incoming links pointing to them. Use the Shift+F1 shortcut to see all links pointing to pages without internal outgoing links.


Useful Links


Bad AMP HTML Format

Indicates AMP HTML documents that do not meet the AMP Project documentation standards. Note that there are at least eight markup requirements for each AMP HTML page.

Threat

Pages with a bad AMP format will not be processed by search engines, will not be shown in search results, and will not drive traffic.

How to Fix

Use an appropriate format of AMP pages.

Useful Links


Contains Flash

Indicates pages with HTML containing elements with the .swf extension or the 'SWFObject' scripts.

Threat

Flash is an outdated and insecure technology. Flash elements are not supported or are blocked by most browsers, so users won't be able to see the content inside these elements.

How to Fix

Delete any code that calls or executes Flash from the pages.

Useful Links


Hreflang: Missing Self-Reference

Indicates HTML pages or PDF files containing links with hreflang attributes to other pages but not to the current URL in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"'.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

Search engines recommend providing a link with the hreflang instruction to the current URL. If such a link is missing, search engines may fail to identify the language and region properly. This may lead to low rankings in search results and traffic loss.


How to Fix

Every page with this issue must contain a link with the hreflang instruction to the current URL. If all the instructions are deliberately implemented without a link to the current URL, ignore this issue.

Useful Links


Hreflang: Incorrect Language Codes

Indicates HTML pages or PDF files containing links in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"' with hreflang language code non-compliant with ISO 639-1, ISO 3166-1 Alpha 2 and ISO 15924. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

Search engines will not process a language code instruction (specifying language and region) non-compliant with standards. It may lead to low search results rankings and traffic loss.


How to Fix

Apply language code (it can specify language and region) according to ISO 639-1, ISO 3166-1 Alpha 2, and ISO 15924 standards to the pages with this issue.

Useful Links


Hreflang: Relative Links

Indicates HTML pages or PDF files containing relative links with hreflang attributes in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"'. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

Search engines will not process instructions with relative URLs (for example, <link rel="alternate" hreflang="en" href="/page" />). It may lead to low search results rankings and traffic loss.


How to Fix

Replace all relative URLs with absolute ones. For example, replace <link rel="alternate" hreflang="en" href="/page" /> with <link rel="alternate" hreflang="en" href="https://site.com/page" />.
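
The sketch below (assuming Python with the 'requests' and 'beautifulsoup4' packages; the page URL is a placeholder) finds hreflang links with relative href values and prints the absolute URL they should use instead.

from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

page_url = "https://site.com/page"  # placeholder URL
soup = BeautifulSoup(requests.get(page_url, timeout=30).text, "html.parser")

for link in soup.find_all("link", hreflang=True):
    href = link.get("href", "")
    if "alternate" in (link.get("rel") or []) and not urlparse(href).netloc:
        print(f'Relative hreflang href "{href}" should be "{urljoin(page_url, href)}"')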

Useful Links


Hreflang: Duplicate Language Codes

Indicates HTML pages or PDF files containing links to several different URLs with the same hreflang value in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"'. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

Search engines will most likely ignore several instructions with the same language codes (specifying language and region) and different target URLs. It may lead to low search results rankings and traffic loss.


How to Fix

Define the main target URL for specific language code among duplicates and delete the rest.

Useful Links


Hreflang: Links to Non-Compliant URLs

Indicates HTML pages or PDF files containing links in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"' with hreflang attributes pointing to non-compliant URLs. To view a special report on this issue, press the 'Issue report' button over the main table. Note that to detect this issue you have to enable crawling of Hreflang on the 'Advanced' tab of crawling settings.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

Search engines will ignore links to non-compliant URLs in hreflang instruction. It may lead to low search results rankings and traffic loss.


How to Fix

Delete or replace links to non-compliant URLs.

Useful Links


Hreflang: Missing Confirmation Links

Indicates HTML pages or PDF files that contain outgoing links with hreflang attributes but have no corresponding incoming links with this attribute. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

If page A points to page B, the second one should contain a confirmation link to the first. Otherwise, search engines might misread or misinterpret the attributes. It may lead to low search results rankings and traffic loss.


How to Fix

Add confirmation links with hreflang instruction to the pages where they are missing.

Useful Links


Hreflang: Inconsistent Language Code in Confirmation Links

Indicates HTML pages or PDF files with the language code ('Hreflang language code' parameter) that is not consistent with the language code of incoming hreflang links. To view a special report on this issue, press the 'Issue report' button over the main table. Note that to detect this issue you have to enable crawling of Hreflang on the 'Advanced' tab of crawling settings.

Threat

Hreflang instruction allows you to point the search engines to the localized versions of your pages. It is one of the signals for local ranking in the search results.

If page A points to page B, the second one should contain a confirmation link to the first. At the same time, language code in the confirmation link should correspond with the language code of page A. If language codes of the page and its confirmation links are not the same, a search engine may ignore such instructions. It may lead to low search results rankings and traffic loss.


How to Fix

Fix incoming links with the hreflang instruction whose language code is inconsistent with the language code of the target page.

Useful Links


Long Server Response Time

Indicates addresses of the pages with TTFB (time to first byte) exceeding 500 ms (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Load speed is one of the most important ranking factors for search engines: the faster a page loads, the higher position it may get in search.

Fast page load improves user experience and helps to increase conversion rates. If a page takes a long time to load (especially on mobile devices with lower Internet speed), a user may consider the website low-quality and leave it.

Note that Netpeak Spider measures the server response speed — one of the significant components of the total page load speed.


How to Fix

There are many possible reasons for a slow server response: slow application logic, slow database queries, lack of caching, etc.

Optimize the server response time of the page: it should be 500 milliseconds at most. This can be achieved by optimizing the server configuration, switching to more efficient hosting, or using a CDN to speed up the loading of static files.
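
Server response time can be roughly measured without downloading the page body. A minimal sketch (assuming Python with the 'requests' package; the URL is a placeholder) — note that 'elapsed' approximates the time until the response headers arrive, which is close to but not exactly TTFB:

import requests

url = "https://example.com/"  # placeholder URL
response = requests.get(url, timeout=30, stream=True)  # stream=True: don't download the body yet
ttfb_ms = response.elapsed.total_seconds() * 1000
print(f"Approximate server response time: {ttfb_ms:.0f} ms")
if ttfb_ms > 500:
    print("Consider optimizing the server response time")
response.close()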


Useful Links


Missing or Empty H1

Indicates compliant pages without the <h1> heading tag or with an empty one.

Threat

An H1 heading is an important element of search engine optimization. It helps users understand the contents of a page while visiting the site.

If the H1 of a page is missing or empty, it's a missed opportunity to optimize the page for target keywords. Thereby, pages may get lower search rankings.


How to Fix

Create a unique (within the site) H1 for each page that concisely describes its content and contains target keywords. It should be brief and informative: the optimal length is from 3 to 7 words.

It's also recommended to have only one H1 on a page, not duplicating contents of the <title> tag.


Useful Links


Spelling Mistakes

Shows URLs with misspelled words in one or several text sections (title, description, headings, image alt attributes, all text in the <body> section).

Threat

Users and search engines can consider the content low-quality. The text can also become less relevant to the important search queries, and thus search traffic may decrease.

How to Fix

Fix spelling mistakes.

Note that some words can be spelled correctly, but the tool will consider them misspelled since they are not found in the Windows dictionary. Add such words to your Windows dictionary (using the 'Add to Windows custom dictionary' option in the context menu of the 'Spelling mistakes' table in the database) or to the ignore list in crawling settings. Starting from the next crawl, these words won't be considered misspelled.


Useful Links


Min Content Size

Indicates compliant pages with less than 500 characters (by default) in the <body> section (excluding HTML tags). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Search engines mainly use text content to decide whether a web page matches the search queries. Pages with a larger amount of text often appear higher in search because they cover the topic better.

Usually, pages don't have enough text content when they are not optimized yet and are automatically generated using low-quality templates. They are often very similar or even duplicate and may also not contain enough relevant keywords.

Also, search engines may consider pages low-quality, if most of the text on them is a template layout (menu, header, footer), therefore causing lower positions in search.


How to Fix

Fill important pages with a sufficient amount of useful text content that covers the topic and relevant keywords to the fullest extent.
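
To estimate the amount of text on a page the same way (characters in the <body> section, HTML tags excluded), you can use a sketch like the one below (assuming Python with the 'requests' and 'beautifulsoup4' packages; the URL is a placeholder; script and style text is not filtered out here).

import requests
from bs4 import BeautifulSoup

url = "https://example.com/article"  # placeholder URL
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
body = soup.body
text = body.get_text(" ", strip=True) if body else ""
print(url, "content size:", len(text), "characters")
if len(text) < 500:
    print("Consider adding more useful text content")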

Useful Links


Images Without Alt Attributes

Indicates compliant pages that contain images without an alt attribute or with an empty one. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

An alt attribute is used for ranking in image search and helps search robots define what is shown on the image. Also, well-optimized image alt attributes positively affect ranking of HTML pages where they are located.

Also, if an image fails to load or images are disabled in the browser, users will see the alt attribute text instead of blank spaces on the page.

Images without alt attributes are missed text optimization opportunities for images themselves and the pages where they are located. Using alt attributes is essential for the pages where images are the main content.


How to Fix

All images on the site need to have alt attributes that will concisely describe what is shown on them and contain relevant keywords.

Click the 'Issue report' button above the main table to see all links to images without alt attributes.
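
A quick way to list images without alt text on a single page is sketched below (assuming Python with the 'requests' and 'beautifulsoup4' packages; the URL is a placeholder).

import requests
from bs4 import BeautifulSoup

url = "https://example.com/gallery"  # placeholder URL
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

for img in soup.find_all("img"):
    alt = (img.get("alt") or "").strip()
    if not alt:
        print("Missing or empty alt:", img.get("src"))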


Useful Links


Max Image Size

Indicates addresses of the images with a size exceeding 100 KB (determined by the Content-Length HTTP response header). Take into account that the 'Check images' option should be enabled on the 'General' tab of crawling settings to detect this issue. Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Heavy images load more slowly and thereby slow down the page load speed. If a page takes a long time to load (especially on mobile devices with lower Internet speed), a user may consider the website low-quality and leave it.

How to Fix

For photos, it is necessary to use the optimal raster format (JPEG, PNG, GIF) and compression without quality loss; for other images (e.g. logos, illustrations) — vector format. For modern browsers, it is recommended to transfer images using such formats as WebP or JPEG XR.

Also, it is necessary to upload an image with the same size as the HTML container in which it is displayed. This way the browser won't waste time downloading extra data.

The optimal image size is up to 100 KB.


Useful Links


3xx Redirected Pages

Indicates URLs that return a 3xx redirection status code.

Threat

Links to URLs with redirects badly affect user experience because they increase the page load time (especially on mobile devices with lower Internet speed). Moreover, if users frequently land on unexpected URLs, they may consider the website low-quality and leave it.

Big sites may particularly suffer from redirects: search robots might waste crawling resources on extra transitions and there will be nothing left for important pages. As a result, many important pages may not get into the search index, and the site will lose traffic.


How to Fix

Delete internal links to redirects from your site or replace them with links to available pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.

Useful Links


Redirect Chain

Indicates URLs that redirect more than 1 time.

Threat

A redirect chain occurs when several URLs alternately redirect the user. For example, a chain with two redirects looks like this: URL 1 → URL 2 → URL 3.

Long redirect chains cause slower page loading (especially on mobile devices with lower Internet speed), so users may consider the site low-quality and leave it.

Also, long redirect chains make it difficult for search engines to crawl the site and add pages to the search index. If the chain contains more than 5 redirects, Googlebot does not crawl the final URL at all. As a result, the site can get low positions in search and lose search traffic.


How to Fix

Delete internal links to redirects or replace them with links to available addresses. Use the Shift+F1 shortcut to see all incoming links for such URLs.

Then, set up redirection straight to the final URL without chains: for example, instead of the chain URL 1 → URL 2 → URL 3, set a straight redirection URL 1 → URL 3 and URL 2 → URL 3. The 'Redirect chains' special report will show all such chains: just press 'Export' in the program menu → 'Issue reports' → 'Redirect Chains'.


Useful Links


Refresh Redirected

Indicates addresses of the pages that redirect to another URL using the refresh directive in the HTTP response header or the <meta http-equiv="refresh"> tag in the <head> section of a document.

Threat

Refresh directs a user to a new (or the same) URL after a certain period of time.

It is not recommended to use this meta tag (or directive) as a redirection method (instead of 3xx server redirects): it is not supported by all browsers, substantially slows down page loading, and may confuse users, so they might consider the site low-quality and leave it.


How to Fix

If meta refresh is used on important URLs, remove it or set the 301 server redirect.

However, meta refresh can be used on unimportant pages for certain tasks (e.g. on authentication or payment confirmation pages).


Useful Links


External Redirect

Indicates internal URLs that return a 3xx redirect to an external website that is not part of the analyzed one.

Threat

These redirections might be used for hiding links to external sites from search robots. This configuration should be used very carefully: it may be deemed to be a way to manipulate search algorithms.

Also, redirects to external resources may appear after the site was hacked: this way trespassers redirect users to their web resources.


How to Fix

It is important to make sure that redirects from internal links to external sites were configured deliberately and are not the consequence of trespassers' actions.

Otherwise, it is necessary to remove all links to URLs with redirects to external sites or replace them with links to available internal pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.


PageRank: Redirect

Indicates URLs marked by the internal PageRank algorithm as redirecting link weight. It could be page addresses returning a 3xx redirect or having canonical / refresh instructions that point to another URL.

Threat

Mostly, addresses of the pages that redirect or are canonicalized will not appear in search and will not drive traffic.

How to Fix

Make sure that the addresses of important pages are absent among URLs that redirect link weight. Otherwise, remove the redirect from the important pages' addresses to make them appear in search.

Also, delete internal links to redirects or pages with the refresh directive from your site or replace them with links to available pages. Use the Shift+F1 shortcut to see all incoming links for such URLs.


Useful Links


HTTPS → HTTP Redirects

Shows pages with the HTTPS protocol that redirect to addresses with the HTTP protocol. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

If a site with the HTTPS protocol has addresses which lead to the HTTP protocol, this can cause an infinite redirect loop. Also when users land on an insecure page, they may consider the site low-quality and leave it. Search engines may also consider the website insecure and lower its rankings in search. As a result, the site may lose search traffic.

How to Fix

If HTTP is the main site protocol, you can ignore this issue.

If the main site protocol is HTTPS, check why some addresses lead to the HTTP protocol and set the correct redirect. Use the Shift+F1 shortcut to see all incoming links for such URLs.


Useful Links


Mixed Content

Shows pages with HTTPS protocol that have resources (JS, CSS or IMG link type) with HTTP protocol. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

If a secure webpage with HTTPS protocol contains resources (script files, stylesheets, or images) with HTTP protocol, this will cause the 'Mixed Content' issue.

In this case, users may see a corresponding warning in browser, consider the site low-quality and leave it. Search engines may also consider it insecure and lower its rankings in search. As a result, the site may lose search traffic.


How to Fix

Replace or delete all links to the resources with HTTP protocol. To view a special report on this issue, press the 'Issue report' button over the main table.
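
To double-check a single page for mixed content, you can list its JS, CSS, and image resources that are loaded over HTTP. A minimal sketch (assuming Python with the 'requests' and 'beautifulsoup4' packages; the page URL is a placeholder):

import requests
from bs4 import BeautifulSoup

page_url = "https://example.com/secure-page"  # placeholder URL
soup = BeautifulSoup(requests.get(page_url, timeout=30).text, "html.parser")

insecure = []
for tag, attr in (("script", "src"), ("img", "src"), ("link", "href")):
    for element in soup.find_all(tag):
        value = element.get(attr, "")
        if value.startswith("http://"):
            insecure.append(value)

print("Insecure resources:", insecure)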

Useful Links


HTTPS → HTTP Hyperlinks

Shows pages with the HTTPS protocol that have internal hyperlinks to HTTP addresses. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

If a secure page with HTTPS protocol has hyperlinks to pages or other documents with HTTP protocol, it will lower the site's credibility.

In case of an HTTPS → HTTP redirect, it increases the link transition time. If there is no redirect, users land on an insecure page, therefore may consider the site low-quality and leave it. Search engines may also consider the website insecure and lower its rankings in search. As a result, the site may lose search traffic.


How to Fix

Replace or delete all outgoing hyperlinks to pages with HTTP protocol. To view hyperlinks to such pages, press the 'Issue report' button over the main table.

Useful Links


Multiple Titles

Indicates compliant pages with more than one <title> tag in the <head> HTML section.

Threat

A title tag is an essential element of search engine optimization, and its content is often used in the headline of a search result.

Multiple title tags usually occur on a page due to mistakes in website settings. Since search robots use only one title, the one they choose might be unoptimized and irrelevant to the page. As a result, the site may lose search traffic.


How to Fix

Change website settings so only one <title> tag is displayed in the <head> section of a page.

Create a unique (within the site) title tag for each page that concisely describes its content and contains target keywords. It should be brief and informative: the optimal length is from 40 to 70 characters, and the maximum is 140 characters.


Useful Links


Multiple Descriptions

Indicates compliant pages with more than one <meta name="description" /> tag in <head> HTML section.

Threat

A meta description tag is an essential element of search engine optimization. Its content is often used as the description of a search result, so users could understand the essence of a page better.

Multiple meta description tags usually occur on a page due to mistakes in website settings. Since search robots use only one description, the one they choose might be unoptimized and irrelevant to the page. As a result, the site may lose search traffic.


How to Fix

Change website settings so only one <meta name="description" /> tag is displayed in the <head> section of a page.

Create a unique (within the site) description for each page that concisely describes its content, contains target keywords, and induces users to visit the site. It should be brief and informative: the optimal length is from 100 to 160 characters.


Useful Links


Same Title and Description

Shows compliant pages where <meta name="description" /> tag content is identical to the <title> tag content.

Threat

Identical title and description represent a missed opportunity to optimize the texts that serve as a source for generating snippets in search results. If the description copies the title and does not correspond to the search query for the current page, search engines may change it themselves according to the page text, and it might be inappropriate and unattractive for users. This may decrease the click-through rate of the page in search, and thus its traffic.

It usually happens when pages are not optimized yet, and meta description and title content is generated automatically using low-quality templates.


How to Fix

Optimize the title and description: they have to be unique (within the site), concisely describe the page content, contain target keywords, and induce users to visit the site.

The optimal title length is from 40 to 70 characters, and the maximum is 140 characters. The optimal description length is from 100 to 160 characters.


Useful Links


Query Contains Question Mark

Shows URLs that contain a question mark in GET parameters, for instance, URLs like https://example.com/?name=value?anothername=anothervalue.

Threat

A question mark is often mistakenly used instead of the ampersand '&' that separates GET parameters. In this case, a search engine can interpret the question mark as a regular character and perceive several parameters as one. This can potentially cause page duplicates, and thus traffic loss and even removal of these pages from the SERP.

How to Fix

Replace question marks with the ampersand '&' if they were accidentally used as separators for GET parameters in the URL.
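
The difference is easy to see when the query string is parsed: with a stray question mark, two parameters collapse into one value, while with the ampersand they are parsed separately. A small sketch using the Python standard library and the example URL from the description above:

from urllib.parse import urlparse, parse_qs

bad_url = "https://example.com/?name=value?anothername=anothervalue"
good_url = "https://example.com/?name=value&anothername=anothervalue"

print(parse_qs(urlparse(bad_url).query))   # {'name': ['value?anothername=anothervalue']}
print(parse_qs(urlparse(good_url).query))  # {'name': ['value'], 'anothername': ['anothervalue']}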

Useful Links


URLs with UTM Parameters

Shows internal URLs with tracking parameters of analytics systems: utm_source, utm_medium, utm_campaign, utm_term, utm_content, gclid, yclid, _openstat.

Threat

Tracking parameters are usually used for better attribution of visits in analytics systems.

Following a URL with a tracking parameter usually starts a new session in the web analytics system. That is why the data may be significantly distorted if internal links with tracking parameters are used.


How to Fix

Delete tracking parameters from internal links and use them only for links from external resources.

Useful Links


Compliant AMP Pages

Indicates AMP HTML pages that are available for indexing by search engines and have no canonical instructions for the desktop version.

Threat

The AMP HTML pages with the equivalent desktop version and without a canonical instruction to that page may be considered duplicates by search engines, and therefore rank low.

How to Fix

For AMP HTML pages that have an equivalent desktop version, set a canonical instruction pointing to the desktop version. For standalone AMP HTML pages, this issue is irrelevant.

Useful Links


Blocked by Robots.txt

Indicates URLs disallowed in the robots.txt file.

Threat

A robots.txt file can disallow search robots from crawling specific website pages. Thus, they will not take part in search.

Robots.txt is mostly used to block pages unimportant for search (e.g. internal search, shopping cart, sign up pages, etc.) to save robot's crawling resources.

If a website contains a lot of links pointing to URLs that are disallowed from crawling, useful pages may get less link weight, lower rankings, thereby less traffic.


How to Fix

Important pages have to be allowed for crawling by search robots.

It's recommended to reduce the number of links pointing to pages disallowed from crawling. Use the Shift+F1 shortcut to see all incoming links for such URLs.

However, if links are useful for user navigation through the website (e.g. links pointing to shopping cart or sign up pages), do not delete them.


Useful Links


Blocked by Meta Robots

Indicates pages that are disallowed from indexing by the 'noindex' or 'none' directives in the <meta name="robots"/> or <meta name="[bot name]"/> tags in the <head> section, where [bot name] is the name of a certain search robot.

Threat

Meta Robots instructions help to disallow search engines from showing website pages in search results. The Meta Robots tag is mostly used for blocking unimportant pages (e.g. shopping cart, login, sign up pages, etc.). For these directives to be followed by search robots, the URL must not be blocked by a robots.txt file.

How to Fix

Make sure that search engines are allowed to show important pages in search and that only unimportant pages are disallowed.

Useful Links


Blocked by X-Robots-Tag

Indicates URLs that contain the 'noindex' or 'none' directive in the X-Robots-Tag HTTP header.

Threat

Instructions in the X-Robots-Tag header help to disallow search engines from showing website pages in search results, but they are not supported by all search engines.

The X-Robots-Tag is useful to disallow indexing of non-HTML documents because it's located in the server response header. Nevertheless, it can also be used for HTML pages.

It's not recommended to use both Meta Robots and X-Robots-Tag for HTML pages. If different directives appear in these settings by any mistake, important pages may lose traffic.

For these directives to be followed by search robots, the URL must not be blocked by a robots.txt file.


How to Fix

Make sure that search engines are allowed to show important pages in search and that only unimportant pages are disallowed.

If both Meta Robots and X-Robots-Tag are used simultaneously for HTML pages, make sure they match or better leave only one of them: either the meta tag or HTTP header.


Useful Links


GA: Non-Compliant Pages with Traffic

Shows addresses of the pages that are not available for indexing by search engines but did drive traffic for the specified period (according to Google Analytics). Note that the issue is only relevant to the segments including organic search traffic.

Threat

Non-compliant pages that drive organic traffic probably appear in search by mistake. They may steal traffic from target pages and decrease website conversion rate.

How to Fix

First, optimize the target pages that might get less search traffic due to insufficient optimization and competition with unimportant pages. The goal is to reassign traffic from unimportant to target pages.

Then use webmaster tools to check if a search robot handled indexing instructions of unimportant pages correctly and ask to recrawl the URLs. If needed, request to remove non-compliant pages from the index.


Useful Links


YM: Non-Compliant Pages with Traffic

Shows addresses of the pages that are not available for indexing by search engines but did drive traffic for the specified period (according to Yandex.Metrica). Note that the issue is only relevant to the segments including organic search traffic.

Threat

Non-compliant pages that drive organic traffic probably appear in search by mistake. They may steal traffic from target pages and decrease website conversion rate.

How to Fix

First, optimize the target pages that might get less search traffic due to insufficient optimization and competition with unimportant pages. The goal is to reassign traffic from unimportant to target pages.

Then use webmaster tools to check if a search robot handled indexing instructions of unimportant pages correctly and ask to recrawl the URLs. If needed, request to remove non-compliant pages from the index.


GA: Max Bounce Rate

Shows addresses of compliant pages whose bounce rate exceeds 70% (by default) according to Google Analytics. Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

High bounce rate is a bad sign for pages that have to lead visitors to other pages of the website. For example, home or category pages.

How to Fix

Check whether the pages serve the intent of users landing on them from search or ads. Pages that are irrelevant to search queries or ads usually get more bounces. Also, check if the pages are user-friendly and whether users understand how to navigate through the site: whether all menu elements and CTA buttons are visible, etc.

Useful Links


YM: Max Bounce Rate

Shows addresses of compliant pages whose bounce rate exceeds 70% (by default) according to Yandex.Metrica. Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

High bounce rate is a bad sign for pages that have to lead visitors to other pages of the website. For example, home or category pages.

How to Fix

Check whether the pages serve the intent of users landing on them from search or ads. Pages that are irrelevant to search queries or ads usually get more bounces. Also, check if the pages are user-friendly and whether users understand how to navigate through the site: whether all menu elements and CTA buttons are visible, etc.


GSC: Non-Compliant Pages with Impressions

Shows addresses of pages that are not available for indexing by search engines but did have impressions in organic Google search results for the specified period (according to Google Search Console).

Threat

Non-compliant pages that have impressions probably appear in search by mistake. They may steal traffic from target pages and decrease website conversion rate.

How to Fix

First, optimize the target pages that might get fewer impressions due to insufficient optimization and competition with unimportant pages. The goal is to make sure only target pages have impressions.

Then use Search Console to check if a search robot handled indexing instructions of unimportant pages correctly and ask to recrawl the URLs. If needed, ask to remove non-compliant pages from the index.


Useful Links


Non-HTTPS Protocol

Indicates URLs without secure HTTPS protocol.

Threat

If a secure website with HTTPS protocol has HTML pages, images, or other files with HTTP protocol, it may cause the 'Mixed content' issue.

In this case, users may see a corresponding warning in browser, consider the site low-quality and leave it. Search engines may also consider it insecure and lower its rankings in search. As a result, the site may lose search traffic.


How to Fix

If the entire website uses the insecure HTTP protocol, ignore this issue.

If the main website protocol is HTTPS, check why specific pages use the HTTP protocol. Maybe redirects from the HTTP versions of URLs to the HTTPS ones have stopped working.

After the check, edit or delete all links pointing to pages with the HTTP protocol. Use the Shift+F1 shortcut to see all incoming links for such URLs.


Useful Links


Percent-Encoded URLs

Indicates URLs that contain percent-encoded (non-ASCII) characters and spaces. For instance, the URL https://example.com/例 is encoded as https://example.com/%E4%BE%8B.

Threat

Browsers and search engines allow using encoded URLs.

Nevertheless, such URLs may look unattractive to users outside the browser address bar. Addresses that mix several URL generation methods (e.g. transliteration, words in a different language, and other encoded character sets) may be hard for users to perceive. An example of a bad URL: http://site.com/家電/keitai-denwa/samsung/.

Also, if configured incorrectly, the server may not process the encoded addresses, which may result in broken pages and other errors.


How to Fix

If you have decided to use non-ASCII characters in the site's URLs, check that the characters are escaped and processed by the server correctly.

If the website URLs should contain only ASCII characters, determine how the encoded addresses appear and eliminate the causes of their occurrence. A URL may accidentally contain special symbols, or characters from other alphabets that look identical to Latin c, o, p, e, y (e.g. their Cyrillic counterparts). Use the Shift+F1 shortcut to see all incoming links for encoded URLs.

It's also better to avoid spaces as separators in URLs, since they will be encoded as '%20' and look unattractive.


Useful Links


URLs with Wrong Hyphenation

Shows URLs with wrongly placed hyphens: right after a slash '/', at the end of a URL, or two hyphens in a row. For instance, URLs like https://example.com/category/-subcategory/test--page-

Threat

Wrongly placed hyphens do not affect ranking but make URLs less attractive to users.

Such issues usually occur due to automatic transliteration or URL generation from text strings, when punctuation or other characters are replaced with hyphens.


How to Fix

Fix the URL, set a redirect from the old address to the new one, and replace all incoming links pointing to it. Use the Shift+F1 shortcut to see all incoming links for such URLs.

Useful Links


URLs with Unwanted Special Characters

Shows addresses of compliant pages with unwanted characters (,_$@*, and others). For instance, URLs like https://example.com/category/page-name[5]/?$a=123

Threat

Special characters are processed well by both browsers and search engines. However, such URLs may look unattractive to users.

The issue usually occurs due to typos in the link URL or when an underscore is used instead of a hyphen to separate words in the URL.


How to Fix

Replace or delete special characters from the target page addresses (except /-+?=&#) to make them more attractive and readable.

After the improvement, set the redirects from old URLs to the new ones and replace them in all incoming links. Use the Shift+F1 shortcut to see all incoming links for such addresses.


Useful Links


URLs with Uppercase Characters

Shows URLs with uppercase letters. For instance, URLs like https://example.com/Category/Subcategory

Threat

Using uppercase letters does not affect ranking or user experience. However, such URLs can create page duplicates if redirects from the same URLs in other letter cases are not set correctly.

How to Fix

Make sure that the other-case variants of the URLs correctly redirect to the current URLs. For example, https://site.com/Category/subcategory, https://site.com/category/Subcategory, and https://site.com/category/subcategory should all redirect to https://site.com/Category/Subcategory.
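As a hedged illustration using the sample addresses above, the server's response to a lowercase variant would look like this:

Request:
GET /category/subcategory HTTP/1.1
Host: site.com

Response:
HTTP/1.1 301 Moved Permanently
Location: https://site.com/Category/Subcategory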

Useful Links


Same Title and H1

Indicates all pages that have identical <title> and <h1> heading tags.

Threat

Duplicating title and H1 is a missed opportunity to use more relevant keywords on a page. It usually happens when pages are not optimized yet, and the title and H1 texts are generated automatically using low-quality templates.

How to Fix

Optimize the title and H1 in order for them to be unique (within the site), concisely describe the essence of a page, and contain target keywords.

The optimal title length is 40 to 70 characters, with a maximum of 140 characters. The optimal H1 length is 3 to 7 words.
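For illustration, a hypothetical category page could keep the two tags related but distinct (the texts and store name are made up):

<title>Ceramic Coffee Mugs: Buy Online with Free Shipping | Example Store</title>
<h1>Ceramic Coffee Mugs</h1>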


Useful Links


Short Title

Indicates compliant pages that have less than 10 characters (by default) in the <title> tag. Take into account that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

A title tag is an essential element of search engine optimization, and its content is often used in the headline of a search result.

If the title is short, search engines find it more difficult to understand the essence of a page and evaluate its relevance to the target keywords. In this case, a search engine might also create its own title for a search result which may be inappropriate and unattractive to users. As a result, the site may lose search traffic.

Usually, short titles are displayed on pages when they are not optimized yet, and the title content is generated automatically using low-quality templates.


How to Fix

Create a unique (within the site) title tag for each page which will concisely describe its content and contain target keywords. It should be brief and informative: the optimal length is from 40 to 70 characters, maximum is up to 140 characters.

Useful Links


Max Title Length

Indicates compliant pages with the <title> tag exceeding 70 characters (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

A title tag is an essential element of search engine optimization, and its content is often used in the headline of a search result.

Search engines display approximately 70 characters of the <title> tag in search results, and the rest is truncated. As a result, a truncated title may be unattractive to users, and the site might lose search traffic.

At the same time, search engines consider up to 12 words from the title. That's why it can be longer than 70 characters to cover more keywords relevant to the page.


How to Fix

Create a unique (within the site) title tag for each page which will concisely describe its content and contain target keywords. It should be brief and informative: the optimal length is from 40 to 70 characters, maximum is up to 12 words or 140 characters.

Useful Links


Emojis or Special Characters in Title

Indicates pages with emoji or other special characters in the <title> tag that can attract users' attention.

Threat

A title tag is an essential element of search engine optimization, and its content is often used in the headline of a search result.

Graphic Unicode characters and emoji do not affect the positions in SERP, but they may impact the page's clickthrough rate in the search results. These symbols are displayed in various browsers and devices in different ways, and search engines may not show some of them in search results.

If used excessively or displayed incorrectly, graphic characters may decrease the page's clickthrough rate in SERP, which in turn affects traffic.


How to Fix

Make sure that graphic characters in the <title> tag are appropriate, complement the text well, and are correctly displayed in priority browsers and devices.

Useful Links


Short Description

Indicates compliant pages that have less than 50 characters (by default) in the <meta name="description" /> tag. Take into account that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

A meta description tag is an essential element of search engine optimization. Its content is often used as the description of a search result, so users could understand the essence of a page better.

If page description is short, search engines may change it themselves according to the page text, and it might be inappropriate and unattractive for users. As a result, the site may lose search traffic.

Short descriptions mostly appear when pages aren't optimized yet and the <meta name="description" /> texts are generated automatically using low-quality templates.


How to Fix

Create a unique (within the site) description for each page which will concisely describe its content, contain target keywords, and induce users to visit the site. It should be brief and informative: the optimal length is from 100 to 160 characters.

Useful Links


Max Description Length

Indicates compliant pages with the <meta name="description" /> tag exceeding 320 characters (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

A meta description tag is an essential element of search engine optimization. Its content is often used as the description of a search result, so users could understand the essence of a page better.

Search engines do not specify the minimum and maximum number of characters for descriptions. However, for effective optimization, it's better to rely on the maximum length shown in search results which is around 160 characters.


How to Fix

Create a unique (within the site) description for each page which will concisely describe its content, contain target keywords, and induce users to visit the site. It should be brief and informative: the optimal length is from 100 to 160 characters.

Useful Links


Emojis or Special Characters in Description

Indicates pages with emoji or other special characters in <meta name="description" /> that can attract users' attention.

Threat

A meta description tag is an essential element of search engine optimization. Its content is often used as the description of a search result, so users could understand the essence of a page better.

Graphic Unicode characters and emoji do not affect the positions in SERP, but they may impact the page's clickthrough rate in the search results. These symbols are displayed in various browsers and devices in different ways, and search engines may not show some of them in search results.

If used excessively or displayed incorrectly, graphic characters may decrease the page's clickthrough rate in SERP, which in turn affects traffic.


How to Fix

Make sure that graphic characters in the description meta tag are appropriate, complement the text well, and are correctly displayed in priority browsers and devices.

Useful Links


Multiple H1

Indicates compliant pages with more than one <h1> heading tag.

Threat

An H1 heading is an important element of search engine optimization. It helps users understand the contents of a page while visiting the site.

It's allowed to use multiple H1 headings on a page in HTML5. However, search engines may perceive it as a signal of poorly structured content. As a result, the site may lose search traffic.


How to Fix

It's recommended to have only one H1 on a page, not duplicating contents of the <title> tag. Also, if H1 tags are used only for text formatting, delete them and style the text using CSS.

It's important to have a unique (within the site) H1 on each page which will concisely describe its content and contain target keywords. It should be brief and informative: the optimal length is from 3 to 7 words.


Useful Links


Max H1 Length

Indicates compliant pages with the <h1> heading tag exceeding 65 characters (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

An H1 heading is an important element of search engine optimization. It helps users understand the contents of a page while visiting the site.

Since users find it difficult to read and perceive long headings, an H1 (just like a title) should contain a short and concise page description.


How to Fix

It's recommended to shorten long headings so that they will only describe the main essence of a page.

Create a unique (within the site) H1 for each page which will concisely describe its content and contain target keywords. It should be brief and informative: the optimal length is from 3 to 7 words.

It's also recommended to have only one H1 on a page, not duplicating contents of the <title> tag.


Useful Links


Max HTML Size

Indicates compliant pages with more than 200K characters (by default) in the <html> section (including HTML tags). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Big pages usually contain extra code which can be reduced.

The code size affects the page load speed. If a page is too heavy and takes long to load (especially on mobile devices with lower Internet speed), a user may consider the website low-quality and leave it. Such slow pages usually get low search rankings, so the site may lose search traffic.


How to Fix

Reduce extra HTML code: for example, remove extra HTML tags, move scripts and styles to separate files. It's also recommended to divide too big pages into several smaller ones.
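For instance, a hypothetical inline style and script like the ones below can be moved into separate cacheable files, leaving only references in the HTML:

Before:
<style>.menu { display: flex; }</style>
<script>function initMenu() { document.body.classList.add('ready'); }</script>

After:
<link rel="stylesheet" href="/assets/site.css" />
<script src="/assets/site.js" defer></script>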


Max Content Size

Indicates compliant pages with more than 50K characters (by default) in the <body> section (excluding HTML tags). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Users may find an extremely long text inconvenient to read, so they will most likely leave the page.

Also, pages with long text usually take longer to load (especially on mobile devices with lower Internet speed), so a user may consider the website low-quality and leave it. Such slow pages usually get low search rankings, so the site may lose search traffic.


How to Fix

Check whether all pages with a large amount of text were created correctly. If the long text is caused by an error, fix it. If not, divide overly large pages into several smaller ones.

Useful Links


Min Text/HTML Ratio

Indicates compliant pages with less than 10% ratio (by default) of the text (the 'Content Size' parameter) to HTML (the 'HTML Size' parameter). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Search engines may consider pages low-quality if most of their text belongs to the template layout (menu, header, footer), which can lead to lower positions in search.

Also, if a page has much less text than HTML code, that extra code can most likely be reduced, since it increases the page size and slows down loading. If a page is too heavy and takes long to load (especially on mobile devices with lower Internet speed), a user may consider the website low-quality and leave it. Such slow pages usually get low search rankings, so the site may lose search traffic.


How to Fix

First, pay attention to the pages where Text/HTML is much lower than the average value of this metric within the site.

It's recommended to reduce extra HTML code on such pages: for example, remove extra HTML tags, move scripts and styles to separate files.


Canonicalized Pages

Indicates canonicalized pages where a URL in the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP response header differs from the page URL.

Threat

When a page is available via several addresses or a site has multiple pages with similar content, search engines recommend specifying which URL is the main one and should be shown in search, using the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP header. This way, search robots will not waste their resources on crawling duplicate or unimportant pages.

If a canonical URL is not set for several duplicate pages, a search engine may choose the main URL itself or consider multiple addresses equal. As a result, unimportant pages might get higher rankings in search. It may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.

Note that canonical is a hint for search engines, not a strong directive. Also, it should be used only for pages with very similar content. Otherwise, it may be ignored.


How to Fix

Make sure that the recommendations are set correctly and there are no addresses of important pages among canonicalized URLs.
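For reference, a hedged example of both ways to declare a canonical URL (the addresses are illustrative):

<link rel="canonical" href="https://example.com/category/page/" />

or, in the HTTP response header (handy for non-HTML files such as PDF):

Link: <https://example.com/category/page.pdf>; rel="canonical"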

Useful Links


Identical Canonical URLs

Indicates all pages with identical canonical URLs in the <link rel="canonical" /> tags in the <head> section or 'Link: rel="canonical"' HTTP response header. URLs in this report are grouped by the 'Canonical URL' parameter.

Threat

When a page is available via several addresses or a site has multiple pages with similar content, search engines recommend specifying which URL is the main one and should be shown in search, using the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP header. This way, search robots will not waste their resources on crawling duplicate or unimportant pages.

If a canonical URL is not set for several duplicate pages, a search engine may choose the main URL itself or consider multiple addresses equal. As a result, unimportant pages might get higher rankings in search. It may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.

Note that canonical is a hint for search engines, not a strong directive. Also, it should be used only for pages with very similar content. Otherwise, it may be ignored.


How to Fix

Make sure the same canonical URL is set on pages with similar content and is relevant to them.

Useful Links


Canonical URL Contains Another Host

Indicates pages where the URL in the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP response header contains a different host.

Threat

When a page is available via several addresses or a site has multiple pages with similar content, search engines recommend specifying which URL is the main one and should be shown in search, using the <link rel="canonical" /> tag or the 'Link: rel="canonical"' HTTP header. This way, search robots will not waste their resources on crawling duplicate or unimportant pages.

A URL with a host different from the host of the current page might be set in the canonical instruction by mistake. This most frequently happens on the www subdomain when a canonical URL points to the main domain. Also, some search engines may ignore such cross-host instructions. As a result, unimportant pages might get higher rankings in search. It may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.


How to Fix

Make sure that canonical URLs containing another host are implemented correctly.

Useful Links


Missing or Empty Robots.txt File

Indicates compliant URLs that belong to a host with an empty or missing robots.txt file. Note that different hosts (subdomains or http/https protocols) may contain different robots.txt files.

Threat

A robots.txt file informs search robots which pages and files on the site can and cannot be crawled. Search engines consider it important for robots.txt file to be present on a site. If it's missing, website crawling may be inefficient.

Note that robots.txt works for the host where it's located. That's why the file located at http://site.com/robots.txt does not affect crawling of https://site.com.


How to Fix

Add the robots.txt file with crawling recommendations to the root of all hosts on your site.
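A minimal, hypothetical robots.txt might look like this (the blocked paths and sitemap URL are assumptions for illustration only):

User-agent: *
Disallow: /cart/
Disallow: /admin/
Sitemap: https://example.com/sitemap.xml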

Useful Links


Nofollowed by Meta Robots

Indicates HTML pages that contain a 'nofollow' or 'none' directive in the <meta name="robots" /> or <meta name="[bot name]" /> tag in the <head> section, where [bot name] is the name of a specific search robot.

Threat

A nofollow directive in the Meta Robots helps to disallow search robots from following links on a page. As a result, such documents don't pass link weight to other pages. That's why this directive should be used only on pages where links do not matter. For example, on a personal account page or admin panel.

It's not recommended to use both Meta Robots and X-Robots-Tag for HTML pages. If conflicting directives appear in these two places by mistake, important pages may lose traffic.

For these directives to be followed by search robots, the URL must not be blocked by a robots.txt file.


How to Fix

Make sure that the 'nofollow' directive is really necessary on a page and does not obstruct link weight distribution on the site.

If both Meta Robots and X-Robots-Tag are used simultaneously for HTML pages, make sure they match or better leave only one of them: either the meta tag or HTTP header.


Useful Links


Nofollowed by X-Robots-Tag

Indicates HTML pages that contain a 'nofollow' or 'none' directive in the X-Robots-Tag field of the HTTP response header.

Threat

A 'nofollow' directive in the X-Robots-Tag helps to disallow search robots from following links on a page. As a result, such documents don't pass link weight to other pages. That's why this instruction should be used only on pages where links do not matter. For example, on a personal account page or admin panel.

It's not recommended to use both Meta Robots and X-Robots-Tag for HTML pages. If conflicting directives appear in these two places by mistake, important pages may lose traffic.

For these directives to be followed by search robots, the URL must not be blocked by a robots.txt file.


How to Fix

Make sure that the 'nofollow' directive is really necessary on a page and does not obstruct link weight distribution on the site.

If both Meta Robots and X-Robots-Tag are used simultaneously for HTML pages, make sure they match or better leave only one of them: either the meta tag or HTTP header.


Useful Links


PageRank: Orphan

Indicates URLs that were marked by the internal PageRank algorithm as inaccessible. It means the algorithm hasn't found any incoming links to these pages.

Such URLs may appear:

— during website crawling with disabled crawling and indexing instructions (robots.txt, canonical, refresh, X-Robots-Tag, Meta Robots, the rel="nofollow" link attribute) → note that with these instructions disabled, Netpeak Spider does not crawl the site the same way search robots do; however, the PageRank algorithm always takes them into account, so some links found during crawling might be inaccessible for it;

— during crawling of a list of URLs, when the URLs in the list are not linked to each other.


Threat

Internal linking is the most efficient method to show search engines the relative importance of website pages (link weight).

If a page has no incoming links from other website pages, search engines may consider it unimportant. Thereby, it will be crawled less frequently, get lower search rankings, and drive less traffic than it could.


How to Fix

Create at least 10 internal links to important website pages. For instance, include them in an HTML sitemap, or place links to them in interlinking blocks on relevant pages or in article text.

Useful Links


PageRank: Missing Outgoing Links

Indicates addresses of the pages with no outgoing links found after calculating internal PageRank. It usually happens when outgoing links on a page had not been crawled yet.

Threat

Internal linking is the most efficient method to show search engines the relative importance of website pages (link weight).

If a page has no outgoing links or they are blocked for search robots, it cannot pass link weight to other pages of the site. This issue mostly happens when pages are not optimized yet and are generated using low-quality templates.

As a result, because of such 'dead-end' documents, useful pages may get less link weight, lower rankings, and therefore less traffic.


How to Fix

If crawling is paused, it's recommended to wait until it finishes. If crawling is complete but the issue is still there, recheck all URLs where it was detected and add internal outgoing links to key pages of the site.

It's also important to make sure that these HTML documents are complete content pages. Delete all incomplete pages or the ones created by mistake (use the 404 or 410 status codes) and all incoming links pointing to them. Use the Shift+F1 shortcut to see all links pointing to pages without internal outgoing links.


Useful Links


Hyperlinks with Empty Anchor Text

Shows pages with outgoing hyperlinks that have empty anchors. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

An anchor is the visible clickable text of a hyperlink. It helps users and search engines understand the content of the page the hyperlink points to. In image hyperlinks, the anchor is the text of the image's alt attribute.

If a hyperlink has an empty anchor, it is a missed opportunity to optimize it with target keywords, so pages may get low positions in SERP. Also, such hyperlinks may be unclear or even invisible to users.


How to Fix

All hyperlinks on the site should contain text that concisely represents the target page content and, ideally, contains relevant keywords.

Images used in hyperlinks should have alt attributes that concisely describe what is shown on them and contain appropriate search queries.
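A hedged illustration with made-up URLs: a text hyperlink with a descriptive anchor, and an image hyperlink whose alt text acts as the anchor:

<a href="/ceramic-mugs/">Ceramic coffee mugs</a>
<a href="/ceramic-mugs/"><img src="/img/mug.jpg" alt="White ceramic coffee mug" /></a>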


Useful Links


Self-Referencing Hyperlinks

Shows pages with hyperlinks to the current page address. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

A self-referencing hyperlink is a link pointing to the current page. Such links are often used on websites, and they have no immediate negative impact on ranking in SERP. But sometimes they can complicate the user experience: for instance, links from the page content, 'breadcrumbs', pagination items, or other elements where they are not required.

How to Fix

Self-referencing links from a logo or an active menu item may be useful for users, so it is better to keep them.

Delete links that can distract users and worsen their experience with the page. For example: a link from the last 'breadcrumb' element, from an active pagination number, or from content (except anchor links to page sections).


Useful Links


Max Internal Links

Indicates pages with more than 100 outgoing internal links (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Internal linking is the most efficient method to show search engines the relative importance of website pages (link weight).

If a page has too many outgoing links, it transfers link weight to important pages less effectively. As a result, useful pages may get less link weight, lower rankings, and therefore less traffic.

At the same time, there are no restrictions on the number of links on a page. In some cases (for pages at the top of the website structure and for so-called hub pages) a big number of links is justified. For example, some Wikipedia pages have more than 500 of them.

Take into account that when a page has too many links, a user will find it more difficult to perceive the main content of the page.


How to Fix

Make sure that a large number of links on certain website pages is justified.

Useful Links


Max External Links

Indicates compliant pages containing more than 10 external outgoing links (by default). We have set 10 as the average number of external links on the majority of websites: approximately 5 links to social media and a couple of external links to other sites. Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

A lot of spammy links often appear in a page's comment section: they are left by SEOs to manipulate search algorithms. Also, many external links may appear after the site has been hacked: this way intruders redirect users to their own resources.

That's why it's better to regularly check the site for unnecessary links, which is easy to do with this report.


How to Fix

Make sure all external outgoing links are useful for users and are not spammy.

Useful Links


Internal Nofollow Links

Indicates compliant pages that contain outgoing internal links with the rel="nofollow" attribute. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Internal linking is the most efficient method to show search engines the relative importance of website pages (link weight).

The rel="nofollow" attribute helps to disallow search robots from following certain links to unimportant pages in order to save crawling resources. That said, if the attribute is set on a link pointing to an important page, link weight will not be passed. As a result, important pages may lose search rankings and thus traffic.


How to Fix

Make sure the rel="nofollow" attribute is used correctly and does not obstruct link weight distribution to important pages.

Useful Links


External Nofollow Links

Indicates compliant pages that contain external outgoing links with the rel="nofollow" attribute. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

Links are the most efficient method to show search engines the relative importance of website pages (link weight).

It's recommended to use the rel="nofollow" attribute for links to unreliable content on other resources and for promotional external links, so that link weight is not passed to such pages.

How to Fix

Check if all external links from the report require the rel="nofollow" attribute, and vice versa — whether all promotional links have it.

It's recommended to add rel="nofollow" automatically to all external links in UGC website sections (e.g. comments).
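For illustration, a promotional or user-submitted external link with the attribute applied might look like this (the URL and anchor text are made up):

<a href="https://partner-site.example/offer" rel="nofollow">Partner offer</a>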


Useful Links


Hreflang: Missing Alternate URLs

Indicates HTML pages or PDF files without hreflang links in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"'. At the same time, other website pages do contain such links.

Threat

The hreflang instruction allows you to point search engines to the localized versions of your pages. It is one of the signals for local ranking in search results.

If you have localized versions of a page but hreflang is not set properly, it may lead to low rankings in search results and traffic loss.


How to Fix

Make sure that hreflang instructions are set on all necessary pages.
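For reference, a hedged example of hreflang annotations in the <head> of an English page that has a hypothetical German version (the URLs are illustrative); note that each page should also reference itself:

<link rel="alternate" hreflang="en" href="https://example.com/en/page/" />
<link rel="alternate" hreflang="de" href="https://example.com/de/page/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/en/page/" />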

Useful Links


Hreflang: Duplicate Alternate URLs

Indicates HTML pages or PDF files containing several links to one URL with different hreflang value in the <link rel="alternate" /> tag or HTTP response header 'Link: rel="alternate"'. To view a special report on this issue, press the 'Issue report' button over the main table.

Threat

The hreflang instruction allows you to point search engines to the localized versions of your pages. It is one of the signals for local ranking in search results.

Several hreflang instructions with different language-region codes and the same target URL can be used to specify regions. However, it's easy to make a mistake here, which may lead to low rankings in search results and traffic loss.


How to Fix

Make sure that the region specification for the same target URL is necessary and configured correctly.

Useful Links


Missing Structured Data

Indicates compliant pages without structured data.

Threat

Structured data (or markup) helps a search engine to understand page content.

If a page does not contain structured data, it's a missed optimization opportunity to get more attractive snippets in search results.


How to Fix

Implement data markup on the page using JSON-LD, Microdata, or RDFa formats, and Schema.org vocabulary.
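A minimal, hypothetical JSON-LD snippet using Schema.org vocabulary could look like this when placed in the page's HTML (the organization name and URL are assumptions):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Store",
  "url": "https://example.com/"
}
</script>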

Useful Links


Bad Base Tag Format

Indicates pages that contain the <base> tag in bad format.

Threat

If the <base> tag is set incorrectly, relative links on a page may resolve incorrectly, and broken links may occur.

How to Fix

Correct all mistakes in the <base> tag: the href attribute of this tag must contain a valid relative or absolute URL.

Also, there must be a slash at the end of the address; otherwise, the URL will be processed as a file address rather than a folder. If the page contains anchor links or links to URLs with GET parameters (URLs like #link or ?query=link), use absolute addresses for them.

It's generally recommended not to use the <base> tag at all: it's useful only in rare cases and may cause broken links and wasted crawl budget.
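If the tag is kept, here is a hedged example of a correctly formatted <base> tag and how a relative link resolves against it (the paths are made up):

<base href="https://example.com/catalog/" />
<!-- the relative link below resolves to https://example.com/catalog/mugs/ -->
<a href="mugs/">Mugs</a>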


Useful Links


Compliant URLs with Query

Indicates compliant URLs with query string. For instance, URLs like https://example.com/?name=value.

Threat

Search engines recommend using an easy-to-understand URL structure. Parameters in URLs usually describe the page content poorly, and website visitors might not perceive them well.

Parameters in URLs are a potential source of duplicates that may lead to low rankings of important pages, traffic loss, and even removal of these pages from search results.


How to Fix

Make sure that URLs with a query string really should be compliant. Where possible, use static, easy-to-understand URLs instead of dynamic ones.

Useful Links


Max URL Length

Indicates pages with more than 2000 characters in URL (by default). Note that you can change the default value on the 'Restrictions' tab of crawling settings.

Threat

Some browsers and servers may not process URLs longer than 2000 characters.

How to Fix

Make all URLs on the site shorter than 2000 characters.

The optimal URL length for important pages is up to 100 characters. Such URLs are much easier for users to perceive.


Useful Links


GA: Compliant Pages w/o Traffic

Shows addresses of pages available for indexing by search engines that didn't drive any traffic for the specified period (according to Google Analytics). Note that the issue is only relevant to the segments including organic search traffic.

Threat

The target pages may be insufficiently optimized and thus don't drive traffic.

How to Fix

First, check if there are any errors and warnings on the target pages and fix them. Then analyze data on page searches and positions in webmaster tools. If pages don't get impressions in SERP, it's possible that search engines couldn't handle their content correctly or marked it low-quality.


YM: Compliant Pages w/o Traffic

Shows addresses of pages available for indexing by search engines that didn't drive any traffic for the specified period (according to Yandex.Metrica). Note that the issue is only relevant to the segments including organic search traffic.

Threat

The target pages may be insufficiently optimized and thus don't drive traffic.

How to Fix

First, check if there are any errors and warnings on the target pages and fix them. Then analyze data on page searches and positions in webmaster tools. If pages don't get impressions in SERP, it's possible that search engines couldn't handle their content correctly or marked it low-quality.


GSC: Compliant Pages w/o Impressions

Shows addresses of pages that are available for indexing by search engines but didn't get any impressions in organic Google search results for the specified period (according to Google Search Console data).

Threat

The target pages may be insufficiently optimized and thus don't get impressions in search.

How to Fix

First, check if there are any errors and warnings on the target pages and fix them. Then check the pages for issues in Search Console. It's possible that Googlebot couldn't handle their content correctly or marked it low-quality.

Useful Links


GSC: Compliant Pages w/o Clicks

Shows addresses of the pages that are available for indexing by search engines but didn't get any clicks from organic Google search results for the specified period (according to Google Search Console).

Threat

The target pages may be insufficiently optimized and thus don't get clicks from search.

How to Fix

First, check if there are any errors and warnings on the target pages and fix them. Then analyze data on page searches and positions in Search Console. If pages don't get impressions in SERP, it's possible that search engines couldn't handle their content correctly or marked it low-quality.

Useful Links
