How to Fix Discovered – Currently Not Indexed


Introduction

Discovered – currently not indexed means that Google knows the page exists but has chosen not to crawl it yet. This typically happens with very large websites (thousands of pages) when Googlebot runs out of crawl budget before it reaches every page.

Every website has a certain crawl budget. The less established the domain (young, with few backlinks), the smaller the crawl budget. Crawl budget here is defined as the time (minutes or hours) Googlebot spends crawling through your website.

Analyzing ‘Discovered – Currently Not Indexed’ Status

When a URL is marked as ‘Discovered – Currently Not Indexed’ in Google Search Console, it indicates that while Google is aware of the page, it hasn’t been indexed yet. To analyze this status, one should consider:

  • Crawl budget: Are there too many pages on the site, causing Google to stop crawling (at some point) and leaving the rest ‘Discovered – Currently Not Indexed’?
  • Site structure: Are these pages high in the site structure, or are they extremely hard to reach (a blog post on page 89) or even unreachable (orphan pages)? Googlebot only knows orphan pages through the sitemap but doesn’t assign a lot of importance to them since they have no internal links.
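The orphan-page check above can be scripted. This is a minimal sketch, assuming you have exported your internal links as an adjacency map (for example from a Screaming Frog crawl export) and your sitemap URLs as a set; the site structure below is made up for illustration:

```python
from collections import deque

def crawl_depths(link_graph, start="/"):
    """Breadth-first walk of an internal-link graph, returning each
    page's click depth from the homepage. Pages missing from the
    result are orphans: reachable only via the sitemap, not via links."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in link_graph.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical site: /old-post is listed in the sitemap but never linked.
links = {
    "/": ["/blog", "/services"],
    "/blog": ["/blog/post-1"],
    "/services": [],
    "/blog/post-1": [],
}
sitemap_urls = {"/", "/blog", "/services", "/blog/post-1", "/old-post"}

depths = crawl_depths(links)
orphans = sitemap_urls - depths.keys()
print(orphans)                  # {'/old-post'}
print(depths["/blog/post-1"])   # 2 clicks from the homepage
```

Pages with a high click depth, or missing from `depths` entirely, are the first candidates to link internally.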

Improving Site Structure to Increase Website Indexability

Optimizing Site Structure and Content Quality

A well-organized site structure is essential for improving indexability. “Discovered – currently not indexed” can mean that the page was discovered by Google because it was listed in your sitemap, but that it is unreachable when crawling from your homepage through the internal links.

To combat “Discovered – currently not indexed”, you can optimize the following:

  • Site structure: You need to be strategic about how you build your site, which subfolders you choose, and how you interlink everything.
  • Internal Linking: Utilize strategic internal linking to establish a network within the website, which not only helps users navigate but also allows crawlers to reach and index deep-linked pages.

  • Duplicate Content: Minimize duplicate content to prevent dilution of indexing focus and to signal unique value to search engines.
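For the duplicate-content point, one common fix is a canonical tag that points all duplicate variants (filtered, sorted, or parameterized URLs) at the preferred version of the page. The URLs below are placeholders:

```html
<!-- In the <head> of every duplicate variant (e.g. /shoes/?sort=price): -->
<link rel="canonical" href="https://example.com/shoes/" />
```

This tells Google which URL to index, so crawl budget isn’t wasted on near-identical pages.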

A healthy site structure ensures that all pages can be discovered. The “Force-Directed Tree Diagram” in Screaming Frog imitates the Googlebot crawler and shows you whether your pages can be discovered. In a healthy crawl, the bot has no problem crawling through the whole site and discovering what’s important.


Increase Crawl Efficiency to Get the Most Out of Your Crawl Budget

Efficient crawling by search engine bots is fundamental for timely indexation. In this regard:

  • Crawl Budget Issues: A site should be optimized to make the most of its crawl budget. This includes fixing broken links and reducing server errors that waste search engine resources.

  • Server Issues: The website’s server must respond quickly to crawler requests to avoid timeouts or errors, which could negatively impact the number of indexed pages.

Google Search Console (click Settings, then Crawl stats) gives you an idea of how long Googlebot takes to crawl one page. By default, Google crawls your pages with the mobile version of Googlebot. The site in this example is quite fast to crawl (360 ms per page).

I have seen other sites that take seconds to crawl because the bot had to load a lot of third-party JavaScript libraries. As you can imagine: the slower your pages load and the smaller your crawl budget, the fewer pages get crawled.
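A back-of-the-envelope calculation makes this relationship concrete. The crawl-window length is a made-up assumption (Google does not publish its budgeting algorithm), but the arithmetic shows why response time matters:

```python
def pages_crawled(budget_minutes, avg_response_ms):
    """Rough estimate: how many pages a crawler can fetch in a fixed
    crawl window, given the average response time per page."""
    budget_ms = budget_minutes * 60 * 1000
    return budget_ms // avg_response_ms

# The same hypothetical 30-minute crawl window, two response times:
print(pages_crawled(30, 360))    # fast site: 5000 pages
print(pages_crawled(30, 2000))   # slow site: 900 pages
```

Cutting average response time from 2 seconds to 360 ms lets the same budget cover more than five times as many pages.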

PageSpeed Insights shows you how fast a page loads and which elements take the longest to load. On mobile, a very high score is hard to reach in general (a score of 60+ is already a good start). In this example, PageSpeed Insights confirms that the page is fast to crawl.
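If you want to script these checks instead of using the web UI, PageSpeed Insights exposes a public API. A minimal sketch that only builds the request URL (fetch it with any HTTP client; an API key is only needed for higher request volumes):

```python
from urllib.parse import urlencode

# Public PageSpeed Insights v5 endpoint.
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_request_url(page_url, strategy="mobile"):
    """Build a PageSpeed Insights API request URL. 'mobile' matches
    the default Googlebot, which crawls the mobile version of pages."""
    return f"{PSI_ENDPOINT}?{urlencode({'url': page_url, 'strategy': strategy})}"

print(psi_request_url("https://example.com/"))
```

The JSON response includes the Lighthouse performance score and per-element load timings discussed above.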

By addressing these technical SEO aspects, one can enhance a website’s accessibility for search engines, thus improving its potential to be indexed and ranked effectively.

Increase your crawl budget

Building up your domain will give you a higher crawl budget over time. You become an authority in your niche by regularly publishing relevant content on your site and by earning backlinks from relevant websites in your niche. In turn, your authority and crawl budget increase.

Leveraging Google Search Console Features

Google Search Console (GSC) offers several features that can be used to diagnose and fix indexing issues. Webmasters should regularly check the Coverage report to identify URLs that are ‘Discovered – currently not indexed’. Once identified, they can use the URL Inspection tool to submit individual URLs for indexing instead of waiting for Google to naturally crawl them again. Furthermore, ensuring that the sitemap.xml is updated and submitted through GSC can facilitate better crawling and indexing by Google.
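Keeping the sitemap.xml up to date can also be automated. A minimal sketch that generates a sitemap following the sitemaps.org protocol (the URLs are placeholders; submit the resulting file in GSC under Sitemaps):

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Generate a minimal sitemap.xml for the given absolute URLs,
    following the sitemaps.org protocol."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = u
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap(["https://example.com/", "https://example.com/blog/"])
print(xml)
```

Regenerating and resubmitting the sitemap whenever pages are added or removed keeps Google’s list of known URLs fresh.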

Frequently Asked Questions

Addressing the ‘Discovered but not indexed’ status or similar issues in Google Search Console is critical for ensuring your web pages are visible to search engines. Following established procedures can effectively resolve these indexing problems.

What are the steps to resolve ‘Discovered but not indexed’ issues in Google Search Console?

First, one should request indexing via Google Search Console. If the issue persists, it may be necessary to improve the page’s content quality to ensure it meets Google’s guidelines for indexation.

How can I address ‘Crawled – currently not indexed’ status for my website’s pages?

When a URL is ‘Crawled – currently not indexed’, it’s important to check for content duplication, improve content quality, and ensure that the site’s robots.txt file isn’t blocking crawlers.
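The robots.txt check can be done with Python’s standard-library parser. The robots.txt content below is a made-up example:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as it might be served from /robots.txt:
robots_txt = """
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot falls under 'User-agent: *' here, so only /admin/ is blocked:
print(parser.can_fetch("Googlebot", "https://example.com/blog/post-1"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/admin/"))       # False
```

If `can_fetch` returns False for a page you want indexed, the crawl block in robots.txt is the first thing to fix.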

What should I do if my webpage passes validation but is still not indexed?

If a webpage passes validation checks but remains unindexed, try submitting a sitemap update and consider enhancing the content with unique, high-quality information that provides value to users.

Why are my WordPress site’s pages ‘Discovered but not indexed’, and how can I fix it?

For WordPress sites experiencing this issue, enhancing on-site SEO by optimizing titles, descriptions, and content, as well as ensuring proper sitemap submission, can be effective strategies.

What causes a ‘Discovered but not indexed’ error on Reddit threads and how can it be corrected?

Reddit threads may not be indexed due to low community engagement or non-unique content. Active participation and providing unique, valuable insights within threads may encourage indexing.

How can Wix website pages that are ‘Discovered but currently not indexed’ be effectively resolved?

To fix indexing issues on Wix pages, ensure the website is well-structured with a clear hierarchy, high-quality content, and that all technical SEO aspects are correctly implemented.

Having website indexing issues?

Check out our blogs on the most common indexing issues and how to fix them. Fix your page indexing issues

Looking for an SEO Consultant?

Find the best SEO Consultant in Singapore (and worldwide). Best SEO Consultant

Is this you?

💸 You have been spending thousands of dollars on buying backlinks in recent months, but your rankings are only growing slowly.


❌ You have been writing more and more blog posts, but traffic is not really growing.


😱 You are stuck. Something is wrong with your website, but you don’t know what.



Let the SEO Copilot give you the clicks you deserve.