‘Discovered - currently not indexed’: what it is and how to fix it

Written by Tiago Silva. Published on 20 April 2022

‘Discovered - currently not indexed’ is a status in the Google Search Console coverage report. Google uses it when it knows a page exists but hasn't crawled or indexed it yet. Google can discover pages via XML sitemaps and via internal and external links.

In this guide, you'll see the reasons behind this status. In most cases, pages marked ‘Discovered - currently not indexed’ eventually get crawled without you having to request it manually.

Use the URL inspection tool and the index coverage report to find these pages on your website.
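
If you want to check many URLs at once, the Search Console URL Inspection API exposes the same coverage information programmatically. Here is a minimal Python sketch of that approach; the property URL, the URL list, and the credentials file are placeholders, and it assumes a verified property plus a service account with access to it (the coverageState strings mirror the labels shown in the coverage report):

    # Bulk-check coverage states with the Search Console URL Inspection API.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES)  # placeholder path
    service = build("searchconsole", "v1", credentials=creds)

    site = "https://www.example.com/"  # your verified property
    urls = [
        "https://www.example.com/new-page/",
        "https://www.example.com/another-page/",
    ]  # placeholder URLs to inspect

    for url in urls:
        body = {"inspectionUrl": url, "siteUrl": site}
        result = service.urlInspection().index().inspect(body=body).execute()
        state = result["inspectionResult"]["indexStatusResult"]["coverageState"]
        if state == "Discovered - currently not indexed":
            print(f"Discovered but not crawled yet: {url}")

The API is subject to daily quotas, so on large sites start with the URLs that matter most.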

See ‘Crawled - currently not indexed’ if Google has already crawled the page on your site but hasn’t indexed it.

Why is it taking Google so long to crawl my pages?

How often Google crawls a website varies, which is why it can take a while for your pages to be crawled.

Some factors that influence how frequently Google crawls a website are:

  • how relevant Google considers the website;
  • how frequently the website publishes new content;
  • the speed of the website and its servers;
  • how many URLs there are to crawl;
  • whether errors on the site are wasting crawl budget.

Over time, Google will adjust how frequently they crawl pages on your site depending on these signals. 

Why does Google exclude some pages from indexing?

It's impractical or even impossible to index every page on the web, so you can't necessarily expect Google to index all the pages of any website, including yours.

Over the years, Google has developed content guidelines to deal with the ever-growing number of pages. These guidelines work as a way for Google to determine what to index.

Here are some common situations that cause Google not to index pages:

  • Technical reasons: pages that aren't accessible get removed from the index, including pages that return errors (4xx status codes) and redirects (301 and 302);
  • Lack of crawl budget: large websites face a limit on the number of pages Google can crawl, so some pages end up on a waiting list;
  • Excluded by design: sometimes a website owner doesn't want a page indexed and uses robots.txt or a noindex tag. The same happens to pages with a canonical tag that points to a different URL (see the examples after this list);
  • Poor website structure: Google may never reach pages if the internal link structure is poor;
  • Google saving resources: Google can decide a page isn't worth the effort to crawl.
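
For reference, here is what those "excluded by design" signals look like in practice; the paths and URLs below are placeholders:

    # In robots.txt: asks crawlers to stay away from everything under /drafts/
    User-agent: *
    Disallow: /drafts/

    <!-- In the page's <head>: tells Google not to index this page -->
    <meta name="robots" content="noindex">

    <!-- A canonical pointing elsewhere: asks Google to index that URL instead -->
    <link rel="canonical" href="https://www.example.com/preferred-page/">

One caveat: a page blocked by robots.txt can't be crawled at all, so Google will never see a noindex tag placed on it.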

In the words of Google Search Advocate John Mueller:

"There's no objective way to crawl the web properly. It's theoretically impossible to crawl it all, since the number of actual URLs is effectively infinite."

How to fix ‘Discovered - currently not indexed’

As we've seen, there are many reasons why Google might not crawl pages on your site. Now it's time to focus on what you can do to fix the dreaded ‘Discovered - currently not indexed’ status.

Manually ask Google to crawl the page

If you published a page some time ago and Google still hasn't crawled it, it's time to ask for a crawl manually.

To ask Google to index a page, follow these steps:

  1. Open the URL inspection tool in Google Search Console (in the sidebar or at the top of the page);
  2. Enter the URL you want Google to crawl;
  3. Press Enter (or Return) and wait for the URL report;
  4. Click "Request Indexing" so Google adds the URL to its crawl queue.

An important reminder: you only need to do this once. Repeatedly pressing "Request Indexing" won't make Google crawl the page any faster.

It's still important to follow the remaining steps in this guide, as Google should find and crawl pages on your site without you having to ask manually every time. If they don't, something on your site is likely wrong or needs improvement.

Check server capacity

Check whether your website's servers are handling Google's crawlers without getting overloaded.

You can do this with the Crawl Stats report in Google Search Console or with the crawl logs on your hosting server.

To assess server health, look at the average response time and at 5xx error codes (a sign of an overloaded server). If the server isn't returning those errors, you don't need to do anything. But if you do find 5xx errors, consider upgrading your web hosting infrastructure or improving the website's performance.
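
If you have raw access logs, a short script can show how your server answers Googlebot. This Python sketch assumes the common Apache/Nginx combined log format, where the status code is the ninth space-separated field, and a placeholder log path; adjust the parsing if your format differs (response times only appear if your server is configured to log them):

    # Count response status codes for Googlebot requests in an access log.
    from collections import Counter

    statuses = Counter()
    with open("access.log") as log:  # placeholder path
        for line in log:
            if "Googlebot" not in line:
                continue
            fields = line.split()
            if len(fields) > 8:  # combined log format: status is field 9
                statuses[fields[8]] += 1

    total = sum(statuses.values())
    errors_5xx = sum(n for code, n in statuses.items() if code.startswith("5"))
    print(f"Googlebot requests: {total}, 5xx responses: {errors_5xx}")
    for code, n in statuses.most_common():
        print(f"  {code}: {n}")

Keep in mind that any client can claim to be Googlebot in its user agent; for a rigorous audit, verify the requesting IPs with a reverse DNS lookup.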

Check if the page is in the XML sitemap

Google can discover and index pages that aren't in an XML sitemap file, but it's recommended to include them anyway. This way, you signal that the page is relevant and that you want it indexed, and you make it easier for crawlers to find.

For WordPress users, a simple solution is a plugin that updates the XML sitemap automatically when new pages are published. Popular options are Yoast, Rank Math, and SEOPress.

For other website builders and CMSs, check whether an XML sitemap already exists or whether you need to create one. Typically, you can find a sitemap by adding "/sitemap.xml" after the root domain, for example, domain.com/sitemap.xml.
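
If you do need to create one, a minimal sitemap following the sitemaps.org protocol looks like this (URLs and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/new-page/</loc>
        <lastmod>2022-04-20</lastmod>
      </url>
      <url>
        <loc>https://www.example.com/another-page/</loc>
        <lastmod>2022-04-18</lastmod>
      </url>
    </urlset>

Once the file is live on your site, submit its URL under Sitemaps in Google Search Console.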

Create a temporary sitemap

Sometimes, even if you already have an XML sitemap, creating a temporary XML sitemap containing just the URLs you want in the search results can help with the crawling and indexing of new pages. This is one of the tips in our guide about getting content indexed quickly by Google. Check it out to improve your website's indexing speed.
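
As a sketch of how that might look, the Python snippet below writes a temporary sitemap containing only the target URLs and submits it through the Search Console API. The URLs, file name, and credentials are placeholders; you must upload the file to your site first (so the submitted sitemap URL actually resolves), and the service account needs write access to the property:

    # Build a temporary sitemap with only the URLs you want crawled,
    # then submit it via the Search Console API.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    urls = [
        "https://www.example.com/new-page/",
        "https://www.example.com/another-page/",
    ]  # placeholder URLs you want in the search results

    entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls)
    sitemap = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>\n")

    with open("sitemap-temp.xml", "w") as f:  # upload this file to your site root
        f.write(sitemap)

    SCOPES = ["https://www.googleapis.com/auth/webmasters"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account.json", scopes=SCOPES)  # placeholder path
    service = build("searchconsole", "v1", credentials=creds)
    service.sitemaps().submit(
        siteUrl="https://www.example.com/",  # your verified property
        feedpath="https://www.example.com/sitemap-temp.xml").execute()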

Optimize Google crawl budget

The lack of crawl budget is another factor that affects pages getting crawled. Usually, only big websites with tens of thousands of pages have to worry about crawl budget.

However, if your site is in that category, or if you're struggling to get content indexed, these are the steps to optimize your crawl budget:

  • Fix crawl errors such as broken internal pages that produce 404 errors;
  • Fix internal links that 301 redirect to another internal page (see the sketch after this list);
  • Remove redirect chains (one redirect pointing to another) and redirect loops, because they eat crawl budget;
  • Block parts of your website to keep Google from crawling pages you consider less relevant (only for advanced SEOs);
  • Optimize website speed by reducing image sizes, minimizing HTTP requests, and minifying CSS and JavaScript.
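
To find internal links that redirect, you can feed link targets from a site crawl export to a small script. This sketch uses the Python requests library, which follows redirects and records each hop; the URL list is a placeholder:

    # Flag internal URLs that redirect, and report the length of the chain.
    import requests

    urls = [
        "https://www.example.com/old-page/",
        "https://www.example.com/category/page/",
    ]  # e.g., internal link targets exported from a crawler

    for url in urls:
        try:
            response = requests.get(url, timeout=10)
        except requests.TooManyRedirects:
            print(f"Redirect loop: {url}")
            continue
        if response.history:  # one entry per redirect hop
            hops = " -> ".join(r.url for r in response.history)
            print(f"{len(response.history)} hop(s): {hops} -> {response.url}")

Anything with more than one hop is a chain worth flattening: point the internal link straight at the final URL.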

Even if your website doesn't suffer from crawl budget issues, it's worth improving its speed, as page speed is a Google ranking factor.