Written by Ryan Jones. Updated on 23 October 2024
In this article, we are going to talk about an incredibly common error found within Google Search Console: the Blocked Due to Access Forbidden (403) error.
We are going to cover what the error actually is, what causes it to appear in Google Search Console, where to find any impacted pages within your GSC dashboard and, of course, how to fix the pages the error is appearing on.
The Blocked Due to Access Forbidden (403) error within Google Search Console indicates that Googlebot has tried to crawl a URL on your website but was denied access to that URL. This means that when Googlebot tried to crawl the URL, it was greeted with a 403 Forbidden HTTP status code, signalling to the crawler that it does not have permission to access the content.
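To make that concrete, here is roughly what such an exchange looks like at the HTTP level. This is a simplified sketch with a placeholder URL, not a capture of a real Googlebot request:

```http
GET /private-page/ HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

HTTP/1.1 403 Forbidden
Content-Type: text/html
```

That 403 status line is the server telling Googlebot it is not allowed to fetch the content behind the URL.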
There are several reasons why this error occurs, which we will dive into in the next section of the article, but it could be anything from rules in your site's configuration files to server-level restrictions.
Image Credit: https://support.google.com/
There are a multitude of things that can cause you to get a Blocked Due to Access Forbidden (403) error within your Google Search Console account. Below, we’ll dive into some of the main causes and a little about what they mean.
Perhaps the most common reason for seeing a Blocked Due to Access Forbidden (403) error within your Google Search Console account comes down to your site's robots.txt file.
Within your robots.txt file, you might have rules that disallow Googlebot (or other user-agents) from accessing certain URLs or directories on your website. As discussed above, if Googlebot cannot access a URL on your website, it is very likely to throw an error.
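As a quick illustration, a robots.txt file with rules like the following would keep Googlebot away from the listed paths (the directory names here are hypothetical, purely for illustration):

```
# Block Googlebot specifically from one directory
User-agent: Googlebot
Disallow: /internal-tools/

# Block all crawlers from another
User-agent: *
Disallow: /staging/
```

Worth noting: a robots.txt disallow usually surfaces in GSC under its own "Blocked by robots.txt" status, so check your robots.txt file alongside, rather than instead of, your server rules.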
You might also be seeing this error because of your server settings.
Your website's server might be configured to block certain IP addresses, primarily for website security reasons. But if your server is blocking any of the IP addresses that Googlebot crawls from, it will serve Googlebot a 403 response and cause this error to appear against your URL or URLs.
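On an Apache server, an IP-level block of this kind might look something like the sketch below. The CIDR range is chosen for illustration because it overlaps addresses Googlebot commonly crawls from, so a rule like this would lock the crawler out:

```apache
# .htaccess / Apache 2.4 sketch: deny requests from an IP range.
# This particular range overlaps Google's published crawler addresses,
# so it would inadvertently serve Googlebot a 403.
<RequireAll>
    Require all granted
    Require not ip 66.249.64.0/19
</RequireAll>
```

If you find rules like this, compare the blocked ranges against Google's published list of Googlebot IP addresses before removing or narrowing them.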
Of course, some parts of your website may require login details. Googlebot, obviously, does not have these login credentials and so cannot access URLs behind these walls. Because Googlebot cannot access (and is forbidden from accessing) these URLs, it will report this error in GSC.
Some CMS platforms, such as WordPress, may have built-in settings or plugins that block search engine crawlers from accessing specific parts of a website. If this is happening with your CMS, it may be what is causing Googlebot to throw this error within your Google Search Console dashboard.
If your website is using an Apache server, you may be using incorrect rules within your .htaccess file that result in Googlebot being denied access to some of your website's URLs.
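A classic example is an anti-bot rule written too broadly. The sketch below is hypothetical, not taken from any real site; it is meant to return a 403 to spam bots, but because Googlebot's user-agent string also contains "bot", it gets caught as well:

```apache
# .htaccess sketch: an overly broad user-agent block.
RewriteEngine On
# Matches ANY user agent containing "bot" -- including "Googlebot".
RewriteCond %{HTTP_USER_AGENT} bot [NC]
# The [F] flag returns 403 Forbidden, exactly the status GSC is reporting.
RewriteRule .* - [F,L]
```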
If you are using certain hosting providers, especially lower-cost ones, you might find that they block certain requests, including those from Googlebot, once your site exceeds the bandwidth limits they have set.
This is especially likely to happen if you are on a shared hosting plan, or if your website experiences a sudden uptick in traffic.
You may find that your server blocks requests from certain geographic locations. This can inadvertently block Googlebot if it crawls from one of the regions your server has blocked; Googlebot crawls primarily from IP addresses based in the United States, so US-level blocks carry a particular risk.
Someone managing your website, such as a webmaster or SEO professional, may have intentionally blocked search engine bots from accessing certain parts of your site. If so, Googlebot (and, of course, other search engine bots) will throw these Access Forbidden errors your way, and you will have to decide whether those pages should be blocking search engine bots or not.
Finding out which pages (if any) are impacted by this error is very easy within Google Search Console. Below, we'll walk through a step-by-step guide to accessing a list of the impacted pages that your webmaster or SEO executive needs to look at.
As your first step, head to your Google Search Console dashboard and click on the “Pages” link underneath the “Indexing” section on the left-hand side toolbar:
You'll then be greeted with your Page Indexing Report, which will look something like this:
Once here, scroll down on your Page Indexing Report and you will see a list of all the indexing issues Google Search Console has identified for your website.
Please note that this list will be different for every website as not all websites will have the same indexing issues.
The list will look something like this:
Once here, you may see a link that contains the error message Blocked Due to Access Forbidden (403):
Image Credit: https://www.onely.com/
From here, you can investigate these URLs individually, decide whether they need to be fixed, and then work on fixing them if the error is harming your SEO.
Luckily, fixing this issue within Google Search Console is relatively straightforward for SEO professionals and webmasters alike.
Firstly, you will need to find out the reason Googlebot is being blocked from certain pages on your site. Check your website’s .htaccess file, server configuration files, and any security plugins or firewalls you have in place. Look for any rules that are explicitly blocking Google’s crawlers. You should also ensure that the affected URLs are not disallowed by your robots.txt file.
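If you want a quick, rough check before digging through configuration files, you can request an affected URL with Googlebot's user-agent string and compare the response to a normal request. This is only an approximation: the URL below is a placeholder, and real Googlebot requests come from Google's own IP ranges, so IP-level blocks will not show up here. It will, however, catch user-agent-based blocks:

```python
import requests

# Placeholder URL -- swap in one of the affected URLs from your GSC report.
URL = "https://www.example.com/blocked-page/"

USER_AGENTS = {
    "Browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                 "+http://www.google.com/bot.html)",
}

for label, ua in USER_AGENTS.items():
    response = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    print(f"{label}: HTTP {response.status_code}")

# A 200 for the browser-like request but a 403 for the Googlebot one points
# to a user-agent-based block in your server rules, firewall, or a plugin.
```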
Once you have identified the cause, correct the configuration that is blocking access. If it's a .htaccess or server configuration issue, you need to adjust the rules to allow Googlebot to access the impacted URLs. If it's due to a plugin or a particular setting on your website's firewall, you may need to update the settings within those tools to whitelist Google's crawler.
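Continuing the hypothetical .htaccess example from earlier, the fix could be as simple as adding an exception for Googlebot before the blocking rule fires:

```apache
# .htaccess sketch: still 403 spam bots, but exempt Googlebot.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} bot [NC]
RewriteCond %{HTTP_USER_AGENT} !Googlebot [NC]
RewriteRule .* - [F,L]
```

Bear in mind that user-agent strings can be spoofed, so if security matters for those paths, verify that requests claiming to be Googlebot genuinely come from Google (for example, via a reverse DNS lookup) rather than trusting the header alone.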
Always be cautious when making these changes. It’s important to maintain site security whilst also allowing search engines to crawl your website.
Lastly, once you have made any needed changes, you can inspect your URLs within Google Search Console to ensure Googlebot can crawl and index them without issue. Monitor your GSC dashboard for a few days to ensure that the 403 errors do not reappear. If errors persist, you may need to conduct a more thorough review of your site's settings, or consult your web development team or your hosting provider's support team.
Whether or not to fix 403 errors should be based on the content and intended accessibility of the affected pages.
For pages that are meant to be public and are valuable to users, fixing these errors is a must to allow Googlebot to crawl, index and rank them.
If pages are intentionally restricted because they contain sensitive, private, or special-access content, these 403 errors are serving their intended purpose, as these pages are not supposed to be indexed. Additionally, these URLs should be disallowed in your website's robots.txt file to prevent search engines from attempting to crawl them.
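For example, a robots.txt file for intentionally restricted areas might look like this (the directory names are hypothetical):

```
User-agent: *
Disallow: /members-only/
Disallow: /account/
```

This keeps well-behaved crawlers from requesting those URLs in the first place, so the 403 responses stop appearing as crawl errors.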
In conclusion, the Blocked Due to Access Forbidden (403) error in Google Search Console is a common hurdle that, in some cases, can hinder a website's SEO performance by preventing Googlebot from accessing and indexing crucial URLs.
Addressing this error involves a careful examination of various potential causes, anything from server configurations to CMS settings. Fixing the issue requires a precise approach to modifying the settings that are blocking Googlebot, ensuring that the solution meets the dual needs of accessibility for search engines and security for the website.
Whilst resolving these errors is critical for public-facing pages to ensure their visibility in search results, it is equally important to recognise when such blocks are intentional for private and restricted content.
Looking to make better use of your Google Search Console data? SEOTesting allows you to archive your website's performance data for much longer than the default 16 months. It also has a wide range of useful reports to help you identify content cannibalisation, quick-win keywords, and much more. We are currently running a 14-day free trial with no credit card required to sign up. So sign up today!