Fortunately, there is an easy way to let Googlebot crawl all CSS and JavaScript files while still keeping other areas of your site blocked.
Simply add the following to your robots.txt, and Googlebot will be able to crawl all the CSS and JavaScript files on your site.
User-Agent: Googlebot
Allow: .js
Allow: .css
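If your robots.txt already blocks other areas under a wildcard group, these lines simply form a separate group for Googlebot. A hypothetical combined file might look like this (the /private/ path is only an illustration of an existing blocked area, not something from the original advice):

User-agent: *
Disallow: /private/

User-Agent: Googlebot
Allow: .js
Allow: .css

Googlebot follows the most specific User-Agent group that matches it, so the Allow rules govern its crawling, while other crawlers continue to obey the User-agent: * rules.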
Also remember that Google does not index .css and .js files in its search results, so you do not need to worry about these files causing any unnecessary SEO issues. In fact, the opposite is true: blocking .css and .js files “can result in suboptimal rankings.”
Once you add this, you can use the “Fetch as Google” feature in Google Search Console to confirm that the resources are no longer blocked by the new robots.txt file.
While some commenters pointed out a similar robots.txt solution in the comments here, Gary Illyes from Google confirmed its use on Stack Overflow:
…the simplest form of allow rule to allow crawling javascript and css resources:
User-Agent: Googlebot
Allow: .js
Allow: .css
This will allow anything like https://example.com/deep/style.css?something=1 or https://example.com/deep/javascript.js, and leaves not much space for interpretation for other search engines.

Once you have this, you can test your setup in the Search Console Blocked Resources feature.
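To make the quoted matching behavior concrete, here is a minimal Python sketch. It assumes, based only on the examples in Gary Illyes’s answer, that a pattern without a leading slash can match anywhere in the URL’s path and query string; this is an illustration of the quoted behavior, not Google’s actual parser (Python’s own urllib.robotparser matches rules as path prefixes, so it would not reproduce these results).

from urllib.parse import urlsplit

# Patterns from the robots.txt group quoted above.
ALLOW_PATTERNS = [".js", ".css"]

def matches_allow(url: str) -> bool:
    parts = urlsplit(url)
    # Rules are matched against the path plus the query string, which is
    # why style.css?something=1 is still covered.
    target = parts.path + ("?" + parts.query if parts.query else "")
    # Assumption: a slash-less pattern may match anywhere in the target.
    return any(pattern in target for pattern in ALLOW_PATTERNS)

for url in (
    "https://example.com/deep/style.css?something=1",
    "https://example.com/deep/javascript.js",
    "https://example.com/deep/page.html",
):
    print(url, matches_allow(url))
# -> True, True, False for the three URLs above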
We have more details on the “Googlebot cannot access CSS and JS files” warning here, and here are instructions so you can find exactly which resources Google thinks you are blocking, even if you are certain it is a false positive (it probably isn’t).