Crawl-Delay in robots.txt Explained

What the Crawl-delay directive does, which search engines support it, and when you should (and shouldn't) use it.

What Crawl-delay does

The Crawl-delay directive tells a crawler to wait a specified number of seconds between consecutive requests to your server. It's a throttling mechanism — a way to prevent aggressive crawlers from overwhelming your infrastructure.

User-agent: *
Crawl-delay: 10

This tells crawlers to wait at least 10 seconds between each page request. Instead of hammering your server with dozens of requests per second, the crawler spaces them out.

The value is in seconds. Crawl-delay: 1 means one request per second. Crawl-delay: 30 means one request every 30 seconds. Some crawlers accept decimal values like Crawl-delay: 0.5 (two requests per second), but fractional delays are not universally supported.
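To see how a compliant crawler applies the value, here's a minimal sketch using Python's standard-library robots.txt parser, which exposes the directive through RobotFileParser.crawl_delay(). The bot name MyBot and the example.com URLs are placeholders:

import time
import urllib.request
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()

# crawl_delay() returns the delay for this agent, or None if none is set
delay = rp.crawl_delay("MyBot") or 1.0

for url in ["https://example.com/", "https://example.com/about"]:
    if rp.can_fetch("MyBot", url):
        with urllib.request.urlopen(url) as resp:
            resp.read()
        time.sleep(delay)  # space out consecutive requests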

The syntax

Crawl-delay is placed inside a User-agent block, just like Disallow and Allow:

User-agent: Bingbot
Crawl-delay: 5
Disallow: /admin/

User-agent: *
Crawl-delay: 10
Disallow: /admin/

In this example, Bingbot waits 5 seconds between requests while all other crawlers wait 10 seconds. You can set different delays for different crawlers, which is useful when some bots are more aggressive than others.
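Per-agent matching works the same way as for Disallow: a crawler uses the delay from the most specific User-agent block that matches it and otherwise falls back to the wildcard block. A quick way to check how a parser resolves the file above, using Python's standard library:

from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Bingbot
Crawl-delay: 5
Disallow: /admin/

User-agent: *
Crawl-delay: 10
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.crawl_delay("Bingbot"))   # 5  (matched by the Bingbot block)
print(rp.crawl_delay("SomeBot"))   # 10 (falls through to the wildcard block)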

Not part of the original spec

Crawl-delay was not included in the original 1994 Robots Exclusion Protocol, and it's not part of RFC 9309. It's a de facto extension that some crawlers support and others ignore. Always check whether a specific crawler honors this directive before relying on it.

Which search engines support Crawl-delay

This is the most important caveat: support is inconsistent across the major crawlers:

Crawler        Supports Crawl-delay
Googlebot      No
Bingbot        Yes
Yandex         Yes
Baiduspider    Yes
DuckDuckBot    No (uses Bing's index)
Applebot       Yes
AhrefsBot      Yes
SemrushBot     Yes

The biggest name missing from the "Yes" column is Google: Googlebot ignores the Crawl-delay directive entirely.

What Google offers instead

Google handles crawl rate through Google Search Console, not robots.txt. There are two relevant mechanisms:

Automatic crawl rate adjustment

Googlebot automatically adjusts its crawl rate based on your server's response times. If your server starts responding slowly or returning 500 errors, Googlebot backs off. When your server recovers, crawl rate increases again. This happens without any configuration on your part.
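Google hasn't published the exact algorithm, but the behavior it describes resembles classic adaptive backoff: slow down sharply on distress signals, speed back up gradually while the server looks healthy. A toy sketch of that idea (an illustration, not Google's actual logic):

def next_delay(current, status_code, response_seconds,
               slow_threshold=2.0, min_delay=0.1, max_delay=60.0):
    """Pick the delay before the next request, AIMD-style."""
    if status_code >= 500 or response_seconds > slow_threshold:
        # Server is struggling: back off multiplicatively
        return min(current * 2, max_delay)
    # Server looks healthy: creep back toward full speed
    return max(current - 0.1, min_delay)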

Search Console crawl rate setting

In Google Search Console under Settings > Crawl rate, you can reduce Googlebot's maximum crawl rate for your property. This is a ceiling, not a floor — Google may still crawl slower than the limit you set. This setting only reduces the rate; you can't increase it beyond Google's default.

Google's reasoning: a static Crawl-delay value in robots.txt can't adapt to real-time server conditions. Their automated system does. If your server can handle 50 requests per second during off-peak hours but only 5 during peak, a fixed Crawl-delay value is too conservative in one scenario and too aggressive in the other.

When Crawl-delay is useful

There are legitimate scenarios where Crawl-delay helps:

Small servers under heavy crawl load

If you're running a modest VPS or shared hosting plan and multiple bots are crawling simultaneously, the combined load can degrade performance for real users. A Crawl-delay for non-essential crawlers is reasonable:

# Allow search engines to crawl at their preferred rate
User-agent: Googlebot
Disallow: /admin/

User-agent: Bingbot
Crawl-delay: 2
Disallow: /admin/

# Throttle SEO tool crawlers more aggressively
User-agent: AhrefsBot
Crawl-delay: 15

User-agent: SemrushBot
Crawl-delay: 15

# Conservative default for unknown bots
User-agent: *
Crawl-delay: 10
Disallow: /admin/

This prioritizes Googlebot (which ignores Crawl-delay anyway) and real search engines while limiting the impact of SEO tool crawlers and unknown bots.

Rate-limited APIs or dynamic pages

If your site generates pages dynamically and each request is expensive (database queries, API calls, server-side rendering), throttling crawlers prevents them from saturating your backend:

User-agent: *
Crawl-delay: 5
Disallow: /admin/

Managing multiple aggressive crawlers

Some crawlers — particularly SEO tools and analytics bots — can be surprisingly aggressive. If you see AhrefsBot or similar crawlers making thousands of requests per hour, Crawl-delay is the polite way to tell them to slow down.

When Crawl-delay hurts

Crawl-delay is not free. There are real costs to slowing crawlers down.

Slowing indexing of large sites

If your site has 100,000 pages and you set Crawl-delay: 10 for Bingbot, that's one page every 10 seconds. At that rate, crawling every page takes over 11 days of continuous crawling — assuming the crawler works 24/7 with no interruptions. For a site that updates frequently, Bing would never catch up.

# Math on large sites:
# 100,000 pages / (1 page per 10 seconds) = 1,000,000 seconds = ~11.6 days
# Add new pages daily and Bing falls further behind
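The same arithmetic as a small helper, handy for sanity-checking a proposed delay against your page count:

def full_crawl_days(pages, crawl_delay_seconds):
    """Days of continuous crawling needed to fetch every page once."""
    return pages * crawl_delay_seconds / 86_400  # 86,400 seconds per day

print(full_crawl_days(100_000, 10))  # ~11.6 days
print(full_crawl_days(100_000, 2))   # ~2.3 days with a gentler delay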

Preventing discovery of new content

Search engines discover new pages by following links on pages they've already crawled. If you throttle crawl rate too aggressively, the crawler visits fewer pages per session and follows fewer links. New content takes longer to appear in search results.

Don't set Crawl-delay too high

A Crawl-delay of 30 seconds or more will severely limit how many pages a crawler can visit. For most sites, values between 1 and 10 seconds are sufficient. If you need more than 10 seconds, the problem might be your server capacity, not crawler behavior.

Reducing SEO visibility on Bing

While Google ignores Crawl-delay, Bing respects it strictly. If you set a high Crawl-delay for Bingbot (or in a wildcard block), you're directly reducing your visibility in Bing search results. Pages are discovered more slowly, changes are reflected more slowly, and your content is indexed less completely.

Alternatives to Crawl-delay

If Crawl-delay doesn't quite fit your needs, there are other approaches:

Google Search Console crawl rate

For Googlebot specifically, use the Search Console crawl rate limiter. It's more flexible and responds to Google's actual crawling patterns.

Server-side rate limiting

Use your web server or CDN to rate-limit bot traffic. Nginx, Cloudflare, and most CDNs offer bot-specific rate limiting that works for all crawlers, including those that ignore robots.txt.

HTTP 429 responses

Return a 429 Too Many Requests status code when a bot exceeds your preferred rate. Well-behaved crawlers (including Googlebot) will back off when they receive 429 responses. Include a Retry-After header to specify how long the crawler should wait.
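Here's a minimal sketch of that pattern using only Python's standard library; the per-agent bookkeeping and the 5-second budget are illustrative choices, not a production design:

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

MIN_INTERVAL = 5.0  # desired seconds between requests per user agent
last_seen = {}      # user agent -> timestamp of its last allowed request

class ThrottlingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "unknown")
        now = time.monotonic()
        last = last_seen.get(agent)
        if last is not None and now - last < MIN_INTERVAL:
            # Too fast: refuse, and say when to come back
            self.send_response(429)
            self.send_header("Retry-After", str(int(MIN_INTERVAL - (now - last)) + 1))
            self.end_headers()
            return
        last_seen[agent] = now
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), ThrottlingHandler).serve_forever()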

Caching and CDN

If crawler load is a server performance concern, put a CDN or caching layer in front of your site. Cached responses cost almost nothing to serve, making Crawl-delay unnecessary in many cases.

A practical Crawl-delay strategy

For most sites, here's a sensible approach:

# Don't set Crawl-delay for Google (it's ignored anyway)
User-agent: Googlebot
Disallow: /admin/

# Moderate delay for Bing
User-agent: Bingbot
Crawl-delay: 2
Disallow: /admin/

# Yandex can be aggressive — throttle it
User-agent: Yandex
Crawl-delay: 5
Disallow: /admin/

# SEO tools don't need fast crawl rates
User-agent: AhrefsBot
Crawl-delay: 10

User-agent: SemrushBot
Crawl-delay: 10

# Default for unknown crawlers
User-agent: *
Crawl-delay: 5
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml

The logic: prioritize search engine crawlers, moderately throttle secondary bots, and set a conservative default for anything unknown. Adjust the specific values based on your server capacity and site size.

Monitor before you set

Check your server logs to see actual crawl rates before setting Crawl-delay values. You might find that most crawlers are already crawling at an acceptable rate, and Crawl-delay isn't needed at all.
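A quick sketch for tallying requests per user agent from a combined-format access log; the file name is a placeholder for wherever your server writes its logs:

import re
from collections import Counter

# In the combined log format, the user agent is the last quoted field
UA_RE = re.compile(r'"([^"]*)"\s*$')

hits = Counter()
with open("access.log") as log:  # placeholder path
    for line in log:
        match = UA_RE.search(line)
        if match:
            hits[match.group(1)] += 1

for agent, count in hits.most_common(10):
    print(f"{count:8d}  {agent}")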


Crawl-delay is a blunt instrument. Make sure it's the right tool before you use it.
