Seoxpert.io
High · Crawl & Links

Pages with Timeout Errors

Pages timed out during crawling — the server either took too long to respond or returned HTTP 408. Crawlers and users experience these as broken pages.

By Seoxpert Editorial

Why it matters

Googlebot has a per-page time budget. Pages that exceed it are abandoned for the crawl round. Persistent timeouts cause Google to deprioritise the section entirely — scheduled re-crawls become rare, new content stays unindexed longer, and existing indexed URLs can eventually be dropped.

Impact

Timeouts block indexing. Users landing on slow pages typically abandon before anything renders, which registers as a bounce with zero value delivered. At scale, timeout-prone sections correlate with progressive deindexing of the affected subtree.

How it's detected

The scanner classifies every response: pages that return HTTP 408, pages where the TCP/TLS handshake exceeded a threshold, and pages where the full HTML did not arrive within 30 seconds are all flagged with this issue.
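
If you want to approximate the 30-second full-HTML check yourself, a rough Node 18+ sketch looks like the following (illustrative only; the classify function, its labels, and the threshold come from the description above, not from the scanner's code):

// Rough reproduction of the full-HTML check described above, not the
// scanner's implementation. Assumes Node 18+ (global fetch).
async function classify(url, limitMs = 30_000) {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), limitMs);
  const started = Date.now();
  try {
    const res = await fetch(url, { signal: ctrl.signal, redirect: 'follow' });
    await res.text(); // wait for the full HTML, not just the headers
    if (res.status === 408) return 'timeout (HTTP 408)';
    return `ok in ${Date.now() - started} ms`;
  } catch {
    return 'timeout (no complete response within the limit)';
  } finally {
    clearTimeout(timer);
  }
}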

Common causes

  • Unoptimised database queries — missing indexes, N+1 patterns, full-table scans (see the sketch after this list)
  • Synchronous external API calls in the render path blocking until the remote responds
  • Cold-start penalties on serverless or auto-scaling hosts
  • Server under sustained load with no cache layer
  • Third-party script or font blocking render past a proxy timeout
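
As a concrete illustration of the first cause, here is what an N+1 query pattern and its batched replacement can look like. This is a hypothetical sketch assuming a node-postgres ("pg") client and a posts/authors schema; adapt the table and column names to your own data.

// N+1 pattern: one query for the list, then one extra query per row.
const { Pool } = require('pg');
const pool = new Pool();

async function getPostsSlow() {
  const { rows: posts } = await pool.query('SELECT id, author_id, title FROM posts LIMIT 50');
  for (const post of posts) {
    const { rows } = await pool.query('SELECT name FROM authors WHERE id = $1', [post.author_id]);
    post.author = rows[0];
  }
  return posts; // 51 round trips to the database
}

// Batched version: the same data in two round trips.
async function getPostsFast() {
  const { rows: posts } = await pool.query('SELECT id, author_id, title FROM posts LIMIT 50');
  const ids = [...new Set(posts.map((p) => p.author_id))];
  const { rows: authors } = await pool.query('SELECT id, name FROM authors WHERE id = ANY($1::int[])', [ids]);
  const byId = new Map(authors.map((a) => [a.id, a]));
  for (const post of posts) post.author = byId.get(post.author_id);
  return posts;
}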

How to fix it

Profile before optimising. Turn on slow-query logging and find the worst offenders. Cache server-side at the first layer that can tolerate it — route, fragment, or full page. Move synchronous external calls out of the render path (pre-fetch, background job, or SWR-style stale-cache). For content that doesn't change often, switch to static generation or an edge cache. Set explicit upstream timeouts so a slow dependency fails fast instead of hanging the whole request.

Code examples

Node: add a hard timeout to an upstream fetch

// A slow API should not stall the whole render path.
async function fetchWithTimeout(url, ms = 2000) {
  const ctrl = new AbortController();
  const t = setTimeout(() => ctrl.abort(), ms);
  try {
    return await fetch(url, { signal: ctrl.signal });
  } finally {
    clearTimeout(t);
  }
}

// Wrap the call and fall back to a cached or empty value on timeout.
const data = await fetchWithTimeout(API_URL, 2000).catch(() => cached);

Nginx: tight upstream timeouts

# A backend that is slow on average is a backend that is about to time out under load.
proxy_connect_timeout 2s;
proxy_send_timeout 5s;
proxy_read_timeout 10s;

FAQ

What is the difference between slow response and timeout?

Slow response means the server eventually replied but took too long (e.g. 3-10 seconds). Timeout means the server never replied before the client gave up. Both hurt SEO, but timeouts are worse because the page is effectively unavailable.

Can I increase the timeout to hide the issue?

You can raise the upstream timeout at the edge, but Googlebot has its own per-page budget you cannot control. Fixing the root cause — slow queries, blocking API calls — is the only durable solution.

Found this issue on your site?

Run a scan to see if Pages with Timeout Errors affects your pages.

Scan my website →