Performance issues are the most measurable category — each one maps directly to a millisecond delay, and milliseconds map to ranking signals. Core Web Vitals failures in particular can suppress otherwise well-optimised pages.
Check if your site has these issues — free, no install required.
Time to First Byte (TTFB) is the time between a browser sending a request and receiving the first byte of the response. Pages with a TTFB above 3 seconds are flagged as slow. TTFB is the foundation of all page speed — a slow server delays every subsequent resource load, worsens Largest Contentful Paint (LCP), and reduces crawl efficiency.
The most common causes are unoptimised database queries (full-table scans on every page load), no server-side caching, underpowered shared hosting, and the absence of a CDN. Industry studies have repeatedly found that each additional 100ms of latency can reduce conversions by roughly 1% — a compounding effect at scale. Google uses page speed as a ranking signal across both mobile and desktop.
Why this hurts: slow TTFB delays the entire critical rendering path, and slow servers get crawled less often, so ranking updates propagate more slowly.
How to detect it: the scanner measures time from request initiation to first byte. Pages exceeding 3000ms are flagged.
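The detection step above can be sketched in a few lines of Python. This is an illustrative approximation using only the standard library — `measure_ttfb` and `is_slow` are hypothetical names, not the scanner's actual implementation:

```python
import time
import urllib.request

SLOW_THRESHOLD_MS = 3000  # pages above this are flagged as slow

def is_slow(ttfb_ms, threshold_ms=SLOW_THRESHOLD_MS):
    """Classify a TTFB measurement against the slow-page threshold."""
    return ttfb_ms > threshold_ms

def measure_ttfb(url):
    """Approximate milliseconds from request initiation to first response byte."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)  # blocks until the first byte of the body arrives
    return (time.monotonic() - start) * 1000
```

Note that `urlopen` includes DNS lookup and TLS setup in the measurement, which is close to what a real browser experiences on a cold connection.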
Each <script src> tag is an HTTP request that the browser must queue, download, parse, and execute. Pages with more than 15 external JavaScript files generate enough request overhead to measurably increase Time to Interactive and add seconds of render-blocking delay. The problem compounds when scripts are loaded in the <head> without async or defer.
The fix is bundling: consolidating multiple scripts into a single file served from a CDN with aggressive caching. Tag managers, analytics snippets, and A/B test scripts are the usual culprits — each added independently without auditing the total count. Using async for analytics and defer for non-critical scripts prevents blocking the main thread.
Why this hurts: render-blocking scripts stall the browser's critical rendering path, adding measurable latency to the first meaningful paint.
How to detect it: the scanner counts <script src> tags. Pages with more than 15 external JS references are flagged.
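A minimal Python sketch of this count, using the standard-library HTML parser (the function name and threshold handling are illustrative, not the scanner's internals):

```python
from html.parser import HTMLParser

MAX_EXTERNAL_SCRIPTS = 15  # pages above this are flagged

class ScriptCounter(HTMLParser):
    """Count <script> tags that reference an external file via src."""
    def __init__(self):
        super().__init__()
        self.external = 0

    def handle_starttag(self, tag, attrs):
        # inline <script> blocks have no src and are not counted
        if tag == "script" and dict(attrs).get("src"):
            self.external += 1

def count_external_scripts(html):
    parser = ScriptCounter()
    parser.feed(html)
    return parser.external
```

Feeding a page's HTML to `count_external_scripts` and comparing against `MAX_EXTERNAL_SCRIPTS` reproduces the check; inline scripts are deliberately excluded.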
The HTML document is the first resource the browser parses. At 200KB or more, the initial download stalls rendering and delays First Contentful Paint (FCP) and LCP. Unlike images — which browsers can defer loading — the HTML must arrive completely before the DOM can be constructed.
Large HTML is usually caused by inline CSS or JavaScript that should be in external cached files, large server-side rendering hydration payloads, or uncompressed responses. Enabling GZIP or Brotli compression at the server level typically reduces HTML size by 60–80% with zero code changes. Inline <style> and <script> blocks over 5KB should be moved to external files.
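The scale of the compression savings is easy to verify with Python's standard library. This sketch uses gzip (Brotli typically compresses somewhat better) on deliberately repetitive markup, which is what generated HTML tends to look like:

```python
import gzip

# a deliberately repetitive page body, as templated markup often is
html = (b"<html><body>"
        + b"<div class='card'><span>item</span></div>" * 1000
        + b"</body></html>")

compressed = gzip.compress(html, compresslevel=6)
savings = 1 - len(compressed) / len(html)
print(f"{len(html)} B -> {len(compressed)} B ({savings:.0%} smaller)")
```

Real-world pages are less repetitive than this synthetic example, which is why the typical figure is 60–80% rather than the near-total reduction seen here.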
Why this hurts: large HTML increases bandwidth costs, worsens Core Web Vitals scores, and slows Googlebot's crawl throughput.
How to detect it: the scanner checks the Content-Length of HTML responses. Pages exceeding 200KB are flagged.
Each image on a page is an HTTP request, a chunk of bandwidth, and a potential LCP contributor. Pages with more than 30 images drive up both request count and total transfer size. Critically, images that are below the fold but eagerly loaded consume bandwidth before the browser has rendered anything the user can see.
The fix is lazy loading: adding loading="lazy" to all below-the-fold images defers their download until the user scrolls toward them. For icon-heavy pages, CSS sprites consolidate dozens of small images into a single request. Product listing pages should defer off-screen variant images to the moment they become visible.
Why this hurts: excessive eager image loading wastes bandwidth, worsens LCP for above-the-fold content, and slows mobile page loads on slower connections.
How to detect it: the scanner counts <img> references. Pages with more than 30 images are flagged.
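An audit along these lines can be sketched with the standard-library parser. This hypothetical helper counts images and separately counts those loaded eagerly, i.e. without `loading="lazy"`:

```python
from html.parser import HTMLParser

class ImageAudit(HTMLParser):
    """Count <img> tags and flag those without loading='lazy'."""
    def __init__(self):
        super().__init__()
        self.total = 0
        self.eager = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if dict(attrs).get("loading") != "lazy":
                self.eager += 1

def audit_images(html):
    parser = ImageAudit()
    parser.feed(html)
    return parser.total, parser.eager
```

A real audit would also need to distinguish above-the-fold images (which should stay eager) from below-the-fold ones; that requires layout information a pure HTML parse cannot provide.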
When an <img> tag lacks explicit width and height attributes, the browser cannot reserve space for the image before it loads. This causes layout shifts — the page reflowing as images arrive — which directly hurts the Cumulative Layout Shift (CLS) Core Web Vital. Google requires a CLS score below 0.1 for a "Good" rating.
Adding width and height attributes to every <img> element enables the browser to calculate the correct aspect ratio and reserve layout space before the image file downloads. This is one of the highest-impact, lowest-effort CLS fixes available. Read the full Core Web Vitals impact guide for threshold explanations and other CLS causes.
Why this hurts: CLS failures above 0.1 contribute to poor Page Experience signals, which Google weighs against otherwise well-optimised pages.
How to detect it: the scanner checks all <img> elements for missing or zero-value width and height attributes.
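A sketch of this check in Python, again with illustrative names rather than the scanner's actual code — it collects the `src` of every image whose width or height is missing or zero:

```python
from html.parser import HTMLParser

class DimensionCheck(HTMLParser):
    """Collect <img> tags missing usable width/height attributes."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        w, h = a.get("width"), a.get("height")
        # missing or zero-valued dimensions prevent layout-space reservation
        if not w or not h or w == "0" or h == "0":
            self.flagged.append(a.get("src", "?"))

def find_unsized_images(html):
    parser = DimensionCheck()
    parser.feed(html)
    return parser.flagged
```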
Scripts in the <head> without async or defer attributes are "render-blocking": the browser halts HTML parsing until the script is fully downloaded and executed. A single 200KB analytics script in the <head> can add 500–1500ms of blocked render time on a mobile connection.
This issue is part of the broader JavaScript file count problem. Add defer to all non-critical scripts (analytics, chat widgets, social buttons). Use async for scripts that don't depend on DOM readiness. Only truly critical scripts — like polyfills required for layout — justify blocking render.
Why this hurts: render-blocking scripts directly delay FCP and LCP, degrading Core Web Vitals and user experience on every page load.
How to detect it: covered by the JavaScript file count check — scanner counts synchronous <script src> tags in the document head.
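The head-scoped variant of the script check can be sketched as follows — a hypothetical helper that flags external scripts inside <head> lacking both async and defer (note that html.parser represents valueless attributes like `defer` as keys with a `None` value, so a plain membership test works):

```python
from html.parser import HTMLParser

class HeadScriptCheck(HTMLParser):
    """Flag external scripts in <head> lacking async or defer."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        if tag == "head":
            self.in_head = True
        elif tag == "script" and self.in_head:
            a = dict(attrs)
            if a.get("src") and "async" not in a and "defer" not in a:
                self.blocking.append(a["src"])

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

def find_blocking_scripts(html):
    parser = HeadScriptCheck()
    parser.feed(html)
    return parser.blocking
```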
Text-based assets — HTML, CSS, JavaScript, and JSON — compress dramatically with gzip or Brotli. A 300KB HTML page typically compresses to under 60KB with Brotli at level 6. Serving uncompressed text assets wastes 60–80% of bandwidth on every request and slows transfer in proportion to the wasted bytes, with the penalty largest on slow connections.
This is part of the large HTML payload finding. Most hosting platforms and CDNs enable compression by default — but server configuration changes, custom proxies, or misconfigured middleware can disable it silently. Verify compression is active by checking the Content-Encoding response header for any page.
Why this hurts: uncompressed assets significantly increase total transfer size, slowing every page load — especially on mobile connections.
How to detect it: the scanner checks the Content-Length of HTML responses and the Content-Encoding header. Large uncompressed responses are surfaced via the HTML payload finding.
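The header verification can be scripted as well. In this sketch, `check_url` is an illustrative helper (the live request obviously depends on network access), and `compression_header` is the pure check it delegates to:

```python
import urllib.request

def compression_header(headers):
    """True if the response advertises gzip or Brotli encoding."""
    enc = (headers.get("Content-Encoding") or "").lower()
    return enc in ("gzip", "br")

def check_url(url):
    """Fetch a page while advertising compression support; report whether it was used."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip, br"})
    with urllib.request.urlopen(req) as resp:
        return compression_header(resp.headers)
```

Sending the Accept-Encoding request header matters: a server that supports compression will still respond uncompressed if the client never declares support for it.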
Browse all performance findings in the performance issues library.
The Performance Scanner checks response times, script counts, HTML payload sizes, and image loading patterns in a single crawl. Enter your URL to see which of these issues your site has.
Or sign up to use your free scan credit. View plans for ongoing monitoring.