The frequency rankings below come from running Seoxpert against the ~500 sites I've tested it on so far — a mix of WordPress, Shopify, Webflow, custom Next.js, and a handful of static-site generators. Some issues are CMS defaults nobody changed (Yoast leaving meta-description empty for new posts is the #1 culprit there). Others are deploy-time accidents that accumulated over years (canonical pointing at staging, sitemap missing the entries created since 2023, a noindex from a long-forgotten content migration).
Each issue links to a longer fix recipe with the specific code or config change needed. The free scan checks for all 10 against your site in under two minutes — I built it so the same audit a paying Agency-tier customer gets is what you see on the first run, not a stripped-down preview.
Check if your site has these issues — free, no install required.
Title tags are one of the most important on-page SEO signals. They appear as the clickable headline in search results and in browser tabs. Without a title tag, Google generates its own from the page's content — which is rarely as effective as a deliberate, keyword-targeted title written for search intent.
The most common cause is a CMS template that lacks a title in its <head>, or CMS pages where the title field was left blank. On JavaScript-rendered sites, titles added client-side may not be visible to the crawler. Aim for 50–60 characters with the primary keyword near the start.
Why this hurts: missing title tags significantly reduce click-through rates and weaken keyword relevance signals. Google may generate a misleading substitute title.
How to detect it: the scanner extracts the <title> from each page. Pages without a non-empty title are flagged.
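The title check described above can be sketched in a few lines of Python using only the standard library. This is an illustrative sketch, not Seoxpert's actual code; the `audit_title` function and its issue labels are hypothetical names:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text content of the first <title> element."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit_title(html_src: str) -> list[str]:
    """Flag pages with a missing/empty title or one past the ~60-char guideline."""
    parser = TitleExtractor()
    parser.feed(html_src)
    title = parser.title.strip()
    issues = []
    if not title:
        issues.append("missing-title")
    elif len(title) > 60:
        issues.append("title-too-long")
    return issues
```

A real crawler would also handle pages with multiple `<title>` elements and client-side-rendered titles, which a plain HTML parse never sees.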
Meta descriptions are the primary snippet text in search results. They are not a direct ranking factor, but a well-written description directly influences click-through rate — which is an indirect signal of page quality. When a description is missing, Google generates one from whatever page text appears near the search query, often producing incomplete or out-of-context snippets.
Every page that can appear in search results should have a unique meta description of 120–155 characters with a clear call to action. Identical descriptions on multiple pages are treated as duplicates and should be individualised. See our full guide on what makes a good meta description.
Why this hurts: missing meta descriptions typically reduce SERP CTR by 5–10% compared to pages with compelling descriptions.
How to detect it: the scanner reads <meta name="description">. Pages where this tag is absent or empty are flagged.
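The presence-and-length check can be sketched like this (illustrative only; `description_ok` is a hypothetical name, and the 120–155 range is the guideline from this article, not a Google rule):

```python
from html.parser import HTMLParser

class MetaDescription(HTMLParser):
    """Records the content attribute of <meta name="description">."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "description":
            self.description = (a.get("content") or "").strip()

def description_ok(html_src: str) -> bool:
    """True when a non-empty description within the 120-155 char guideline exists."""
    p = MetaDescription()
    p.feed(html_src)
    return bool(p.description) and 120 <= len(p.description) <= 155
```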
Multiple pages sharing the same title tag signal to Google that the pages cover the same topic. This creates a ranking conflict: Google cannot determine which page to show for a query and may choose the wrong one, split ranking signals between the pages, or filter duplicates out of results entirely.
The most common sources are CMS default titles applied to all pages, pagination pages sharing the parent page's title, and tag or archive pages inheriting category names. Each indexable page needs a unique title that describes its specific content. Dynamic title generation in CMS templates — using page-specific fields — prevents this at scale.
Why this hurts: duplicate titles dilute ranking signals and reduce SERP CTR differentiation across affected pages.
How to detect it: the scanner groups pages by normalised title. Any title shared by 2+ pages is flagged.
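Grouping by normalised title is a one-pass dictionary build. A minimal sketch (the input shape — a URL-to-title mapping — and the whitespace/case normalisation are assumptions, not Seoxpert's internals):

```python
from collections import defaultdict

def duplicate_titles(pages: dict[str, str]) -> dict[str, list[str]]:
    """Group URLs by normalised title; return only titles shared by 2+ pages."""
    groups = defaultdict(list)
    for url, title in pages.items():
        # Normalise: collapse whitespace, lowercase.
        groups[" ".join(title.split()).lower()].append(url)
    return {t: urls for t, urls in groups.items() if len(urls) > 1}
```

For example, `"Home | Acme"` and `"home  | acme"` on two different URLs would land in the same group and be flagged together.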
A noindex directive tells search engines not to include the page in their index. On content pages that should rank, this is catastrophic — the page is invisible to organic search regardless of its quality or link equity. Unintentional noindex is one of the most common causes of missing organic traffic, particularly after CMS migrations and template changes.
WordPress is the most common culprit: the "Discourage search engines from indexing this site" setting in Settings → Reading adds noindex to every page. It is frequently left enabled after development and staging. Theme updates can also inadvertently introduce noindex through template changes to the header.
Why this hurts: incorrectly noindexed pages lose all organic traffic potential until the directive is removed and the page is re-crawled.
How to detect it: the scanner reads <meta name="robots"> and X-Robots-Tag headers. Pages containing "noindex" are flagged.
A canonical tag declares the preferred URL for a page. Without it, search engines must guess which URL variant to index when the same content is accessible via multiple URLs — with or without trailing slashes, with or without www, with query parameters. This splits link equity across URL variants and can cause the wrong variant to rank.
Every indexable page should carry a self-referencing canonical: <link rel="canonical" href="https://example.com/page/" />. Most CMS platforms generate this automatically when an SEO plugin is configured — but the tag is often missing on custom-built pages, landing pages, or pages generated outside the main CMS.
Why this hurts: missing canonicals can split link equity across URL variants, weakening all versions of the page.
How to detect it: the scanner reads <link rel="canonical"> from each page head. Pages without this tag are flagged.
A redirect chain occurs when URL A redirects to URL B which redirects to URL C. Each hop adds network latency and dilutes link equity passed through the chain. Chains typically accumulate across site migrations — when a site moves from HTTP to HTTPS, then to a new domain, the old redirect rules stack.
The fix is straightforward: update all internal links to point directly to the final destination URL and collapse redirect chains to a single 301 hop. For external links pointing into chains, the canonical tag on the final page helps consolidate signals even when you cannot control the originating link.
Why this hurts: redirect chains add latency, dilute link equity, and can cause browsers to abandon navigation after too many hops.
How to detect it: the scanner tracks redirect chains across all crawled URLs and flags paths with more than one intermediate hop.
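The chain-following logic can be modelled without live HTTP by treating the crawl's observed redirects as a map from old URL to new URL. This sketch returns the final destination plus the hop count — anything above one hop is a chain worth collapsing:

```python
def collapse_chain(redirects: dict[str, str], url: str,
                   max_hops: int = 10) -> tuple[str, int]:
    """Follow a redirect map (old URL -> new URL) to its final destination.

    Returns (final_url, hop_count). The dict stands in for live HTTP
    responses; max_hops guards against runaway chains.
    """
    hops = 0
    seen = {url}
    while url in redirects and hops < max_hops:
        url = redirects[url]
        hops += 1
        if url in seen:  # redirect loop detected
            break
        seen.add(url)
    return url, hops
```

For an HTTP→HTTPS→new-domain stack, this reports two hops, and the fix is a single 301 from each old URL straight to the final destination.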
Open Graph tags control how pages appear when shared on Facebook, LinkedIn, Slack, and other platforms that read OG metadata. Without og:title, og:description, and og:image, shared links appear as plain text with no visual preview — dramatically reducing click-through from social shares.
OG tags require minimal effort to implement correctly: four tags per page, added once to the template. The og:image should be at least 1200×630px for maximum compatibility. Pages without OG tags are particularly common on utility pages, landing pages, and pages added outside the main CMS.
Why this hurts: poor social previews reduce click-through from social shares, reducing referral traffic and organic visibility signals.
How to detect it: the scanner checks for presence and non-empty content of og:title, og:description, og:image on each page.
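The fix side is a small template helper. A sketch that renders the tags (assuming og:url as the fourth tag alongside the three named above; the function name and input values are illustrative):

```python
import html

def og_tags(title: str, description: str, image_url: str, page_url: str) -> str:
    """Render the four Open Graph meta tags for a page template."""
    fields = {
        "og:title": title,
        "og:description": description,
        "og:image": image_url,  # should be at least 1200x630px
        "og:url": page_url,
    }
    return "\n".join(
        f'<meta property="{p}" content="{html.escape(v, quote=True)}">'
        for p, v in fields.items()
    )
```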
Thin content pages — those with fewer than 300 characters of text — fail to provide substantive value to users or search engines. Google's quality raters explicitly flag thin content as a signal of low quality, and a high proportion of thin pages across a domain can trigger a site-wide quality downgrade that suppresses all pages.
The most common sources are auto-generated category pages with no descriptive text, product pages with one-sentence supplier descriptions, and location pages that follow a "[Service] in [City]" template with minimal unique content. The fix is to either expand content with genuinely useful information or to consolidate thin pages via 301 redirects and noindex the remainder.
Why this hurts: thin content at scale can suppress the entire domain's ranking potential, not just the individual thin pages.
How to detect it: the scanner measures plain text character length. Pages under 300 characters are flagged.
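Measuring visible text length means stripping markup first, including script and style contents. A stdlib sketch (illustrative only — a production crawler would also exclude nav and footer boilerplate from the count):

```python
from html.parser import HTMLParser

class TextCounter(HTMLParser):
    """Accumulates visible text length, skipping script/style contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chars = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chars += len(data.strip())

def is_thin(html_src: str, threshold: int = 300) -> bool:
    p = TextCounter()
    p.feed(html_src)
    return p.chars < threshold
```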
Structured data in JSON-LD format enables rich SERP features: star ratings for products, FAQ dropdowns, HowTo steps, article breadcrumbs, and more. Pages without any structured data are ineligible for these enhancements, which typically achieve 20–30% higher click-through rates than standard results.
JSON-LD is the recommended format — it is injected in a <script type="application/ld+json"> block and requires no changes to visible HTML. Add at minimum: Organization on the homepage, Article on blog posts, Product on product pages, and FAQPage on any Q&A content. This is a quick-win fix — one template change can add schema across thousands of pages.
Why this hurts: missing schema reduces eligibility for rich results, which consistently achieve higher CTR than plain blue-link results.
How to detect it: the scanner checks for presence of any <script type="application/ld+json"> block.
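Generating the block in a template is straightforward. A minimal sketch for the Article case, using the schema.org vocabulary (the field set here is a small example, not an exhaustive schema — Google's rich-result documentation lists the recommended properties per type):

```python
import json

def article_jsonld(headline: str, author: str, date_published: str) -> str:
    """Build a minimal Article JSON-LD script block for a blog post template."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,  # ISO 8601, e.g. "2024-01-15"
    }
    return ('<script type="application/ld+json">'
            + json.dumps(data) + "</script>")
```

Because JSON-LD lives in one script block, the same helper dropped into a post template adds schema to every post at once — the "one template change" quick win described above.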
Multiple pages sharing the same meta description signal that no page-specific differentiation has been applied. Google typically generates its own snippet rather than using a duplicate description, which means the effort of writing the description was wasted — and the auto-generated alternative is usually worse.
The most common cause is a CMS template with a global description field that gets used across all pages. The fix is dynamic description generation: each template should pull from a page-specific description field and fall back to a computed excerpt based on the page's unique content — never a site-wide default.
Why this hurts: duplicate descriptions reduce SERP CTR by failing to communicate page-specific value.
How to detect it: the scanner normalises meta description text and groups pages sharing the same value.
Run a free scan to see which of these 10 issues your site has. The scanner checks every page in a single pass and returns a prioritised findings report in under 2 minutes. See also the new website SEO checklist for a pre-launch verification workflow.
Or sign up to use your free scan credit. View plans for ongoing monitoring.
Noindex errors (#4) and missing title tags (#1) are the most directly damaging. A noindex directive removes a page from Google's index entirely, eliminating all organic traffic to that URL. Missing title tags weaken keyword relevance for the page's primary queries.
Run a free site audit with Seoxpert. The scanner checks all 10 of these issues in a single pass across every crawled page, returning a prioritised findings report with severity labels and fix recommendations.
WordPress sites are particularly prone: issues #1 and #2 arise on pages with no SEO plugin configured, #3 comes from default templates, and #4 from the "Discourage search engines" setting left on after development. A dedicated SEO audit catches these before they affect traffic.
New issues appear after every deployment that touches templates, CMS settings, or URL structure. A single template change can introduce duplicate titles across thousands of pages. Use scheduled scans to detect regressions within hours of deployment.
For deeper coverage of CMS defaults and outdated tactics causing these issues, see the guide: What outdated SEO is costing your business.