Webhooks

Get a signed POST to your URL the moment a scan completes, fails, drops in score, or finds a new critical issue. Stripe-style HMAC signatures, six event types, automatic retries.

Pro & Agency

What webhooks do

A webhook is a URL you give us. When something happens — a scan completes, a health score drops, a new critical issue appears — we POST a JSON event to that URL. Your server reads it and reacts (post to Slack, open a Jira ticket, fail a CI build, restart a deploy, …).

Compared to polling the API, webhooks are push not pull: you don't pay for empty checks, you don't lag, and you don't burn rate-limit quota. The trade-off is that you need a publicly-reachable HTTPS endpoint and you have to verify signatures (every webhook ecosystem has the "random POST from a stranger" problem; see Signature verification below).

Event types

Six event types as of April 2026. Each fires once per occurrence — never on a worker retry, never duplicated across receivers (we deduplicate before dispatch).

scan.started: The worker picks up a queued scan and begins crawling.
scan.completed: A scan finishes successfully (status moves to completed).
scan.failed: A scan terminates without completing (network failure, timeout, agent throw).
score.dropped: A finished scan's overall health score dropped 5+ points vs the previous scan of the same domain. Doesn't fire on the first scan (no baseline).
issue.new_critical: A scan finds a critical-severity issue that wasn't in the previous scan. One event per new finding (so 5 new criticals = 5 events).
credits.low: Your monthly scan balance hits 3, 1, or 0 — three escalating warnings per cycle.

When you create a webhook route, pick one event type — or use the wildcard * to receive every event. A wildcard route is handy when one URL wants to triage everything (e.g. a Slack channel that fans out by event type internally).
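A wildcard route's internal fan-out can be sketched as a switch on the envelope's type field. This is a hypothetical router — the handler names are illustrative, not part of any API:

```typescript
// Hypothetical fan-out for a wildcard (*) route: one URL receives every
// event type and dispatches internally. Handler names are made up.
type SeoxpertEventType =
  | 'scan.started' | 'scan.completed' | 'scan.failed'
  | 'score.dropped' | 'issue.new_critical' | 'credits.low';

function routeEvent(type: SeoxpertEventType | string): string {
  switch (type) {
    case 'scan.completed':     return 'appendToDigest';
    case 'scan.failed':        return 'alertOnFailure';
    case 'score.dropped':
    case 'issue.new_critical': return 'pageOnRegression';
    case 'credits.low':        return 'notifyBilling';
    default:                   return 'ignore'; // scan.started, unknown future types
  }
}
```

Returning 'ignore' for unrecognized types (instead of erroring) keeps the route forward-compatible if more event types ship later.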

How to set one up

  1. Open Settings → API → Webhooks.
  2. Click Reveal signing secret the first time so we generate your workspace's signing secret (sxw_…). Copy it into your server's env vars — you'll need it to verify signatures.
  3. Click Add webhook. Enter the target URL (must be https:// in production), pick an event type or *, save.
  4. Hit the Send test event button to fire a synthetic scan.completed at your URL so you can verify your handler works before a real scan triggers it.

Signature verification

We sign every webhook with a per-workspace secret. The signature is Stripe-compatible — if you've verified Stripe webhooks before, you can copy the same code with one tweak (different header name, different secret).

The header is:

X-Seoxpert-Signature: t=1714512000,v1=4f0a9c5b8e2d6a1c…

t is the Unix epoch timestamp at signing time. v1 is the HMAC-SHA256 of ${t}.${rawBody} using your workspace signing secret as the key, hex-encoded.

To verify in Node.js:

import crypto from 'node:crypto';

// `req.rawBody` must be the exact bytes we sent — verify before JSON-parsing.
// (Express: capture it via the body-parser `verify` option, e.g.
// express.json({ verify: (req, _res, buf) => { req.rawBody = buf.toString(); } }).)
function verifySeoxpert(req, secret) {
  const header = req.headers['x-seoxpert-signature'];
  if (!header) throw new Error('Missing signature header');
  const parts = Object.fromEntries(
    header.split(',').map(p => p.split('=')),
  );
  const ts = Number(parts.t);
  if (!ts || Math.abs(Date.now() / 1000 - ts) > 300) {
    throw new Error('Stale or missing timestamp');
  }
  const expected = crypto
    .createHmac('sha256', secret)
    .update(`${ts}.${req.rawBody}`, 'utf8')
    .digest('hex');
  if (
    !parts.v1 ||
    expected.length !== parts.v1.length ||
    !crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(parts.v1))
  ) {
    throw new Error('Bad signature');
  }
}

Reject any request older than 5 minutes (the Math.abs(now - ts) > 300 check) — this blocks replay attacks where someone records a valid request and re-sends it later. Use a constant-time comparison (timingSafeEqual) — string equality leaks the signature byte-by-byte through timing.

Payload shape

Every event uses the same envelope:

{
  "id": "<unique event id, dedup-safe>",
  "type": "scan.completed",
  "createdAt": "2026-04-30T12:00:00.000Z",
  "workspaceId": "<your workspace id>",
  "data": {
    "scanId": "abc-123",
    "rootUrl": "https://example.com",
    "status": "completed",
    "overallHealthScore": 84,
    "findingsCount": 23,
    "pagesCrawled": 47,
    "completedAt": "2026-04-30T11:59:55.000Z"
  }
}

The data shape varies by event type. scan.failed includes errorMessage; score.dropped includes previousScore and delta; credits.low includes balance and threshold; issue.new_critical includes the canonical issue id, title, and affected URL.
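If you're in TypeScript, the envelope can be sketched as a generic type with a narrowing guard per event. The scan.completed fields come from the example above; treat the overall shape as an assumption to check against a real payload:

```typescript
// Envelope sketched from the example payload in this doc.
interface SeoxpertEvent<T = unknown> {
  id: string;          // unique event id, dedup-safe
  type: string;        // e.g. 'scan.completed'
  createdAt: string;   // ISO 8601
  workspaceId: string;
  data: T;
}

// Data block for scan.completed, per the example above.
interface ScanCompletedData {
  scanId: string;
  rootUrl: string;
  status: 'completed';
  overallHealthScore: number;
  findingsCount: number;
  pagesCrawled: number;
  completedAt: string;
}

// Narrowing guard so handlers get a typed `data`.
function isScanCompleted(e: SeoxpertEvent): e is SeoxpertEvent<ScanCompletedData> {
  return e.type === 'scan.completed';
}
```

The same pattern extends to the other five types (scan.failed with errorMessage, score.dropped with previousScore and delta, and so on).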

Retries and delivery guarantees

We attempt delivery up to 3 times. If your endpoint returns a 2xx, the event is marked delivered. If it returns 4xx (other than 429) we don't retry — those are likely permanent (bad URL, wrong format). If it returns 5xx, 429, or times out, we retry with exponential backoff capped at 30 minutes between attempts.
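From the receiver's side, that policy means your status code is a signal: a sketch of the mapping, with outcome names that are purely illustrative:

```typescript
// Which status to return so the retry policy does what you want:
// 2xx = delivered, 4xx (except 429) = give up, 5xx/429/timeout = retry.
function statusFor(
  outcome: 'ok' | 'bad_payload' | 'overloaded' | 'transient_error',
): number {
  switch (outcome) {
    case 'ok':              return 200; // marked delivered, no retry
    case 'bad_payload':     return 400; // permanent failure: no retry
    case 'overloaded':      return 429; // retried with backoff
    case 'transient_error': return 503; // retried with backoff
  }
}
```

In particular, don't return 500 for a payload you'll never be able to process — that just burns all three attempts.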

Events are deduplicated at the dispatch layer using a hash of (scanId, eventType) — so a worker retry that re-fires the same event collapses to one delivery per route. Add idempotency on your side too if your handler isn't naturally idempotent (e.g. don't open two Jira tickets if we send the same event twice during a partial outage).
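A minimal idempotency guard keys on the envelope's id. The in-memory Set below is a sketch — in production back it with something durable (Redis SETNX, a DB unique index):

```typescript
// Skip duplicate deliveries by event id. In-memory only: resets on restart,
// so use a durable store in production.
const seen = new Set<string>();

function processOnce(eventId: string, handler: () => void): boolean {
  if (seen.has(eventId)) return false; // duplicate: ack without side effects
  seen.add(eventId);
  handler();
  return true;
}
```

Return a 2xx even for duplicates — the event was "handled" from our perspective, and a non-2xx would trigger pointless retries.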

Cooldown

Event-typed webhooks have no cooldown — one event = one delivery. The legacy severity-based notification routes (in Settings → Notifications) use a cooldown to debounce noisy alerts; that doesn't apply here. Programmatic consumers expect every event.

Use cases — copy, paste, ship

Slack alert when health score drops after a deploy

The classic CI loop: a deploy lands, scan runs, score drops, Slack pings the team within minutes. Subscribe to score.dropped; the handler posts to a Slack incoming webhook.

// app/api/seoxpert-webhook/route.ts (Next.js example)
import { NextRequest, NextResponse } from 'next/server';
import crypto from 'node:crypto';

const SEOXPERT_SECRET = process.env.SEOXPERT_WEBHOOK_SECRET!; // sxw_…
const SLACK_URL = process.env.SLACK_INCOMING_WEBHOOK_URL!;

export async function POST(req: NextRequest) {
  const raw = await req.text();
  const header = req.headers.get('x-seoxpert-signature') ?? '';
  const parts = Object.fromEntries(header.split(',').map(p => p.split('=')));
  const ts = Number(parts.t);
  if (!ts || Math.abs(Date.now() / 1000 - ts) > 300) {
    return NextResponse.json({ error: 'stale' }, { status: 400 });
  }
  const expected = crypto.createHmac('sha256', SEOXPERT_SECRET)
    .update(`${ts}.${raw}`, 'utf8').digest('hex');
  if (
    !parts.v1 || expected.length !== parts.v1.length ||
    !crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(parts.v1))
  ) {
    return NextResponse.json({ error: 'bad signature' }, { status: 401 });
  }
  const event = JSON.parse(raw);
  if (event.type !== 'score.dropped') return NextResponse.json({ ok: true });

  const { rootUrl, previousScore, overallHealthScore, delta, scanId } = event.data;
  await fetch(SLACK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `:warning: ${rootUrl} dropped ${Math.abs(delta)} points (${previousScore} → ${overallHealthScore}). <https://seoxpert.io/scans/${scanId}|See report>`,
    }),
  });
  return NextResponse.json({ ok: true });
}

Open a Jira ticket for every new critical finding

Subscribe to issue.new_critical. One event fires per new critical finding (deduplicated server-side by canonical issue id), so each becomes its own ticket. The data block carries the issue title + canonical id + affected URL — enough to write a useful ticket title and description.

if (event.type === 'issue.new_critical') {
  const { rootUrl, canonicalIssueId, title, affectedUrl, scanId } = event.data;
  await fetch('https://your-jira.atlassian.net/rest/api/3/issue', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${Buffer.from(`${EMAIL}:${API_TOKEN}`).toString('base64')}`,
    },
    body: JSON.stringify({
      fields: {
        project: { key: 'WEB' },
        summary: `[Seoxpert] ${title} on ${rootUrl}`,
        issuetype: { name: 'Bug' },
        description: {
          type: 'doc', version: 1,
          content: [{
            type: 'paragraph',
            content: [{
              type: 'text',
              text: `Affected: ${affectedUrl}\nFix guide: https://seoxpert.io/issues/${canonicalIssueId}\nScan: https://seoxpert.io/scans/${scanId}`,
            }],
          }],
        },
      },
    }),
  });
}

Fail the next CI build when a scan regresses

Pair a deploy hook with a webhook. The deploy fires the scan; if the score drops or a critical issue lands, the webhook flips a feature flag (or writes a marker file to S3 / DynamoDB / your CI's API) that the next CI job reads to mark itself failed. See the deploy hooks docs for the trigger half of this loop.

// Webhook handler — flip a flag in DynamoDB on regression.
if (event.type === 'score.dropped' || event.type === 'issue.new_critical') {
  await dynamo.put({
    TableName: 'ci-flags',
    Item: { key: `regression:${event.data.rootUrl}`, ts: Date.now() },
  });
}

// In your CI workflow (next deploy job):
//   $ aws dynamodb get-item --table-name ci-flags ...
//   $ if [ ... ]; then echo "::error::Seoxpert regression"; exit 1; fi
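One wrinkle worth handling: a flag written during an earlier deploy shouldn't fail unrelated later builds. A sketch of the staleness check, assuming the flag item carries the ts written above — the deploy-window logic is an assumption to tune for your pipeline:

```typescript
// Decide whether a regression flag should fail this build. Flags raised
// before the current deploy started are stale and are ignored.
function shouldFailBuild(
  flagTs: number | undefined,  // `ts` from the ci-flags item, if one exists
  deployStartedAt: number,     // epoch ms when this deploy kicked off
): boolean {
  if (flagTs === undefined) return false;  // no flag recorded at all
  return flagTs >= deployStartedAt;        // only fail on flags from this deploy
}
```

Alternatively, delete the flag item after reading it so each flag can fail at most one build.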

Daily score digest into a spreadsheet

Subscribe to scan.completed (or use the wildcard * if one URL handles everything). The handler appends one row per completed scan to a Google Sheet via Apps Script — managers see weekly trends without logging in.

if (event.type === 'scan.completed') {
  const { rootUrl, overallHealthScore, findingsCount, completedAt } = event.data;
  await fetch(GOOGLE_APPS_SCRIPT_URL, {
    method: 'POST',
    body: JSON.stringify({
      sheet: 'scans',
      row: [completedAt, rootUrl, overallHealthScore, findingsCount],
    }),
  });
}

Top up scans before you run out

Subscribe to credits.low. We fire on the way down at thresholds 3, 1, and 0 — the first is your early warning. Have your handler hit the billing portal API, or just post to Slack so the workspace owner can upgrade before the next scan is rejected with HTTP 402.
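The three thresholds map naturally to escalating messages. A sketch, using the threshold and balance fields described under Payload shape — the wording and routing are illustrative:

```typescript
// Map the credits.low thresholds (3, 1, 0) to escalating alert text.
// Message wording is made up; route them wherever your team will see them.
function creditsAlert(threshold: 3 | 1 | 0, balance: number): string {
  if (threshold === 0) {
    return `Out of scans (balance ${balance}): the next scan will be rejected with HTTP 402.`;
  }
  if (threshold === 1) {
    return `Last scan remaining (balance ${balance}). Top up now.`;
  }
  return `Scan balance low: ${balance} left this cycle.`;
}
```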

Plan availability

Webhooks unlock on Pro and Agency. Free-tier users see an upsell card; events still fire internally for those workspaces but no routes are configured to receive them.