
Practical Schema for Dynamic AI Responses: Actions, ClaimReview & Provenance


Introduction — Why combine Actions, ClaimReview and provenance?

Generative answer engines (assistant-style search results and AEO experiences) increasingly depend on structured signals that describe not only what a page says, but what it allows an assistant to do, how claims are evaluated, and where content came from. Marking up Actions (interactive capabilities), ClaimReview (fact-checks), and provenance metadata together makes multi-turn, agentic responses more trustworthy, auditable, and easier for platforms to attribute correctly.

At its core this pattern answers three publisher needs: enable taskability (agents can perform verified actions), provide transparent fact-checking summaries, and expose provenance so downstream models or interfaces can show source chains and re-check evidence.

Key schema components and how they map to generative engines

1) Actions (taskability)

Use schema.org's Action (and related annotations such as potentialAction and EntryPoint input/output specs) to describe what tasks a page supports (e.g., book a table, request a quote, start a subscription flow). This tells engines what an assistant could attempt on behalf of the user and how to construct the call to your endpoint.
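As an illustrative sketch, a restaurant page might advertise a reservation capability like this (the organization name and endpoint URL are hypothetical, and the exact Action subtype should match your real task):

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "potentialAction": {
    "@type": "ReserveAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://example.com/reserve?date={date}&partySize={partySize}",
      "httpMethod": "POST"
    },
    "result": {
      "@type": "FoodEstablishmentReservation",
      "name": "Table reservation"
    }
  }
}
```

The `urlTemplate` placeholders (`{date}`, `{partySize}`) signal which parameters an agent must supply before calling the endpoint.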

2) ClaimReview (fact checks)

If your page evaluates or annotates third-party claims, include ClaimReview so a generative engine can surface a concise verdict and link back to your analysis. Note: eligibility and display rules vary by platform; Google documents current Fact Check markup guidance and eligibility constraints.
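A minimal standalone ClaimReview might look like the following sketch (all URLs and organization names are placeholders; check your target platform's current required fields before shipping):

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.org/claims/123-review",
  "claimReviewed": "X causes Y",
  "itemReviewed": {
    "@type": "Claim",
    "appearance": "https://original-source.example/claim-123"
  },
  "author": { "@type": "Organization", "name": "Example Fact Lab" },
  "datePublished": "2025-11-20",
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": 1,
    "bestRating": 5,
    "worstRating": 1,
    "alternateName": "False"
  }
}
```

Note the textual verdict ("False") goes in `alternateName`, with `ratingValue` expressing the same verdict on a numeric scale.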

3) Provenance metadata

Provenance is the collection of fields that explains where content came from and who produced it. Combine sdPublisher/sdDatePublished, isBasedOn, citation, sameAs, and explicit author/organization objects to create a traceable source chain. Include timestamps and identifiers to support automated verification.
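Taken together, these provenance fields on an Article might look like this sketch (names, dates, and URLs are illustrative):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example analysis",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "sameAs": "https://example.org/people/jane-doe"
  },
  "publisher": { "@type": "Organization", "name": "Example Fact Lab" },
  "datePublished": "2025-11-20T09:00:00Z",
  "isBasedOn": "https://original-source.example/claim-123",
  "citation": "https://journal.example/study-456",
  "sdPublisher": { "@type": "Organization", "name": "Example Fact Lab" },
  "sdDatePublished": "2025-11-20"
}
```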

Combined JSON-LD pattern — an example

Below is a compact JSON-LD pattern that demonstrates the practical combination of an Article containing a ClaimReview, explicit provenance fields, and a PotentialAction that describes a verification/reporting endpoint an agent could call. Adapt properties and types to your use case (reserve vs. purchase vs. report actions; Clip vs. VideoObject for media claims).

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Fact check: 'X causes Y'",
  "author": { "@type": "Organization", "name": "Example Fact Lab", "sameAs": "https://example.org" },
  "sdPublisher": { "@type": "Organization", "name": "Example Fact Lab" },
  "sdDatePublished": "2025-11-20",
  "isBasedOn": "https://original-source.example/claim-123",
  "mainEntity": {
    "@type": "ClaimReview",
    "url": "https://example.org/claims/123-review",
    "datePublished": "2025-11-20",
    "author": { "@type": "Organization", "name": "Example Fact Lab" },
    "claimReviewed": "Claim text here",
    "reviewBody": "Summary of check and evidence.",
    "reviewRating": { "@type": "Rating", "ratingValue": 1, "bestRating": 5, "worstRating": 1, "alternateName": "False" }
  },
  "potentialAction": {
    "@type": "ReportAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://example.org/report?claimId=123",
      "httpMethod": "POST"
    },
    "actionStatus": "PotentialActionStatus",
    "name": "Report an error or request re-review"
  }
}

Notes on the snippet:

  • Use isBasedOn and citation to link to primary sources, documents, or media used as evidence.
  • Expose sdDatePublished and sdPublisher so systems can evaluate freshness and the publisher identity programmatically.
  • Define a clear EntryPoint and input annotations (e.g., claimId-input) when you want platforms to auto-populate fields for tasking.
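The claimId-input annotation from the last bullet can be sketched with a PropertyValueSpecification, which tells a platform the field is required and what to name it (the endpoint mirrors the main example and is hypothetical):

```json
{
  "@type": "ReportAction",
  "target": {
    "@type": "EntryPoint",
    "urlTemplate": "https://example.org/report?claimId={claimId}",
    "httpMethod": "POST"
  },
  "claimId-input": {
    "@type": "PropertyValueSpecification",
    "valueName": "claimId",
    "valueRequired": true
  }
}
```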

Implementation checklist & verification

  1. Map content types: identify which pages are articles, fact-checks, media clips, or transaction pages and select the appropriate schema types.
  2. Required vs recommended fields: implement all required properties for eligibility on your target platforms, and add recommended fields to improve quality and trust.
  3. Provenance completeness: include publisher, published date, source URLs (isBasedOn), and author objects with sameAs links where possible.
  4. Action safety: document input validation, authentication, and rate limits on endpoints referenced by EntryPoint URLs; do not expose unsafe or irreversible actions without verification.
  5. Testing & monitoring: run JSON-LD through the Rich Results Test and any provider-specific validators; monitor for structured data errors and manual actions.
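Step 3's provenance-completeness check is easy to automate in a build pipeline. Here is a minimal sketch in Python; the field list and helper name are my own, not part of any standard validator:

```python
import json

# Provenance properties a downstream checker might expect (illustrative list).
REQUIRED_PROVENANCE = ["sdPublisher", "sdDatePublished", "isBasedOn", "author"]

def missing_provenance(jsonld: dict) -> list:
    """Return the provenance properties absent from a JSON-LD object."""
    return [prop for prop in REQUIRED_PROVENANCE if prop not in jsonld]

doc = json.loads("""{
  "@context": "https://schema.org",
  "@type": "Article",
  "sdPublisher": {"@type": "Organization", "name": "Example Fact Lab"},
  "sdDatePublished": "2025-11-20",
  "author": {"@type": "Organization", "name": "Example Fact Lab"}
}""")

print(missing_provenance(doc))  # isBasedOn is absent from this document
```

Running a check like this on every deploy catches dropped provenance fields before a platform validator (or a generative engine) does.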

Final recommendations

Combining Actions, ClaimReview, and explicit provenance metadata lets publishers serve richer, verifiable signals to generative engines while retaining control over taskability and accountability. Keep markup honest (it must match visible content), include provenance that supports re-checking, and instrument endpoints referenced by Actions so they can be audited. Remember that platform appearance is never guaranteed—structured data improves eligibility and clarity, but algorithms decide final presentation.
