Prompt Provenance is an open JSON specification for versioned, lineaged, reviewable LLM prompt records: SHA-256 content hash, parent / derivation type / change summary, author + reviewer + approver chain, and eval results + approval state. The disclosure document that turns "we changed our system prompt" from a Slack message into a reproducible record.

LLM prompts are the source code of agentic systems. Yet most production deployments store them as ad-hoc strings — in repo files, in databases, in deployment configs — with no consistent format for lineage, no required eval gate, and no signed approval. When something goes wrong, "which prompt was running?" is a forensic exercise. Prompt Provenance fixes that. Records cross-link to Agent Cards (capabilities.prompts_used[]) and AI Evidence (retrieval.prompt_provenance_uri).

Identity + hash
prompt.id, semver version, canonical SHA-256 hash, plus content_uri and content_type (jinja2, mustache, plaintext). A grader can recompute the hash and prove the record binds to specific bytes.
Lineage
lineage.parent · derivation (fork / tune / patch) · change_summary. Walk the chain to see how a production prompt evolved.

Authorship
created_by · reviewed_by[] · approved_by · timestamps. The minimum auditable record of who said this prompt is safe to ship.

Intent
purpose · in_scope[] · out_of_scope[] · models_supported[]. Forces the author to write down what the prompt is and isn't for — before shipping.

Evaluations
evaluations[] entries: suite name, result URI, score, passed flag, ran-at timestamp. Procurement-grade evidence that the prompt was tested before release.

Approval
approval.state ∈ {draft, under_review, approved, rejected, deprecated}. The single most important field — it gates production deploys via approval.policy_uri.
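The lineage chain is mechanically walkable. A minimal sketch, assuming records live in a registry keyed by "id@version" — the registry and its key format are illustrative here, not something the spec prescribes:

```python
def lineage_chain(registry: dict, ref: str) -> list:
    """Follow lineage.parent references back to the root record.

    `registry` maps "id@version" strings to record dicts
    (a hypothetical storage layout for illustration).
    """
    chain = []
    while ref is not None:
        chain.append(ref)
        record = registry[ref]
        # A root record has no lineage block (or no parent field).
        ref = record.get("lineage", {}).get("parent")
    return chain


# Two-record example mirroring the incident-summary prompt below.
registry = {
    "incident-summary-generator@1.1.0": {
        "lineage": {"parent": "incident-summary-generator@1.0.0"}
    },
    "incident-summary-generator@1.0.0": {},
}
```

Calling `lineage_chain(registry, "incident-summary-generator@1.1.0")` returns the versions newest-first, ending at the root.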
Record structure

provenance_version — must be "0.1" for this draft
prompt — identity, version, SHA-256 hash, content URI, content type
lineage (optional) — parent record, derivation type, change summary
authorship (optional) — created / reviewed / approved by + timestamps
intent (optional) — purpose, in_scope, out_of_scope, models_supported
evaluations (optional) — array of { suite, result_uri, score, passed, ran_at }
approval (optional) — { state, policy_uri }

The canonical hash uses LF line endings with a stripped trailing newline — the same convention as AI Evidence Format and Student AI Disclosure, so a single hashing helper covers every spec in the Suite.
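That shared hashing convention fits in one helper. A minimal Python sketch — it assumes "stripped trailing newline" means removing a single final LF after normalizing line endings, which is an interpretation, not quoted spec text:

```python
import hashlib


def canonical_prompt_hash(content: str) -> str:
    """Hash prompt content under the Suite convention:
    normalize line endings to LF, strip the trailing newline,
    then SHA-256 the UTF-8 bytes."""
    normalized = content.replace("\r\n", "\n").replace("\r", "\n")
    # Assumption: exactly one trailing newline is stripped.
    if normalized.endswith("\n"):
        normalized = normalized[:-1]
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return f"sha256:{digest}"
```

With this helper, a template saved with or without a final newline, or with CRLF endings, hashes identically — which is the point of fixing the convention across specs.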
An incident-summary prompt tuned from v1.0.0 → v1.1.0 with a recorded eval pass. The document is small; the receipts are complete.
{
  "provenance_version": "0.1",
  "prompt": {
    "id": "incident-summary-generator",
    "name": "Incident Summary Generator",
    "version": "1.1.0",
    "hash": "sha256:b4c2f0e3d7e5f9b1a8c6d3e0f2b5a7c9d1e3f5a7b9c1d3e5f7a9b1c3d5e7f9b1",
    "content_uri": "https://example.com/prompts/incident-summary/v1.1.0/template.j2",
    "content_type": "text/jinja2"
  },
  "lineage": {
    "parent": "incident-summary-generator@1.0.0",
    "derivation": "tune",
    "change_summary": "Tightened the rubric for synthesizing duration; added explicit instruction to omit speculative root cause."
  },
  "authorship": {
    "created_by": "engineer@example.com",
    "reviewed_by": ["sre-lead@example.com", "principal-eng@example.com"],
    "approved_by": "principal-eng@example.com",
    "created_at": "2026-05-12T02:00:00Z",
    "approved_at": "2026-05-12T02:45:00Z"
  },
  "intent": {
    "purpose": "Summarize an incident timeline into a 200-word post-mortem opening section.",
    "models_supported": ["claude-opus-4-*", "claude-sonnet-4-*"]
  },
  "evaluations": [
    {
      "suite": "incident-summary-quality-v3",
      "result_uri": "https://eval.example.com/runs/123",
      "score": 0.94,
      "passed": true,
      "ran_at": "2026-05-12T02:15:00Z"
    }
  ],
  "approval": { "state": "approved" }
}
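A deploy gate over a record like this fits in a few lines. The rule shown — approved state plus at least one passing evaluation — is an illustrative policy for the sketch; the spec itself delegates gating rules to approval.policy_uri:

```python
def deployable(record: dict) -> tuple:
    """Return (ok, reason) for a hypothetical minimal deploy gate:
    approval.state must be "approved" and at least one recorded
    evaluation must have passed."""
    approval = record.get("approval", {})
    state = approval.get("state")
    if state != "approved":
        return False, f"approval.state is {state!r}, not 'approved'"
    if not any(e.get("passed") for e in record.get("evaluations", [])):
        return False, "no passing evaluation recorded"
    return True, "ok"
```

Run against the example record above, this returns `(True, "ok")`; against a draft record with no evals it returns a refusal with the reason attached, which is what you want surfaced in CI logs.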
Normative spec, JSON Schema 2020-12, canonical examples. AGPL-3.0 for spec text; implementations unrestricted.
View repo →

Unified visualizer for all 10 specs. Auto-detects via provenance_version and renders a procurement-grade view with lineage chain + approval state.
3 dedicated tools: prompt_provenance_validate, prompt_provenance_inspect, prompt_provenance_eval_result. Drops into Claude Desktop with one config entry.
Prompt Provenance is one of ten open JSON specifications in the Kinetic Gain Protocol Suite. The Suite spans entity declaration (AEO), agent disclosure (Agent Cards), citation evidence (AI Evidence), tool disclosure (MCP Tool Cards), plus the EdTech trio, the HealthTech extension, and the cross-cutting Incident Card. Front door: suite.kineticgain.com.