From AI governance claims to independently verifiable proof
GAFAIG converts AI governance and human oversight from a private claim into an independently verifiable public trust record. This demo demonstrates the exact sequence from certified record to signed proof to portable trust surface.
This page uses one real GAFAIG record to show how certification is published, verified, and surfaced outside an organization’s platform.
Most AI oversight cannot be independently verified
Organizations can say they have human oversight, internal controls, and responsible AI policies. But for customers, regulators, and the public, there is usually no consistent way to verify whether that oversight is implemented, operational, and producing real oversight outcomes.
A policy statement, internal documentation, or self-declared claim that oversight is in place.
An independently verifiable public trust record backed by signed proof, a verification surface, and portable trust signals.
Follow one record from certification to proof
This walkthrough uses a live GAFAIG record to show how trust moves through the public surface. Each step demonstrates a real part of the verification process that is already built and publicly accessible.
Locate the certified record in the public registry.
Retrieve the signed verification payload from the API.
Verify the record's signature using the public verification key.
Display the verified result through a widget or UI.
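The four steps above can be sketched as a minimal client. The base URL and endpoint paths below (`/registry/{id}`, `/api/verify/{id}`) are illustrative assumptions, not the documented GAFAIG API:

```python
import json
import urllib.request

# Hypothetical host and endpoint layout -- the real GAFAIG API may differ.
BASE_URL = "https://example-gafaig-host"

def registry_url(record_id: str) -> str:
    """Step 1: public registry page for a certified record."""
    return f"{BASE_URL}/registry/{record_id}"

def verify_url(record_id: str) -> str:
    """Step 2: API endpoint returning the signed verification payload."""
    return f"{BASE_URL}/api/verify/{record_id}"

def fetch_proof(record_id: str) -> dict:
    """Steps 2-3: retrieve the signed payload for independent verification."""
    with urllib.request.urlopen(verify_url(record_id)) as resp:
        return json.load(resp)
```

An external system only needs the registry ID to walk this sequence; the signature check (step 3) and the rendered trust surface (step 4) both consume the payload returned by `fetch_proof`.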
Resolve the certified registry record
Start with the public trust record. This is the published certification outcome that external parties can review by registry ID.
Fetch the verification proof
Open the verify page to inspect the proof layer behind the certification record, including trust state, key references, and signature validation.
Inspect the signed proof payload
The verification endpoint exposes the machine-readable proof directly. This is what allows external systems to independently verify the record instead of relying on a visual page alone.
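A machine-readable proof payload of this kind typically carries the signed message, the signature, and a key reference. Of the field names below, only `messageString` appears in this walkthrough; `signature`, `publicKeyId`, and `trustState` are assumptions about a typical proof schema, not the documented GAFAIG response:

```python
import json

# Illustrative payload -- field names other than messageString are assumed.
raw = """{
  "messageString": "GAFAIG|record:GAFAIG-0001|state:certified",
  "signature": "base64-signature-bytes",
  "publicKeyId": "gafaig-signing-key-1",
  "trustState": "certified"
}"""

payload = json.loads(raw)

# An external verifier checks the signature over the exact messageString,
# then reads the trust state from the same payload.
message = payload["messageString"]
key_ref = payload["publicKeyId"]
print(message, key_ref)
```

Because the payload is plain JSON, any external system can consume it without rendering the verify page at all.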
Render the external trust surface
The widget preview shows how the same verified trust result can appear outside GAFAIG through a portable, embeddable interface.
GAFAIG is a trust system, not just a registry page
GAFAIG does not ask the public to trust an internal claim. It publishes an independently verifiable public trust record backed by signed proof and external validation surfaces.
Verification must use the exact signed messageString returned by the API. Reconstructing the payload from its parsed JSON fields changes the bytes being checked, so signature verification fails.
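The exact-bytes rule can be demonstrated locally. The sketch below uses an HMAC as a stand-in signature scheme purely to stay self-contained; the real GAFAIG proof is checked against a public verification key, but the rule is identical for either scheme:

```python
import hashlib
import hmac
import json

# Stand-in symmetric key for the demo; the real proof uses a public key.
KEY = b"demo-key"

def sign(message: str) -> str:
    return hmac.new(KEY, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, signature: str) -> bool:
    return hmac.compare_digest(sign(message), signature)

# The API returns the exact signed string alongside its parsed fields.
message_string = '{"record":"GAFAIG-0001","state":"certified"}'
signature = sign(message_string)

# Verifying the exact messageString succeeds ...
assert verify(message_string, signature)

# ... but re-serializing the parsed JSON changes the bytes
# (whitespace, key order), so the same signature no longer matches.
rebuilt = json.dumps(json.loads(message_string), indent=2)
assert not verify(rebuilt, signature)
```

This is why the verification endpoint returns messageString verbatim instead of expecting clients to rebuild it from the payload's fields.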
The signals that confirm public proof
Confirm that the record is published in the registry and tied to a real registry ID.
Confirm that the verification surface exposes signed proof and a public key reference.
Confirm that the same result appears in the registry, API, verify page, and widget.
Ready to create a verified record?
You’ve seen how a GAFAIG record appears in the registry, how it is verified through signed proof, how it can be inspected through the public API, and how it can appear outside the platform through a widget. This is the outcome of the GAFAIG verification process.
If your organization operates AI systems and needs independently verifiable proof that human oversight is implemented, operational, and producing real oversight outcomes, you can begin the GAFAIG verification process now.
From governance claims to public proof
GAFAIG replaces unverifiable governance claims with publicly verifiable proof.
Instead of asking the public to trust internal claims about oversight, GAFAIG publishes a record that can be reviewed, verified, and reused across external environments.
The demo is intentionally simple: resolve the registry record, fetch the proof, validate the signature, and render the verified trust surface. That sequence is the product.