
GAFAIG-S-001 — Human Impact Disclosure Standard

GAFAIG-S-001 defines minimum disclosure requirements for how AI systems affect people. It is designed to enable meaningful transparency, reduce ambiguity, and support evidence-based governance evaluation.

Scope

This standard applies to AI systems that materially affect human outcomes, access, opportunities, safety, rights, or welfare. It covers disclosures related to intended use, deployment context, affected populations, and accountability ownership.

Required disclosure domains (overview)

  • System purpose & intended use: what the system is designed to do and where it is used.
  • Deployment context: environment, user groups, and operational constraints.
  • Affected populations: who may be impacted and how impacts are identified.
  • Risk areas: foreseeable harms, misuse vectors, and impact boundaries.
  • Monitoring & controls: oversight, testing, and ongoing evaluation practices.
  • Accountability ownership: who is responsible for decisions, escalation, and remediation.
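
As an illustration, the six domains above can be captured in a single structured record. The following is a minimal sketch in Python; the `HumanImpactDisclosure` type and its field names are hypothetical, since the standard describes disclosure domains but does not prescribe a schema:

```python
from dataclasses import dataclass, field

@dataclass
class HumanImpactDisclosure:
    """One record covering the six GAFAIG-S-001 disclosure domains.
    Field names are illustrative; the standard does not prescribe a schema."""
    system_purpose: str        # what the system is designed to do and where it is used
    deployment_context: str    # environment, user groups, and operational constraints
    accountability_owner: str  # who is responsible for decisions, escalation, remediation
    affected_populations: list[str] = field(default_factory=list)  # who may be impacted, and how impacts are identified
    risk_areas: list[str] = field(default_factory=list)            # foreseeable harms and misuse vectors
    monitoring_controls: list[str] = field(default_factory=list)   # oversight, testing, ongoing evaluation
```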

Evidence expectations

Claims should be supportable by documented artifacts such as system documentation, testing summaries, monitoring procedures, escalation policies, and governance review records. Where a disclosure relies on judgment, the rationale should be recorded and attributable.
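
One way to make claims traceable is to pair each disclosure statement with the artifacts that support it and, where judgment is involved, a recorded rationale and its owner. A hypothetical sketch; the `EvidencedClaim` type and its fields are assumptions for illustration, not part of the standard:

```python
from dataclasses import dataclass, field

@dataclass
class EvidencedClaim:
    """A disclosure claim paired with its supporting artifacts and, where the
    claim rests on judgment, a recorded rationale attributed to a person or role."""
    claim: str                                          # the disclosure statement being made
    artifacts: list[str] = field(default_factory=list)  # e.g. document IDs, testing summaries, policies
    rationale: str = ""                                 # recorded reasoning, if judgment was applied
    attributed_to: str = ""                             # who made and owns that judgment

    def is_evidenced(self) -> bool:
        # A claim is evidenced if it cites at least one artifact, or carries
        # an attributable rationale where it relies on judgment.
        return bool(self.artifacts) or bool(self.rationale and self.attributed_to)
```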

Interpretation posture

GAFAIG interprets this standard in a way that prevents “checkbox compliance.” Where discretion exists, GAFAIG favors substance over marketing claims, requires that decisions be traceable, and evaluates disclosures for completeness, clarity, and real-world applicability.
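
To make “substance over checkbox” concrete, an evaluator might flag disclosure domains that are empty or trivially short rather than merely confirming that every field is present. A minimal sketch, reusing the hypothetical `HumanImpactDisclosure` record above; the 40-character floor is an illustrative threshold, not a GAFAIG rule:

```python
def incomplete_domains(d: HumanImpactDisclosure, min_chars: int = 40) -> list[str]:
    """Return the disclosure domains that are missing or too thin to be substantive."""
    narrative = {
        "system_purpose": d.system_purpose,
        "deployment_context": d.deployment_context,
        "affected_populations": "; ".join(d.affected_populations),
        "risk_areas": "; ".join(d.risk_areas),
        "monitoring_controls": "; ".join(d.monitoring_controls),
    }
    flagged = [name for name, text in narrative.items() if len(text.strip()) < min_chars]
    if not d.accountability_owner.strip():  # ownership must name someone, not merely exist
        flagged.append("accountability_owner")
    return flagged
```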

Versioning

GAFAIG standards are living documents. Material updates are versioned and published with explanatory notes to maintain continuity and public trust.
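
For example, a registry entry for a published revision might carry both the version identifier and the explanatory note that accompanies a material update. A hypothetical sketch; the `StandardVersion` type, the date, and the note are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class StandardVersion:
    """An immutable record of one published revision of a standard."""
    standard_id: str  # e.g. "GAFAIG-S-001"
    version: str      # e.g. "1.1.0"
    published: date
    change_note: str  # explanatory note accompanying a material update

# Hypothetical entry; the date and note are illustrative only.
example = StandardVersion("GAFAIG-S-001", "1.1.0", date(2025, 1, 1),
                          "Clarified evidence expectations for judgment-based disclosures.")
```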
