GAFAIG Standards
GAFAIG standards define verifiable expectations for human-centered AI governance. They are designed to support transparency, accountability, and institutional trust through clear disclosure requirements, evidence-based evaluation, and consistent interpretation.
Standards posture
GAFAIG standards are written to avoid superficial or checkbox-based compliance. Where discretion exists, GAFAIG favors substance over marketing claims, requires traceability of decisions, and applies procedural consistency across applicants.
Published standards
GAFAIG Human Governance Standard
Defines minimum requirements for accountable human oversight, override authority, auditability, and prohibitions against AI self-governance.
GAFAIG-S-001 — Human Impact Disclosure Standard
Defines minimum disclosure requirements for how AI systems affect people, including intended use, deployment context, risk areas, monitoring practices, and accountability ownership.
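The disclosure areas listed above could be captured as a structured record. The following is a minimal, purely illustrative sketch in Python; the class and field names (HumanImpactDisclosure, intended_use, accountability_owner, and the example values) are assumptions for illustration, not the authoritative structure defined by GAFAIG-S-001.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HumanImpactDisclosure:
    """Illustrative record covering the disclosure areas named in GAFAIG-S-001.

    Field names and values are hypothetical; the standard defines the
    authoritative requirements and wording.
    """
    system_name: str                     # AI system being disclosed
    intended_use: str                    # what the system is meant to do
    deployment_context: str              # where and for whom it operates
    risk_areas: List[str] = field(default_factory=list)            # identified areas of human impact
    monitoring_practices: List[str] = field(default_factory=list)  # how effects on people are tracked
    accountability_owner: str = ""       # named role or unit accountable for the system

# Hypothetical example of a completed disclosure record
disclosure = HumanImpactDisclosure(
    system_name="Benefits eligibility assistant",
    intended_use="Assist caseworkers in triaging benefit applications",
    deployment_context="Municipal social services; caseworkers make final decisions",
    risk_areas=["wrongful denial", "disparate impact across applicant groups"],
    monitoring_practices=["quarterly outcome audits", "appeal-rate tracking"],
    accountability_owner="Director of Benefits Administration",
)
```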
GAFAIG-S-002 — AI Incident Disclosure & Reporting Standard
Defines how organizations disclose, classify, and report AI incidents, including severity tiers, confidence scoring, reporting timelines, and update obligations as investigations evolve.
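As a companion sketch, the reporting elements named above (severity tier, confidence score, timelines, update obligations) could be expressed as a structured incident record. The tier names, field names, and example values below are assumptions for illustration only; GAFAIG-S-002 defines the actual classification scheme and obligations.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class SeverityTier(Enum):
    # Hypothetical tier names; GAFAIG-S-002 defines the actual severity tiers.
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class IncidentReport:
    """Illustrative incident record reflecting the elements named in GAFAIG-S-002.

    Field names and values are assumptions for illustration only.
    """
    incident_id: str
    summary: str
    severity: SeverityTier
    confidence: float        # confidence in the current assessment (0.0 to 1.0)
    reported_on: date        # date the incident was first reported
    next_update_due: date    # when the next update is owed as the investigation evolves

# Hypothetical example of a preliminary report expected to be revised
report = IncidentReport(
    incident_id="INC-2024-017",
    summary="Model produced incorrect eligibility scores for a subset of applicants",
    severity=SeverityTier.HIGH,
    confidence=0.6,
    reported_on=date(2024, 5, 2),
    next_update_due=date(2024, 5, 16),
)
```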
How standards connect to designation
GAFAIG standards may be adopted voluntarily, but public designation requires evidence. Applicants must demonstrate alignment through documented disclosures, operational controls, registry consistency, and ongoing obligations such as renewal and incident updates.
Versioning and updates
Standards evolve as AI capabilities and governance needs change. GAFAIG publishes material updates with version identifiers and explanatory notes to maintain continuity and public trust.