BiDigest AI Governance Policy

Last updated: 03/10/2026

How we govern our own AI systems

This page summarizes how BiDigest applies AI governance to its own platform. The full internal policy lives in docs/AI_GOVERNANCE_POLICY_TEMPLATE.md and is aligned with the EU AI Act, NIST AI RMF, and ISO/IEC 42001.

1. Scope

BiDigest applies this AI Governance Policy to all AI components we use to track, analyze, and report on AI recommendations and citations (ChatGPT, Claude, Gemini, Perplexity), as well as any internal models we introduce.

2. Governance structure

  • Steering: Governance decisions are owned by leadership (CEO) and the governance/compliance function, who jointly approve policies and accept residual risk.
  • Operational: Product and engineering own day-to-day implementation, logging, and controls, with governance reviewing high-risk changes.

3. Lifecycle controls

For each AI system we use:

  • We register it in our AI inventory and classify it by role and risk tier (see the sketch after this list).
  • For regulated use cases, we run an AI Risk Assessment using our own form.
  • We define human oversight and logging requirements before production use.
  • We review for drift and incidents, and update controls when models or context change.
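
As an illustration of what an inventory entry can look like, the Python sketch below models one record and its readiness check. The schema, tier names, and production_ready rule are illustrative assumptions, not our production implementation.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"

    @dataclass
    class AISystemRecord:
        # One entry in the AI inventory (illustrative schema only).
        name: str                      # e.g. "Claude" or an internal model ID
        vendor: str
        role: str                      # what the system does in our pipeline
        risk_tier: RiskTier
        regulated_use_case: bool       # triggers the AI Risk Assessment form
        oversight_defined: bool = False
        logging_defined: bool = False
        last_reviewed: date | None = None

        def production_ready(self) -> bool:
            # Oversight and logging must be defined before production use;
            # high-risk systems also need a completed review.
            if not (self.oversight_defined and self.logging_defined):
                return False
            if self.risk_tier is RiskTier.HIGH and self.last_reviewed is None:
                return False
            return True

Under this sketch, a newly registered system fails production_ready() until its oversight and logging requirements are defined, mirroring the control in the list above.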

4. Human oversight

When we use AI in ways that could affect customer decisions or compliance, we ensure humans can understand, monitor, override, and halt the system. Oversight design is documented and signed off in line with Article 14 of the EU AI Act.
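
As a rough sketch of what "monitor, override, and halt" can mean in code, the hypothetical wrapper below gates every AI output behind a human control point. The class and method names are illustrative only; actual oversight controls are specified per system in the signed-off oversight design.

    import logging

    logger = logging.getLogger("oversight")

    class OversightGate:
        # Hypothetical control point: a designated reviewer can monitor
        # every output, override it, or halt the system entirely.

        def __init__(self) -> None:
            self.halted = False

        def halt(self, reason: str) -> None:
            # A human can stop the system at any time.
            self.halted = True
            logger.warning("AI system halted: %s", reason)

        def release(self, ai_output: str, reviewer_override: str | None = None) -> str:
            # Nothing is released while the system is halted.
            if self.halted:
                raise RuntimeError("System halted pending governance review")
            # Every output is logged so a human can monitor and audit it.
            logger.info("AI output: %s", ai_output)
            # A human override always takes precedence over the model output.
            return reviewer_override if reviewer_override is not None else ai_output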

5. Continuous improvement

This policy is reviewed at least annually and whenever there is a material regulatory or product change. We treat it as a living document and wire key rules into our platform as machine-readable checks (governance-as-code).
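
As an example of governance-as-code, the sketch below expresses a few policy rules as machine-readable predicates that could run in CI against the AI inventory. The rule names and record format are assumptions for illustration, not our actual checks.

    # Each rule takes an inventory record (a dict) and returns True if compliant.
    RULES = {
        "oversight_before_production":
            lambda r: not r["in_production"] or r["oversight_defined"],
        "high_risk_assessed":
            lambda r: r["risk_tier"] != "high" or r["risk_assessment_done"],
        "reviewed_within_a_year":
            lambda r: r["days_since_review"] <= 365,
    }

    def check_inventory(records: list[dict]) -> list[str]:
        # Returns violations; an empty list means the policy gate passes.
        violations = []
        for record in records:
            for rule_name, rule in RULES.items():
                if not rule(record):
                    violations.append(f"{record['name']}: failed {rule_name}")
        return violations

Run against the inventory in CI, a non-empty result would block the change, which is one concrete way key rules can be wired into the platform as checks.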

Related internal documents

  • AI Risk Assessment Form — Regulated Industry
  • EU AI Act Compliance Checklist
  • Human-in-the-Loop (HITL) Certification Guide