AI Risk Assessment — BiDigest

Last updated: 03/10/2026

How we assess AI risk in our own platform

This page explains how BiDigest uses its internal AI Risk Assessment Form — Regulated Industry to evaluate AI use cases, including our governance audit and visibility tracking.

1. What we assess

For each AI system that BiDigest relies on (for example, LLM-based analysis, governance audit queries, or internal decision support), we capture:

  • System identification and business owner
  • Role and tier under the EU AI Act (where applicable)
  • Context and intended use, including regulated activities
  • Risks across bias, privacy, safety, transparency, accountability, and compliance

2. Drift and retraining

We record retraining frequency and how we monitor for model or behavior drift. For BiDigest, this includes monitoring Identity Drift in how AI platforms describe our own brand and products over time.
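One simple way to flag this kind of drift is to compare a baseline description of the brand against a freshly collected one. The sketch below uses token-set Jaccard similarity with an arbitrary threshold; the function names, example strings, and the 0.5 cutoff are all illustrative assumptions, not BiDigest's actual monitoring method.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two descriptions."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def drift_detected(baseline: str, current: str, threshold: float = 0.5) -> bool:
    """Flag drift when similarity to the baseline drops below the threshold."""
    return jaccard(baseline, current) < threshold

baseline = "BiDigest is a governance and AI visibility platform"
current = "BiDigest is a social media scheduling tool"
print(drift_detected(baseline, current))  # → True (similarity is 3/12 = 0.25)
```

In practice a semantic similarity measure would be more robust than raw token overlap, but the shape of the check is the same: compare against a baseline, alert below a threshold.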

3. Controls and mitigation

For each identified risk, we document specific mitigations, owners, and status. For higher-risk use, human oversight, logging, and escalation procedures are mandatory fields, not optional notes.
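The "mandatory, not optional" rule can be enforced mechanically at intake. The sketch below is a minimal illustration, assuming hypothetical control names ("human_oversight", "logging", "escalation") and a "high-risk" tier label; it is not BiDigest's actual validation code.

```python
MANDATORY_FOR_HIGH_RISK = ("human_oversight", "logging", "escalation")

def missing_controls(tier: str, controls: dict[str, str]) -> list[str]:
    """Return mandatory controls that are absent or empty for high-risk systems."""
    if tier != "high-risk":
        return []
    return [name for name in MANDATORY_FOR_HIGH_RISK if not controls.get(name)]

# A high-risk system with only logging documented fails validation:
print(missing_controls("high-risk", {"logging": "audit log enabled"}))
# → ['human_oversight', 'escalation']
```

Rejecting a submission while this list is non-empty is what turns the mandatory fields into an actual gate rather than a convention.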

4. How this connects to the product

  • Risk assessment fields are designed to map to our governance metadata model.
  • Key controls (logging, oversight, drift monitoring) are wired into BiDigest dashboards.
  • We aim to expose a similar intake for clients as part of Governance Launchpad.

Note: This page is a human-readable summary of an internal process and does not replace formal legal or compliance review. For details, see the internal policy and checklist documents.
