Content Workshops: Where Bias Can Appear in the AI Lifecycle
Under the EU AI Act, and as a matter of good practice, bias and unfairness can arise at multiple stages of the AI lifecycle. This page maps where to look and what to document for Human Oversight and impact assessments.
Why this matters
When AI systems recommend or represent your business, bias can affect who gets recommended, how you are described, or how you are compared to others. Documenting where bias could occur supports your impact assessment and Human Oversight evidence (EU AI Act). Use “Flag this result” in the dashboard to report suspected bias or incorrect outputs.
Stages of the AI lifecycle
Bias can enter at several points. Below we outline the main stages and what to check.
1. Data & sourcing
Which sources does the AI use to describe your business? If training or retrieval data over-represents certain regions, sectors, or firm sizes, recommendations can skew toward them. Check directory coverage, review sources, and whether your sector is fairly represented in the data the model was trained on or grounded in.
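One way to make this check concrete is to compare how often each sector (or region) appears in the sources the AI draws on against an external baseline, such as a business registry. The sketch below is a minimal illustration in Python; the sector labels, baseline shares, and the 50% threshold are hypothetical assumptions, not values from any specific system.

```python
# Minimal sketch (hypothetical data): compare how often each sector appears
# in the sources the AI retrieves from against a reference baseline, and
# flag sectors that look clearly under-represented.
from collections import Counter

# Sector label attached to each retrieved / grounding document (assumed metadata).
retrieved_sectors = ["retail", "retail", "finance", "retail", "hospitality", "finance"]

# Reference shares, e.g. from a national business registry (assumed figures).
baseline_share = {"retail": 0.40, "finance": 0.20, "hospitality": 0.25, "manufacturing": 0.15}

counts = Counter(retrieved_sectors)
total = sum(counts.values())

for sector, expected in baseline_share.items():
    observed = counts.get(sector, 0) / total
    # Arbitrary threshold: flag if the observed share is below half the expected share.
    if observed < 0.5 * expected:
        print(f"{sector}: observed {observed:.0%} vs expected {expected:.0%}, check coverage")
```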
2. Model & logic
Model design and prompts can favor certain patterns (e.g. length of description, keywords, or format). If the system ranks or filters results, those rules can systematically favor or disadvantage certain businesses. Document how your business is represented and whether the ranking or filtering logic is transparent and auditable.
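Where you can observe which businesses a ranking or filtering rule actually recommends, comparing selection rates between groups is a simple way to spot disproportionate effects. The sketch below assumes you have such outcome data; the group labels, example outcomes, and the 0.8 cut-off (a common rule-of-thumb ratio) are illustrative assumptions.

```python
# Minimal sketch (hypothetical data): check whether a ranking/filter rule
# recommends one group of businesses at a noticeably lower rate than another,
# using a simple selection-rate ratio as a screening heuristic.
def selection_rate(outcomes):
    """Share of businesses in a group that made it into the recommended set."""
    return sum(outcomes) / len(outcomes)

# 1 = recommended by the system, 0 = filtered out (assumed example outcomes).
small_firms = [1, 0, 0, 1, 0, 0, 0, 1]
large_firms = [1, 1, 0, 1, 1, 1, 0, 1]

ratio = selection_rate(small_firms) / selection_rate(large_firms)
if ratio < 0.8:  # rule-of-thumb threshold; document whatever threshold you actually use
    print(f"Selection rate ratio {ratio:.2f}: document and review the ranking rule")
```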
3. Deployment & context
Where and how is the AI used? The same model can behave differently by region, language, or product surface (e.g. the ChatGPT interface vs. the API). Deployment choices (filters, guardrails, localisation) can amplify or reduce bias. Note which environments you monitor and whether any user group is disproportionately affected.
4. Monitoring & feedback
Continuous monitoring and feedback close the loop. Flagging incorrect or biased outputs (e.g. via “Flag this result”) creates an audit trail and helps prioritise remediation. Track which queries or segments trigger the most issues and review periodically for fairness.
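To prioritise remediation, it helps to aggregate flagged outputs by the query segment they came from, so recurring fairness issues surface first in periodic reviews. The sketch below assumes a simple export of flags with query_segment and reason fields; those field names and the example records are hypothetical, not the dashboard's actual export format.

```python
# Minimal sketch (hypothetical flag export): group flagged outputs by query
# segment so the segments with the most bias/fairness flags are reviewed first.
from collections import Counter

flags = [
    {"query_segment": "local search / DE", "reason": "Bias or fairness"},
    {"query_segment": "local search / DE", "reason": "Incorrect description"},
    {"query_segment": "comparison queries", "reason": "Bias or fairness"},
    {"query_segment": "local search / DE", "reason": "Bias or fairness"},
]

bias_flags = Counter(
    f["query_segment"] for f in flags if f["reason"] == "Bias or fairness"
)
for segment, n in bias_flags.most_common():
    print(f"{segment}: {n} bias/fairness flags, prioritise for review")
```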
Practical steps
- Use the Draft Policy / Impact Assessment generator in the dashboard to produce a first draft of your impact documentation.
- Report suspected bias or incorrect results via “Flag this result” on a compliance violation, selecting “Bias or fairness” where relevant.
- Review the Flagged results list periodically and document mitigations in your conformity or impact materials.
- Link to this Content Workshops page from your internal documentation and your governance framework.