AI Transparency Statement
Practicing what we preach | Last updated: 15 February 2026
Our Philosophy
At AI Governance Hub, we help organizations govern their AI systems responsibly. To maintain credibility and trust, we must be equally transparent about our own use of AI—or lack thereof.
This page explains exactly how (and whether) we use artificial intelligence in our platform. We believe in radical transparency because that's what we ask of our customers.
Current AI Usage: None
AI Governance Hub does NOT currently use artificial intelligence or machine learning
As of February 2026, our platform operates using deterministic, rule-based algorithms with full transparency. There are no AI models, neural networks, or machine learning systems in production.
What We Mean by "No AI"
When we say we don't use AI, we mean:
No Machine Learning Models
We do not train, deploy, or use any neural networks, decision trees, or statistical models to process your data.
No Automated Decision-Making
Decision-making stays with you, the user. We calculate risk classifications and compliance scores from your inputs, but the platform takes no autonomous actions and makes no decisions on your behalf.
No Natural Language Processing (NLP)
We don't analyze the content of your documents, AIIA text, or risk assessments using AI. Your text is stored and displayed as-is.
No Predictive Analytics
We don't predict future compliance risks, forecast trends, or make recommendations based on patterns in your data.
No Third-Party AI Services
We do not send your data to OpenAI, Google AI, Anthropic Claude, or any other third-party AI service.
How Our Algorithms Work
Risk Scoring Algorithm
Our risk assessment tool uses a transparent, rule-based scoring system:
- Input: You answer 20 risk questions (multiple choice)
- Processing: Each answer has a predefined point value (0-3 points)
- Weighting: Questions are weighted by category importance (e.g., Bias & Fairness: 1.2x, Data Protection: 1.3x)
- Calculation: Total score = sum of (answer points × category weight)
- Classification: Scores mapped to risk levels using fixed thresholds:
  - 0-20: Critical Risk
  - 21-35: High Risk
  - 36-48: Medium Risk
  - 49-60: Low Risk
- Recommendations: Pre-written mitigation text displayed based on low-scoring categories
Key Point: This is arithmetic, not AI. The algorithm is deterministic—the same inputs always produce the same outputs. No learning, no adaptation, no black boxes.
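For readers who want the arithmetic spelled out, here is a minimal TypeScript sketch of the scoring logic described above. The two category weights are the examples quoted on this page; the question set, the remaining weights, and any normalization applied in production are illustrative assumptions, not the exact production code.

```typescript
// Minimal sketch of the rule-based risk scoring described above.
// The two weights shown are the examples quoted on this page; other
// categories and any production normalization are assumptions.

type Answer = { points: 0 | 1 | 2 | 3; category: string };

// Illustrative category weights; categories not listed default to 1.0.
const CATEGORY_WEIGHTS: Record<string, number> = {
  "Bias & Fairness": 1.2,
  "Data Protection": 1.3,
};

function riskScore(answers: Answer[]): number {
  // Total score = sum of (answer points × category weight)
  return answers.reduce(
    (total, a) => total + a.points * (CATEGORY_WEIGHTS[a.category] ?? 1.0),
    0
  );
}

function riskLevel(score: number): string {
  // Fixed thresholds from the list above; same inputs always give the same output.
  if (score <= 20) return "Critical Risk";
  if (score <= 35) return "High Risk";
  if (score <= 48) return "Medium Risk";
  return "Low Risk"; // 49 and above
}
```

Because there are no learned parameters and no randomness, the same answers always produce the same score and the same risk level.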
Compliance Scoring
Compliance dashboard scores are calculated using simple percentage completion:
- UK GDPR: (completed items / 15 total items) × 100
- ICO AI Guidance: (completed items / 11 total items) × 100
- Equality Act: (completed items / 7 total items) × 100
- Overall Score: weighted average of the three frameworks
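As an illustration, the same calculation expressed as a short TypeScript sketch. The item counts are the figures listed above; the weights used for the overall weighted average are not published on this page, so the weights parameter below is a placeholder rather than the production values.

```typescript
// Minimal sketch of the percentage-completion compliance scores described above.
// Item counts match the figures on this page; the framework weights for the
// overall score are not published here, so they are passed in as a placeholder.

const FRAMEWORK_ITEMS: Record<string, number> = {
  "UK GDPR": 15,
  "ICO AI Guidance": 11,
  "Equality Act": 7,
};

// (completed items / total items) × 100
function frameworkScore(framework: string, completedItems: number): number {
  return (completedItems / FRAMEWORK_ITEMS[framework]) * 100;
}

// Overall score: weighted average of the per-framework scores.
function overallScore(
  scores: Record<string, number>,
  weights: Record<string, number>
): number {
  const totalWeight = Object.values(weights).reduce((sum, w) => sum + w, 0);
  const weightedSum = Object.entries(scores).reduce(
    (sum, [framework, score]) => sum + score * (weights[framework] ?? 0),
    0
  );
  return weightedSum / totalWeight;
}
```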
Development Tools (AI-Assisted Coding)
Transparency Disclosure: While the platform itself doesn't use AI, we do use AI-assisted development tools during the coding process:
- Claude Code (Anthropic): AI coding assistant used by the development team
- GitHub Copilot: Code suggestions and autocomplete (occasional use)
Important: These tools help us write code faster, but they do NOT process your data or influence how the platform works. Your data never leaves our infrastructure for AI processing.
Future AI Features (If Any)
We have no immediate plans to introduce AI features. However, if we do in the future, we commit to:
30-Day Advance Notice
We will notify all users via email at least 30 days before deploying any AI feature.
Opt-Out Options
Any AI feature will be optional. You can continue using the platform without AI if you prefer.
Governed Using Our Own Platform
We will document our own AI systems in AI Governance Hub and publish our risk assessments and AIIAs publicly.
Full Transparency
We will disclose the AI model, training data sources, accuracy metrics, limitations, and risks.
Potential Future Use Cases (Hypothetical)
While we have no concrete plans, here are examples of AI features we might consider in the future (with appropriate governance):
- Document Analysis: NLP to extract key information from uploaded policies (e.g., auto-populate AIIA sections from DPIAs)
- Smart Recommendations: Suggest checklist items based on your industry or use case
- Risk Trend Analysis: Identify patterns across your AI systems to highlight common risks
- Chatbot Support: AI assistant to answer compliance questions (with human escalation)
Again, these are hypothetical. None are in development as of 15 February 2026.
Why We Don't Use AI (Yet)
Several reasons inform our current no-AI approach:
- Credibility: We can't advise clients on AI governance if we're not governing our own AI properly. Starting without AI gives us time to mature our governance practices.
- Transparency: Rule-based algorithms are fully explainable. AI models often aren't. For compliance tools, transparency is paramount.
- Data Privacy: Sending customer data to third-party AI services (e.g., OpenAI) introduces GDPR risks. We prefer to keep your data in our own infrastructure.
- Accuracy: AI can hallucinate or provide incorrect recommendations. For legal compliance tools, deterministic logic is safer.
- Cost: AI features (API calls, GPU compute) are expensive. We prefer to keep costs low and pass savings to customers.
Third-Party Services (No AI)
Our infrastructure partners do NOT use AI to process your data:
- Supabase: PostgreSQL database (no AI features enabled)
- Stripe: Payment processing (Stripe's own fraud detection may use machine learning, but it runs entirely within Stripe's systems and under Stripe's responsibility)
- Vercel: Hosting and CDN (no AI)
- Resend: Transactional email (no AI content analysis)
- PostHog: Privacy-preserving analytics (no AI-powered insights)
Verification
You don't have to take our word for it. Our platform is transparent by design:
- Open Methodology: Risk scoring and compliance algorithms documented in our GitHub repository
- Audit Trail: All calculations logged and visible to you
- Data Portability: Export your data anytime to verify processing is deterministic
- Third-Party Audits: We plan penetration testing and SOC 2 audits to independently verify these claims
Questions About AI Usage?
If you have questions about our use of AI (current or future), or if you discover any AI usage not disclosed here, please contact us immediately:
Email: ai-transparency@aigovernancehub.uk
Subject line: "AI Transparency Inquiry"
We will respond within 5 business days with a detailed explanation.
Policy Updates
We will update this AI Transparency Statement if we introduce any AI features or change our approach. Updates will be communicated via:
- Email notification (30 days in advance)
- In-app banner notification
- Updated "Last Updated" date on this page
- Changelog entry in our release notes
Our Commitment
We pledge to maintain this level of transparency about AI usage as long as AI Governance Hub exists. If we ever fail to disclose AI usage honestly, we invite you to hold us accountable.
Transparency isn't just a value—it's our business model. We can't help you govern AI responsibly if we're not doing it ourselves.