Quick Summary
TL;DR - What You Need to Know
- We use Groq (with a "q"), not xAI's "Grok" - different companies entirely
- No training on your data: Neither Groq nor Anthropic's API uses your prompts/outputs to train models by default
- Data retention: Anthropic API deletes within 30 days; Groq doesn't permanently retain prompts (short-lived cache only)
- Regional hosting: EU customers → Azure EU regions; US customers → Azure US regions for optimal performance
- Environmental efficiency: 75% Groq (claims ~10× lower energy than GPUs) + 25% Anthropic (AWS renewable matching) + RAG minimizes tokens
- Corporate responsibility: Most spend goes to venture-backed Groq, not Big Tech platforms
Our AI Provider Choices
Our Current Mix
Groq (G-R-O-Q) - 75% of our tokens
Custom LPU™ hardware designed for energy-efficient inference. Claims ~10× lower energy per token than typical GPU stacks (~1-3 J/token vs 10-30 J/token). Positioning capacity in Finland (free cooling, cleaner grid) and exploring Nordic renewable colocations. Venture-backed by BlackRock, Cisco, Samsung Catalyst Fund - not dominated by a single tech mogul.
Anthropic (Claude) - 25% of our tokens
Public Benefit Corporation with Long-Term Benefit Trust. Runs primarily on AWS Trainium/Inferentia for efficiency. AWS achieved 100% renewable electricity matching for 2023 and is >50% toward "water positive by 2030." Strategic investors include Amazon and Google, but PBC structure balances profit with mission.
Environmental Impact
Energy Efficiency by Design
Every AI request has an environmental footprint. We've designed our system to minimize energy consumption through three key strategies: choosing the most efficient inference hardware available, partnering with providers committed to renewable energy, and using retrieval-augmented generation to reduce the tokens needed per conversation.
Groq's LPU Technology
Groq's custom Language Processing Units (LPUs) are purpose-built for AI inference and, per Groq's figures, use roughly 10× less energy per token than traditional GPU stacks. Provider figures and early third-party estimates put this at roughly 1-3 joules per token vs 10-30 joules on GPUs. Groq is also siting capacity in Nordic regions such as Finland for free cooling and cleaner grid electricity.
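To make these figures concrete, the back-of-the-envelope arithmetic below converts the claimed per-token ranges into watt-hours for a single conversation. The token count and both energy ranges are illustrative assumptions taken from the claims above, not measured data.

```python
# Back-of-the-envelope energy estimate per conversation, using the claimed
# per-token figures above. All numbers are illustrative assumptions.

TOKENS_PER_CONVERSATION = 2_000          # assumed prompt + completion tokens
LPU_JOULES_PER_TOKEN = (1, 3)            # Groq's claimed LPU range
GPU_JOULES_PER_TOKEN = (10, 30)          # typical GPU-stack range cited above

def energy_range_wh(tokens: int, joules_per_token: tuple[float, float]) -> tuple[float, float]:
    """Convert a per-token joule range into watt-hours for a whole conversation."""
    return tuple(tokens * j / 3600 for j in joules_per_token)

lpu_low, lpu_high = energy_range_wh(TOKENS_PER_CONVERSATION, LPU_JOULES_PER_TOKEN)
gpu_low, gpu_high = energy_range_wh(TOKENS_PER_CONVERSATION, GPU_JOULES_PER_TOKEN)

print(f"LPU estimate: {lpu_low:.2f}-{lpu_high:.2f} Wh per conversation")
print(f"GPU estimate: {gpu_low:.2f}-{gpu_high:.2f} Wh per conversation")
```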
AWS Renewable Energy
Anthropic runs primarily on AWS, which achieved 100% renewable electricity matching for all operations in 2023 through power purchase agreements and renewable energy certificates. AWS is also 53% of the way to becoming "water positive by 2030" and expanding recycled water use for datacenter cooling.
RAG Architecture
Our Retrieval-Augmented Generation system finds relevant information first, then generates concise responses. This means shorter prompts and outputs compared to pure generative approaches - directly reducing total tokens processed and therefore energy consumption, regardless of which AI provider is used.
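Conceptually, the pipeline looks like the sketch below. This is a minimal illustration rather than our production code; the knowledge base, the keyword-overlap scoring, and the prompt wording are placeholders.

```python
# Minimal RAG sketch: retrieve the few most relevant passages first, then send
# a short, focused prompt to the model instead of a long open-ended one.
# The knowledge base, scoring, and prompt text below are placeholders.

KNOWLEDGE_BASE = [
    "Doors open at 08:30; registration closes at 10:00.",
    "Lunch is served in Hall B between 12:00 and 13:30.",
    "Wi-Fi: network 'EventNet', password printed on your badge.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap and keep only the top_k."""
    words = set(question.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """A short prompt containing only the retrieved context, not the full corpus."""
    context = "\n".join(retrieve(question))
    return f"Answer briefly using only this context:\n{context}\n\nQuestion: {question}"

# The resulting prompt stays short, which is what keeps per-request token
# counts (and therefore energy use) low regardless of the serving provider.
print(build_prompt("What time does lunch start?"))
```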
Data Privacy & Security
How We Handle Your Data
Training Data Policy
No training on your data by default. Neither Groq nor Anthropic's commercial/API services use your prompts or outputs to train models without explicit opt-in. Anthropic's recent five-year consumer data policy applies only to its consumer chat tiers and explicitly excludes API usage.
Data Retention
Anthropic API: deletes inputs and outputs within 30 days (unless a special zero-data-retention agreement applies). Groq: does not permanently retain prompts or outputs; a short-lived volatile cache may be kept for a few hours for performance, then auto-expires.
Regional Data Hosting
Protected under the EU-US Data Privacy Framework (upheld by the EU General Court in September 2025). EU customers: data is stored in Microsoft Azure EU regions by default. US customers: data is stored in Azure US regions for optimal performance. All regions use AES-256 encryption at rest, TLS in transit, and Azure's enterprise security controls.
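For illustration, the default routing is conceptually as simple as the mapping below; the specific Azure region names and the helper function are hypothetical examples, not our actual deployment configuration.

```python
# Hypothetical illustration of default region selection. Actual deployments
# are provisioned per customer; the region names below are examples only.

DEFAULT_AZURE_REGIONS = {
    "EU": "swedencentral",   # example Azure EU region
    "US": "eastus",          # example Azure US region
}

def default_region(customer_location: str) -> str:
    """EU customers land in an Azure EU region, US customers in a US region."""
    return DEFAULT_AZURE_REGIONS.get(customer_location, DEFAULT_AZURE_REGIONS["EU"])

print(default_region("EU"))  # -> swedencentral
```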
Voice Assistant Features
Voice Technology Stack
When voice assistant features are included in your deployment, we leverage Microsoft's Azure AI services for both speech recognition and synthesis. Our regional deployment strategy ensures optimal performance while maintaining data residency requirements.
Azure OpenAI (Whisper STT)
Speech-to-Text processing using OpenAI's Whisper model via the Azure OpenAI Service. Provides accurate transcription with support for multiple languages and accents. Audio is processed within your selected Azure region for low latency and data residency.
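For illustration, a transcription call through the Azure OpenAI Python SDK looks roughly like the sketch below; the endpoint, API version, deployment name, and file name are placeholder assumptions, not our actual configuration.

```python
import os
from openai import AzureOpenAI  # pip install openai

# Hypothetical endpoint, API version, and deployment name - adjust to your resource.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
    azure_endpoint="https://your-resource.openai.azure.com",
)

with open("attendee_question.wav", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper",   # name of the Whisper deployment in your Azure OpenAI resource
        file=audio_file,
    )

print(transcription.text)
```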
Azure AI Speech (TTS)
Text-to-Speech synthesis using Azure AI Speech's neural voice models. Delivers natural-sounding voice responses with customizable voice characteristics. Real-time processing ensures responsive voice interactions during events.
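A synthesis call with the Azure Speech SDK looks roughly like the following sketch; the key variable, region, voice name, and spoken text are illustrative assumptions.

```python
import os
import azure.cognitiveservices.speech as speechsdk  # pip install azure-cognitiveservices-speech

# Hypothetical key/region and voice; real deployments use the customer's Azure region.
speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["AZURE_SPEECH_KEY"],
    region="swedencentral",
)
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# With no audio_config given, the SDK plays the result on the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Your next session starts in ten minutes in Hall B.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished, audio length:", len(result.audio_data), "bytes")
```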
Regional Deployment
Voice processing runs in Sweden (Azure EU regions) when GDPR compliance is required, or in US datacenters for optimal performance when it is not. Regional selection is determined by your specific compliance and performance needs.
Corporate Ethics & Ownership
Who Benefits from Your AI Usage?
Many people worry about "enriching billionaires" with AI usage. We've chosen our provider mix specifically to address this concern. Here's where your money actually goes:
Groq Ownership
Venture-backed private company founded by Jonathan Ross (ex-Google TPU team). Recent $640M Series D (2024) from diversified investors: BlackRock Private Equity, Cisco Investments, Samsung Catalyst Fund, Tiger Global, D1 Capital. Not dominated by a single tech mogul - ownership spread across institutional funds and corporates.
Anthropic Structure
Public Benefit Corporation with a Long-Term Benefit Trust that elects some board members to maintain the public benefit mission. Strategic investors include Amazon ($4B) and Google, plus a large 2025 round from funds such as ICONIQ and Fidelity. The PBC structure legally balances profit with mission, though ownership is still largely conventional.
Your Data, Your Control
What You Can Do
Minimize Sensitive Data
Best practice: Avoid including personal details, financial information, or confidential business data in your prompts. Use general examples or anonymized data when possible. Remember that while we don't train on your data, it's processed by our AI providers.
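One lightweight way to follow this advice is to strip obvious identifiers before a prompt leaves your systems. The sketch below is a minimal illustration with two example patterns, not an exhaustive anonymization tool.

```python
import re

# Minimal, illustrative redaction of common identifiers before sending a prompt.
# This is not an exhaustive PII filter - treat it as a starting point only.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +46 70 123 45 67."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```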
Regional Preferences
Automatic regional hosting: EU customers get Azure EU regions, US customers get Azure US regions by default for optimal performance. Enterprise users: For specific regional requirements, zero-data-retention agreements, or custom compliance needs, contact our team for advanced privacy controls.
Data Requests
Your rights: Request data deletion, access to your information, or clarification about data handling at any time. Contact our privacy team for GDPR requests, data portability, or questions about our retention policies.
Transparency & Limitations
Current Limitations
- Environmental impact precision: While Groq claims ~10× efficiency, industry-standard, independently audited per-token climate metrics don't exist yet. We rely on provider claims and directional estimates.
- Real-time grid carbon: AWS's "100% renewable" is annual matching, not guaranteed 24/7 carbon-free power at every hour/location. Actual emissions vary by region and time.
- Water usage visibility: Neither Groq nor Anthropic publish detailed water consumption data per token, though AWS reports overall water stewardship progress.
- Supply chain impacts: We don't yet track embodied carbon in chip manufacturing or datacenter construction - only operational energy use.
Our Ongoing Commitments
- Provider accountability: We regularly review our provider choices and will switch to more sustainable options as they become available and proven.
- Transparency updates: As industry-standard metrics emerge, we'll publish more granular environmental and privacy data.
- User feedback: We actively seek feedback on privacy concerns and sustainability priorities to guide our technology choices.
- Continuous optimization: We're constantly improving our RAG architecture to reduce token usage and exploring new efficiency techniques.
- Annual review: We commit to reviewing and updating this document annually, or whenever significant changes occur to our provider relationships or industry standards.