Mistral AI is the leading European AI lab, offering both a commercial API and open-weight models under permissive licenses. Their Mistral Large, Medium, and Small variants cover the full cost/quality curve, and they're the go-to choice when EU data residency, GDPR compliance, or on-premise deployment matters more than raw capability.
Mistral's API follows the OpenAI-compatible chat completions format, so pointing an existing codebase at it is usually just a base-URL and API-key swap. For on-premise use, you download the open-weight variants (Mistral 7B, Mixtral 8x7B, Mixtral 8x22B) and run them with vLLM, llama.cpp, or a similar inference engine. Function calling is supported across the API models, and a fine-tuning API is available for adapting the models to domain-specific tasks.
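To make the compatibility concrete, here is a minimal sketch of an OpenAI-style chat completions request aimed at Mistral's endpoint, using only the standard library. The model name `mistral-small-latest` and the exact endpoint path are assumptions; check Mistral's API documentation for current values.

```python
import json
import os
import urllib.request

# Assumed endpoint; Mistral's API mirrors the OpenAI
# POST /v1/chat/completions shape: {"model", "messages", ...}.
MISTRAL_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "mistral-small-latest") -> urllib.request.Request:
    """Build an OpenAI-compatible chat completions request for Mistral."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        MISTRAL_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
        },
    )

# Sending it is one more line (requires a valid MISTRAL_API_KEY):
# with urllib.request.urlopen(build_request("Hello")) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Because the payload shape is identical to OpenAI's, existing client libraries that accept a custom base URL work unchanged against this endpoint.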
Mistral Large: $2 input / $6 output per million tokens. Mistral Medium: $0.40/$2. Mistral Small: $0.15/$0.45. Mistral Nemo: $0.08/$0.12 (all input/output per million tokens). Open-weight variants are free to self-host. Dedicated deployments are available for enterprise.
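The per-million-token rates above translate into a simple cost estimate; this sketch hard-codes the prices listed in this section (verify against Mistral's current pricing page before relying on them).

```python
# USD per million tokens (input, output), from the price list above.
PRICES = {
    "mistral-large": (2.00, 6.00),
    "mistral-medium": (0.40, 2.00),
    "mistral-small": (0.15, 0.45),
    "mistral-nemo": (0.08, 0.12),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost for one request or a whole workload."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# e.g. 1M input + 1M output tokens on Mistral Large costs $2 + $6 = $8.
```

The same arithmetic makes the cost/quality curve easy to compare: the identical workload on Mistral Nemo costs $0.20 instead of $8.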
Harvey AI, Arthur, Orange, BNP Paribas, ING, and many European enterprises with data residency requirements. Also popular in the open-weight community.