AI Configuration

Configure AI models to enable AI-powered features such as AI search and data governance checks.

Overview

Entropy Data works fully without any AI model: all core functionality operates independently of AI. AI features are optional enhancements that can be enabled or disabled at any time.

When enabled, AI powers the following features:

  • Search: Natural language search across data products, contracts, teams, and definitions for an improved search experience
  • Data Governance AI: Automated policy compliance checks on data products and contracts
  • Access Request Evaluation: AI analyzes access requests against policies and data contract terms
  • Data Contract Assistant: Conversational assistance and inline AI features in the new ODCS Data Contract Editor

See AI Use Cases for detailed descriptions and planned features.

Disabled

AI is fully optional.

Navigate to Organization Settings → AI and ensure that AI is disabled.

Managed (Cloud only)

The Cloud version includes an optional managed AI model. No configuration is required: AI features work out of the box. You can still bring your own model if preferred.

Bring Your Own Model

For cloud and self-hosted deployments, you can configure your own AI model to enable AI features while retaining full control over costs and privacy. You need an OpenAI API-compatible deployment (which most AI providers and gateways support).

Plan for a tokens-per-minute quota of at least 100,000.

Navigate to Organization Settings → AI to configure your AI model.
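As a quick connectivity check, the request your configured endpoint must accept can be sketched with the standard library. The base URL, API key, and model below are placeholders; the request is built but not sent:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for any OpenAI-compatible endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder credentials; sending this with urllib.request.urlopen(req)
# should return a JSON chat completion if the endpoint is reachable.
req = build_chat_request("https://api.openai.com/v1/", "sk-test", "gpt-4o", "ping")
print(req.full_url)  # https://api.openai.com/v1/chat/completions
```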

OpenAI

Setting    Value
Base URL   https://api.openai.com/v1/
API Key    Your OpenAI API key from platform.openai.com
Model      gpt-4o (recommended)

Azure OpenAI

You can deploy OpenAI models in Azure AI Foundry. Note: For EU customers, we recommend deploying in the Sweden region.

You can also use the model-router as a model.

Setting    Value
Base URL   https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-name}/
API Key    Your Azure OpenAI API key
Model      Your deployed model name
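The deployment-scoped base URL follows directly from your resource and deployment names. A minimal sketch (both names are placeholders):

```python
def azure_openai_base_url(resource_name: str, deployment_name: str) -> str:
    """Assemble the deployment-scoped base URL for Azure OpenAI."""
    return (
        f"https://{resource_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}/"
    )

print(azure_openai_base_url("my-resource", "gpt-4o"))
# → https://my-resource.openai.azure.com/openai/deployments/gpt-4o/
```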

Google Gemini

Gemini models can be used through their OpenAI-compatible API:

Setting    Value
Base URL   https://generativelanguage.googleapis.com/v1beta/openai/
API Key    Your API key from aistudio.google.com → Get API key
Model      gemini-2.5-pro

Gemini 3 is not yet supported.

Other OpenAI-Compatible Endpoints

Any OpenAI-compatible API endpoint can be used, including:

  • Self-hosted models (vLLM, Ollama, etc.)
  • AI Gateways (Portkey, LiteLLM, etc.)
  • Other cloud providers with OpenAI-compatible APIs

Configure the base URL, API key, and model name according to your provider's documentation.
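For reference, a hedged sketch of what such settings might look like for two self-hosted options. The base URLs reflect each project's documented defaults and the model names are examples, so verify both against your own deployment:

```python
# Illustrative settings for self-hosted OpenAI-compatible endpoints.
# Base URLs are the projects' documented defaults; model names are examples.
PROVIDERS = {
    "ollama": {
        "base_url": "http://localhost:11434/v1/",  # Ollama's OpenAI-compatible API
        "model": "llama3.1",                       # example locally pulled model
    },
    "vllm": {
        "base_url": "http://localhost:8000/v1/",   # vLLM's default serving port
        "model": "your-served-model-name",         # placeholder
    },
}

def settings_for(provider: str) -> dict:
    """Look up base URL and model for a configured provider."""
    return PROVIDERS[provider]

print(settings_for("ollama")["base_url"])  # http://localhost:11434/v1/
```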

Custom Headers

For AI Gateways or providers that require custom headers (such as Portkey), you can add custom headers in the configuration. This is useful for routing, tracing, or authentication purposes.

Environment Variables (self-hosted)

After configuring your AI model, enable specific features:

Environment Variable                   Value  Description
APPLICATION_ACCESSREQUEST_AI_ENABLED   true   Enable AI governance checks on access requests
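For illustration, a boolean flag like this is commonly parsed along these lines; the application's actual parsing logic is an assumption here:

```python
import os

def flag_enabled(name: str) -> bool:
    """Treat common truthy strings as enabled; anything else as disabled."""
    return os.environ.get(name, "false").strip().lower() in {"true", "1", "yes"}

os.environ["APPLICATION_ACCESSREQUEST_AI_ENABLED"] = "true"
print(flag_enabled("APPLICATION_ACCESSREQUEST_AI_ENABLED"))  # True
```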

See Configuration for more details on environment variables.