
AI Providers (BYOK)

CronDB runs every AI feature on your keys, not ours. AI personalization in sequences, the reply classifier, inbox suggest-reply drafts, and workflow ai_extract / ai_write actions all dispatch through providers you've connected here. The same goes for the signal-data integrations (Apollo, PredictLeads, NewsAPI, etc.) used by enrichment workflows.

tip

Connecting at least one LLM provider is required for AI-personalized sequence steps, the reply classifier, inbox suggest-reply, workflow AI actions, and AI scoring rules. Without a key, those features fall back to non-AI behavior or are skipped entirely.

Supported providers

LLM providers

| Name | Default validation model | Notes |
| --- | --- | --- |
| openai | gpt-4o-mini | OpenAI / Azure OpenAI compatible. |
| anthropic | claude-3-5-haiku-latest | Claude API. |
| deepseek | deepseek-chat | Cheapest option; recommended default for warmup content + reply classification. |
| gemini | gemini-1.5-flash | Google AI Studio key. |
| grok | grok-2-latest | xAI key. |
| mistral | mistral-small-latest | Mistral AI key. |
| openai_compatible | (caller supplies) | Self-hosted Llama / Ollama / vLLM / any OpenAI-shape endpoint. Requires endpoint_url. |

Signal-data providers

apollo, predictleads, newsapi, clearbit, peopledatalabs — used by signal-enrichment workflow actions. Validation hits the provider's own health-check endpoint.
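
Signal keys go through the same endpoint documented in the next section; a minimal sketch, assuming an Apollo key (the key value is a placeholder):

curl -X POST \
  -H "Authorization: Bearer cdb_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "apollo",
    "api_key": "<your-apollo-api-key>",
    "label": "apollo-enrichment"
  }' \
  "https://api.crondb.com/v1/ai/providers"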


Add a Provider Key

Stores the key encrypted at rest. Before saving, the key is validated with a 1-token completion (LLM providers) or a vendor health-check (signal providers); invalid keys never persist.

Endpoint

POST /v1/ai/providers

Body Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| provider | string | Yes | One of the names from the tables above. |
| api_key | string | Yes | The provider's API key. 8–512 chars. |
| endpoint_url | string | If provider=openai_compatible | Base URL of your OpenAI-shape endpoint. |
| label | string | No | Free-text label for the dashboard (e.g. "personal-openai-key", "team-anthropic"). |
| validation_model | string | No | Overrides the default validation model. Required when provider=openai_compatible. |

Example Request

curl -X POST \
  -H "Authorization: Bearer cdb_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "deepseek",
    "api_key": "sk-...",
    "label": "main"
  }' \
  "https://api.crondb.com/v1/ai/providers"

Response

{
  "key": {
    "id": 12,
    "provider": "deepseek",
    "label": "main",
    "is_active": true,
    "key_preview": "sk-1…ab9c",
    "last_validated_at": "2026-05-05T08:14:02Z",
    "created_at": "2026-05-05T08:14:02Z",
    "updated_at": "2026-05-05T08:14:02Z"
  },
  "validation": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "latency_ms": 412,
    "prompt_tokens": 6,
    "completion_tokens": 1,
    "echo": "pong"
  }
}

key_preview shows the first 4 + last 4 chars (e.g. sk-1…ab9c); the full key is never returned from any endpoint after creation.
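
When provider=openai_compatible, the request must also carry endpoint_url and a validation_model (there is no default). A sketch, assuming a hypothetical self-hosted vLLM server and model name:

curl -X POST \
  -H "Authorization: Bearer cdb_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai_compatible",
    "api_key": "<token-your-server-expects>",
    "endpoint_url": "http://localhost:8000/v1",
    "validation_model": "llama-3.1-8b-instruct",
    "label": "self-hosted-llama"
  }' \
  "https://api.crondb.com/v1/ai/providers"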

Error responses

| Status | Reason |
| --- | --- |
| 400 | Unknown provider, missing endpoint_url for openai_compatible, or key validation failed. The detail carries the upstream error from the provider. |
| 500 | Unexpected validation error (network, etc.). |

List Provider Keys

GET /v1/ai/providers
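
A minimal request:

curl -H "Authorization: Bearer cdb_your_api_key_here" \
  "https://api.crondb.com/v1/ai/providers"

Response
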
{
  "keys": [
    {
      "id": 12,
      "provider": "deepseek",
      "label": "main",
      "is_active": true,
      "key_preview": "sk-1…ab9c",
      "last_validated_at": "2026-05-05T08:14:02Z",
      "created_at": "2026-05-05T08:14:02Z",
      "updated_at": "2026-05-05T08:14:02Z"
    }
  ]
}

The fallback chain prefers is_active=true keys in this order: deepseek → anthropic → openai → gemini → grok → mistral → openai_compatible. Set is_active=false to take a key out of rotation without deleting it.


Update / Rotate / Revalidate

PUT /v1/ai/providers/{key_id}

Body Parameters

| Parameter | Description |
| --- | --- |
| label | Rename the key. |
| is_active | Toggle whether the engine uses this key. |
| api_key | Provide to rotate the key. Omit to keep the existing one. |
| endpoint_url | For openai_compatible only. |
| revalidate | Set true to run a 1-token test against the (possibly rotated) key. |

When revalidate=true, the response includes the same validation block as the create endpoint.
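
For example, rotating a key and revalidating it in one call (the key ID and new secret are placeholders):

curl -X PUT \
  -H "Authorization: Bearer cdb_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "sk-new-...",
    "revalidate": true
  }' \
  "https://api.crondb.com/v1/ai/providers/12"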


Delete

DELETE /v1/ai/providers/{key_id}

Hard delete — the encrypted key row is removed. Outstanding workflows referencing this key fall through to the next-priority key in the fallback chain. If no other keys are connected, AI features for that user begin returning No BYOK provider connected errors until a new key is added.
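
For example, removing the key created above:

curl -X DELETE \
  -H "Authorization: Bearer cdb_your_api_key_here" \
  "https://api.crondb.com/v1/ai/providers/12"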


Usage & Cost

Token + cost rollup for every LLM call dispatched on the caller's keys. Powers the dashboard cost chart and lets you budget against your own provider bill.

Endpoint

GET /v1/ai/providers/usage?days=30

days accepts 1–365; default 30.
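
For example, pulling the default 30-day window:

curl -H "Authorization: Bearer cdb_your_api_key_here" \
  "https://api.crondb.com/v1/ai/providers/usage?days=30"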

Response

{
  "days": 30,
  "since": "2026-04-05T08:14:02Z",
  "total_calls": 4128,
  "total_cost_usd": 2.4502,
  "projected_monthly_cost_usd": 3.18,
  "by_provider": [
    { "provider": "deepseek", "calls": 3804, "prompt_tokens": 1820000, "completion_tokens": 412000, "cost_usd": 1.842 },
    { "provider": "anthropic", "calls": 324, "prompt_tokens": 184000, "completion_tokens": 48200, "cost_usd": 0.608 }
  ],
  "by_action_type": [
    { "action_type": "sequence_ai_write", "calls": 1842, "cost_usd": 0.984 },
    { "action_type": "workflow:ai_extract", "calls": 488, "cost_usd": 0.621 },
    { "action_type": "reply_classifier", "calls": 1208, "cost_usd": 0.412 },
    { "action_type": "warmup_pool_generator", "calls": 346, "cost_usd": 0.249 },
    { "action_type": "inbox_suggest_reply", "calls": 244, "cost_usd": 0.184 }
  ],
  "by_day": [
    { "day": "2026-04-06", "cost_usd": 0.082, "calls": 142 }
  ]
}

Response Fields

| Field | Description |
| --- | --- |
| total_calls / total_cost_usd | Sums across the lookback window. |
| projected_monthly_cost_usd | Run-rate projection using the most recent 7-day cost × (30/7). Useful for catching cost spikes before end of month (worked example below). |
| by_provider | Per-provider breakdown; handy when you have multiple keys and want to know which one's burning budget. |
| by_action_type | Which feature is using the budget, sorted by cost descending. Prefix conventions: sequence_*, workflow:*, inbox_*, warmup_*, reply_classifier. |
| by_day | Time series for charts. |
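
As a worked check against the sample response: a trailing 7-day cost of roughly $0.742 projects to 0.742 × (30/7) ≈ $3.18, matching projected_monthly_cost_usd above. (The $0.742 figure is back-derived here for illustration; the endpoint computes it from your real by_day data.)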

Known Models (UI Helper)

GET /v1/ai/providers/known-models

Returns the model dropdown choices the dashboard uses, plus per-provider metadata (kind = llm or signal, whether it supports endpoint_url, default validation model). LLM providers expose their PRICE_MAP models; signal providers return an empty model list with kind: "signal" so UIs can hide the model selector + cost estimator.
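
A minimal request:

curl -H "Authorization: Bearer cdb_your_api_key_here" \
  "https://api.crondb.com/v1/ai/providers/known-models"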


Notes

  • All keys are encrypted at rest using a transparent SQLAlchemy TypeDecorator. Database dumps don't expose plaintext.
  • The validation step on create costs a handful of tokens from the provider (the sample response above shows 6 prompt + 1 completion), typically around $0.0001 on the cheapest model.
  • The fallback chain is deterministic: when one provider returns a transient error (rate limit, 5xx), the engine tries the next active key automatically. If every key fails, the calling action returns the original error.
  • sequence_ai_write, inbox_suggest_reply, and workflow:ai_extract are the heaviest action types in practice. If costs spike, sort by_action_type to identify the culprit and either trim the prompt template or move the feature to a cheaper provider.
  • Signal-provider keys (Apollo / PredictLeads / etc.) bypass the LLM cost ledger — their costs are tracked separately by the relevant enrichment workflow actions.

Next Steps

  • Workflows: the ai_extract, ai_write, and auto_reply_template actions all dispatch through your connected keys.
  • Sequences: {{ai_write: instruction}} template tokens use the same fallback chain.
  • Unified Inbox: suggest-reply uses your connected keys to draft inbox responses.