Models

OpenAI-compatible models endpoints. Use these to enumerate the models available to your organization and to check whether a specific model id is usable before calling Chat Completions.

List models

GET https://api.ragen.ai/v1/models

Returns the full list of models your organization is allowed to use. This is the intersection of the underlying LLM catalog and your organization's model allowlist (configured by your Ragen admin).

Response

{
  "object": "list",
  "data": [
    {
      "id": "gpt-5.4",
      "object": "model",
      "created": 1744664400,
      "owned_by": "ragen"
    },
    {
      "id": "claude-sonnet-4-6",
      "object": "model",
      "created": 1744664400,
      "owned_by": "ragen"
    }
  ]
}

Example

from openai import OpenAI

client = OpenAI(base_url="https://api.ragen.ai/v1", api_key="YOUR_API_KEY")
for m in client.models.list():
    print(m.id)

Or with curl:

curl https://api.ragen.ai/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
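To check whether a specific model is usable before calling Chat Completions, you can collect the ids from the list response into a set. A minimal sketch that works directly on the response shape shown above (no network call; the payload and the `available_model_ids` helper are illustrative, not part of any SDK):

```python
# Sample payload matching the GET /v1/models response shape above.
response = {
    "object": "list",
    "data": [
        {"id": "gpt-5.4", "object": "model", "created": 1744664400, "owned_by": "ragen"},
        {"id": "claude-sonnet-4-6", "object": "model", "created": 1744664400, "owned_by": "ragen"},
    ],
}

def available_model_ids(payload):
    """Collect the usable model ids from a list-models payload."""
    return {m["id"] for m in payload["data"]}

ids = available_model_ids(response)
print("gpt-5.4" in ids)  # prints True
```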

Retrieve a model

GET https://api.ragen.ai/v1/models/{id}

Returns a single model by id, or a 404 error if the model isn't available to your organization.

Response

{
  "id": "gpt-5.4",
  "object": "model",
  "created": 1744664400,
  "owned_by": "ragen"
}

Example

m = client.models.retrieve("gpt-5.4")
print(m.id, m.owned_by)
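The 404 can be turned into a boolean pre-flight check. A minimal sketch, assuming the official openai Python client, whose client.models.retrieve raises openai.NotFoundError on a 404; the `model_usable` helper is illustrative, and the exception type is injectable so the helper stays client-agnostic:

```python
def model_usable(client, model_id, not_found_exc=Exception):
    """Return True if `model_id` resolves, False if retrieval raises.

    With the openai SDK, pass not_found_exc=openai.NotFoundError so that
    only the documented 404 maps to False and other errors still propagate.
    """
    try:
        client.models.retrieve(model_id)
    except not_found_exc:
        return False
    return True
```

For example, `model_usable(client, "gpt-5.4", openai.NotFoundError)` before issuing a Chat Completions request.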

Notes

  • owned_by is always "ragen" — the underlying provider (OpenAI, Anthropic, Google, etc.) is intentionally abstracted.
  • The list is cached for 60 seconds on the server side. New models added by your admin will appear on the next cache miss.
  • An empty model allowlist means "no restriction": the full catalog is returned.
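Given the 60-second server-side cache, repeatedly re-listing models within that window buys nothing. One way to avoid redundant calls is a small client-side cache with a matching TTL; this is an illustrative sketch (the `ModelListCache` class is not part of any SDK), with an injectable clock so it can be tested without waiting:

```python
import time

class ModelListCache:
    """Cache the model list client-side for `ttl` seconds.

    `fetch` is any zero-argument callable returning the list, e.g.
    lambda: [m.id for m in client.models.list()].
    """

    def __init__(self, fetch, ttl=60.0, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl
        self._clock = clock
        self._cached = None
        self._stamp = None

    def get(self):
        now = self._clock()
        if self._stamp is None or now - self._stamp >= self._ttl:
            # Cache is cold or expired: refresh from the server.
            self._cached = self._fetch()
            self._stamp = now
        return self._cached
```

Note the TTL here simply mirrors the documented server behavior; a shorter client TTL gains nothing, since the server would serve the same cached list anyway.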