API Reference
Complete API documentation for the Lumnis AI REST API
Overview
The Lumnis AI API provides developers with access to advanced AI agent capabilities through a simple REST API. This documentation covers all endpoints, authentication, request/response formats, and best practices.
Base URL
https://api.lumnis.ai
Authentication
All API requests require authentication using a Bearer token in the Authorization header:
Authorization: Bearer your-api-key-here

Example Request
curl -X GET https://api.lumnis.ai/v1/threads \
-H "Authorization: Bearer your-api-key-here"Responses API
Create and manage AI-powered responses asynchronously.
POST /v1/responses
Create a new AI response. Responses are processed asynchronously in the background.
Request Body
{
"thread_id": "550e8400-e29b-41d4-a716-446655440000",
"messages": [
{
"role": "user",
"content": "What are the latest trends in AI?"
}
],
"user_id": "user@example.com",
"response_format": {
"type": "object",
"properties": {
"summary": { "type": "string" },
"key_points": { "type": "array", "items": { "type": "string" } }
}
},
"agent_config": {
"planner_model_type": "SMART_MODEL",
"coordinator_model_type": "REASONING_MODEL",
"orchestrator_model_type": "SMART_MODEL",
"planner_model_name": "openai:gpt-4.1",
"coordinator_model_name": "anthropic:claude-3-7-sonnet-20250219",
"use_cognitive_tools": true,
"enable_task_validation": true,
"generate_comprehensive_output": false
}
}

Field Descriptions:
- thread_id (optional): Existing thread ID to continue the conversation
- messages (required): Array of message objects with role and content
- user_id (optional): User identifier (UUID or email)
- response_format (optional): Structured output format specification
- agent_config (optional): Advanced agent configuration
Agent Config Options:
- planner_model_type: Model type for planning (SMART_MODEL, FAST_MODEL, REASONING_MODEL)
- coordinator_model_type: Model type for coordination
- orchestrator_model_type: Model type for orchestration
- planner_model_name: Specific model override (e.g., openai:gpt-4.1)
- coordinator_model_name: Specific coordinator model
- use_cognitive_tools: Enable cognitive processing tools
- enable_task_validation: Validate tasks before execution
- generate_comprehensive_output: Generate detailed outputs
Headers
- Idempotency-Key (optional): Ensure exactly-once processing
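For example, a minimal create request might look like this (the API key, idempotency key, and message below are placeholder values):

curl -X POST https://api.lumnis.ai/v1/responses \
  -H "Authorization: Bearer your-api-key-here" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: unique-request-id" \
  -d '{
    "messages": [
      { "role": "user", "content": "What are the latest trends in AI?" }
    ],
    "user_id": "user@example.com"
  }'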
Response (202 Accepted)
{
"response_id": "550e8400-e29b-41d4-a716-446655440000",
"thread_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "queued",
"created_at": "2025-01-20T10:30:00Z"
}

GET /v1/responses
List responses with optional filtering.
Query Parameters:
- user_id (optional): Filter by user ID or email
- status (optional): Filter by status (queued, in_progress, succeeded, failed, cancelled)
- start_date (optional): Filter by creation date (YYYY-MM-DD, inclusive start)
- end_date (optional): Filter by creation date (YYYY-MM-DD, inclusive end)
- limit (optional): Number of responses to return (1-100, default: 50)
- offset (optional): Number of responses to skip (default: 0)
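For example, to list up to ten succeeded responses for a user (placeholder API key and email):

curl -X GET "https://api.lumnis.ai/v1/responses?user_id=user@example.com&status=succeeded&limit=10" \
  -H "Authorization: Bearer your-api-key-here"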
Response
{
"responses": [
{
"response_id": "550e8400-e29b-41d4-a716-446655440000",
"thread_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "succeeded",
"output_text": "Here are the latest AI trends...",
"created_at": "2025-01-20T10:30:00Z",
"updated_at": "2025-01-20T10:31:00Z"
}
],
"total": 25,
"limit": 50,
"offset": 0
}

GET /v1/responses/{responseId}
Get response status and content. Supports long-polling for real-time updates.
Query Parameters:
- wait (optional): Wait up to N seconds (1-30) for a status change
- include_artifacts (optional): Include artifacts in response (default: false during polling, true when final)
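For example, to long-poll a response for up to 30 seconds and include artifacts (placeholder response ID):

curl -X GET "https://api.lumnis.ai/v1/responses/550e8400-e29b-41d4-a716-446655440000?wait=30&include_artifacts=true" \
  -H "Authorization: Bearer your-api-key-here"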
Response
{
"response_id": "550e8400-e29b-41d4-a716-446655440000",
"thread_id": "550e8400-e29b-41d4-a716-446655440000",
"tenant_id": "tenant_123",
"user_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "succeeded",
"output_text": "Here are the latest AI trends...",
"structured_response": {
"summary": "AI is evolving rapidly...",
"key_points": ["Point 1", "Point 2"]
},
"progress_percentage": 100,
"current_step": "Response generated",
"artifacts": [],
"error": null,
"created_at": "2025-01-20T10:30:00Z",
"updated_at": "2025-01-20T10:31:00Z",
"completed_at": "2025-01-20T10:31:00Z"
}

Status Values:
- queued: Waiting to be processed
- in_progress: Currently being processed
- succeeded: Completed successfully
- failed: Processing failed
- cancelled: Cancelled by user
POST /v1/responses/{responseId}/cancel
Cancel a queued or in-progress response.
Response
{
"status": "cancelled"
}

GET /v1/responses/{responseId}/artifacts
List artifacts generated by the response.
Query Parameters:
- limit (optional): 1-100 (default: 50)
- offset (optional): Pagination offset (default: 0)
Response
{
"artifacts": [
{
"artifact_id": "art_123",
"name": "analysis_report.pdf",
"type": "file",
"size_bytes": 102400,
"url": "https://storage.example.com/...",
"created_at": "2025-01-20T10:31:00Z"
}
],
"total": 1,
"limit": 50,
"offset": 0
}

Threads API
Manage conversation threads for organizing related responses.
GET /v1/threads
List conversation threads with pagination.
Query Parameters:
- user_id (optional): Filter by user ID or email
- limit (optional): 1-100 (default: 50)
- offset (optional): Pagination offset (default: 0)
Response
{
"threads": [
{
"thread_id": "550e8400-e29b-41d4-a716-446655440000",
"tenant_id": "tenant_123",
"user_id": "550e8400-e29b-41d4-a716-446655440000",
"title": "AI Trends Discussion",
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:31:00Z",
"response_count": 5,
"last_response_at": "2025-01-20T10:31:00Z"
}
],
"total": 25,
"limit": 50,
"offset": 0
}

GET /v1/threads/{threadId}
Get detailed information about a specific thread.
Response
{
"thread_id": "550e8400-e29b-41d4-a716-446655440000",
"tenant_id": "tenant_123",
"user_id": "550e8400-e29b-41d4-a716-446655440000",
"title": "AI Trends Discussion",
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:31:00Z",
"response_count": 5,
"last_response_at": "2025-01-20T10:31:00Z"
}

GET /v1/threads/{threadId}/responses
Get all responses in a thread.
Query Parameters:
- limit (optional): 1-100 (default: 50)
- offset (optional): Pagination offset (default: 0)
Response
Returns an array of response objects (same format as GET /v1/responses/{responseId})
PATCH /v1/threads/{threadId}
Update thread metadata.
Request Body
{
"title": "Updated Thread Title"
}

Response
Returns the updated thread object.
DELETE /v1/threads/{threadId}
Delete a thread and all associated responses. This operation is irreversible.
Response: 204 No Content
Users API
Manage users within your tenant context.
POST /v1/users
Create a new user or return existing user if email already exists.
Request Body
{
"email": "user@example.com",
"first_name": "John",
"last_name": "Doe",
"metadata": {
"department": "Engineering"
}
}

Response (201 Created or 200 OK)
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"tenant_id": "tenant_123",
"email": "user@example.com",
"first_name": "John",
"last_name": "Doe",
"is_active": true,
"metadata": {
"department": "Engineering"
},
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:00:00Z"
}

GET /v1/users
List all users with pagination.
Query Parameters:
- page (optional): Page number (≥1, default: 1)
- page_size (optional): Items per page (1-100, default: 20)
Response
{
"users": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"email": "user@example.com",
"first_name": "John",
"last_name": "Doe",
"is_active": true,
"created_at": "2025-01-20T10:00:00Z"
}
],
"total": 50,
"page": 1,
"page_size": 20
}

GET /v1/users/{userIdentifier}
Get user by ID (UUID) or email address.
Response
Returns a user object (same format as POST /v1/users response).
PUT /v1/users/{userIdentifier}
Update user information.
Request Body
{
"first_name": "Jane",
"last_name": "Smith",
"metadata": {
"department": "Marketing"
}
}

Response
Returns the updated user object.
DELETE /v1/users/{userIdentifier}
Deactivate a user (soft delete).
Response
{
"message": "User deleted successfully"
}

GET /v1/users/{userIdentifier}/responses
Get all responses created by a specific user.
Uses the same query parameters and response format as GET /v1/responses.
GET /v1/users/{userIdentifier}/threads
Get all threads associated with a specific user.
Uses the same query parameters and response format as GET /v1/threads.
Files API
Upload, manage, and search files with semantic capabilities.
POST /v1/files/upload
Upload a file for processing and indexing.
Request (multipart/form-data)
- file (required): The file to upload
- scope (required): user or tenant
- user_id (optional): User ID or email (required if scope is user)
- tags (optional): Comma-separated tags for categorization
- duplicate_handling (optional): error, overwrite, or suffix (default: suffix)
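A typical upload with curl might look like this (the file path, tags, and email are placeholder values):

curl -X POST https://api.lumnis.ai/v1/files/upload \
  -H "Authorization: Bearer your-api-key-here" \
  -F "file=@./document.pdf" \
  -F "scope=user" \
  -F "user_id=user@example.com" \
  -F "tags=documentation,important"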
Response
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"file_name": "document.pdf",
"status": "pending",
"message": "File uploaded successfully. Processing in background."
}

POST /v1/files/bulk-upload
Upload multiple files at once.
Request (multipart/form-data)
- files (required): Array of files to upload
- scope (required): user or tenant
- user_id (optional): User ID or email (required if scope is user)
- tags (optional): Comma-separated tags
Response
{
"uploaded": [
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"file_name": "document1.pdf",
"status": "pending",
"message": "Queued for processing"
}
],
"failed": [
{
"filename": "document2.pdf",
"error": "File too large"
}
],
"total_uploaded": 1,
"total_failed": 1
}

GET /v1/files
List files with optional filtering.
Query Parameters:
- user_id (optional): Filter by user ID or email
- scope (optional): Filter by scope (user or tenant)
- file_type (optional): Filter by file extension (e.g., pdf, docx)
- status (optional): Filter by processing status (pending, processing, completed, error)
- tags (optional): Comma-separated tags to filter by
- page (optional): Page number (≥1, default: 1)
- limit (optional): Items per page (1-100, default: 20)
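For example, to list completed PDF files tagged "documentation" (placeholder API key):

curl -X GET "https://api.lumnis.ai/v1/files?file_type=pdf&status=completed&tags=documentation&page=1&limit=20" \
  -H "Authorization: Bearer your-api-key-here"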
Response
{
"files": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"file_name": "document.pdf",
"file_type": "pdf",
"file_size": 1024000,
"scope": "tenant",
"user_id": null,
"processing_status": "completed",
"total_chunks": 150,
"chunks_embedded": 150,
"tags": ["documentation", "important"],
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:05:00Z"
}
],
"total_count": 50,
"page": 1,
"limit": 20,
"has_more": true
}

GET /v1/files/{fileId}
Get file metadata by ID.
Query Parameters:
- user_id (optional): User ID or email for access validation
Response
Returns a file metadata object (same format as items in GET /v1/files response).
GET /v1/files/{fileId}/content
Get file content.
Query Parameters:
- content_type (optional): text, summary, or transcript
- start_line (optional): Starting line number (for text files)
- end_line (optional): Ending line number (for text files)
- user_id (optional): User ID or email for access validation
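For example, to fetch the first 100 lines of a text file (placeholder file ID):

curl -X GET "https://api.lumnis.ai/v1/files/550e8400-e29b-41d4-a716-446655440000/content?content_type=text&start_line=1&end_line=100" \
  -H "Authorization: Bearer your-api-key-here"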
Response
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"content_type": "text",
"text": "File content here...",
"start_line": 1,
"end_line": 100,
"metadata": {}
}

GET /v1/files/{fileId}/download
Download the original file.
Query Parameters:
- user_id (optional): User ID or email for access validation
Response
Returns a 307 redirect to a signed download URL or streams the file content directly.
GET /v1/files/{fileId}/status
Get file processing status.
Query Parameters:
- user_id (optional): User ID or email for access validation
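One way to wait for processing to finish is to poll this endpoint until the status leaves pending or processing. A rough shell sketch (placeholder file ID; assumes jq is installed):

FILE_ID="550e8400-e29b-41d4-a716-446655440000"
while true; do
  # Read only the top-level status field from the status endpoint
  STATUS=$(curl -s "https://api.lumnis.ai/v1/files/$FILE_ID/status" \
    -H "Authorization: Bearer your-api-key-here" | jq -r '.status')
  echo "Processing status: $STATUS"
  # Stop once processing has finished (completed) or failed (error)
  if [ "$STATUS" = "completed" ] || [ "$STATUS" = "error" ]; then
    break
  fi
  sleep 5
done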
Response
{
"status": "completed",
"progress_percentage": 100.0,
"chunks_embedded": 150,
"total_chunks": 150,
"estimated_time_remaining_seconds": null,
"error_message": null,
"jobs": [
{
"job_type": "embedding",
"status": "completed",
"error_details": null
}
]
}

POST /v1/files/search
Semantic search across files.
Request Body
{
"query": "machine learning algorithms",
"user_id": "user@example.com",
"file_types": ["pdf", "md"],
"limit": 10,
"min_score": 0.7
}

Response
{
"query": "machine learning algorithms",
"results": [
{
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"file_name": "ml_guide.pdf",
"chunk_text": "Machine learning algorithms can be categorized...",
"score": 0.89,
"chunk_index": 5,
"metadata": {
"page": 3
}
}
],
"total_count": 5
}

PATCH /v1/files/{fileId}/scope
Change file access scope.
Request Body
{
"scope": "tenant",
"user_id": "user@example.com"
}

Response
Returns the updated file metadata object.
DELETE /v1/files/{fileId}
Delete a file.
Query Parameters:
- hard_delete (optional): Permanently delete (true) or soft delete (false). Default: true
- user_id (optional): User ID or email (required for user-scoped files)
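For example, to soft-delete a user-scoped file (placeholder file ID and email):

curl -X DELETE "https://api.lumnis.ai/v1/files/550e8400-e29b-41d4-a716-446655440000?hard_delete=false&user_id=user@example.com" \
  -H "Authorization: Bearer your-api-key-here"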
Response
{
"message": "File deleted successfully",
"file_id": "550e8400-e29b-41d4-a716-446655440000",
"hard_delete": true
}

DELETE /v1/files/bulk
Delete multiple files.
Request Body
{
"file_ids": [
"550e8400-e29b-41d4-a716-446655440000",
"550e8400-e29b-41d4-a716-446655440001"
]
}

Query Parameters:
- hard_delete (optional): Default: true
- user_id (optional): Required for user-scoped files
Response
{
"deleted": [
"550e8400-e29b-41d4-a716-446655440000"
],
"failed": [
{
"file_id": "550e8400-e29b-41d4-a716-446655440001",
"error": "File not found"
}
],
"hard_delete": true,
"total_requested": 2
}

Integrations API
Connect external services via OAuth (GitHub, Slack, Notion, etc.).
POST /v1/integrations/connections/initiate
Initiate an OAuth connection to an external app.
Request Body
{
"user_id": "user@example.com",
"app_name": "GITHUB"
}

Supported Apps:
- GITHUB, GITLAB, BITBUCKET
- SLACK, DISCORD, TEAMS
- NOTION, CONFLUENCE
- LINEAR, JIRA, ASANA
- GOOGLE_DRIVE, DROPBOX, ONEDRIVE
- And 100+ more apps...
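A sketch of initiating a GitHub connection (placeholder API key and email); the user then completes authorization at the returned redirect_url:

curl -X POST https://api.lumnis.ai/v1/integrations/connections/initiate \
  -H "Authorization: Bearer your-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{ "user_id": "user@example.com", "app_name": "GITHUB" }'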
Response
{
"redirect_url": "https://github.com/login/oauth/authorize?...",
"status": "pending",
"message": "Connection initiated successfully"
}

GET /v1/integrations/connections/{userId}/{appName}
Get connection status for a specific app.
Response
{
"app_name": "GITHUB",
"status": "active",
"connected_at": "2025-01-20T10:00:00Z",
"error_message": null
}

Status Values:
- not_connected: No connection exists
- pending: OAuth flow initiated but not completed
- active: Connection is active and verified
- failed: Connection failed or expired
- disabled: Connection disabled by user
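Once the user finishes the OAuth flow, you can verify the connection with a call like this (placeholder values for the user identifier and app name):

curl -X GET https://api.lumnis.ai/v1/integrations/connections/user@example.com/GITHUB \
  -H "Authorization: Bearer your-api-key-here"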
GET /v1/integrations/connections/{userId}
Get all connections for a user.
Query Parameters:
- app_filter (optional): Comma-separated list of app names
Response
{
"user_id": "user@example.com",
"connections": [
{
"app_name": "GITHUB",
"status": "active",
"connected_at": "2025-01-20T10:00:00Z",
"error_message": null
}
]
}

POST /v1/integrations/tools
Get available tools for a user based on their connections.
Request Body
{
"user_id": "user@example.com",
"app_filter": ["GITHUB", "SLACK"]
}

Response
{
"user_id": "user@example.com",
"tools": [
{
"name": "GITHUB_CREATE_ISSUE",
"description": "Create a new GitHub issue",
"app_name": "GITHUB",
"parameters": {
"type": "object",
"properties": {
"title": { "type": "string" },
"body": { "type": "string" }
},
"required": ["title"]
}
}
],
"tool_count": 1
}

POST /v1/integrations/connections/disconnect
Disconnect a user from an app.
Request Body
{
"user_id": "user@example.com",
"app_name": "GITHUB"
}

Response
{
"success": true,
"message": "Disconnected successfully"
}

GET /v1/integrations/apps
List apps enabled for the tenant.
Query Parameters:
- include_available (optional): Also return all available apps (default: false)
Response
{
"enabled_apps": ["GITHUB", "SLACK", "NOTION"],
"total_enabled": 3,
"available_apps": ["GITHUB", "SLACK", "NOTION", "LINEAR", "..."],
"total_available": 150
}

GET /v1/integrations/apps/{appName}/enabled
Check if an app is enabled for the tenant.
Response
{
"app_name": "GITHUB",
"enabled": true,
"message": "App GITHUB is enabled for this tenant"
}

PUT /v1/integrations/apps/{appName}
Enable or disable an app for the tenant.
Request Body
{
"enabled": true
}

Response
{
"app_name": "GITHUB",
"enabled": true,
"message": "App GITHUB has been enabled",
"updated_at": "2025-01-20T10:00:00Z"
}

MCP Servers API
Manage Model Context Protocol (MCP) server configurations.
POST /v1/mcp-servers
Create an MCP server configuration (upsert).
Request Body
{
"name": "github-tools",
"description": "GitHub API tools",
"transport": "streamable_http",
"scope": "tenant",
"url": "https://github-mcp.example.com/api",
"headers": {
"Authorization": "Bearer token"
},
"user_identifier": "user@example.com"
}

Transport Types:
- stdio: Execute a local command
- streamable_http: HTTP streaming connection
- sse: Server-Sent Events connection
Fields for stdio transport:
{
"transport": "stdio",
"command": "python",
"args": ["mcp_server.py"],
"env": {
"API_KEY": "secret"
}
}

Response
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "github-tools",
"description": "GitHub API tools",
"transport": "streamable_http",
"scope": "tenant",
"url": "https://github-mcp.example.com/api",
"is_active": true,
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:00:00Z"
}

GET /v1/mcp-servers
List MCP server configurations.
Query Parameters:
- scope (optional): Filter by scope (tenant, user, or all; default: all)
- user_identifier (optional): Filter by user (UUID or email)
- is_active (optional): Filter by active status
- skip (optional): Pagination offset (default: 0)
- limit (optional): Items per page (1-100, default: 100)
Response
{
"servers": [
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "github-tools",
"description": "GitHub API tools",
"transport": "streamable_http",
"scope": "tenant",
"is_active": true,
"created_at": "2025-01-20T10:00:00Z"
}
],
"total": 1,
"skip": 0,
"limit": 100
}

GET /v1/mcp-servers/{serverId}
Get a specific MCP server configuration.
Response
Returns an MCP server object (same format as POST response).
PATCH /v1/mcp-servers/{serverId}
Update an MCP server configuration.
Request Body
{
"description": "Updated description",
"is_active": false,
"headers": {
"Authorization": "Bearer new-token"
}
}

Response
Returns the updated MCP server object.
DELETE /v1/mcp-servers/{serverId}
Delete an MCP server configuration.
Response: 204 No Content
POST /v1/mcp-servers/test
Test an MCP server configuration before saving.
Request Body
{
"transport": "stdio",
"command": "python",
"args": ["mcp_server.py"],
"env": {
"API_KEY": "secret"
}
}

Response
{
"success": true,
"message": "Successfully connected to MCP server",
"tool_count": 5,
"error_details": null
}

POST /v1/mcp-servers/{serverId}/test
Test an existing MCP server connection.
Response
{
"success": true,
"message": "Successfully connected to MCP server 'github-tools'",
"tool_count": 5,
"error_details": null
}

GET /v1/mcp-servers/{serverId}/tools
List tools provided by an MCP server.
Response
{
"server_id": "550e8400-e29b-41d4-a716-446655440000",
"server_name": "github-tools",
"tools": [
{
"name": "create_issue",
"description": "Create a new GitHub issue",
"input_schema": {
"type": "object",
"properties": {
"title": { "type": "string" },
"body": { "type": "string" }
},
"required": ["title"]
}
}
],
"total": 1
}

Model Preferences API
Configure preferred AI models for different tasks.
GET /v1/model-preferences
Get current model preferences.
Query Parameters:
- include_defaults (optional): Include system defaults (default: true)
Response
{
"preferences": [
{
"model_type": "SMART_MODEL",
"provider": "anthropic",
"model_name": "claude-3-opus-20240229",
"is_default": false,
"updated_at": "2025-01-20T10:00:00Z"
},
{
"model_type": "FAST_MODEL",
"provider": "openai",
"model_name": "gpt-4.1-mini",
"is_default": true,
"updated_at": null
}
],
"defaults_applied": ["FAST_MODEL", "REASONING_MODEL"]
}

Model Types:
- SMART_MODEL: General-purpose intelligent model
- FAST_MODEL: Quick, efficient model for simple tasks
- REASONING_MODEL: Advanced reasoning capabilities
- EMBEDDING_MODEL: Text embedding generation
- VISION_MODEL: Image and vision tasks
Supported Providers:
- openai: OpenAI models
- anthropic: Anthropic Claude models
- google: Google Gemini models
PATCH /v1/model-preferences/{modelType}
Update a specific model preference.
Request Body
{
"model_type": "SMART_MODEL",
"provider": "anthropic",
"model_name": "claude-3-opus-20240229"
}

Response
Returns the updated model preference object.
PUT /v1/model-preferences
Update multiple model preferences at once.
Request Body
{
"preferences": {
"SMART_MODEL": {
"model_type": "SMART_MODEL",
"provider": "anthropic",
"model_name": "claude-3-opus-20240229"
},
"FAST_MODEL": {
"model_type": "FAST_MODEL",
"provider": "openai",
"model_name": "gpt-4.1-mini"
}
}
}

Response
Returns the full updated preferences object (same format as GET).
DELETE /v1/model-preferences/{modelType}
Delete a model preference to revert to system default.
Response: 204 No Content
POST /v1/model-preferences/check-availability
Check if models are available based on your API key configuration.
Request Body
[
{
"model_type": "SMART_MODEL",
"provider": "openai",
"model_name": "gpt-4.1"
}
]

Response
[
{
"model_type": "SMART_MODEL",
"provider": "openai",
"model_name": "gpt-4.1",
"is_available": true,
"reason": null
}
]

External API Keys API
Manage external API keys for AI providers (BYO Keys mode).
POST /v1/external-api-keys
Store an external API key.
Request Body
{
"provider": "OPENAI_API_KEY",
"api_key": "sk-..."
}

Supported Providers:
- OPENAI_API_KEY
- ANTHROPIC_API_KEY
- GOOGLE_API_KEY
- EXA_API_KEY
- SERPAPI_API_KEY
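For example, to store an OpenAI key read from a local environment variable (the variable name is illustrative):

curl -X POST https://api.lumnis.ai/v1/external-api-keys \
  -H "Authorization: Bearer your-api-key-here" \
  -H "Content-Type: application/json" \
  -d "{ \"provider\": \"OPENAI_API_KEY\", \"api_key\": \"$OPENAI_API_KEY\" }"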
Response
{
"key_id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "OPENAI_API_KEY",
"is_active": true,
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:00:00Z",
"created_by": null
}

GET /v1/external-api-keys
List all stored external API keys (metadata only, keys are never returned).
Response
[
{
"key_id": "550e8400-e29b-41d4-a716-446655440000",
"provider": "OPENAI_API_KEY",
"is_active": true,
"created_at": "2025-01-20T10:00:00Z",
"updated_at": "2025-01-20T10:00:00Z"
}
]

GET /v1/external-api-keys/{keyId}
Get details for a specific external API key.
Response
Returns an external API key object (same format as POST response).
DELETE /v1/external-api-keys/{provider}
Delete an external API key.
Response
{
"message": "External API key deleted successfully"
}

GET /v1/external-api-keys/mode
Get current API key mode.
Response
{
"api_key_mode": "byo_keys"
}

Modes:
- byo_keys: Use your own API keys (Bring Your Own Keys)
- platform: Use Lumnis AI platform keys (not currently supported in P0)
PATCH /v1/external-api-keys/mode
Update API key mode.
Request Body
{
"mode": "byo_keys"
}

Response
{
"api_key_mode": "byo_keys"
}

Tenant Info API
Read-only access to tenant information.
GET /v1/tenants/{tenantId}
Get tenant information. Users can only access their own tenant.
Response
{
"id": "tenant_123",
"name": "My Organization",
"created_at": "2025-01-01T00:00:00Z",
"updated_at": "2025-01-20T10:00:00Z"
}

Error Handling
All API errors follow a consistent format:
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Invalid request parameters",
"details": {
"field": "messages",
"reason": "Messages array cannot be empty"
}
}
}

Common Error Codes
| Code | HTTP Status | Description |
|---|---|---|
| VALIDATION_ERROR | 400 | Invalid request parameters |
| AUTHENTICATION_ERROR | 401 | Missing or invalid API key |
| ACCESS_DENIED | 403 | Insufficient permissions |
| NOT_FOUND | 404 | Resource not found |
| RATE_LIMIT_ERROR | 429 | Too many requests |
| INTERNAL_ERROR | 500 | Internal server error |
| SERVICE_UNAVAILABLE | 503 | Service temporarily unavailable |
Rate Limiting
API requests are rate limited to ensure fair usage:
- Default limit: 100 requests per minute per API key
- Response headers:
  - X-RateLimit-Limit: Maximum requests allowed
  - X-RateLimit-Remaining: Requests remaining
  - X-RateLimit-Reset: Unix timestamp when limit resets
When rate limited, you'll receive a 429 response with a Retry-After header.
Best Practices
1. Use Idempotency Keys
For critical operations, use idempotency keys to prevent duplicate processing:
curl -X POST https://api.lumnis.ai/v1/responses \
-H "Authorization: Bearer your-api-key" \
-H "Idempotency-Key: unique-request-id" \
-d '{...}'

2. Implement Exponential Backoff
Handle rate limits and transient errors with exponential backoff:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    let response = null
    try {
      response = await fetch(url, options)
    } catch (error) {
      // Network failure: retry unless this was the last attempt
      if (i === maxRetries - 1) throw error
    }
    if (response) {
      if (response.ok) return response
      // Do not retry client errors other than 429 (rate limiting)
      if (response.status !== 429 && response.status < 500) {
        throw new Error(`Request failed with status ${response.status}`)
      }
      if (i === maxRetries - 1) {
        throw new Error(`Request failed after ${maxRetries} attempts`)
      }
    }
    // Exponential backoff: wait 1s, 2s, 4s, ... between attempts
    await new Promise(resolve => setTimeout(resolve, Math.pow(2, i) * 1000))
  }
}

3. Use Long Polling
For real-time updates, use the wait parameter:
const response = await fetch(
'https://api.lumnis.ai/v1/responses/resp_123?wait=30',
{ headers: { 'Authorization': 'Bearer your-api-key' } }
)

4. Handle Async Processing
Poll for response completion:
async function waitForResponse(responseId, apiKey) {
while (true) {
const response = await fetch(
`https://api.lumnis.ai/v1/responses/${responseId}?wait=10`,
{ headers: { 'Authorization': `Bearer ${apiKey}` } }
)
const data = await response.json()
if (['succeeded', 'failed', 'cancelled'].includes(data.status)) {
return data
}
}
}

5. Optimize File Operations
- Upload files with appropriate scope (user vs tenant)
- Use tags for organization
- Wait for file processing to complete before relying on semantic search
- Use bulk operations for multiple files
6. User Identification
The API accepts both UUIDs and email addresses for user identification:
# Using UUID
curl https://api.lumnis.ai/v1/users/550e8400-e29b-41d4-a716-446655440000
# Using email
curl https://api.lumnis.ai/v1/users/user@example.com

SDK Support
Official SDKs are available for:
- Node.js/TypeScript: npm install lumnisai
- Python: pip install lumnisai
See Node.js SDK documentation and Python SDK documentation for details.
Support
- Documentation: https://docs.lumnis.ai
- Support Email: support@lumnis.ai