Connect to any service that implements the OpenAI API specification.

Configuration

Set these environment variables:
  • OPENAI_LIKE_API_BASE_URL - the base URL of your OpenAI-compatible endpoint
  • OPENAI_LIKE_API_KEY - the API key or authentication token
  • OPENAI_LIKE_API_MODELS (optional) - a manual model list in the format model1:limit;model2:limit, for when auto-discovery is unavailable
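
A minimal sketch of the configuration above; all values are placeholders (the localhost URL, key, model names, and limits are examples, not defaults):

```shell
# Example values only -- substitute your own endpoint and credentials.
export OPENAI_LIKE_API_BASE_URL="http://localhost:1234/v1"
export OPENAI_LIKE_API_KEY="sk-example-key"

# Optional: declare models manually in the "name:limit" format,
# separated by semicolons (used when auto-discovery is unavailable).
export OPENAI_LIKE_API_MODELS="llama-3-8b:8192;mistral-7b:32768"

echo "$OPENAI_LIKE_API_MODELS"
```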

Setup

  1. Identify your OpenAI-compatible API endpoint
  2. Obtain the API key or authentication token
  3. Set environment variables in your deployment
  4. Test the connection
  5. Configure available models
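
Step 4 can be sketched with curl against the model-listing endpoint that the OpenAI specification defines (GET {base_url}/models with a Bearer token). The URL and key below are placeholders; the command is echoed rather than executed so you can inspect it first:

```shell
# Fall back to placeholder values if the variables are unset.
BASE_URL="${OPENAI_LIKE_API_BASE_URL:-http://localhost:1234/v1}"
API_KEY="${OPENAI_LIKE_API_KEY:-sk-example-key}"

# Print the connectivity check; drop the leading 'echo' to actually
# send it. A working endpoint returns a JSON list of available models.
echo curl -sS "$BASE_URL/models" -H "Authorization: Bearer $API_KEY"
```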

Compatible Services

Local AI Tools:
  • LM Studio
  • Ollama
  • LocalAI
  • Text Generation WebUI
Cloud Alternatives:
  • Together AI
  • Replicate
  • Modal
  • Custom deployments
Self-Hosted:
  • vLLM
  • TGI (Text Generation Inference)
  • FastChat
  • Custom model servers

Use Cases

  • Self-hosted models and services
  • Alternative AI providers
  • Enterprise private deployments
  • Development and testing environments

Notes

  • Verify that the API implements the OpenAI specification correctly
  • Use HTTPS so credentials and traffic stay encrypted
  • Test with a simple request before wiring up your full workload
  • Fall back to manual model configuration (OPENAI_LIKE_API_MODELS) if auto-discovery fails
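
If auto-discovery fails, the manual model list can be split into per-model entries. A sketch of parsing the "name:limit" format (model names and limits here are illustrative):

```shell
# Semicolon-separated entries, each in "name:limit" form.
OPENAI_LIKE_API_MODELS="llama-3-8b:8192;mistral-7b:32768"

IFS=';'
for entry in $OPENAI_LIKE_API_MODELS; do
  name="${entry%%:*}"    # text before the first colon
  limit="${entry##*:}"   # text after the last colon
  echo "model=$name limit=$limit"
done
unset IFS
```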