Connect to any service that implements the OpenAI API specification.
Configuration
Set these environment variables:
- `OPENAI_LIKE_API_BASE_URL` - the base URL of your API endpoint
- `OPENAI_LIKE_API_KEY` - the authentication token
- `OPENAI_LIKE_API_MODELS` (optional) - a manual model list in the format `model1:limit;model2:limit`
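When auto-discovery is unavailable, the manual model list above has to be parsed by hand. A minimal Python sketch of that parsing (the `parse_models` helper and the fallback value are illustrative, not part of any official SDK; the exact meaning of `limit` depends on your deployment):

```python
import os

def parse_models(spec: str) -> dict[str, int]:
    """Parse a 'model1:limit;model2:limit' string into {model: limit}."""
    models = {}
    for entry in spec.split(";"):
        entry = entry.strip()
        if not entry:
            continue  # tolerate trailing or doubled semicolons
        name, _, limit = entry.partition(":")
        # Treat a missing or non-numeric limit as 0 (unknown).
        models[name] = int(limit) if limit.isdigit() else 0
    return models

# Example: read the variable, with an illustrative fallback for local testing.
spec = os.environ.get("OPENAI_LIKE_API_MODELS", "llama3:8192;mistral:32768")
print(parse_models(spec))
```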
Setup
- Identify your OpenAI-compatible API endpoint
- Obtain the API key or authentication token
- Set environment variables in your deployment
- Test the connection
- Configure available models
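The "test the connection" step usually amounts to listing models. A hedged sketch that builds, but does not send, the standard `GET /models` request using only the standard library (the `build_models_request` helper and the default values are illustrative):

```python
import os
import urllib.request

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET request for the OpenAI-style /models route."""
    url = base_url.rstrip("/") + "/models"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request(
    os.environ.get("OPENAI_LIKE_API_BASE_URL", "http://localhost:1234/v1"),
    os.environ.get("OPENAI_LIKE_API_KEY", "sk-test"),
)
# To actually exercise the connection, send it:
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read())
```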
Compatible Services
Local AI Tools:
- LM Studio
- Ollama
- LocalAI
- Text Generation WebUI
Cloud Alternatives:
- Together AI
- Replicate
- Modal
- Custom deployments
Self-Hosted:
- vLLM
- TGI (Text Generation Inference)
- FastChat
- Custom model servers
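Each of these tools exposes its OpenAI-compatible route at a different default address. The values below are the commonly documented defaults; verify them against your own installation, since ports and paths are configurable in every tool:

```python
# Commonly documented default OpenAI-compatible base URLs (assumptions to
# verify locally; each tool lets you change its port and path).
DEFAULT_BASE_URLS = {
    "lm-studio": "http://localhost:1234/v1",
    "ollama": "http://localhost:11434/v1",
    "localai": "http://localhost:8080/v1",
    "vllm": "http://localhost:8000/v1",
}

# Example: pick one as the value for OPENAI_LIKE_API_BASE_URL.
print(DEFAULT_BASE_URLS["ollama"])
```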
Use Cases
- Self-hosted models and services
- Alternative AI providers
- Enterprise private deployments
- Development and testing environments
Notes
- Verify that the target API implements the OpenAI specification correctly
- Use HTTPS to keep credentials and prompts secure in transit
- Test with a simple request before configuring models
- Fall back to manual model configuration (`OPENAI_LIKE_API_MODELS`) if auto-discovery fails
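"Test with simple requests first" can be made concrete by checking that a response carries the core fields the OpenAI chat-completion format defines. A minimal sketch (the `looks_openai_compatible` helper and the sample payloads are illustrative):

```python
def looks_openai_compatible(response: dict) -> bool:
    """Check a chat-completion response for the core OpenAI-format fields."""
    choices = response.get("choices")
    if not isinstance(choices, list) or not choices:
        return False
    message = choices[0].get("message", {})
    return "content" in message

# A well-formed response passes; a bare error payload does not.
ok = {"choices": [{"message": {"role": "assistant", "content": "hi"}}]}
bad = {"error": {"message": "unauthorized"}}
print(looks_openai_compatible(ok), looks_openai_compatible(bad))
```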