Enterprise & research AI coding models
Anthropic Claude AI
Claude 3.5 Sonnet LLM with advanced reasoning for AI code generation
OpenAI GPT models
GPT-4o and o1 series AI models for intelligent code completion
Google Gemini AI
Gemini 2.0 Flash LLM via GCP Vertex AI for AI-powered development
DeepSeek AI coding
DeepSeek V3 advanced reasoning models for cost-effective AI code generation
Fast LLM inference & specialized AI coding
Groq fast AI inference
Ultra-fast LPU inference for real-time AI code completion
Together AI models
50+ open-source LLMs for flexible AI code generation
Hyperbolic AI
Optimized open-source LLM inference for AI development
Perplexity AI search
AI coding with integrated web search for context-aware development
xAI Grok models
Grok LLMs with large context windows for AI code generation
Fireworks AI
Fast LLM inference with 40+ AI models for code generation
Open-source AI models & community LLMs
Cohere AI models
Command R series LLMs for AI code generation and development
HuggingFace AI hub
Thousands of open-source community AI models for code generation
Mistral AI coding
Mistral and Codestral LLMs specialized for AI-powered development
Moonshot AI
Kimi series LLMs with Chinese language support for AI coding
Unified AI gateways & model routing
OpenRouter
Unified API that routes requests to many LLMs across providers for flexible AI code generation
Cloud & enterprise AI platforms
AWS Bedrock
Managed access to foundation models on AWS for secure, enterprise-grade AI development
Local AI models & private LLM inference
Ollama local AI
Run open-source LLMs locally with Ollama for private AI code generation
LM Studio local models
Desktop app for running local AI models with private code generation
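Both Ollama and LM Studio expose an OpenAI-compatible HTTP API on localhost (Ollama on port 11434, LM Studio's local server on port 1234 by default), so you can smoke-test a local model before pointing CodinIT at it. A minimal sketch, assuming Node 18+ and that you have already pulled a model such as llama3.1 in Ollama; the model name is only an example:

```typescript
// Smoke-test a local model over its OpenAI-compatible endpoint (Node 18+).
// Assumes Ollama is running and a model has been pulled; for LM Studio's
// local server, swap the base URL to http://localhost:1234/v1.
const BASE_URL = "http://localhost:11434/v1"; // Ollama's OpenAI-compatible API
const MODEL = "llama3.1";                     // example: whichever model you pulled locally

async function main(): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: "Write a TypeScript function that reverses a string." }],
    }),
  });
  if (!res.ok) throw new Error(`Local server returned ${res.status}: ${await res.text()}`);

  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  console.log(data.choices[0].message.content); // the generated completion
}

main().catch(console.error);
```

If this prints a completion, the same base URL and model name should work when configured in CodinIT.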
Choosing an AI coding provider
AI performance & speed
- Ultra-fast LLM inference: Groq, Together AI, Fireworks for real-time code completion
- Best AI reasoning: Anthropic Claude, DeepSeek, OpenAI o1 for complex code generation
- Balanced AI models: OpenAI GPT-4, Google Gemini, Cohere for general development
AI cost & budget
- Free/low-cost AI: Local models (Ollama, LM Studio), OpenRouter for budget development
- Budget-friendly LLMs: Together AI, HuggingFace, Hyperbolic for cost-effective coding
- Premium AI models: Anthropic Claude, OpenAI GPT-4, Google Gemini for enterprise
AI privacy & security
- Maximum privacy: Local LLMs (Ollama, LM Studio) for private code generation
- Enterprise-grade AI: AWS Bedrock, Anthropic for secure development
- Cloud AI security: OpenAI, Google, Cohere with data protection
AI capabilities & features
- Code generation: all LLM providers; specialized code models from Cohere and Together AI
- Multimodal AI: Google Gemini, OpenAI GPT-4 Vision, Moonshot for visual coding
- Long-context LLMs: Claude (200K+), Gemini (1M+), GPT-4 (128K) for large codebases
- AI search integration: Perplexity for context-aware code generation
- Multilingual AI: Cohere, Google, Moonshot (Chinese) for international development
Quick Start
1. Choose Provider
Select based on your needs: speed, cost, capabilities, or privacy
2. Get Credentials
Sign up and get API keys (cloud) or install software (local); see the key-check sketch after these steps
3. Configure CodinIT
Add credentials in CodinIT settings
4. Select Model
Choose from available models
5. Start Building
Begin using AI in your workflow
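Before adding a cloud key to CodinIT settings (step 2 above), it can save time to confirm the key works with a one-off request. A minimal sketch for OpenAI, assuming the key is exported as the OPENAI_API_KEY environment variable; other providers document analogous endpoints for the same check:

```typescript
// Quick validity check for an OpenAI API key (Node 18+, no SDK required).
// GET /v1/models succeeds only with a valid key and lists the models it can access.
async function checkKey(): Promise<void> {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) throw new Error("Set OPENAI_API_KEY before running this check.");

  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });

  if (res.status === 401) {
    console.error("Key rejected: make sure it is active and copied correctly.");
  } else if (!res.ok) {
    console.error(`Unexpected response: ${res.status} ${await res.text()}`);
  } else {
    const { data } = (await res.json()) as { data: { id: string }[] };
    console.log(`Key OK. ${data.length} models available, e.g. ${data[0].id}`);
  }
}

checkKey().catch(console.error);
```

A 401 here means the same key will also fail inside CodinIT.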
Notes
- Multi-provider: Configure multiple providers and switch between them (see the sketch after these notes)
- API security: Keys stored locally, never transmitted to CodinIT servers
- Rate limits: Each provider enforces its own request and token limits; check the provider's documentation
- Local vs Cloud: Local offers privacy but requires hardware; cloud offers convenience and advanced features
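Many of the providers above (Groq, Together AI, Fireworks, xAI, Ollama, LM Studio, and others) expose OpenAI-compatible chat-completion endpoints, so switching providers often comes down to swapping a base URL, API key, and model name. The sketch below illustrates that idea; it is not CodinIT's actual implementation, and the model names are placeholders you would replace with models your account can access:

```typescript
// Sketch: switch between OpenAI-compatible providers by configuration alone.
// Base URLs are the providers' OpenAI-compatible endpoints; the model names
// are examples only, replace them with models you can access.
type ProviderConfig = {
  baseUrl: string;
  apiKeyEnv: string; // env var holding the key; empty string for local servers
  model: string;
};

const PROVIDERS: Record<string, ProviderConfig> = {
  groq:     { baseUrl: "https://api.groq.com/openai/v1", apiKeyEnv: "GROQ_API_KEY",     model: "llama-3.1-8b-instant" },
  together: { baseUrl: "https://api.together.xyz/v1",    apiKeyEnv: "TOGETHER_API_KEY", model: "meta-llama/Llama-3-8b-chat-hf" },
  ollama:   { baseUrl: "http://localhost:11434/v1",      apiKeyEnv: "",                 model: "llama3.1" },
};

async function complete(provider: string, prompt: string): Promise<string> {
  const cfg = PROVIDERS[provider];
  if (!cfg) throw new Error(`Unknown provider: ${provider}`);

  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (cfg.apiKeyEnv) headers["Authorization"] = `Bearer ${process.env[cfg.apiKeyEnv]}`;

  const res = await fetch(`${cfg.baseUrl}/chat/completions`, {
    method: "POST",
    headers,
    body: JSON.stringify({ model: cfg.model, messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) throw new Error(`${provider} returned ${res.status}: ${await res.text()}`);

  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}

// Same call, different backend: only the configuration changes.
complete("ollama", "Explain what an API rate limit is in one sentence.").then(console.log);
```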
