Setup
- Install Ollama: Download from ollama.com and install
- Start Ollama: Run `ollama serve` in a terminal
- Download a model: Run `ollama pull <model-name>` (see Recommended Models below)
- Configure context window: Increase the model's context length via Ollama's `num_ctx` parameter, since the default is usually too small for coding tasks
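The context-window step above can be sketched with an Ollama Modelfile. The base model here is one of the recommendations below, and `num_ctx` is Ollama's context-length parameter; the exact value you choose (32,768 in this sketch) depends on your hardware:

```
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
```

Build it with `ollama create qwen2.5-coder-32k -f Modelfile` (the name `qwen2.5-coder-32k` is just an example), then use that model name in CodinIT.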
Configuration in CodinIT
- Click the settings icon (⚙️) in CodinIT
- Select “ollama” as the API Provider
- Enter the name of the model you downloaded
- (Optional) Set the base URL if not using the default http://localhost:11434
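Under the hood, requests against that base URL go to Ollama's REST API. As a minimal sketch (the helper name `build_chat_request` is hypothetical; the field names follow Ollama's `/api/chat` endpoint), this is roughly the request body a client sends:

```python
import json

def build_chat_request(model, prompt, num_ctx=32000):
    """Build the JSON body for a POST to http://localhost:11434/api/chat."""
    return {
        "model": model,  # e.g. "qwen2.5-coder:32b"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a stream
        "options": {"num_ctx": num_ctx},  # per-request context window, in tokens
    }

payload = build_chat_request("qwen2.5-coder:32b", "Write a hello-world in Go")
print(json.dumps(payload, indent=2))
```

Note that `num_ctx` can also be set per request in `options`, which is an alternative to baking it into a Modelfile.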
Recommended Models
- `qwen2.5-coder:32b` - Excellent for coding
- `codellama:34b-code` - High quality, large size
- `deepseek-coder:6.7b-base` - Effective for coding
- `llama3:8b-instruct-q5_1` - General tasks
Notes
- Context window: Minimum 12,000 tokens recommended, 32,000 ideal
- Resource demands: Large models require significant system resources
- Offline capability: Works without internet after model download
- Performance: May be slow on average hardware
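To make the resource-demands note concrete, a common rule of thumb (an assumption, not an official Ollama figure) is that model weights alone need roughly `parameters × bits-per-weight ÷ 8` bytes, with runtime overhead on top:

```python
def approx_weight_memory_gb(params_billion, bits_per_weight=4):
    """Rough memory needed for model weights alone, in GB.

    Rule-of-thumb estimate only: params (in billions) times bits per
    weight, divided by 8 bits per byte. KV cache and runtime overhead
    add several more GB in practice.
    """
    return params_billion * bits_per_weight / 8

# A 32B model at 4-bit quantization: ~16 GB just for weights.
print(approx_weight_memory_gb(32, 4))
# A 6.7B model at 8-bit: ~6.7 GB.
print(approx_weight_memory_gb(6.7, 8))
```

This is why the 32B and 34B models above are a poor fit for machines with 16 GB of RAM or less, while the ~7B models are far more forgiving.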
