Fireworks AI provides optimized inference with up to 4x faster performance than alternative providers. Website: https://fireworks.ai/

Getting an API Key

  1. Go to Fireworks AI and sign in
  2. Navigate to API Keys in your dashboard
  3. Create a new API key and name it (e.g., “CodinIT”)
  4. Copy the key immediately

Configuration

  1. Click the settings icon (⚙️) in CodinIT
  2. Select “Fireworks” as the API Provider
  3. Paste your API key
  4. Enter the model ID (e.g., “accounts/fireworks/models/llama-v3p1-70b-instruct”)
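Before relying on the key inside CodinIT, it can help to confirm that the key and model ID work with a direct request to Fireworks' OpenAI-compatible chat completions API. The following is a minimal sketch: the base URL https://api.fireworks.ai/inference/v1 and the FIREWORKS_API_KEY environment variable name are assumptions, so check the Fireworks documentation for the current values.

```python
import os
import requests

# Assumed OpenAI-compatible base URL for Fireworks; verify against the Fireworks docs.
BASE_URL = "https://api.fireworks.ai/inference/v1"
API_KEY = os.environ["FIREWORKS_API_KEY"]  # assumed env var; holds the key created above

# Model ID from step 4 of the Configuration section.
MODEL_ID = "accounts/fireworks/models/llama-v3p1-70b-instruct"

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
        "max_tokens": 8,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If this prints a short reply, the same key and model ID can be pasted into CodinIT's settings as described above.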

Supported Models

  • Llama 3.1 series (8B, 70B, 405B)
  • Mixtral 8x7B and 8x22B
  • Qwen 2.5 series
  • DeepSeek models
  • Code Llama models
  • Vision models (Llama 3.2, Qwen 2-VL)
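Each of these families corresponds to a fully qualified model ID of the form accounts/fireworks/models/<model-name>, which is what CodinIT expects in the model ID field. The mapping below is illustrative only; the exact names are assumptions and should be confirmed in the Fireworks model library.

```python
# Illustrative model IDs (assumed; confirm exact names in the Fireworks model library).
FIREWORKS_MODELS = {
    "Llama 3.1 70B Instruct": "accounts/fireworks/models/llama-v3p1-70b-instruct",
    "Llama 3.1 405B Instruct": "accounts/fireworks/models/llama-v3p1-405b-instruct",
    "Mixtral 8x22B Instruct": "accounts/fireworks/models/mixtral-8x22b-instruct",
    "Qwen 2.5 72B Instruct": "accounts/fireworks/models/qwen2p5-72b-instruct",
}

# The full ID is what goes into the model ID field in CodinIT's settings.
print(FIREWORKS_MODELS["Llama 3.1 70B Instruct"])
```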

Key Features

  • Ultra-fast inference: Up to 4x faster than alternatives
  • Custom optimizations: Advanced kernels for maximum performance
  • 40+ models: Wide selection of optimized models
  • Fine-tuning: Available for custom models
  • OpenAI compatible: Standard API format
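Because the API follows the standard OpenAI format, existing OpenAI client libraries can usually be pointed at Fireworks by overriding the base URL. The sketch below uses the official openai Python package; the base URL and environment variable name are the same assumptions as in the earlier example.

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at Fireworks' OpenAI-compatible endpoint (assumed URL).
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],  # assumed env var name
)

completion = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-70b-instruct",
    messages=[{"role": "user", "content": "Explain what an OpenAI-compatible API means."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```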

Notes

  • Pricing: Usage-based; see the Fireworks pricing page for current rates
  • Compliance: HIPAA and SOC 2 Type II certified