## What you need

To connect Coreply to a provider, you need three things:

- API URL — the base endpoint for the provider’s inference API
- API Key — your authentication credential from the provider
- Model Name — the identifier of the model you want to use
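These three values slot into a standard OpenAI-style chat-completions request. The sketch below shows where each one goes; the helper name and the placeholder values are illustrative, not part of Coreply itself.

```python
import json

def build_chat_request(api_url: str, api_key: str, model: str, messages: list) -> tuple:
    """Assemble the URL, headers, and JSON body for an
    OpenAI-compatible /chat/completions call."""
    url = api_url.rstrip("/") + "/chat/completions"   # API URL + endpoint path
    headers = {
        "Authorization": f"Bearer {api_key}",         # API Key
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model,                # Model Name
                       "messages": messages})
    return url, headers, body

# Example with placeholder values:
url, headers, body = build_chat_request(
    api_url="https://api.openai.com/v1",
    api_key="sk-...",
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hi"}],
)
print(url)  # https://api.openai.com/v1/chat/completions
```

Any provider that accepts a request shaped like this should work with Coreply's provider settings.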
## Choosing a model
Model size has a meaningful impact on output quality with Coreply. Smaller models under 7 billion parameters often struggle to follow Coreply’s output format, which can result in poor or unusable suggestions. Models at 7B parameters and above generally work well.

Reasoning models — such as GPT-5, Gemini 2.5, and Claude 3.7 and above — are technically compatible, but they introduce extra latency before each suggestion appears. For a real-time texting experience, non-reasoning models are a better fit. Some providers, like OpenRouter, disable reasoning by default, which can make these models usable.

## Supported model families
The following model families have been tested and confirmed to work with Coreply’s prompt format:

| Model | Supported |
|---|---|
| Claude 3 & 4 Family | ✅ |
| Llama 3 & 4 Family | ✅ |
| Gemma 2 & 3 Family | ✅ |
| OpenAI GPT Family | ✅ |
| Google Gemini Family | ✅ |
## Provider setup guides
### Google AI Studio

Use Gemini models via the free AI Studio API.

### Groq

Fast inference on open-source models with a free tier.

### OpenRouter

Access hundreds of models from a single API key.

### OpenAI

Use GPT-4.1 and GPT-4o models directly from OpenAI.

### Mistral

Use Mistral models, including FIM mode with Codestral.

### Custom provider

Any OpenAI-compatible endpoint works. Find the API URL, API Key, and Model Name in your provider’s docs.
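One common stumbling block with custom endpoints is the trailing slash: some providers document their base URL with one, some without. A small normalization step, sketched below with a hypothetical helper (the Groq URL is used purely as an example of an OpenAI-compatible base URL), keeps the appended endpoint path well-formed either way:

```python
def normalize_base_url(api_url: str) -> str:
    """Strip any trailing slash so endpoint paths can be
    appended uniformly to the configured API URL."""
    return api_url.rstrip("/")

# Both spellings resolve to the same endpoint:
for base in ("https://api.groq.com/openai/v1", "https://api.groq.com/openai/v1/"):
    print(normalize_base_url(base) + "/chat/completions")
```

If requests fail with 404 errors, check your provider’s docs for whether the base URL should already include a version segment such as `/v1`.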
