Overview
OpenOperator supports various model providers. Here’s how to configure and use the most popular ones.
Model Recommendations
We have yet to test performance across all models. In general, the more capable the model, the better the performance. We therefore recommend using any of openai/gpt-4o, deepseek/deepseek-chat, deepseek/deepseek-reasoner, gemini/gemini-2.0-flash-thinking-exp-01-21, gemini/gemini-2.0-flash-exp, or anthropic/claude-3-5-sonnet-20241022.
We also support local models via Ollama, such as ollama/qwen2.5, ollama/deepseek-r1, or ollama/llama3.3. Be aware that small models often return the wrong output structure, which leads to parsing errors. We expect local models to improve significantly this year.
All models require their respective API keys. Make sure to set them in your
environment variables before running the agent.
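If you keep your keys in a .env file, you can load them before running the agent. A minimal sketch using python-dotenv (an assumption; any way of setting environment variables works):

```python
from dotenv import load_dotenv

# Reads key=value pairs from a .env file in the current
# directory into the process environment.
load_dotenv()
```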
Supported Models
Here’s a non-exhaustive list of supported model providers. For the full list, see https://github.com/j0yk1ll/openoperator/blob/main/openoperator/llm.py
OpenAI
OpenAI’s GPT-4o models are recommended for best performance. Add your API key to your .env file.
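A minimal .env sketch, assuming OpenOperator reads the standard OPENAI_API_KEY variable:

```bash
# .env
OPENAI_API_KEY=
```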
Anthropic
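Add your Anthropic key to your .env file. A minimal sketch, assuming the standard ANTHROPIC_API_KEY variable:

```bash
# .env
ANTHROPIC_API_KEY=
```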
Azure OpenAI
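Azure OpenAI needs both an endpoint and a key. The variable names below are assumptions based on common conventions; verify them against llm.py:

```bash
# .env — variable names are assumptions, check llm.py
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=
```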
Groq
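Add your Groq key to your .env file. A minimal sketch, assuming the standard GROQ_API_KEY variable:

```bash
# .env
GROQ_API_KEY=
```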
Gemini
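Add your Gemini key to your .env file. A minimal sketch, assuming the GEMINI_API_KEY variable (some setups use GOOGLE_API_KEY instead; check llm.py):

```bash
# .env
GEMINI_API_KEY=
```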
DeepSeek-V3
The community likes DeepSeek-V3 for its low price, lack of rate limits, open-source nature, and good performance. The example is available here.
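A minimal .env sketch, assuming the standard DEEPSEEK_API_KEY variable:

```bash
# .env
DEEPSEEK_API_KEY=
```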
DeepSeek-R1
We support DeepSeek-R1. It is not fully tested yet; more functionality will be added over time, such as output of its reasoning content. The example is available here. It does not support vision. The model is open source, so you could also use it with Ollama, but we have not tested that.
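DeepSeek-R1 is served through the same DeepSeek API, so the same key applies (an assumption based on DeepSeek’s single-key API):

```bash
# .env
DEEPSEEK_API_KEY=
```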
Ollama
You can use Ollama to easily run local models. No API key is required.
- Download Ollama from here
- Run ollama pull model_name. Preferably pick a model which supports tool-calling from here
- Run ollama serve to start the local server
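For example, to pull one of the recommended local models and start the server:

```bash
# Pull a tool-calling capable model
ollama pull qwen2.5
# Start the local Ollama server
ollama serve
```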