# Choose your model
Viv can be powered by the language model of your choice. The Silogy team has tested models extensively to identify the most capable options for design verification, and those are the recommended defaults listed below. Most teams can and should use the suggested defaults. vLLM users, however, must choose a model explicitly, because Viv cannot know which models a given vLLM server provides.
## How to update the model
The models used by Viv are set in the config file at `$HOME/.viv/config.json`. The Viv debugging agent and the chat functionality use the `llm.models.agent` and `llm.models.chat` fields, respectively. For example:
```json
{
  "llm": {
    // ... other fields ...
    "provider": "vllm",
    "models": {
      "agent": "meta-llama/llama-4-maverick",
      "chat": "meta-llama/llama-4-maverick"
    }
  }
}
```
After you change the model in this file and save it, the new choice takes effect in subsequent Viv runs. This file is fully documented in the configuration reference.
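If you prefer to script the change rather than edit the file by hand, a minimal Python sketch might look like the following. It only manipulates the config as a dictionary; in practice you would load `$HOME/.viv/config.json` with `json.loads` and write it back afterward (assuming your copy is plain JSON without comments). The model ID shown is just an illustrative value.

```python
import json

def set_viv_models(config: dict, agent: str, chat: str) -> dict:
    """Set llm.models.agent and llm.models.chat, creating nesting as needed."""
    models = config.setdefault("llm", {}).setdefault("models", {})
    models["agent"] = agent
    models["chat"] = chat
    return config

# Example: point both the debugging agent and chat at the same model.
config = {"llm": {"provider": "claude"}}
set_viv_models(config, "claude-sonnet-4-5-20250929", "claude-sonnet-4-5-20250929")
print(json.dumps(config, indent=2))
```

Because `setdefault` preserves any fields already present, running this against an existing config only touches the two model fields.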
## Default models by provider
All information is current as of Viv version 0.0.26.
| Provider | List of available models | Default agent model if not specified | Default chat model if not specified |
|---|---|---|---|
| `openai` | All snapshots and aliases listed at https://platform.openai.com/docs/models that support the Responses API. For example: `gpt-5.2` or `gpt-5.2-2025-12-11`. | `gpt-5.2` | `gpt-5.2` |
| `claude` | All model API IDs and aliases listed at https://platform.claude.com/docs/en/about-claude/models/overview. For example: `claude-opus-4-5-20251101` or `claude-sonnet-4-5`. | `claude-sonnet-4-5-20250929` | `claude-sonnet-4-5-20250929` |
| `gemini` | All non-image model codes listed at https://ai.google.dev/gemini-api/docs/models. For example: `gemini-2.5-flash-lite` or `gemini-3-pro-preview`. | `gemini-3-pro-preview` | `gemini-3-pro-preview` |
| `vllm` | Depends on which server you point Viv at via `base_url`. The model name is sent as the `model` key in the JSON payload to the chat completions API. For example, if you started the server with `vllm serve NousResearch/Meta-Llama-3-8B-Instruct`, use `NousResearch/Meta-Llama-3-8B-Instruct`. | N/A | N/A |
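To make the `vllm` row concrete, here is a sketch of the kind of request Viv would send to an OpenAI-compatible vLLM server, showing where the model name lands in the payload. The `base_url` and message content are hypothetical placeholders; only the payload shape (the `model` and `messages` keys) is the point.

```python
import json

# Hypothetical values: match these to your own vLLM server.
base_url = "http://localhost:8000/v1"  # the base_url you set in Viv's config
model = "NousResearch/Meta-Llama-3-8B-Instruct"  # must match `vllm serve <model>`

# vLLM's OpenAI-compatible chat completions endpoint expects the model name
# in the "model" key of the JSON body.
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
}
print(f"POST {base_url}/chat/completions")
print(json.dumps(payload, indent=2))
```

If the `model` value does not match a model the server is actually serving, the server rejects the request, which is why the `vllm` provider has no usable default.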