This guide explains how to connect Large Language Models hosted on the Oracle Cloud Infrastructure (OCI) Generative AI service to OpenClaw, using a self-hosted LiteLLM instance as a proxy layer. There is no need to deploy a dedicated cluster: Oracle provides a range of ready-to-use, on-demand models.
Logical flow:
OpenClaw → LiteLLM → Oracle OCI Generative AI
OpenClaw never talks directly to OCI. It speaks the OpenAI-compatible API to LiteLLM, which translates and forwards requests to Oracle’s Generative AI service.

1. Architecture Overview
OpenClaw
OpenClaw acts as the orchestration layer. It handles agents, routing, tools, and model selection.
LiteLLM
LiteLLM works as an OpenAI-compatible proxy. It normalizes different providers behind a unified API.
Oracle OCI Generative AI
Oracle Cloud Infrastructure Generative AI hosts foundation models such as OpenAI, Cohere, Meta, and Grok variants, along with other enterprise models exposed through OCI endpoints.
The key idea: OpenClaw only knows how to talk “OpenAI style.” LiteLLM converts that into OCI API calls.
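To make the abstraction concrete, here is a minimal sketch of the kind of OpenAI-style request OpenClaw issues, written with the OpenAI Python SDK and pointed at the LiteLLM proxy. The host, key, and model id are the same placeholders used in the configuration later in this guide:

from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM proxy instead of api.openai.com.
client = OpenAI(
    base_url="http://LiteLLM_IPADDRESS:4000/v1",
    api_key="LITELLM_MASTERKEY",
)

response = client.chat.completions.create(
    model="oci/openai.gpt-oss-120b",  # the model id configured inside LiteLLM
    messages=[{"role": "user", "content": "Say hello from OCI."}],
    max_tokens=64,
)
print(response.choices[0].message.content)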
2. Prerequisites
Before configuring anything:
- An OCI tenancy.
- A deployed Generative AI endpoint in OCI.
- LiteLLM running as a proxy.
- LiteLLM requires a PostgreSQL database server to run.
- OpenClaw installed and working.
Obtaining OCI Connection Information
To configure LiteLLM to connect to OCI, you need several parameters: the tenancy OCID, user OCID, fingerprint, region, private key, and the compartment (or endpoint) OCID.
If you have previously installed and configured the OCI CLI on a VM, these values are already available in the CLI configuration files:
~/.oci/config
~/.oci/oci_api_key.pem
This makes it convenient to set up the LiteLLM endpoint without additional steps.
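Since ~/.oci/config is a plain INI file, you can also pull these values out programmatically. A minimal sketch, assuming the DEFAULT profile written by `oci setup config`:

import configparser
from pathlib import Path

# ~/.oci/config is an INI file created by the OCI CLI setup.
config = configparser.ConfigParser()
config.read(Path.home() / ".oci" / "config")

profile = config["DEFAULT"]
print("tenancy OCID:", profile["tenancy"])
print("user OCID:", profile["user"])
print("fingerprint:", profile["fingerprint"])
print("region:", profile["region"])
print("private key file:", profile["key_file"])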
If you haven’t used the OCI CLI, you can still obtain the necessary information from the Oracle documentation, which explains how to generate the required credentials and retrieve all the parameters needed for the LiteLLM configuration.
3. Configure OpenClaw to Use LiteLLM
OpenClaw must point to LiteLLM as if it were an OpenAI endpoint.
Add the following sections to openclaw.json, replacing LiteLLM_IPADDRESS with your proxy’s IP address and LITELLM_MASTERKEY with your LiteLLM master key.
"models": {
"mode": "merge",
"providers": {
"litellm": {
"baseUrl": "http://LiteLLM_IPADDRESS:4000/v1",
"apiKey": "LITELLM_MASTERKEY",
"api": "openai-completions",
"models": [
{
"id": "oci/openai.gpt-oss-120b",
"name": "OCI Llama 3.3 70B",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 131072,
"maxTokens": 4096
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "litellm/oci/openai.gpt-oss-120b"
},
"models": {
"litellm/oci/openai.gpt-oss-120b": {
"alias": "oci-llama"
}
}
}
}
Key points:
- baseUrl must point to your LiteLLM proxy.
- apiKey is the LiteLLM master key.
- The id must match the model identifier configured inside LiteLLM and connected to Oracle OCI.
- The agent’s primary model must reference the provider prefix litellm/.
At this stage OpenClaw talks only to LiteLLM. OCI is still invisible from its perspective.
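Before wiring agents to it, it is worth confirming that LiteLLM actually advertises the model id OpenClaw expects. A quick check against the proxy’s OpenAI-compatible /v1/models endpoint, using the same placeholder host and key as above:

from openai import OpenAI

client = OpenAI(
    base_url="http://LiteLLM_IPADDRESS:4000/v1",
    api_key="LITELLM_MASTERKEY",
)

# /v1/models lists every model entry configured in LiteLLM.
ids = [m.id for m in client.models.list().data]
print(ids)
assert "oci/openai.gpt-oss-120b" in ids, "model id not registered in LiteLLM"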

4. Configure LiteLLM for OCI
Inside LiteLLM, define the model entry that maps to the OCI endpoint.
You must also include these additional parameters to ensure compatibility with OpenClaw (copy and paste them into the parameters section when configuring the model endpoint):
{
  "use_litellm_proxy": true,
  "strip_tool_calls": true,
  "supports_tool_choice": false,
  "supports_function_calling": false,
  "supported_openai_params": [
    "stream",
    "max_tokens",
    "max_completion_tokens",
    "temperature",
    "frequency_penalty",
    "logprobs",
    "logit_bias",
    "n",
    "presence_penalty",
    "seed",
    "stop",
    "top_p",
    "response_format"
  ]
}
Why these settings matter:
- use_litellm_proxy ensures the request passes through the proxy logic.
- strip_tool_calls prevents tool-call payloads from being forwarded if the model does not support them.
- supports_function_calling and supports_tool_choice must reflect the OCI model’s capabilities.
- supported_openai_params whitelists the parameters that OCI accepts through LiteLLM.
Authentication parameters required in LiteLLM:
- tenancy OCID
- user OCID
- fingerprint
- private key content from oci_api_key.pem
- region
- compartment OCID
These are retrieved directly from the OCI CLI configuration and key files.
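If you want to sanity-check the OCI credentials outside the proxy, the LiteLLM Python SDK can call the same provider directly. A sketch, assuming the oci_* parameter names of LiteLLM’s OCI provider (verify the exact names against the LiteLLM documentation for your version; all OCID values are placeholders):

import litellm

# Assumed parameter names for LiteLLM's OCI provider -- check your version's
# docs. The values come from ~/.oci/config and the API key file.
response = litellm.completion(
    model="oci/openai.gpt-oss-120b",
    messages=[{"role": "user", "content": "ping"}],
    oci_user="ocid1.user.oc1..example",
    oci_tenancy="ocid1.tenancy.oc1..example",
    oci_fingerprint="12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef",
    oci_region="eu-frankfurt-1",
    oci_key_file="~/.oci/oci_api_key.pem",
    oci_compartment_id="ocid1.compartment.oc1..example",
)
print(response.choices[0].message.content)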
5. End-to-End Flow
Once configured:
- The OpenClaw agent selects litellm/oci/openai.gpt-oss-120b.
- OpenClaw sends an OpenAI-compatible request to LiteLLM.
- LiteLLM authenticates with OCI.
- OCI Generative AI processes the request.
- Response flows back through LiteLLM to OpenClaw.
From OpenClaw’s perspective, it is just another OpenAI-compatible model. Clean abstraction. No custom code inside OpenClaw.
6. Why Use This Architecture
Advantages:
- Centralized model routing in LiteLLM.
- Ability to mix OCI, OpenAI, Anthropic, or others behind one endpoint.
- No direct OCI coupling inside OpenClaw.
- Easier model switching and failover.
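The last point can be made concrete with LiteLLM’s Router, which accepts a fallback chain so a failed call can retry against another provider. A minimal sketch; the backup model is illustrative, not part of this guide’s setup, and provider credentials (litellm_params or environment variables) are omitted:

from litellm import Router

router = Router(
    model_list=[
        {"model_name": "primary",
         "litellm_params": {"model": "oci/openai.gpt-oss-120b"}},
        {"model_name": "backup",
         "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20240620"}},
    ],
    # If a call to "primary" fails, retry the same request against "backup".
    fallbacks=[{"primary": ["backup"]}],
)

response = router.completion(
    model="primary",
    messages=[{"role": "user", "content": "ping"}],
)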
In practice, LiteLLM becomes your LLM control plane. OCI becomes a backend. OpenClaw remains the orchestrator.
Enjoy!