# Usage with Codex

This guide walks through configuring Codex to use ekai-gateway for unified multi-provider access and detailed usage analytics.
## Why Use the Gateway

- Single endpoint for OpenAI, Anthropic, xAI, and OpenRouter
- Consistent chat completions API surface
- Centralized usage and cost tracking at http://localhost:3000

## Prerequisites

- API keys for any providers you plan to use
## Install and Run the Gateway

```shell
git clone https://github.com/ekailabs/ekai-gateway.git
cd ekai-gateway
npm install
npm run dev
```

Copy and edit `.env` with your provider keys:
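A minimal `.env` sketch. The variable names below are assumptions for illustration; confirm the exact keys against the ekai-gateway repository's documentation:

```shell
# Hypothetical variable names -- check the ekai-gateway docs for the real ones.
# Set only the providers you plan to route through the gateway.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
XAI_API_KEY=xai-...
OPENROUTER_API_KEY=sk-or-...
```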
## Option A: Quick Start via Environment
Point Codex to the gateway’s OpenAI‑compatible endpoint:
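A sketch of the environment setup. `OPENAI_BASE_URL` is the variable referenced in the Troubleshooting section below; the `/v1` path is an assumption based on the gateway's OpenAI-compatible surface:

```shell
# Assumed: the gateway serves an OpenAI-compatible API on localhost:3000
export OPENAI_BASE_URL="http://localhost:3000/v1"
```

Then launch `codex` as usual from the same shell.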
Use --model to pick a specific model routed by the gateway:
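For example (the model name is illustrative; use any model the gateway routes):

```shell
codex --model gpt-4o
```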
## Option B: Codex `config.toml` (Recommended)

Set `model_provider = "ekai"` and define an `ekai` provider pointing to the gateway's chat API in `$CODEX_HOME/config.toml` (defaults to `~/.codex/config.toml`):
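A sketch of the provider entry, following Codex's `model_providers` table format; the port and `/v1` path assume the gateway's defaults shown above, and the model name is illustrative:

```toml
model = "gpt-4o"                  # illustrative; any model the gateway routes
model_provider = "ekai"

[model_providers.ekai]
name = "ekai-gateway"
base_url = "http://localhost:3000/v1"
wire_api = "chat"                 # use the chat completions wire format
```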
Run Codex with your desired model(s):
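For example, with the provider above configured (model name and prompt are illustrative):

```shell
codex --model gpt-4o "summarize this repository"
```

Omitting `--model` falls back to the `model` set in `config.toml`.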
## View Usage

- Open http://localhost:3000 to view token usage, spend, and trends.
- Filter by provider or model to compare costs.
## Troubleshooting

- **401/403 errors**: ensure the corresponding provider API key is set in `.env` and has access to the selected model.
- **404 / model not found**: confirm the model name is supported by the gateway and correctly spelled.
- **Network errors**: verify `npm run dev` is still running and that `OPENAI_BASE_URL`/`base_url` points to the correct port.
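To quickly confirm the gateway is reachable, you can query its model list endpoint (the `/v1/models` path is an assumption based on the OpenAI-compatible API surface):

```shell
curl http://localhost:3000/v1/models
```

A JSON response indicates the server is up; a connection error means `npm run dev` is not running or the port is wrong.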
Last updated 4 months ago