Usage with Codex

This guide walks through configuring Codex to use ekai-gateway for unified multi‑provider access and detailed usage analytics.

Why Use the Gateway

  • Single endpoint for OpenAI, Anthropic, xAI, and OpenRouter

  • Consistent chat completions API surface

  • Centralized usage and cost tracking at http://localhost:3000

Prerequisites

  • Node.js 18+ and npm

  • Codex installed

  • API keys for any providers you plan to use

Install and Run the Gateway

```shell
git clone https://github.com/ekailabs/ekai-gateway.git
cd ekai-gateway
npm install
npm run dev
```

Configure Environment

Copy the example environment file to .env and fill in the API keys for the providers you plan to use:
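A minimal sketch of what .env might look like. The variable names below are assumptions based on the supported providers; check the repository's example environment file for the exact names it expects.

```shell
# Provider API keys — set only the ones you use.
# The values below are placeholders, not real keys.
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
XAI_API_KEY=your-xai-key
OPENROUTER_API_KEY=your-openrouter-key
```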

Option A: Quick Start via Environment

Point Codex to the gateway’s OpenAI‑compatible endpoint:
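A sketch, assuming the gateway serves its OpenAI-compatible API at http://localhost:3000/v1 — adjust the port to match what `npm run dev` reports:

```shell
# Route Codex's OpenAI traffic through the gateway (port is an assumption).
export OPENAI_BASE_URL="http://localhost:3000/v1"
# Codex expects an API key to be set; the gateway holds the real provider
# keys in .env, so this value may not need to be a live key.
export OPENAI_API_KEY="placeholder"
```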

Use --model to pick a specific model routed by the gateway:
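For instance (the model name here is illustrative — substitute any model your gateway is configured to route):

```shell
codex --model gpt-4o "summarize the open TODOs in this repo"
```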

Option B: Configure via config.toml

Set model_provider = "ekai" and define an ekai provider pointing to the gateway's chat API.

$CODEX_HOME/config.toml (defaults to ~/.codex/config.toml):
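A sketch of the relevant entries, assuming the gateway's OpenAI-compatible endpoint is on port 3000 — match base_url to your gateway:

```toml
# Route all Codex requests through the provider defined below.
model_provider = "ekai"

[model_providers.ekai]
name = "ekai"
# OpenAI-compatible chat endpoint exposed by the gateway (port assumed).
base_url = "http://localhost:3000/v1"
wire_api = "chat"
```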

Run Codex with your desired model(s):
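With model_provider = "ekai" set in config.toml, a plain invocation uses the gateway; the model name below is illustrative:

```shell
# Uses the ekai provider's default routing.
codex
# Override the model for a single run if needed.
codex --model gpt-4o
```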

Monitor Usage

  • Open http://localhost:3000 to view token usage, spend, and trends.

  • Filter by provider/model to compare costs.

Troubleshooting

  • 401/403 errors: ensure the corresponding provider API key is set in .env and has access to the selected model.

  • 404/Model not found: confirm the model name is supported and correctly spelled.

  • Network errors: verify the gateway is running (npm run dev) and that OPENAI_BASE_URL (or base_url in config.toml) points to the correct host and port.
