SheLLM

Unified Infrastructure

Most LLM gateways assume you're paying per-token via API key. SheLLM's primary use case is the opposite: you already have CLI subscriptions (Claude Max, Gemini AI Plus, OpenAI Enterprise) and want to expose them as a regular HTTP API to your apps — without burning API quota.

Supported Backends

Native Connectivity

Claude Code (CLI)

Local execution bridge for Anthropic's power tools.

Models: claude, claude-sonnet, claude-opus

Gemini CLI (CLI)

Seamless integration with Google's command line suite.

Models: gemini, gemini-pro, gemini-flash

Codex CLI (CLI)

OpenAI's enterprise code generation command line.

Models: codex

Cerebras (API)

Extreme throughput for large-scale inference tasks.

Models: cerebras-8b, cerebras-120b

Quick Deployment

Start your gateway in seconds. SheLLM handles the routing, you handle the queries.

# Clone and install
$ git clone https://github.com/rodacato/SheLLM.git && cd SheLLM
$ bash scripts/setup/dev.sh

# Verify environment
$ npm run check:env

# Start the gateway
$ shellm start

# Send a request
$ curl http://localhost:6100/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model":"claude","messages":[{"role":"user","content":"Hello"}]}'
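The same request can be issued from application code. Below is a minimal Python sketch using only the standard library; the gateway URL and the `claude` model name come from the examples above, while the `build_chat_request` helper is a hypothetical name introduced here for illustration:

```python
import json
from urllib import request

# Default gateway endpoint from the quick-start above (assumption: port 6100).
GATEWAY_URL = "http://localhost:6100/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build a POST request with a standard chat-completions JSON body."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        GATEWAY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("claude", "Hello")
print(req.full_url)  # the endpoint the request targets

# Sending it requires a running gateway:
# with request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, any client that lets you override the base URL should work the same way.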

How SheLLM compares

Compared with LiteLLM, OpenRouter, and Portkey, SheLLM offers:

- CLI subscriptions as backends
- OpenAI-compatible endpoint
- Anthropic-compatible endpoint
- Built-in admin dashboard
- Self-hosted, SQLite only