OpenRouter Client
A Ruby client for the OpenRouter API, providing access to hundreds of AI models through a unified interface.
Installation
Install the gem and add to the application's Gemfile by executing:
bundle add openrouter_client

If bundler is not being used to manage dependencies, install the gem by executing:
gem install openrouter_client

Usage
Configuration
Configure the client once at boot (e.g., in a Rails initializer) using OpenRouter.configure.
require "openrouter"
OpenRouter.configure do |config|
  config.api_key = "your-key"                      # Optional. Defaults to ENV["OPENROUTER_API_KEY"]
  config.api_base = "https://openrouter.ai/api/v1" # Optional (default)
  config.request_timeout = 120                     # Optional (default: 120 seconds)
  config.site_url = "https://myapp.com"            # Optional. For app attribution
  config.site_name = "My App"                      # Optional. For app attribution
end

Create a Chat Completion
The simplest way to use OpenRouter is to create a chat completion:
completion = OpenRouter::Completion.create!(
  messages: [
    { role: "user", content: "What is the meaning of life?" }
  ],
  model: "openai/gpt-4"
)
puts completion.content # => "The meaning of life..."
puts completion.model # => "openai/gpt-4"
puts completion.id # => "gen-xxxxx"

With System Messages
completion = OpenRouter::Completion.create!(
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ],
  model: "anthropic/claude-3-opus"
)

Streaming Responses
Use stream! for SSE streaming. It yields each chunk and returns the final completion:
completion = OpenRouter::Completion.stream!(
  messages: [{ role: "user", content: "Tell me a story" }],
  model: "openai/gpt-4"
) do |chunk|
  # chunk is a Hash with the streamed data
  print chunk.dig("choices", 0, "delta", "content")
end
puts "\n---"
puts "Final content: #{completion.content}"Advanced Parameters
completion = OpenRouter::Completion.create!(
messages: [{ role: "user", content: "Write a haiku" }],
model: "openai/gpt-4",
temperature: 0.7,
max_tokens: 100,
top_p: 0.9,
frequency_penalty: 0.5,
presence_penalty: 0.5,
stop: ["\n\n"],
seed: 42
)Tool Calling
completion = OpenRouter::Completion.create!(
messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
model: "openai/gpt-4",
tools: [
{
type: "function",
function: {
name: "get_weather",
description: "Get the weather for a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" }
},
required: ["location"]
}
}
}
]
)
if completion.has_tool_calls?
  completion.tool_calls.each do |tool_call|
    puts "Function: #{tool_call['function']['name']}"
    puts "Arguments: #{tool_call['function']['arguments']}"
  end
end
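After running the tool yourself, you typically send its result back in a follow-up request so the model can produce a final answer. The sketch below is only an illustration: it assumes the API accepts OpenAI-style assistant/tool messages, that each raw tool-call hash carries an "id", and it uses a hypothetical fetch_weather helper you would implement yourself.

require "json"

# Hypothetical local implementation of the tool the model asked for.
def fetch_weather(location)
  "Sunny, 22°C in #{location}"
end

messages = [{ role: "user", content: "What's the weather in Tokyo?" }]
messages << { role: "assistant", tool_calls: completion.tool_calls }

completion.tool_calls.each do |tool_call|
  args = JSON.parse(tool_call["function"]["arguments"])
  messages << {
    role: "tool",
    tool_call_id: tool_call["id"], # assumes the raw hash includes an id
    content: fetch_weather(args["location"])
  }
end

followup = OpenRouter::Completion.create!(messages: messages, model: "openai/gpt-4")
puts followup.content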
Model Fallbacks

completion = OpenRouter::Completion.create!(
messages: [{ role: "user", content: "Hello" }],
models: ["openai/gpt-4", "anthropic/claude-3-opus", "google/gemini-pro"],
route: "fallback"
)Image Generation
Generate images with models that support image output:
completion = OpenRouter::Completion.create!(
model: "google/gemini-2.5-flash-image-preview",
messages: [{ role: "user", content: "Generate a beautiful sunset over mountains" }],
modalities: ["image", "text"]
)
# Check if images were generated
if completion.images?
  completion.images.each do |image|
    # Images are base64-encoded data URLs
    image_url = image.dig("image_url", "url")
    puts "Generated image: #{image_url[0..50]}..."
  end
end
puts completion.content # Text response (if any)
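To persist a generated image, you can decode the data URL yourself. This is a minimal sketch that assumes the common "data:image/png;base64,..." shape; check the actual MIME type in the header before choosing a file extension.

require "base64"

completion.images.each_with_index do |image, i|
  data_url = image.dig("image_url", "url")
  next unless data_url

  # Split "data:image/png;base64,AAAA..." into header and payload
  header, payload = data_url.split(",", 2)
  next unless header&.include?("base64")

  File.binwrite("image_#{i}.png", Base64.decode64(payload))
end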
Image Configuration (Gemini models)

completion = OpenRouter::Completion.create!(
model: "google/gemini-2.5-flash-image-preview",
messages: [{ role: "user", content: "Create a futuristic cityscape" }],
modalities: ["image", "text"],
image_config: {
aspect_ratio: "16:9", # 1:1, 2:3, 3:2, 4:3, 9:16, 16:9, etc.
image_size: "4K" # 1K, 2K, or 4K
}
)Additional API Options
Pass any additional OpenRouter API parameters:
completion = OpenRouter::Completion.create!(
messages: [{ role: "user", content: "Hello" }],
model: "openai/gpt-4",
# Reasoning configuration
reasoning: { effort: "high" },
# Web search plugin
plugins: [{ id: "web", enabled: true }],
# Provider preferences
provider: {
order: ["OpenAI", "Anthropic"],
allow_fallbacks: true
}
)Models
List, search, and find models:
# Get all models
models = OpenRouter::Model.all
puts "Total models: #{models.count}"
# Find a specific model (returns nil if not found)
model = OpenRouter::Model.find_by(id: "openai/gpt-4")
# Find a model (raises NotFoundError if not found)
model = OpenRouter::Model.find("openai/gpt-4")
model = OpenRouter::Model.find_by!(id: "openai/gpt-4")
# Model attributes
puts model.name # => "GPT-4"
puts model.context_length # => 8192
puts model.input_price   # => 0.00003 (per token)
puts model.output_price  # => 0.00006 (per token)
puts model.free?         # => false
# Endpoints (available providers for this model)
model.endpoints.each do |endpoint|
  puts endpoint.provider_name     # => "OpenAI"
  puts endpoint.context_length    # => 8192
  puts endpoint.prompt_price      # => 0.00003
  puts endpoint.completion_price  # => 0.00006
  puts endpoint.available?        # => true
  puts endpoint.supports?(:tools) # => true
end
# Search models
results = OpenRouter::Model.search(query: "claude")
# Find by provider
openai_models = OpenRouter::Model.by_provider(provider: "openai")
# Find free models
free_models = OpenRouter::Model.free
# Use a model to create a completion
model = OpenRouter::Model.find("openai/gpt-4")
completion = model.complete(messages: [{ role: "user", content: "Hello!" }])
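The attributes above are enough for simple model selection in code. The sketch below only combines the documented helpers (search, context_length, endpoints, supports?, input_price) and assumes search returns the same Model objects as find; the selection criteria themselves are arbitrary.

# Pick the cheapest "claude" model with a large context window whose
# endpoints report tool support (illustrative criteria only).
candidates = OpenRouter::Model.search(query: "claude").select do |m|
  m.context_length >= 100_000 &&
    m.endpoints.any? { |e| e.available? && e.supports?(:tools) }
end

model = candidates.min_by(&:input_price)
puts model&.name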
Embeddings

Generate vector embeddings from text for semantic search, RAG, and more:
# Single text embedding
embedding = OpenRouter::Embedding.create!(
model: "openai/text-embedding-3-small",
input: "The quick brown fox jumps over the lazy dog"
)
puts embedding.vector # => [0.123, -0.456, ...]
puts embedding.dimensions # => 1536
puts embedding.total_tokens # => 9
# Batch embedding (multiple texts)
embedding = OpenRouter::Embedding.create!(
model: "openai/text-embedding-3-small",
input: [
"Machine learning is a subset of AI",
"Deep learning uses neural networks"
]
)
embedding.vectors.each_with_index do |vector, i|
  puts "Text #{i}: #{vector.length} dimensions"
end
# Access individual embedding data
embedding.data.each do |item|
  puts "Index: #{item.index}, Dimensions: #{item.dimensions}"
end
# List available embedding models
models = OpenRouter::Embedding.models
models.each { |m| puts m["id"] }
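Once you have vectors, semantic search is just a nearest-neighbour lookup. Below is a plain-Ruby cosine similarity sketch over the batch result above; in production you would usually hand the vectors to a vector database instead. It only relies on the vector and vectors helpers shown here.

# Cosine similarity between a query vector and each document vector.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

query = OpenRouter::Embedding.create!(
  model: "openai/text-embedding-3-small",
  input: "What is deep learning?"
)

scores = embedding.vectors.map { |vec| cosine_similarity(query.vector, vec) }
best_score, best_index = scores.each_with_index.max_by { |score, _i| score }
puts "Best match: text #{best_index} (score #{best_score.round(3)})"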
Generation Stats

Query detailed usage information for a completion:
completion = OpenRouter::Completion.create!(
messages: [{ role: "user", content: "Hello" }],
model: "openai/gpt-4"
)
# Query generation stats (may take a moment to be available)
generation = OpenRouter::Generation.find_by(id: completion.id)
if generation
  puts generation.model                    # => "openai/gpt-4"
  puts generation.native_prompt_tokens     # => 5
  puts generation.native_completion_tokens # => 10
  puts generation.total_cost               # => 0.00045
  puts generation.provider                 # => "openai"
  puts generation.tokens_per_second        # => 50.0
end
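Because stats can lag behind the completion, a short retry loop is a simple way to wait for them. This is only a sketch; tune the attempts and sleep interval to taste.

# Poll until the generation record shows up (stats can lag the completion).
generation = nil
5.times do
  generation = OpenRouter::Generation.find_by(id: completion.id)
  break if generation

  sleep 1
end

puts generation ? generation.total_cost : "stats not available yet"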
Credits

Check your credit balance:
credit = OpenRouter::Credit.fetch
puts credit.total_credits # => 100.0
puts credit.total_usage # => 25.0
puts credit.remaining # => 75.0
# Convenience method
remaining = OpenRouter::Credit.remaining
# Check credit status
credit.low? # => true if less than 10% remaining
credit.exhausted? # => true if no credits left
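These helpers make it easy to guard a batch job against running dry. A minimal sketch using only the methods above:

credit = OpenRouter::Credit.fetch
warn "OpenRouter credits low: #{credit.remaining} remaining" if credit.low?
raise "No OpenRouter credits left" if credit.exhausted?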
API Keys

Manage API keys programmatically:
# Get current key info
current_key = OpenRouter::ApiKey.current
puts current_key.name
puts current_key.usage
# List all keys
keys = OpenRouter::ApiKey.all
# Create a new key
new_key = OpenRouter::ApiKey.create!(name: "Production Key", limit: 100.0)
puts new_key.key # Only shown once!
# Update a key
new_key.update!(name: "Updated Name", limit: 200.0)
# Delete a key
new_key.destroy!

Error Handling
The gem raises typed exceptions for different error conditions:
begin
  OpenRouter::Completion.create!(
    messages: [{ role: "user", content: "Hello" }],
    model: "openai/gpt-4"
  )
rescue OpenRouter::UnauthorizedError
  # Invalid or missing API key (401)
rescue OpenRouter::ForbiddenError
  # Access denied (403)
rescue OpenRouter::NotFoundError
  # Resource not found (404)
rescue OpenRouter::RateLimitError
  # Rate limited (429)
rescue OpenRouter::PaymentRequiredError
  # Insufficient credits (402)
rescue OpenRouter::BadRequestError
  # Invalid request (400)
rescue OpenRouter::ServerError
  # Server error (5xx)
end
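A common pattern is to retry rate-limited requests with backoff. A minimal sketch, assuming you are happy to retry the whole request and that three attempts are enough:

# Retry on 429s with simple exponential backoff (illustrative only).
attempts = 0
begin
  completion = OpenRouter::Completion.create!(
    messages: [{ role: "user", content: "Hello" }],
    model: "openai/gpt-4"
  )
rescue OpenRouter::RateLimitError
  attempts += 1
  raise if attempts > 3

  sleep(2**attempts)
  retry
end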
Response Helpers

Completions have several helper methods:
completion.content # Message content
completion.role # "assistant"
completion.finish_reason # "stop", "length", "tool_calls", etc.
completion.prompt_tokens # Tokens in prompt
completion.completion_tokens # Tokens in response
completion.total_tokens # Total tokens
completion.stopped? # Finished due to stop sequence
completion.truncated? # Finished due to max_tokens
completion.tool_calls? # Contains tool calls
completion.images? # Contains generated images
completion.images # Array of generated images
completion.tool_calls # Array of tool calls

Development
After checking out the repo, run bin/setup to install dependencies. You can run bin/console for an interactive prompt that will allow you to experiment.
For local development, copy the example environment file and set your API key so bin/console can load it automatically:
cp .env.example .env
echo 'OPENROUTER_API_KEY=your_api_key_here' >> .env

Available Scripts
bin/setup # Install dependencies
bin/console # Interactive Ruby console with gem loaded
bin/test # Run tests
bin/vcr # Run tests with VCR recording (creates cassettes)
bin/format # Run RuboCop with auto-correct

Running Tests
bin/test # Run all tests
bin/vcr # Run tests and record VCR cassettes

To install this gem onto your local machine, run bundle exec rake install. To release a new version, update the version number in version.rb, and then run bundle exec rake release, which will create a git tag for the version, push git commits and the created tag, and push the .gem file to rubygems.org.
License
The gem is available as open source under the terms of the MIT License.