# OmniAI::Anthropic

An Anthropic implementation of the OmniAI APIs.
## Installation

```sh
gem install omniai-anthropic
```

## Usage
### Client

A client is set up as follows if `ENV['ANTHROPIC_API_KEY']` exists:

```ruby
client = OmniAI::Anthropic::Client.new
```

A client may also be passed the following options:

- `api_key` (required, default is `ENV['ANTHROPIC_API_KEY']`)
- `host` (optional)
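For example, both options may be passed explicitly (the values shown are placeholders, e.g. when routing through a proxy):

```ruby
client = OmniAI::Anthropic::Client.new(
  api_key: 'sk-ant-...',             # placeholder; overrides ENV['ANTHROPIC_API_KEY']
  host: 'https://api.anthropic.com'  # placeholder; e.g. a gateway or proxy URL
)
```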
### Configuration

Global configuration is supported for the following options:

```ruby
OmniAI::Anthropic.configure do |config|
  config.api_key = '...' # default: ENV['ANTHROPIC_API_KEY']
  config.host = '...' # default: 'https://api.anthropic.com'
end
```

### Chat
A chat completion is generated by passing in prompts using any of a variety of formats:

```ruby
completion = client.chat('Tell me a joke!')
completion.text # 'Why did the chicken cross the road? To get to the other side.'
```

```ruby
completion = client.chat do |prompt|
  prompt.system('You are a helpful assistant.')
  prompt.user('What is the capital of Canada?')
end
completion.text # 'The capital of Canada is Ottawa.'
```

#### Model
`model` takes an optional string (default is `claude-3-haiku-20240307`):

```ruby
completion = client.chat('Provide code for fibonacci', model: OmniAI::Anthropic::Chat::Model::CLAUDE_SONNET)
completion.text # 'def fibonacci(n)...end'
```

#### Temperature
`temperature` takes an optional float between `0.0` and `1.0` (default is `0.7`):

```ruby
completion = client.chat('Pick a number between 1 and 5', temperature: 1.0)
completion.text # '3'
```

Anthropic API Reference: `temperature`
#### Stream

`stream` takes an optional proc to stream responses in real-time chunks instead of waiting for a complete response:

```ruby
stream = proc do |chunk|
  print(chunk.text) # 'Better', 'three', 'hours', ...
end
client.chat('Be poetic.', stream:)
```

Anthropic API Reference: `stream`
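An illustration of what the proc receives (plain Ruby, no API call — `Chunk` is a stand-in for the chunk objects the gem yields): each chunk arrives incrementally, so the proc can accumulate text as well as print it.

```ruby
# Stand-in for the chunk objects yielded during streaming.
Chunk = Struct.new(:text)

buffer = +''
stream = proc { |chunk| buffer << chunk.text }

# Simulate three chunks arriving over time.
[Chunk.new('Better '), Chunk.new('three '), Chunk.new('hours')].each(&stream)

buffer # => "Better three hours"
```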
#### Format

`format` takes an optional symbol (`:json`, `:text`, or an `OmniAI::Schema::Format`) and modifies the system message to include additional context for formatting:

```ruby
format = OmniAI::Schema.format(name: "Contact", schema: OmniAI::Schema.object(
  properties: {
    name: OmniAI::Schema.string,
  },
  required: %i[name]
))

completion = client.chat(format:) do |prompt|
  prompt.user('What is the name of the drummer for the Beatles?')
end

format.parse(completion.text) # { "name": "Ringo" }
```

Anthropic API Reference: `control-output-format`
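Under the hood the model returns the completion as plain JSON text. A minimal illustration of what parsing that text yields, using only Ruby's stdlib `JSON` module (not the gem's `format.parse`, whose exact return shape may differ):

```ruby
require 'json'

# Stand-in for completion.text when format: produces JSON output.
text = '{"name":"Ringo"}'

data = JSON.parse(text)
data['name'] # => "Ringo"
```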
#### Computers

Computer use requires a few system packages for screenshots and input control:

```sh
sudo apt-get install imagemagick # screenshots (provides `convert`)
sudo apt-get install scrot # screenshots
sudo apt-get install xdotool # mouse / keyboard
```

```ruby
computer = OmniAI::Anthropic::Computer.new

completion = client.chat(tools: [computer]) do |prompt|
  prompt.user('Please sign up for reddit')
end
```

### Extended Thinking
Extended thinking allows Claude to show its reasoning process. This is useful for complex problems where you want to see the model's thought process.
```ruby
# Enable with default budget (10,000 tokens)
response = client.chat("What is 25 * 25?", model: "claude-sonnet-4-20250514", thinking: true)

# Or specify a custom budget
response = client.chat("Solve this complex problem...", model: "claude-sonnet-4-20250514", thinking: { budget_tokens: 20_000 })
```

When thinking is enabled:

- Temperature is automatically set to `1` (required by Anthropic)
- `max_tokens` is automatically adjusted to be greater than `budget_tokens`
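The `max_tokens` constraint can be sketched as follows. This is a hypothetical illustration of the rule the gem enforces, not its actual internals; the `adjusted_max_tokens` helper and the `+ 1` bump are invented for the example:

```ruby
# Hypothetical sketch: max_tokens must exceed budget_tokens, so bump it
# up when the caller's value is too small (helper name is invented).
def adjusted_max_tokens(max_tokens, budget_tokens)
  max_tokens > budget_tokens ? max_tokens : budget_tokens + 1
end

adjusted_max_tokens(4_096, 20_000)  # too small: bumped above the budget
adjusted_max_tokens(30_000, 20_000) # already large enough: unchanged
```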
#### Accessing Thinking Content

```ruby
response.choices.first.message.contents.each do |content|
  case content
  when OmniAI::Chat::Thinking
    puts "Thinking: #{content.thinking}"
    puts "Signature: #{content.metadata[:signature]}" # Anthropic includes a signature
  when OmniAI::Chat::Text
    puts "Response: #{content.text}"
  end
end
```

#### Streaming with Thinking

```ruby
client.chat("What are the prime factors of 1234567?", model: "claude-sonnet-4-20250514", thinking: true, stream: $stdout)
```

The thinking content will stream first, followed by the response.