# LiteLLM Ruby Client

A Ruby client for LiteLLM with support for completions, embeddings, and image generation.
## Installation

Add this line to your application's Gemfile:

```ruby
gem 'litellm'
```

And then execute:

```shell
$ bundle install
```

Or install it yourself as:

```shell
$ gem install litellm
```
## Starting a Local LiteLLM Server

You can run a LiteLLM server locally by creating a `litellm_config.yaml` file like this sample:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: text-embedding-3-large
    litellm_params:
      model: openai/text-embedding-3-large
      api_key: os.environ/OPENAI_API_KEY
  - model_name: dall-e-2
    litellm_params:
      model: openai/dall-e-2
      api_key: os.environ/OPENAI_API_KEY
  - model_name: dall-e-3
    litellm_params:
      model: openai/dall-e-3
      api_key: os.environ/OPENAI_API_KEY
```
Then you can start LiteLLM as a Docker container:

```shell
$ docker run \
    -v $(pwd)/litellm_config.yaml:/app/config.yaml \
    -e OPENAI_API_KEY=YOUR-OPENAI-API-KEY-HERE \
    -p 8000:4000 \
    ghcr.io/berriai/litellm:main-latest \
    --config /app/config.yaml --detailed_debug
```
## Configuration

Configure the client with your API key and other optional settings:

```ruby
LiteLLM.configure do |config|
  config.api_key  = 'your-api-key'
  config.base_url = 'http://localhost:8000'
  config.timeout  = 30
  config.model    = 'gpt-3.5-turbo'
end
```
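In real deployments you will usually want to read the key from the environment rather than hard-coding it. A minimal sketch, assuming the same `configure` block as above (the `LITELLM_API_KEY` and `LITELLM_BASE_URL` variable names are illustrative, not something the gem requires):

```ruby
# Read settings from the environment; the variable names here are
# examples, not mandated by the gem.
LiteLLM.configure do |config|
  config.api_key  = ENV.fetch('LITELLM_API_KEY')
  config.base_url = ENV.fetch('LITELLM_BASE_URL', 'http://localhost:8000')
end
```

`ENV.fetch` raises if the required key is missing, which fails fast instead of sending unauthenticated requests.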
## Usage

### Chat Completions

Basic completion:

```ruby
client = LiteLLM::Client.new

response = client.completion(
  messages: [{ role: 'user', content: 'Hello, how are you?' }],
)

puts response
```
Streaming completion:

```ruby
client = LiteLLM::Client.new

client.completion(
  messages: [
    { role: 'user', content: 'Write a story' }
  ],
  stream: true,
) { |chunk| print chunk }
```
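If you also need the full text once streaming finishes, you can collect chunks into a buffer while still printing them live. A small sketch, assuming each yielded chunk is a String as in the example above:

```ruby
# Collect streamed chunks into a buffer while printing them as they arrive.
buffer = +''  # unfrozen, mutable string
collector = lambda do |chunk|
  print chunk
  buffer << chunk.to_s
end

# Pass the collector as the streaming block:
# client.completion(messages: [{ role: 'user', content: 'Write a story' }],
#                   stream: true, &collector)
# puts buffer  # full response once streaming completes
```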
With additional parameters:

```ruby
response = client.completion(
  messages: [
    { role: 'user', content: 'Translate to French: Hello, world!' }
  ],
  model: 'gpt-4',
  temperature: 0.7,
  max_tokens: 100,
)
```
For function/tool calling, the gem provides a `ToolDefinition` DSL (inspired by Langchain.rb) that lets you expose plain Ruby methods as tools for your LLM:

```ruby
# Define your tool class
class CustomGreetingTool
  include LiteLLM::Utils::ToolDefinition

  define_function :generate_custom_greeting,
                  description: "Generate a custom greeting with unique replies" do
    property :name, type: "string", description: "The name of the user"
  end

  def generate_custom_greeting(name:)
    [
      "Ahoy, #{name}! Ready to conquer the day?",
      "Greetings, #{name}! The adventure begins now!",
      "Salutations, #{name}! Time to make today epic!"
    ].sample
  end
end

response = client.completion(
  messages: [{ role: "user", content: "Give me a greeting, my name is Ahmed!" }],
  tools: [CustomGreetingTool.new]
)

puts response
```
For more examples, check out the examples folder.
### Embeddings

Generate embeddings for text:

```ruby
vector = client.embedding(
  input: 'The quick brown fox jumps over the lazy dog',
  model: 'text-embedding-ada-002',
)
```
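A common use for embeddings is similarity search. Assuming `client.embedding` returns an array of floats (as the `vector` name above suggests), cosine similarity can be computed in plain Ruby:

```ruby
# Cosine similarity between two embedding vectors (plain Ruby, no gems).
# Returns 1.0 for identical directions, 0.0 for orthogonal vectors.
def cosine_similarity(a, b)
  dot  = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (norm.call(a) * norm.call(b))
end

# v1 = client.embedding(input: 'cat', model: 'text-embedding-ada-002')
# v2 = client.embedding(input: 'dog', model: 'text-embedding-ada-002')
# cosine_similarity(v1, v2)  # closer to 1.0 means more similar
```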
### Image Generation

Generate images from text descriptions:

```ruby
image_url = client.image_generation(
  prompt: 'A beautiful sunset over the ocean',
  size: '1024x1024',
  n: 1,
)
```
## Additional LiteLLM Features

LiteLLM offers additional features through its API, including Audio, Assistants, Files, Batch, and more.

This gem currently implements the features we needed; more will be added as time permits. Contributions are always welcome! If you'd like to help expand the gem, please check the Contributing section.
## Author

## Inspirations

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/Eptikar/litellm-ruby.
## License

The gem is available as open source under the terms of the MIT License.