Ruby LeonardoAI
Use the LeonardoAI API with Ruby! 🤖❤️
Generate, Get, and Delete images, models, variations, and datasets with Leonardo AI
🚢 Based in the US and want to hire me? Now you can! Email Me
Bundler
Add this line to your application's Gemfile:
gem "ruby-leonardoai"
And then execute:
$ bundle install
Gem install
Or install with:
$ gem install ruby-leonardoai
and require with:
require "leonardoai"
Usage
- Get your API access from https://app.leonardo.ai/api-access
- After subscribing, you will be able to generate an API key from https://app.leonardo.ai/settings
Quickstart
For a quick test you can pass your token directly to a new client:
client = LeonardoAI::Client.new(access_token: "access_token_goes_here")
With Config
For a more robust setup, you can configure the gem with your API keys, for example in a leonardoai.rb
initializer file. Never hardcode secrets into your codebase - instead use something like dotenv to pass the keys safely into your environments.
LeonardoAI.configure do |config|
config.access_token = ENV.fetch("LEONARDOAI_ACCESS_TOKEN")
end
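For example, with the dotenv gem you might keep the token in a local .env file that is ignored by git (the filename below is just the dotenv convention):
# .env
LEONARDOAI_ACCESS_TOKEN=your-access-token-here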
If you use Rails 7, you would probably store your key in the encrypted credentials file (config/credentials.yml.enc); then you can do something like this:
LeonardoAI.configure do |config|
config.access_token = Rails.application.credentials[Rails.env.to_sym].dig(:leonardoai, :api_key)
end
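That dig call assumes your credentials are namespaced by Rails environment, roughly like this sketch (run bin/rails credentials:edit and adapt to however you organize your credentials):
development:
  leonardoai:
    api_key: your-access-token-here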
Then you can create a client like this:
client = LeonardoAI::Client.new
You can still override the config defaults when creating new clients; any options not included will fall back to the global config set with LeonardoAI.configure. For example, here only the access_token is overridden, while uri_base, request_timeout, etc. fall back to the global settings:
client = LeonardoAI::Client.new(access_token: "access_token_goes_here")
Custom timeout or base URI
The default timeout for any request using this library is 120 seconds. You can change that by passing a number of seconds to the request_timeout
when initializing the client. You can also change the base URI used for all requests.
client = LeonardoAI::Client.new(
access_token: "access_token_goes_here",
uri_base: "https://cloud.leonardo.ai/api/rest/v1/",
request_timeout: 240,
  extra_headers: {
    "accept" => "application/json",
    "content-type" => "application/json"
  }
)
or when configuring the gem:
LeonardoAI.configure do |config|
config.access_token = ENV.fetch("LEONARDOAI_ACCESS_TOKEN") # Or Rails.application.credentials[Rails.env.to_sym].dig(:leonardoai, :api_key) for Rails 7
  config.uri_base = "https://cloud.leonardo.ai/api/rest/v1/" # Optional
  config.request_timeout = 240 # Optional
  config.extra_headers = {
    "accept" => "application/json",
    "content-type" => "application/json"
  } # Optional
end
Running in Console
Easily try things out in an interactive console with bin/console:
# Example
client = LeonardoAI::Client.new(access_token: "your-access-token-here")
params = {
  height: 1024,
  prompt: "An ad flyer of a ninja cat, heavy texture, moonlit background, circle design, tshirt design, pen and ink style",
  width: 512,
  num_images: 1,
  photoReal: false,
  presetStyle: "LEONARDO",
  promptMagic: true,
  promptMagicVersion: "v2",
  public: false,
  init_strength: 0.4,
  sd_version: "v2"
}
response = client.generations.generate(parameters: params)
puts response
Generation
Create an image generation job from a text prompt. The API responds with a generation id that you can use to retrieve the finished images:
# https://docs.leonardo.ai/reference/creategeneration
params = {
  height: 1024,
  prompt: "An ad flyer of a ninja cat, heavy texture, moonlit background, circle design, tshirt design, pen and ink style",
  width: 512,
  num_images: 1,
  photoReal: false,
  presetStyle: "LEONARDO",
  promptMagic: true,
  promptMagicVersion: "v2",
  public: false,
  init_strength: 0.4,
  sd_version: "v2"
}
response = client.generations.generate(parameters: params) # {"sdGenerationJob"=>{"generationId"=>"c747522c-91e7-4830-8d1f-1f1ed37efd35"}}
puts response.dig("sdGenerationJob", "generationId")
# => "c747522c-91e7-4830-8d1f-1f1ed37efd35"
Model
# https://docs.leonardo.ai/reference/post_models-3d-upload
params = {
  modelExtension: "this-is-an-example",
  name: "some string"
}
response = client.models.upload_3d_model(parameters: params)
# https://docs.leonardo.ai/reference/getmodelbyid
response = client.models.get_custom_models_by_id(id: "your-custom-model-id-here")
For more parameters, please check the API reference at https://docs.leonardo.ai/reference.
Variation
# https://docs.leonardo.ai/reference/post_variations-unzoom
params = {
  id: "this-is-some-id",
  isVariation: false # a boolean: true or false
}
response = client.variations.create_unzoom(parameters: params)
# https://docs.leonardo.ai/reference/getvariationbyid
response = client.variations.get_variation_by_id(id: "your-variation-id-here")
Dataset
# https://docs.leonardo.ai/reference/createdataset
params = {
  name: "this-is-some-string-id",
  description: "this-is-some-string"
}
response = client.datasets.create_dataset(parameters: params)
# https://docs.leonardo.ai/reference/getdatasetbyid
response = client.datasets.get_dataset_by_id(id: "your-dataset-id-here")
Development
After checking out the repo, run bin/setup to install dependencies. You can run bin/console for an interactive prompt that will allow you to experiment. To install this gem onto your local machine, run bundle exec rake install.
Warning
If you have a LEONARDOAI_ACCESS_TOKEN in your ENV, running the specs will use it to hit the actual API, which will be slow and cost you money - 2 cents or more! Remove it from your environment with unset or similar if you just want to run the specs against the stored VCR responses.
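For example:
$ unset LEONARDOAI_ACCESS_TOKEN
$ bundle exec rspec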
Release
First run the specs without VCR so they actually hit the API. This will cost 2 cents or more. Set LEONARDOAI_ACCESS_TOKEN in your environment or pass it in like this:
LEONARDOAI_ACCESS_TOKEN=123abc bundle exec rspec
Then update the version number in version.rb, update CHANGELOG.md, run bundle install to update Gemfile.lock, and then run bundle exec rake release, which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org.
Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/royalgiant/ruby-leonardoai. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
License
The gem is available as open source under the terms of the MIT License.
Code of Conduct
Everyone interacting in the Ruby LeonardoAI project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.
Influences
Project heavily influenced by https://github.com/alexrudall/ruby-openai. Great project, go give them a star!