# Rails::Llm::Structured 🤖
A simple and powerful DSL for working with LLM structured outputs in Rails. Supports OpenAI with automatic validation, type checking, and a clean Ruby API.
## Installation
Add this line to your application's Gemfile:

```ruby
gem 'rails-llm-structured'
```

And then execute:

```bash
bundle install
```

Or install it yourself as:

```bash
gem install rails-llm-structured
```

## Quick Start
### 1. Set your OpenAI API key

```bash
export OPENAI_API_KEY='your-api-key-here'
```
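
If you keep secrets in Rails encrypted credentials rather than shell exports, you can expose the key through `ENV` in an initializer. This is a minimal sketch and assumes the gem reads `OPENAI_API_KEY` from the environment, as the export above implies; the `:openai, :api_key` credentials path is illustrative:

```ruby
# config/initializers/openai.rb
# Illustrative: copy the key from encrypted credentials into ENV.
ENV['OPENAI_API_KEY'] ||= Rails.application.credentials.dig(:openai, :api_key)
```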

### 2. Create your first LLM class

```ruby
class DocumentAnalyzer < Rails::Llm::Structured::Base
  model "gpt-4o-mini"

  structured_output do
    field :summary, type: :string
    field :sentiment, type: :enum, values: [:positive, :negative, :neutral]
    field :score, type: :integer, min: 0, max: 10
    field :topics, type: :array, items: :string
  end

  def analyze(text)
    prompt = "Analyze this text and provide structured output: #{text}"
    call(prompt)
  end
end
```

### 3. Use it!

```ruby
analyzer = DocumentAnalyzer.new
result = analyzer.analyze("This is a great product! Highly recommended.")

puts result.summary    # => "Positive review of a product with high recommendation"
puts result.sentiment  # => :positive
puts result.score      # => 9
puts result.topics     # => ["product review", "recommendation"]

# Access metadata
puts result.metadata[:tokens]  # => 150
puts result.metadata[:model]   # => "gpt-4o-mini"
```

## Features
### ✅ Type Safety
All fields are validated automatically:

```ruby
structured_output do
  field :count, type: :integer, min: 0, max: 100
  field :status, type: :enum, values: [:active, :inactive]
  field :tags, type: :array, items: :string
  field :optional_note, type: :string, optional: true
end
```

Supported types:
- `:string` - Text fields
- `:integer` - Whole numbers with optional min/max
- `:number` - Decimals with optional min/max
- `:boolean` - true/false
- `:enum` - One of predefined values
- `:array` - Lists with typed items
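
The two types not shown in the block above, `:number` and `:boolean`, use the same `field` syntax (the field names here are just for illustration):

```ruby
structured_output do
  field :in_stock, type: :boolean                    # illustrative field name
  field :confidence, type: :number, min: 0.0, max: 1.0
end
```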
### ✅ System Prompts
Add consistent behavior across calls:

```ruby
class SentimentAnalyzer < Rails::Llm::Structured::Base
  model "gpt-4o"

  system "You are an expert sentiment analyzer. Always be objective and fair."

  structured_output do
    field :sentiment, type: :enum, values: [:positive, :negative, :neutral]
    field :confidence, type: :number, min: 0.0, max: 1.0
  end

  def analyze(text)
    call(text)
  end
end
```

### ✅ Temperature Control

```ruby
# More creative (higher temperature)
result = analyzer.call(prompt, temperature: 0.9)

# More deterministic (lower temperature)
result = analyzer.call(prompt, temperature: 0.1)
```

### ✅ Token Limits

```ruby
result = analyzer.call(prompt, max_tokens: 500)
```

### ✅ Metadata Access
Every response includes metadata:

```ruby
result.metadata[:model]              # "gpt-4o-mini"
result.metadata[:tokens]             # 150
result.metadata[:prompt_tokens]      # 100
result.metadata[:completion_tokens]  # 50
result.metadata[:created_at]         # 2026-02-18 17:45:00 +0200
```
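
The metadata hash makes lightweight usage tracking easy. For example, you could log token counts after each call; a simple sketch that only uses the keys shown above:

```ruby
# Log token usage for each call
Rails.logger.info(
  "LLM call: model=#{result.metadata[:model]} " \
  "tokens=#{result.metadata[:tokens]} " \
  "(prompt=#{result.metadata[:prompt_tokens]}, completion=#{result.metadata[:completion_tokens]})"
)
```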

## Real-World Examples

### Email Classifier

```ruby
class EmailClassifier < Rails::Llm::Structured::Base
  model "gpt-4o-mini"

  structured_output do
    field :category, type: :enum, values: [:spam, :support, :sales, :general]
    field :priority, type: :enum, values: [:low, :medium, :high, :urgent]
    field :requires_response, type: :boolean
    field :suggested_department, type: :string
  end

  def classify(email_body, email_subject)
    prompt = "Subject: #{email_subject}\n\nBody: #{email_body}"
    call(prompt)
  end
end

# Usage
classifier = EmailClassifier.new
result = classifier.classify(email.body, email.subject)

if result.requires_response
  assign_to_department(result.suggested_department, priority: result.priority)
end
```

### Product Review Analyzer

```ruby
class ReviewAnalyzer < Rails::Llm::Structured::Base
  model "gpt-4o"

  system "Extract key insights from product reviews. Focus on actionable feedback."

  structured_output do
    field :overall_sentiment, type: :enum, values: [:positive, :negative, :mixed]
    field :rating_prediction, type: :integer, min: 1, max: 5
    field :pros, type: :array, items: :string
    field :cons, type: :array, items: :string
    field :main_topics, type: :array, items: :string
    field :would_recommend, type: :boolean
  end

  def analyze(review_text)
    call("Review: #{review_text}")
  end
end
```
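
Usage follows the same pattern as the other examples; `feature_review` below is a hypothetical helper standing in for your own logic:

```ruby
# Usage
analyzer = ReviewAnalyzer.new
result = analyzer.analyze(review.body)

if result.would_recommend && result.rating_prediction >= 4
  feature_review(review, highlights: result.pros)  # hypothetical helper
end
```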

### Content Moderator

```ruby
class ContentModerator < Rails::Llm::Structured::Base
  model "gpt-4o"

  structured_output do
    field :safe, type: :boolean
    field :categories, type: :array, items: :string
    field :severity, type: :enum, values: [:none, :low, :medium, :high]
    field :reason, type: :string, optional: true
  end

  def moderate(content)
    call("Check if this content is safe: #{content}")
  end
end

# Usage
moderator = ContentModerator.new
result = moderator.moderate(user_comment)

unless result.safe
  flag_content(user_comment, reason: result.reason, severity: result.severity)
end
```

## Error Handling

```ruby
begin
  result = analyzer.call(prompt)
rescue Rails::Llm::Structured::Error => e
  Rails.logger.error("LLM error: #{e.message}")
  # Handle error (retry, use fallback, etc.)
end
```
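
Built-in retries are still on the roadmap, so for now you can wrap calls yourself. Here is a minimal sketch of retry with exponential backoff around `call`, using plain Ruby and the error class shown above:

```ruby
# Retry a structured call up to `attempts` times with exponential backoff
def call_with_retries(analyzer, prompt, attempts: 3)
  tries = 0
  begin
    analyzer.call(prompt)
  rescue Rails::Llm::Structured::Error => e
    tries += 1
    raise if tries >= attempts

    Rails.logger.warn("LLM call failed (attempt #{tries}): #{e.message}")
    sleep(2**tries)  # back off: 2s, then 4s, ...
    retry
  end
end
```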

## Testing

Use VCR to record and replay API calls:

```ruby
# spec/spec_helper.rb
require 'vcr'

VCR.configure do |config|
  config.cassette_library_dir = "spec/fixtures/vcr_cassettes"
  config.hook_into :webmock
  config.filter_sensitive_data('<OPENAI_API_KEY>') { ENV['OPENAI_API_KEY'] }
end
```

```ruby
# spec/llm/document_analyzer_spec.rb
RSpec.describe DocumentAnalyzer do
  it "analyzes documents", :vcr do
    analyzer = DocumentAnalyzer.new
    result = analyzer.analyze("Great product!")

    expect(result.sentiment).to eq(:positive)
    expect(result.score).to be > 7
  end
end
```
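
For unit tests that should never touch the network, you can also stub the analyzer with a plain RSpec double exposing the same fields. This is just a sketch, not a gem-provided test helper:

```ruby
# Hypothetical sketch: stub the analyzer so specs never hit the API
it "works against a stubbed result" do
  fake_result = double(sentiment: :negative, score: 2,
                       summary: "Negative review", topics: ["complaint"])
  allow_any_instance_of(DocumentAnalyzer)
    .to receive(:analyze).and_return(fake_result)

  # ...then exercise your own code that calls DocumentAnalyzer#analyze...
end
```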

## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run `bundle exec rake install`.
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/vvkuzmych/rails-llm-structured. This project is intended to be a safe, welcoming space for collaboration.
## License
The gem is available as open source under the terms of the MIT License.
## Roadmap
- Anthropic Claude support
- Google Gemini support
- Streaming responses
- Rails caching integration
- ActiveJob integration
- Cost tracking
- Rate limiting
- Retry with exponential backoff
## Credits
Created by Volodymyr Kuzmych
Inspired by the need for simple, type-safe LLM integrations in Rails applications.