AiGuardrails is a Ruby gem that helps developers validate, correct, and enforce schemas on AI-generated outputs. It ensures structured data, prevents JSON errors, and provides a foundation for adding safety filters and auto-correction in Rails apps, CLI tools, background jobs, and scrapers. Think of it as Guardrails.AI for Ruby.

AiGuardrails


AiGuardrails is a Ruby gem for validating, repairing, and securing AI-generated outputs. It helps ensure responses from LLMs (like OpenAI or Anthropic) are valid, safe, structured, and suitable for production use.


Table of Contents

  1. Overview

  2. Installation

  3. Quick Start

  4. Features

    • Schema Validation
    • Automatic JSON Repair
    • Unit Test Helpers / Mock Model Client
    • Provider-Agnostic API
    • Auto-Correction / Retry Layer
    • Safety & Content Filters
    • Easy DSL / Developer-Friendly API
    • Background Job / CLI Friendly
    • Optional Caching
    • JSON + Schema Auto-Fix Hooks
  5. Error Handling

  6. Development

  7. Contributing

  8. License

  9. Code of Conduct


Overview

AI models often hallucinate, generate invalid JSON, return inconsistent data types, or produce unsafe or policy-violating content.

AiGuardrails helps you detect, constrain, and mitigate hallucinations at the output layer by validating, repairing, filtering, and retrying AI-generated responses automatically.

  • ✅ JSON repair for malformed AI responses
  • ✅ Schema validation to constrain hallucinated fields and types
  • ✅ Safety filters for blocked or harmful content
  • ✅ Retry + correction when hallucinated output fails validation
  • ✅ Easy integration via a single AiGuardrails.run call

The validation flow, as a Mermaid diagram:

flowchart LR
    A[AI Response] --> B[JSON Parse Attempt]
    B -->|Valid| F[Return Output ✅]
    B -->|Invalid| C[Auto-Fix Hook 🔧]
    C --> D[Revalidate Schema]
    D -->|Fixed| F
    D -->|Still Invalid| E[Retry or Fallback 🚨]
    E --> G[Raise AiGuardrails::SchemaError]

Installation

Add AiGuardrails to your project using Bundler:

bundle add ai_guardrails

Or install manually:

gem install ai_guardrails

Requirements

  • Ruby >= 3.0.0
  • Rails >= 6.0 (if using Rails integration)
  • Optional dependencies:
    • ruby-openai for OpenAI provider
    • ruby-anthropic gem if using Anthropic provider
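A Gemfile wiring up the gem together with the optional provider gems listed above might look like this (a sketch; add only the provider gems you actually use):

```ruby
# Gemfile — AiGuardrails plus optional provider gems
gem "ai_guardrails"
gem "ruby-openai"     # only if using the OpenAI provider
gem "ruby-anthropic"  # only if using the Anthropic provider
```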

Quick Start

require "ai_guardrails"

schema = { name: :string, price: :float, tags: [:string] }

# Optional: Helps AI return structured output matching the schema.
# Can be any type: String, Hash, etc.
schema_hint = "JSON should include name, price, and tags"

result = AiGuardrails::DSL.run(
  prompt: "Generate product JSON (it can be messy, that's okay)",
  provider: :openai,
  provider_config: { api_key: ENV["OPENAI_API_KEY"] },
  schema: schema,
  schema_hint: schema_hint
)

puts result
# => { "name" => "Laptop", "price" => 1200.0, "tags" => ["electronics", "computer"] }

Features

Schema Validation

Validate AI output against a Ruby schema.

require "ai_guardrails"

schema = { name: :string, price: :float, tags: [:string] }
validator = AiGuardrails::SchemaValidator.new(schema)

input = { name: "Laptop", price: 1200.0, tags: ["electronics", "sale"] }
success, result = validator.validate(input)

if success
  puts "Valid output: #{result}"
else
  puts "Validation errors: #{result}"
end

Supported Types

Type                 Example
:string              "Laptop"
:integer             42
:float               19.99
:boolean             true
[:string]            ["a", "b"]
[{ key: :string }]   [{ "key" => "value" }]
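As a rough mental model, the scalar and array type atoms listed above can be thought of as class checks. The sketch below is plain Ruby for illustration only; `CHECKS` and `type_ok?` are hypothetical names, not the gem's API, and nested hash types are omitted for brevity:

```ruby
# Plain-Ruby sketch of mapping type atoms to checks (not the gem's code).
CHECKS = { string: String, integer: Integer, float: Float,
           boolean: [TrueClass, FalseClass] }.freeze

def type_ok?(value, type)
  if type.is_a?(Array) # e.g. [:string] means "array of the inner type"
    value.is_a?(Array) && value.all? { |v| type_ok?(v, type.first) }
  else
    Array(CHECKS.fetch(type)).any? { |klass| value.is_a?(klass) }
  end
end

type_ok?("Laptop", :string)     # => true
type_ok?(["a", "b"], [:string]) # => true
type_ok?(["a", 2], [:string])   # => false
```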

Example Invalid Input

input = { name: 123, price: "abc", tags: ["electronics", 2] }
success, errors = validator.validate(input)
# success => false
# errors  => { name: ["must be a string"], price: ["must be a float"], tags: ["element 1 must be a string"] }

Automatic JSON Repair

LLMs often return invalid JSON. AiGuardrails::JsonRepair automatically fixes common issues.

require "ai_guardrails"

raw_json = "{name: 'Laptop' price: 1200, tags: ['electronics' 'sale']}"
fixed = AiGuardrails::JsonRepair.repair(raw_json)

puts fixed
# => { "name" => "Laptop", "price" => 1200, "tags" => ["electronics", "sale"] }

What It Fixes

  • Missing quotes or commas
  • Single → double quotes
  • Trailing commas
  • Unbalanced braces/brackets
  • Nested arrays/objects without separators
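To make two of the repairs above concrete, here is a conceptual sketch in plain Ruby (single-to-double quote conversion and trailing-comma removal). This is not the gem's implementation, which covers many more cases:

```ruby
require "json"

# Conceptual repair sketch: normalize quotes, then strip a trailing comma.
raw       = "{'name': 'Laptop', 'price': 1200,}"
candidate = raw.gsub("'", '"').sub(/,\s*}/, "}")
fixed     = JSON.parse(candidate)
# fixed => { "name" => "Laptop", "price" => 1200 }
```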

Unrepairable JSON Example

begin
  AiGuardrails::JsonRepair.repair("NOT JSON")
rescue AiGuardrails::JsonRepair::RepairError => e
  puts "Could not repair JSON: #{e.message}"
end

Integration with Schema Validation

schema = { name: :string, price: :float }
fixed = AiGuardrails::JsonRepair.repair("{name: 'Laptop', price: '1200'}")
validator = AiGuardrails::SchemaValidator.new(schema)
success, result = validator.validate(fixed)

Unit Test Helpers / Mock Model Client

Simulate AI model responses for testing without API calls.

mock_client = AiGuardrails::MockModelClient.new(
  "Generate product" => '{"name": "Laptop", "price": 1200}'
)

response = mock_client.call(prompt: "Generate product")
puts response
# => '{"name": "Laptop", "price": 1200}'

mock_client.add_response("Generate user", '{"name": "Alice"}')

Simulate API Errors

begin
  mock_client.call(prompt: "error", raise_error: true)
rescue AiGuardrails::MockModelClient::MockError => e
  puts e.message
end

Fallback Example

response = mock_client.call(prompt: "error", default_fallback: "No mock response defined")
puts response
# => "No mock response defined"

Provider-Agnostic API

AiGuardrails is designed to work with any LLM provider, loading provider gems dynamically so there are no hard vendor dependencies.

Provider         Gem              Status
OpenAI           ruby-openai      ✅ Supported
Anthropic        ruby-anthropic   🔜 Planned
Google Gemini    gemini-ai        🔜 Planned
Ollama (local)   ollama-ai        🔜 Planned

client = AiGuardrails::Provider::Factory.build(
  provider: :openai,
  config: { api_key: ENV["OPENAI_API_KEY"], model: "gpt-4o-mini" }
)

puts client.call_model(prompt: "Hello!")

Auto-Correction / Retry Layer

Automatically repairs and retries AI output until it passes schema validation.

schema = { name: :string, price: :float }

client = AiGuardrails::Provider::Factory.build(
  provider: :openai,
  config: { api_key: ENV["OPENAI_API_KEY"] }
)

auto = AiGuardrails::AutoCorrection.new(provider: client, schema: schema, max_retries: 3, sleep_time: 1)

result = auto.call(prompt: "Generate a product JSON", schema_hint: schema)
puts result
# => { "name" => "Laptop", "price" => 1200.0 }

Optional Parameters

  • max_retries: Maximum retry attempts
  • sleep_time: Delay between retries
  • schema_hint: Optional guide to help AI produce valid output
  • Raises AiGuardrails::AutoCorrection::RetryLimitReached if retries exhausted
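The retry semantics described above can be sketched in plain Ruby as follows (illustrative only; this is not AiGuardrails' implementation, and the simulated validation failure is contrived):

```ruby
# Conceptual retry loop illustrating max_retries / sleep_time semantics.
max_retries = 3
sleep_time  = 0 # keep the sketch fast; the gem would pause between attempts
attempts    = 0
result      = nil

begin
  attempts += 1
  # Pretend the first two responses fail schema validation:
  raise "schema validation failed" if attempts < 3
  result = { "name" => "Laptop", "price" => 1200.0 }
rescue RuntimeError
  if attempts < max_retries
    sleep(sleep_time)
    retry
  end
  raise # out of retries — the gem raises RetryLimitReached at this point
end

result # => { "name" => "Laptop", "price" => 1200.0 }
```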

Hallucination Mitigation (Output-Level)

LLMs may hallucinate fields, invent values, or return structurally incorrect data. AiGuardrails mitigates hallucinations at the output level, not by trusting the model, but by enforcing strict validation and correction rules.

This includes:

  • Rejecting hallucinated fields not defined in the schema
  • Retrying when required fields are missing or invalid
  • Coercing types safely (e.g., "19.99" → 19.99)
  • Blocking unsafe or fabricated content via safety filters

AiGuardrails does not claim to eliminate hallucinations entirely, but it ensures only valid, safe, and expected data reaches your application.
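The first mitigation above — rejecting fields not defined in the schema — amounts to dropping unknown keys. A minimal plain-Ruby sketch (not the gem's actual code):

```ruby
# Keys absent from the schema (hallucinated fields) are simply discarded.
schema = { "name" => :string, "price" => :float }
output = { "name" => "Laptop", "price" => 1200.0, "warranty_years" => 99 }

clean = output.select { |key, _| schema.key?(key) }
# clean => { "name" => "Laptop", "price" => 1200.0 }
```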


Safety & Content Filters

Detect and block unsafe or prohibited content.

filter = AiGuardrails::SafetyFilter.new(blocklist: ["badword", /forbidden/i])

filter.safe?("clean text") # => true
filter.safe?("badword inside") # => false

Raise exception on violation:

begin
  filter.check!("This has badword")
rescue AiGuardrails::SafetyFilter::UnsafeContentError => e
  puts e.message
end

Easy DSL / Developer-Friendly API

AiGuardrails::DSL.run is a single entry point combining:

  • JSON repair
  • Schema validation
  • Auto-correction & retries
  • Safety filters
  • Logging & debugging
  • Optional caching

result = AiGuardrails::DSL.run(
  prompt: "Generate product",
  schema: { name: :string, price: :float },
  schema_hint: schema, # may be any type, e.g. a Hash, String, or JSON snippet
  provider: :openai,
  provider_config: { api_key: ENV["OPENAI_API_KEY"] },
  blocklist: ["Forbidden"]
)

puts result
# => { "name" => "Laptop", "price" => 1200.0 }

Schema vs Schema Hint

  • schema (required): Full validation for final output.
  • schema_hint (optional): Guides AI generation; can be a subset or modified version of the schema.

schema = { name: :string, price: :float, tags: [:string] }
hint = "JSON should contain name, price and tags"

result = AiGuardrails::DSL.run(
  prompt: "Generate product JSON",
  schema: schema,
  schema_hint: hint,
  provider_config: { api_key: ENV["OPENAI_API_KEY"] },
  blocklist: ["Forbidden"],
  max_retries: 3,
  sleep_time: 1
)

puts result
# => { "name" => "Laptop", "price" => 1200.0, "tags" => ["electronics", "sale"] }

Background Job / CLI Friendly

Works seamlessly in Rails background jobs, Sidekiq, or standalone CLI scripts.

Background Job Example

require "logger"

logger = Logger.new($stdout)

AiGuardrails::BackgroundJob.perform(logger: logger, debug: true) do
  AiGuardrails::DSL.run(
    prompt: "Generate product",
    schema: { name: :string, price: :float },
    schema_hint: { name: :string, price: :float },
    provider_config: { api_key: ENV["OPENAI_API_KEY"] }
  )
end

CLI Example (Standalone Script or IRB)

Use the CLI for running workflows outside of Rails or for local testing.

# If running as a standalone Ruby script, make sure the gem lib path is loaded:
$LOAD_PATH.unshift(File.expand_path("lib", __dir__))
require "ai_guardrails"

AiGuardrails::CLI.run(debug: true) do
  result = AiGuardrails::DSL.run(
    prompt: "Generate product",
    schema: { name: :string, value: :integer },
    schema_hint: { name: :string, value: :integer },
    provider_config: { api_key: ENV["OPENAI_API_KEY"] }
  )

  puts result
end

Note: End-users of the gem in Rails projects do not need the CLI. It is primarily for gem developers or for running workflows outside a Rails app.

Optional Caching

Cache AI responses for repeated prompts to reduce cost and latency.

AiGuardrails::Cache.configure(enabled: true, store: Rails.cache, expires_in: 300)

schema = { name: :string, price: :float }

result1 = AiGuardrails::DSL.run(prompt: "Generate product", schema: schema, schema_hint: schema, provider_config: { api_key: ENV["OPENAI_API_KEY"] })
result2 = AiGuardrails::DSL.run(prompt: "Generate product", schema: schema, schema_hint: schema, provider_config: { api_key: ENV["OPENAI_API_KEY"] })

puts result1 == result2 # => true

Fetch Examples

key = AiGuardrails::Cache.key("prompt", schema)

# Using default
value = AiGuardrails::Cache.fetch(key, "default_value")

# Using block
value = AiGuardrails::Cache.fetch(key) { "computed_result" }
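Conceptually, caching works because identical prompt-plus-schema pairs map to the same key. The sketch below illustrates that idea in plain Ruby with a digest and a Hash store; `demo_cache_key` is a hypothetical helper, not necessarily how AiGuardrails::Cache.key derives keys:

```ruby
require "digest"

# Derive a deterministic key from prompt + schema, so repeat calls hit the cache.
def demo_cache_key(prompt, schema)
  Digest::SHA256.hexdigest("#{prompt}|#{schema.inspect}")
end

store = {}
key = demo_cache_key("Generate product", { name: :string, price: :float })
store[key] ||= { "name" => "Laptop", "price" => 1200.0 } # first call: compute
cached = store[key]                                      # repeat call: cache hit
```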

JSON + Schema Auto-Fix Hooks

Automatically repair and coerce malformed JSON to match schema.

schema = { name: :string, price: :float, available: :boolean }
raw = '{"name": "Shirt", "price": "19.99", "available": "true"}'

fixed = AiGuardrails::AutoFix.fix(raw, schema: schema)
# => {"name"=>"Shirt", "price"=>19.99, "available"=>true}

Custom Hooks

hooks = [proc { |h| h["price"] *= 2 }]
fixed = AiGuardrails::AutoFix.fix(raw, schema: schema, hooks: hooks)
puts fixed["price"] # => 39.98

Hooks allow:

  • Setting default values
  • Transforming or normalizing data
  • Custom calculations or aggregations
  • Injecting metadata before final output

Hooks run after schema validation and JSON repair, ensuring safe, valid, and tailored output.


Error Handling

Exception When It Occurs
AiGuardrails::JsonRepair::RepairError Cannot repair invalid JSON input
AiGuardrails::AutoCorrection::RetryLimitReached Maximum retries reached without valid output
AiGuardrails::SafetyFilter::UnsafeContentError Blocked or unsafe content detected in AI output

Example:

begin
  result = AiGuardrails::DSL.run(
    prompt: "Generate product",
    schema: schema,
    provider_config: { api_key: ENV["OPENAI_API_KEY"] }
  )
rescue AiGuardrails::AutoCorrection::RetryLimitReached => e
  puts "Retries exceeded: #{e.message}"
rescue AiGuardrails::SafetyFilter::UnsafeContentError => e
  puts "Blocked content detected: #{e.message}"
end

Development

Install dependencies and run the test suite:

bin/setup
rake spec

Interactive console:

bin/console

Release a new version:

bundle exec rake release

Contributing

Bug reports and pull requests are welcome at: 👉 https://github.com/logicbunchhq/ai_guardrails

This project follows the Contributor Covenant.


License

Released under the MIT License.


Code of Conduct

Everyone interacting with the AiGuardrails project is expected to follow the Code of Conduct.