Drop-in mock server for testing Rails apps that use OpenAI-compatible APIs. Provides deterministic responses and per-request failure simulation.


MockOpenAI


A local mock server for OpenAI-compatible and Anthropic APIs with deterministic responses and per-request failure simulation for any Ruby application.

MockOpenAI lets you test any Ruby app that calls an LLM without hitting real APIs, spending real money, or waiting on rate limits. It supports the OpenAI Chat Completions API (POST /v1/chat/completions) and the Anthropic Messages API (POST /v1/messages). Works with Rails, Sinatra, CLI tools, background jobs, or plain Ruby scripts.
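The two endpoints return differently shaped JSON, so client code extracts the assistant text from different paths. As a reference sketch, the hashes below are abbreviated examples of the public API response shapes (not MockOpenAI internals):

```ruby
# Abbreviated OpenAI Chat Completions response (POST /v1/chat/completions):
openai_body = {
  "choices" => [
    { "message" => { "role" => "assistant", "content" => "Hi!" } }
  ]
}

# Abbreviated Anthropic Messages response (POST /v1/messages):
anthropic_body = {
  "content" => [
    { "type" => "text", "text" => "Hi!" }
  ]
}

# The assistant text lives at a different path in each format:
openai_text    = openai_body.dig("choices", 0, "message", "content")
anthropic_text = anthropic_body.dig("content", 0, "text")
```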


Why MockOpenAI?

  • No API keys needed: zero token costs, zero network calls in CI
  • Deterministic: control exactly what the LLM "says" for each request
  • Per-request failure modes: simulate timeouts, rate limits, malformed JSON, and more
  • No app changes: no monkey-patching, no client wrapping, no test doubles
  • Fast CI: tests run at local speed, not API speed
  • OpenAI + Anthropic: supports POST /v1/chat/completions and POST /v1/messages

Not sure if MockOpenAI is right for your project? See When not to use MockOpenAI.
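Failure simulation is most useful for exercising retry paths deterministically. The helper below is a generic, self-contained sketch of the kind of app code a simulated rate limit would drive (the `RateLimitError` class and retry helper are illustrative, not part of MockOpenAI):

```ruby
# Generic retry-with-backoff sketch for a rate-limited LLM call.
class RateLimitError < StandardError; end

def with_retries(max_attempts: 3, backoff: 0)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue RateLimitError
    raise if attempts >= max_attempts
    sleep(backoff) if backoff > 0
    retry
  end
end

# A simulated rate limit lets a test drive this path deterministically:
# fail twice, then succeed.
calls = 0
result = with_retries do
  calls += 1
  raise RateLimitError if calls < 3
  "ok"
end
```

A mock server that fails on demand turns this retry logic into an ordinary, fast unit test instead of a flaky integration test.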


Installation

# Gemfile
group :test do
  gem "mock_openai"
end

bundle install

Quick Start

RSpec:

# spec/rails_helper.rb
require "mock_openai/rspec"

# If your code makes real HTTP connections to the LLM API (CLI tools,
# integration tests, background jobs), start the server once here:
MockOpenAI.start_test_server!

RubyLLM.configure do |config|
  config.anthropic_api_base = MockOpenAI.server_url
end

it "returns a canned response", :mock_openai do
  MockOpenAI.set_responses([{ match: "Hello", response: "Hi!" }])
  expect(MyService.call_openai("Hello")).to eq("Hi!")
end

The :mock_openai tag wires everything up and resets state between tests automatically. start_test_server! is idempotent and blocks until the server is ready.
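The Quick Start points RubyLLM at the mock server; other clients can be redirected the same way. For example, assuming the ruby-openai gem (`uri_base` is that gem's configuration option, shown here as a sketch rather than MockOpenAI's API):

```ruby
# Point the ruby-openai client at the mock server instead of api.openai.com.
OpenAI.configure do |config|
  config.uri_base = MockOpenAI.server_url
end
```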

Minitest:

# test/test_helper.rb
require "mock_openai/minitest"

# If your code makes real HTTP connections to the LLM API (CLI tools,
# integration tests, background jobs), start the server once here:
MockOpenAI.start_test_server!

RubyLLM.configure do |config|
  config.anthropic_api_base = MockOpenAI.server_url
end

class MyChatTest < Minitest::Test
  include MockOpenAI::Minitest

  def test_returns_canned_response
    MockOpenAI.set_responses([{ match: "Hello", response: "Hi!" }])
    assert_equal "Hi!", MyService.call_openai("Hello")
  end
end

MockOpenAI::Minitest hooks into before_setup and after_teardown to reset state automatically. start_test_server! is idempotent and blocks until the server is ready.


Documentation

Full documentation is available at grymoire7.github.io/mockopenai:

  • Getting Started: installation, setup, first test
  • Usage: in-process vs. standalone server modes
  • Examples: multi-step conversations, failure modes, templates
  • Reference: full API, RSpec tags, CLI, and configuration

Contributing

PRs welcome. Open an issue to discuss new failure modes, matchers, or integrations.


License

MIT