# ruby-mana 🔮

Embed an LLM as native Ruby. Write natural language; it just runs.

```ruby
require "mana"

numbers = [1, "2", "three", "cuatro", "五"]
~"compute the semantic average of <numbers> and store in <result>"
puts result # => 3.0
```

## What is this?

Mana turns an LLM into a Ruby co-processor. Your natural language strings can read and write Ruby variables, call Ruby functions, manipulate objects, and control program flow — all from a single `~"..."`.

Not an API wrapper. Not prompt formatting. Mana weaves the LLM into your Ruby code as a first-class construct.
## Install

```sh
gem install ruby-mana
```

Or in your Gemfile:

```ruby
gem "ruby-mana"
```

Requires Ruby 3.3+ and an API key (Anthropic, OpenAI, or compatible):

```sh
export ANTHROPIC_API_KEY=your_key_here
# or
export OPENAI_API_KEY=your_key_here
```

## Usage

Prefix any string with `~` to make it an LLM prompt:

```ruby
require "mana"

numbers = [1, 2, 3, 4, 5]
~"compute the average of <numbers> and store in <result>"
puts result
```
## Variables

Use `<var>` to reference variables. Mana figures out read vs write:

- Variable exists in scope → Mana reads it and passes it to the LLM
- Variable doesn't exist → the LLM creates it via `write_var`

```ruby
name = "Alice"
scores = [85, 92, 78, 95, 88]

~"analyze <scores> for <name>, store the mean in <average>, the highest in <best>, and a short comment in <comment>"

puts average # => 87.6
puts best    # => 95
puts comment # => "Excellent and consistent performance"
```

## Object manipulation
The LLM can read and write object attributes:

```ruby
class Email
  attr_accessor :subject, :body, :category, :priority
end

email = Email.new
email.subject = "URGENT: Server down"
email.body = "Database connection pool exhausted..."

~"read <email> subject and body, then set its category and priority"

puts email.category # => "urgent"
puts email.priority # => "high"
```

## Calling Ruby functions
The LLM can call functions in your scope:

```ruby
def fetch_price(symbol)
  { "AAPL" => 189.5, "GOOG" => 141.2, "TSLA" => 248.9 }[symbol] || 0
end

def send_alert(msg)
  puts "[ALERT] #{msg}"
end

portfolio = ["AAPL", "GOOG", "TSLA", "MSFT"]
~"iterate <portfolio>, call fetch_price for each, send_alert if price > 200, store the sum in <total>"

puts total # => 579.6
```

## Mixed control flow
Ruby handles the structure; the LLM handles the decisions:

```ruby
player_hp = 100
enemy_hp = 80
inventory = ["sword", "potion", "shield"]

while player_hp > 0 && enemy_hp > 0
  ~"player HP=<player_hp>, enemy HP=<enemy_hp>, inventory=<inventory>, choose an action and store in <action>"

  case action
  when "attack" then enemy_hp -= rand(15..25)
  when "defend" then nil
  when "use_item"
    ~"pick a healing item from <inventory> and store its name in <item_name>"
    inventory.delete(item_name)
    player_hp += 25
  end

  player_hp -= action == "defend" ? rand(5..10) : rand(10..20)
end
```

## Configuration
```ruby
Mana.configure do |c|
  c.model = "claude-sonnet-4-20250514"
  c.temperature = 0
  c.api_key = ENV["ANTHROPIC_API_KEY"]
  c.max_iterations = 50

  # Memory settings
  c.namespace = "my-project"           # nil = auto-detect from git/pwd
  c.context_window = 200_000           # nil = auto-detect from model
  c.memory_pressure = 0.7              # compact when tokens exceed 70% of context window
  c.memory_keep_recent = 4             # keep last 4 rounds during compaction
  c.compact_model = nil                # nil = use main model for compaction
  c.memory_store = Mana::FileStore.new # default file-based persistence
end
```

## Multiple LLM backends
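The `memory_pressure` setting amounts to a simple threshold check. A minimal sketch, assuming the trigger is plain token arithmetic (the variable names and calculation here are illustrative, not Mana internals):

```ruby
# Hypothetical sketch of the compaction trigger: with the settings above,
# compaction fires once conversation tokens exceed 70% of the context window.
context_window  = 200_000
memory_pressure = 0.7

threshold = (context_window * memory_pressure).to_i
puts threshold # => 140000

tokens_used = 150_000
puts tokens_used > threshold # => true, so compaction would run in the background
```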
Mana supports Anthropic and OpenAI-compatible APIs (including Ollama, DeepSeek, Groq, etc.):

```ruby
# Anthropic (default for claude-* models)
Mana.configure do |c|
  c.api_key = ENV["ANTHROPIC_API_KEY"]
  c.model = "claude-sonnet-4-20250514"
end

# OpenAI
Mana.configure do |c|
  c.api_key = ENV["OPENAI_API_KEY"]
  c.base_url = "https://api.openai.com"
  c.model = "gpt-4o"
end

# Ollama (local, no API key needed)
Mana.configure do |c|
  c.api_key = "unused"
  c.base_url = "http://localhost:11434"
  c.model = "llama3"
end

# Explicit backend override
Mana.configure do |c|
  c.backend = :openai # force OpenAI format
  c.base_url = "https://api.groq.com/openai"
  c.model = "llama-3.3-70b-versatile"
end
```

The backend is auto-detected from the model name: `claude-*` → Anthropic, everything else → OpenAI.
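The detection rule can be pictured as a one-liner. This is illustrative only; `detect_backend` is a hypothetical name, not part of Mana's public API:

```ruby
# Hypothetical sketch of model-name-based backend detection:
# claude-* models go to Anthropic, everything else to the OpenAI format.
def detect_backend(model)
  model.start_with?("claude-") ? :anthropic : :openai
end

puts detect_backend("claude-sonnet-4-20250514") # => anthropic
puts detect_backend("gpt-4o")                   # => openai
puts detect_backend("llama-3.3-70b-versatile")  # => openai
```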
## Custom effect handlers

Define your own tools that the LLM can call. Each effect becomes an LLM tool automatically — the block's keyword parameters define the tool's input schema.

```ruby
# No params
Mana.define_effect :get_time do
  Time.now.to_s
end

# With params — keyword args become tool parameters
Mana.define_effect :query_db do |sql:|
  ActiveRecord::Base.connection.execute(sql).to_a
end

# With description (optional, recommended)
Mana.define_effect :search_web,
  description: "Search the web for information" do |query:, max_results: 5|
  WebSearch.search(query, limit: max_results)
end

# Use in prompts
~"get the current time and store in <now>"
~"find recent orders using query_db, store in <orders>"
```

Built-in effects (`read_var`, `write_var`, `read_attr`, `write_attr`, `call_func`, `done`) are reserved and cannot be overridden.
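The keyword-parameters-to-schema mapping can be illustrated with plain Ruby reflection. This is a sketch of the mechanism, not Mana's actual schema format:

```ruby
# Ruby exposes a block's keyword parameters via Proc#parameters:
# :keyreq marks a required keyword, :key an optional one with a default.
handler = proc { |query:, max_results: 5| }

schema = handler.parameters.map do |kind, name|
  { name: name, required: kind == :keyreq }
end

schema # => [{ name: :query, required: true }, { name: :max_results, required: false }]
```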
## Memory — automatic context sharing

Consecutive `~"..."` calls automatically share context. No wrapper block needed:

```ruby
~"remember: always translate to Japanese, casual tone"
~"translate <text1>, store in <result1>"             # uses the preference
~"translate <text2>, store in <result2>"             # still remembers
~"which translation was harder? store in <analysis>" # can reference both
```

Memory is per-thread and auto-created on the first `~"..."` call.
## Long-term memory

The LLM has a `remember` tool that persists facts across script executions:

```ruby
~"remember that the user prefers concise output"

# ... later, in a different script execution ...
~"translate <text>" # the LLM sees the preference in its long-term memory
```

Manage long-term memory via Ruby:

```ruby
Mana.memory.long_term        # view all memories
Mana.memory.forget(id: 2)    # remove a specific memory
Mana.memory.clear_long_term! # clear all long-term memories
Mana.memory.clear_short_term! # clear conversation history
Mana.memory.clear!           # clear everything
```

## Incognito mode

Run without any memory — nothing is loaded or saved:

```ruby
Mana.incognito do
  ~"translate <text>" # no memory, no persistence
end
```

## Polyglot — Cross-Language Interop
`~"..."` is a universal operator. It detects whether the code is JavaScript, Python, Ruby, or natural language, and routes it to the appropriate engine. Variables bridge automatically.

### JavaScript

```ruby
require "mana"

data = [1, 2, 3, 4, 5]

# JavaScript — auto-detected from syntax
~"const evens = data.filter(n => n % 2 === 0)"
puts evens # => [2, 4]

# Multi-line with heredoc
~<<~JS
  const sum = evens.reduce((a, b) => a + b, 0)
  const avg = sum / evens.length
JS
puts avg # => 3.0
```

### Python

```ruby
# Python — auto-detected
~"evens = [n for n in data if n % 2 == 0]"
puts evens # => [2, 4]

# Multi-line
~<<~PY
  import statistics
  mean = statistics.mean(data)
  stdev = statistics.stdev(data)
PY
puts mean # => 3.0
```

### Natural language (LLM) — existing behavior

```ruby
~"analyze <data> and find outliers, store in <result>"
puts result
```

### How detection works
- Auto-detects from code syntax (token patterns)
- Context-aware: consecutive `~"..."` calls tend to stay in the same language
- Override with `Mana.engine = :javascript` or `Mana.with(:python) { ... }`
- Detection rules are defined in `data/lang-rules.yml` — transparent, no black box
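A toy version of token-pattern detection might look like this (the regexes are invented for illustration; the shipped rules live in `data/lang-rules.yml`):

```ruby
# Toy token-pattern language detection. These patterns are assumptions
# made up for this sketch, not the contents of data/lang-rules.yml.
RULES = {
  javascript: [/\bconst\b/, /=>/],
  python:     [/\bfor\s+\w+\s+in\b/, /^\s*import\b/]
}.freeze

def detect_language(code)
  RULES.each do |lang, patterns|
    return lang if patterns.any? { |p| code.match?(p) }
  end
  :natural_language
end

puts detect_language("const evens = data.filter(n => n % 2 === 0)") # => javascript
puts detect_language("evens = [n for n in data if n % 2 == 0]")     # => python
puts detect_language("analyze <data> and find outliers")            # => natural_language
```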
### Variable bridging

- Simple types (numbers, strings, booleans, nil, arrays, hashes) are copied
- Each engine maintains a persistent context (V8 for JS, a Python interpreter for Python)
- Variables created in one `~"..."` call persist for the next call in the same engine
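Simple-type copying behaves like a JSON round trip: only JSON-representable values cross the bridge. This framing is an assumption about the mechanism; Mana may marshal values differently:

```ruby
require "json"

# Sketch of simple-type bridging modeled as a JSON round trip
# (an assumed model, not Mana's actual marshalling code).
def bridge(value)
  JSON.parse(JSON.generate(value))
end

bridge([1, 2.5, "x", true, nil])    # => [1, 2.5, "x", true, nil]
bridge({ "a" => 1, "b" => [2, 3] }) # => {"a"=>1, "b"=>[2, 3]}
```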
### Setup

```ruby
# Gemfile
gem "mana"
gem "mini_racer" # for JavaScript support (optional)
gem "pycall"     # for Python support (optional)
```

## Testing
Use `Mana.mock` to test code that uses `~"..."` without calling any API:

```ruby
require "mana/test"

RSpec.describe MyApp do
  include Mana::TestHelpers

  it "analyzes code" do
    mock_prompt "analyze", bugs: ["XSS"], score: 8.5
    result = MyApp.analyze("user_input")
    expect(result[:bugs]).to include("XSS")
  end

  it "translates with dynamic response" do
    mock_prompt(/translate.*to\s+\w+/) do |prompt|
      { output: prompt.include?("Chinese") ? "你好" : "hello" }
    end
    expect(MyApp.translate("hi", "Chinese")).to eq("你好")
  end
end
```

Block mode for inline tests:

```ruby
Mana.mock do
  prompt "summarize", summary: "A brief overview"

  text = "Long article..."
  ~"summarize <text> and store in <summary>"
  puts summary # => "A brief overview"
end
```

Unmatched prompts raise `Mana::MockError` with a helpful message suggesting the stub to add.
## Nested prompts

Functions called by the LLM can themselves contain `~"..."` prompts:

```ruby
lint = ->(code) { ~"check #{code} for style issues, store in <issues>" }
# Equivalent to:
#   def lint(code)
#     ~"check #{code} for style issues, store in <issues>"
#     issues
#   end

~"review <codebase>, call lint for each file, store report in <report>"
```

Each nested call gets its own conversation context. The outer LLM only sees the function's return value, keeping its context clean.
LLM-compiled methods
mana def lets LLM generate a method implementation on first call. The generated code is cached as a real .rb file — subsequent calls are pure Ruby with zero API overhead.
mana def fizzbuzz(n)
~"return an array of FizzBuzz results from 1 to n"
end
fizzbuzz(15) # first call → LLM generates code → cached → executed
fizzbuzz(20) # pure Ruby from .mana_cache/fizzbuzz.rb
# View the generated source
puts Mana.source(:fizzbuzz)
# def fizzbuzz(n)
# (1..n).map do |i|
# if i % 15 == 0 then "FizzBuzz"
# elsif i % 3 == 0 then "Fizz"
# elsif i % 5 == 0 then "Buzz"
# else i.to_s
# end
# end
# end
# Works in classes too
class Converter
include Mana::Mixin
mana def celsius_to_fahrenheit(c)
~"convert Celsius to Fahrenheit"
end
endGenerated files live in .mana_cache/ (add to .gitignore, or commit them to skip LLM on CI).
## How it works

1. `~"..."` calls `String#~@`, which captures the caller's `Binding`
2. Mana parses `<var>` references and reads existing variables as context
3. Memory loads long-term facts and prior conversation into the system prompt
4. The prompt + context is sent to the LLM with tools: `read_var`, `write_var`, `read_attr`, `write_attr`, `call_func`, `remember`, `done`
5. The LLM responds with tool calls → Mana executes them against the live Ruby binding → sends results back
6. Loop until the LLM calls `done` or returns without tool calls
7. After completion, memory compaction runs in the background if the context is getting large
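Step 5 can be pictured as a small dispatch over the built-in tools. In this self-contained sketch a plain Hash stands in for the live binding; the tool names follow the list above, everything else is invented for illustration:

```ruby
# Simplified sketch of tool dispatch (step 5). A Hash stands in for the
# live Binding; real Mana resolves names against the caller's scope.
def execute_tool(call, vars)
  case call[:name]
  when "read_var"  then vars.fetch(call[:args][:var])
  when "write_var" then vars[call[:args][:var]] = call[:args][:value]
  when "done"      then :done
  else raise ArgumentError, "unknown tool #{call[:name]}"
  end
end

vars = { "numbers" => [1, 2, 3, 4, 5] }
execute_tool({ name: "read_var",  args: { var: "numbers" } }, vars) # => [1, 2, 3, 4, 5]
execute_tool({ name: "write_var", args: { var: "result", value: 3.0 } }, vars)
vars["result"] # => 3.0
```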
## Safety

⚠️ Mana executes LLM-generated operations against your live Ruby state. Use it with the same caution as `eval`.

## License

MIT