# Tracebook

Cost tracking and review dashboard for RubyLLM conversations.

Tracebook is a Rails engine that sits on top of RubyLLM's `acts_as_chat` and `acts_as_message` models. It adds per-message cost calculation, chat-level review workflows, and a Hotwire-powered dashboard — without duplicating any conversation data.
## Features
- **Cost tracking**: Per-message cost calculation based on configurable pricing rules
- **Review workflow**: Approve or flag entire chat conversations with comments
- **Dashboard**: Browse chats, view conversation threads, see cost/token summaries
- **RubyLLM native**: Reads directly from your Chat and Message models — no data duplication
## Requirements
- Ruby 3.4+
- Rails 8.1+
- RubyLLM with `acts_as_chat`/`acts_as_message` models
## Installation

```shell
bundle add tracebook
bin/rails generate tracebook:install
bin/rails db:migrate
```

Mount the engine in `config/routes.rb`:

```ruby
mount Tracebook::Engine => "/tracebook"
```

Seed pricing rules for common providers:

```shell
bin/rails tracebook:seed_pricing
```

## Configuration
```ruby
# config/initializers/tracebook.rb
Tracebook.configure do |config|
  # Class names for your RubyLLM models
  config.chat_class = "Chat"       # default
  config.message_class = "Message" # default

  # Currency for cost calculations
  config.default_currency = "USD"  # default

  # How to display the user in the dashboard
  config.actor_display = ->(actor) { actor.try(:name) }

  # Items per page
  config.per_page = 25             # default
end
```

## PII Redaction
Tracebook includes an opt-in PII redaction pipeline for unstructured natural language in LLM conversations. Nothing is redacted unless explicitly configured.
### Enabling Patterns
```ruby
Tracebook.configure do |config|
  # Enable individual patterns
  config.redact :email, :phone, :ssn, :credit_card

  # Or enable a whole group
  config.redact :pii, :api_keys
end
```

### Available Patterns
| Pattern | Detects | Validation |
|---|---|---|
| `email` | Email addresses | -- |
| `phone` | Phone numbers (US format) | -- |
| `ssn` | Social Security Numbers | SSA area-number range check |
| `credit_card` | Credit card numbers | Luhn algorithm |
| `openai_key` | OpenAI API keys (`sk-...`) | -- |
| `anthropic_key` | Anthropic API keys (`sk-ant-...`) | -- |
| `aws_key` | AWS access key IDs (`AKIA...`) | -- |
| `stripe_key` | Stripe API keys | -- |
| `github_token` | GitHub tokens (`ghp_`, `gho_`, etc.) | -- |
| `ipv4` | IPv4 addresses | Octet range 0-255 |
| `bearer_token` | Authorization bearer tokens | -- |
| `jwt` | JSON Web Tokens | -- |
| `private_key` | PEM-format private key blocks | -- |
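The Luhn validation on `credit_card` is what keeps arbitrary 16-digit runs from being redacted. A minimal standalone sketch of that checksum (the `luhn_valid?` helper below is illustrative, not Tracebook's internal API):

```ruby
# Luhn checksum: from the right, double every second digit (subtracting 9
# when the doubled value exceeds 9) and require the total to be divisible by 10.
def luhn_valid?(number)
  digits = number.gsub(/\D/, "").chars.map(&:to_i).reverse
  sum = digits.each_with_index.sum do |d, i|
    next d if i.even?          # even (0-based) positions pass through undoubled
    doubled = d * 2
    doubled > 9 ? doubled - 9 : doubled
  end
  sum % 10 == 0
end

luhn_valid?("4111 1111 1111 1111") # => true  (valid checksum)
luhn_valid?("4111 1111 1111 1112") # => false
```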
### Pattern Groups
| Group | Patterns included |
|---|---|
| `pii` | `email`, `phone`, `ssn` |
| `financial` | `credit_card` |
| `api_keys` | `openai_key`, `anthropic_key`, `aws_key`, `stripe_key`, `github_token` |
| `auth` | `bearer_token`, `jwt` |
| `network` | `ipv4` |
| `crypto` | `private_key` |
### Custom Patterns
```ruby
Tracebook.configure do |config|
  config.redact_pattern(
    /policy[:\s]*\d{10}/i,
    "[POLICY_NUMBER]",
    name: "policy_number"
  )
end
```

### Custom Redactors
Provide any callable (a proc, a lambda, or an object responding to `call`):
```ruby
Tracebook.configure do |config|
  config.custom_redactors << ->(text) {
    text.gsub(/MRN-\d{8}/, "[MEDICAL_RECORD]")
  }
end
```

### Using Redaction
```ruby
# Redact text directly
Tracebook.redact("Email user@test.com or call 555-123-4567")
# => "Email [EMAIL] or call [PHONE]"

# Use in your application before saving messages
content = Tracebook.redact(user_input)
chat.ask(content)
```

### Planned: LLM-Based Redaction
For context-sensitive PII that regex can't catch (e.g. "my social is seven eight two three three three two"), a future version will support LLM-based redaction using a local model (e.g., Ollama) to detect PII in natural language before persistence.
## Tracebook Tables
Tracebook adds four tables — all prefixed with `tracebook_` to avoid collisions:
| Table | Purpose |
|---|---|
| `tracebook_message_costs` | Cost + latency per message (polymorphic join to your Message) |
| `tracebook_chat_reviews` | Review state per chat (polymorphic join to your Chat) |
| `tracebook_comments` | Comments on chat reviews |
| `tracebook_pricing_rules` | Cost per token by provider/model |
Your Chat and Message tables are untouched.
## Cost Calculation
After an LLM response, call `Tracebook.calculate_cost!` to record the cost:
```ruby
Tracebook.calculate_cost!(
  message,
  provider: "openai",
  model: "gpt-4o",
  latency_ms: elapsed_ms
)
```

This looks up the matching pricing rule, calculates input/output costs, and creates a `tracebook_message_costs` record joined to the message.
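The arithmetic itself is simple. A minimal sketch, assuming the cents-per-1k-token units used by the seeded rules (the `cost_cents` helper and hash-based rule are illustrative stand-ins for `Tracebook::PricingRule` records):

```ruby
# Tokens are billed per 1,000 at the rule's cents-per-unit rates;
# input and output tokens are priced independently and summed.
def cost_cents(input_tokens, output_tokens, rule)
  (input_tokens / 1000.0) * rule[:input_cents_per_unit] +
    (output_tokens / 1000.0) * rule[:output_cents_per_unit]
end

rule = { input_cents_per_unit: 20, output_cents_per_unit: 50 }
cost_cents(1_500, 400, rule) # => 50.0 (30 cents input + 20 cents output)
```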
## Integration Example
In a typical RubyLLM app, hook into the chat response flow:
```ruby
class ChatResponseJob < ApplicationJob
  def perform(chat_id, content)
    chat = Chat.find(chat_id)
    chat.ask(content) do |chunk|
      # stream chunks...
    end

    # After the response, calculate cost for the last assistant message
    message = chat.messages.where(role: "assistant").last
    model = chat.model
    Tracebook.calculate_cost!(
      message,
      provider: model.provider,
      model: model.model_id
    )
  end
end
```

## Pricing Rules
Tracebook calculates costs using `PricingRule` records. Seed defaults for common providers:
```shell
bin/rails tracebook:seed_pricing
```

This creates rules for OpenAI, Anthropic, Gemini, and Ollama models.
### Adding Custom Rules
```ruby
Tracebook::PricingRule.create!(
  provider: "xai",
  model_glob: "grok-4-1-fast*",
  input_cents_per_unit: 20,  # per 1k tokens
  output_cents_per_unit: 50,
  effective_from: Date.new(2025, 7, 1),
  currency: "USD"
)
```

### Glob Patterns
- `gpt-4o` — exact match
- `gpt-4o*` — matches `gpt-4o`, `gpt-4o-mini`, `gpt-4o-2024-08-06`
- `claude-3-5-*` — matches all Claude 3.5 models
- `*` — fallback for any model
When multiple rules match, Tracebook prefers the most specific glob (the one with the most literal characters), then the most recent `effective_from` date.
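The selection above can be pictured with plain Ruby globs. A sketch using `File.fnmatch`, with an illustrative specificity heuristic (Tracebook's real tie-breaking also considers `effective_from`, which this sketch omits):

```ruby
# Pick the matching rule whose glob has the most literal (non-wildcard)
# characters; hash-based rules stand in for PricingRule records.
RULES = [
  { model_glob: "*" },
  { model_glob: "gpt-4o*" },
  { model_glob: "gpt-4o-mini" }
].freeze

def best_rule(rules, model)
  rules
    .select { |r| File.fnmatch(r[:model_glob], model) }
    .max_by { |r| r[:model_glob].delete("*?").length }
end

best_rule(RULES, "gpt-4o-mini")       # => { model_glob: "gpt-4o-mini" }
best_rule(RULES, "gpt-4o-2024-08-06") # => { model_glob: "gpt-4o*" }
best_rule(RULES, "claude-3-5-sonnet") # => { model_glob: "*" }
```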
## Review Workflow
Reviews happen at the chat level, not per-message. In the dashboard:
- Open a chat to see the full conversation thread
- Click **Approve** or **Flag**
- Add comments for context
Programmatic access:
```ruby
chat = Chat.find(id)
review = Tracebook::ChatReview.for_chat(chat)
review.update!(
  review_state: :approved,
  reviewed_by: "admin@example.com"
)
review.comments.create!(author: "admin", body: "Looks good")
```

### Review States
| State | Meaning |
|---|---|
| `pending` | Not yet reviewed (default) |
| `approved` | Reviewed and accepted |
| `flagged` | Needs attention |
## Dashboard
The dashboard is available at `/tracebook/chats` (or wherever you mount the engine).
### Chat List (`/tracebook/chats`)
- All chats with actor, model, message count, token usage, cost, review state
- KPIs: total chats, messages, cost
### Chat Detail (`/tracebook/chats/:id`)
- Full conversation thread (user and assistant messages)
- Per-message token counts and costs
- Review controls (approve/flag/reset)
- Comment thread
### Actor Display
By default, actors are shown as `Name` or `ClassName#id`. Customize with:
```ruby
config.actor_display = ->(actor) {
  case actor
  when User then actor.email
  else "#{actor.class}##{actor.id}"
  end
}
```

### Securing the Dashboard
The engine inherits from `ActionController::Base`. Restrict access with route constraints:
```ruby
# HTTP Basic Auth
mount Tracebook::Engine => "/tracebook",
  constraints: ->(req) {
    Rack::Auth::Basic::Request.new(req.env).provided? &&
      Rack::Auth::Basic::Request.new(req.env).credentials == ["admin", ENV["TRACEBOOK_PASSWORD"]]
  }

# Devise
authenticate :user, ->(u) { u.admin? } do
  mount Tracebook::Engine => "/tracebook"
end
```

## Development & Testing
```shell
# Run tests
bin/rails test

# Seed pricing in development
bin/rails tracebook:seed_pricing
```

Reset configuration in tests:

```ruby
setup    { Tracebook.reset_configuration! }
teardown { Tracebook.reset_configuration! }
```

## License
MIT License. See `MIT-LICENSE`.