OpenRouterUsageTracker


An effortless Rails engine to track API token usage and cost from multiple LLM providers, including OpenRouter, OpenAI, Google, Anthropic, and xAI. It enables easy rate-limiting and monitoring for your users.

Motivation

Managing Large Language Model (LLM) API costs is crucial for any application that provides AI features to users. This gem provides simple, out-of-the-box tools to log every API call, associate it with a user, and query their usage over time. This allows you to easily implement spending caps, rate limits, or usage-based billing tiers across different providers.

Quick Start

  1. Add the gem to your Gemfile:

    gem 'open_router_usage_tracker', '~> 1.0.4'
  2. Install and run migrations:

    bundle install
    bin/rails g open_router_usage_tracker:install
    bin/rails g open_router_usage_tracker:summary_install
    bin/rails db:migrate
  3. Add the concern to your User model:

    # app/models/user.rb
    class User < ApplicationRecord
      include OpenRouterUsageTracker::Trackable
    end
  4. Log a request:

    # In your controller or service
    OpenRouterUsageTracker.log(
      response: a_parsed_json_response_from_your_llm_provider,
      user: current_user,
      provider: "open_ai"
    )
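  5. (Optional) Check usage with the query methods documented below (the model name is illustrative):

    # Anywhere you have the user
    summary = current_user.daily_usage_summary_for(
      day: Time.zone.today,
      provider: "open_ai",
      model: "gpt-4o"
    )
    summary&.total_tokens # => nil until usage has been logged for that day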

Installation

Add this line to your application's Gemfile:

gem 'open_router_usage_tracker', '~> 1.0.4'

And then execute:

bundle install

Or install it yourself as:

gem install open_router_usage_tracker

Setup

  1. Run the Install Generator: This will create a migration file in your application to add the open_router_usage_logs table.

    bin/rails g open_router_usage_tracker:install
  2. Run the Summary Table Generator (Required): To enable performant daily rate-limiting, generate the migration for the summary table.

    bin/rails g open_router_usage_tracker:summary_install
  3. Run the Database Migrations:

    bin/rails db:migrate
  4. Include the Trackable Concern: To add the usage tracking methods to your user model, include the concern. This works with any user-like model (e.g., User, Account).

    # app/models/user.rb
    class User < ApplicationRecord
      include OpenRouterUsageTracker::Trackable
    
      # ... rest of your model
    end
  5. (IMPORTANT) Configure Data Retention: The Trackable concern intentionally does not set a dependent option on the usage_logs and daily_summaries associations. This is a critical design choice to prevent accidental data loss. You must decide what happens to a user's usage data when their account is deleted.

    To delete all usage data with the user:

    # app/models/user.rb
    class User < ApplicationRecord
      include OpenRouterUsageTracker::Trackable
    
      has_many :usage_logs, as: :user, class_name: "OpenRouterUsageTracker::UsageLog", dependent: :destroy
      has_many :daily_summaries, as: :user, class_name: "OpenRouterUsageTracker::DailySummary", dependent: :destroy
    end

    To keep all usage data:

    # app/models/user.rb
    class User < ApplicationRecord
      include OpenRouterUsageTracker::Trackable
      # No `dependent` option needed. The records will remain.
    end

Usage

Using the gem involves two parts: logging new requests and tracking existing usage.

Logging a Request

Wherever your application receives a successful response from an LLM API, call the log method. It is designed to be simple and to fail loudly if the data is invalid.

# Assume `api_response` is the parsed JSON hash from the provider
# and `current_user` is your authenticated user object.

# For OpenRouter (the default provider)
OpenRouterUsageTracker.log(response: open_router_response, user: current_user)

# For OpenAI
OpenRouterUsageTracker.log(response: openai_response, user: current_user, provider: "open_ai")

# For Google
OpenRouterUsageTracker.log(response: google_response, user: current_user, provider: "google")

# You can also prevent storing the raw API response for privacy or storage reasons.
OpenRouterUsageTracker.log(response: api_response, user: current_user, provider: "anthropic", store_raw_response: false)
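
Because log fails loudly, decide explicitly how a logging failure should affect the surrounding request. A defensive sketch (the specific exception class is not documented here, so this rescues broadly):

begin
  OpenRouterUsageTracker.log(response: api_response, user: current_user, provider: "anthropic")
rescue StandardError => e
  # Assumption: you would rather serve the LLM response than fail the
  # whole request because usage logging broke.
  Rails.logger.error("Usage logging failed: #{e.message}")
end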

Supported Providers

The gem currently supports the following providers out-of-the-box:

  • open_router (default)
  • open_ai
  • google
  • anthropic
  • x_ai

The gem will automatically parse the response from each provider to extract the relevant usage data. If a provider does not return a specific field (like cost), it will be saved as 0.
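
The log call returns the created UsageLog record (see the logging sequence diagram at the end of this README), so you can inspect exactly what was stored. A quick sketch, with illustrative values:

log = OpenRouterUsageTracker.log(response: openai_response, user: current_user, provider: "open_ai")
log.model        # => "gpt-4o"
log.total_tokens # => total tokens reported by the provider
log.cost         # => 0, because OpenAI does not return cost data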

Providing Your Own Cost

For providers that do not return cost information, you can calculate it yourself and add it to the response hash before logging. The gem will automatically detect and use a cost key inside the usage hash.

# Calculate your own cost for an OpenAI call
openai_response["usage"]["cost"] = your_calculated_cost # e.g., 0.0123

# The log method will now use your provided cost
OpenRouterUsageTracker.log(response: openai_response, user: current_user, provider: "open_ai")

Required Keys

For the log method to parse the API response correctly, the model, id, and usage (or similarly named) keys must be present in the response hash. Do not filter them out before logging.
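
For reference, this is roughly the minimal shape the parser expects for the default open_router provider (values are illustrative; other providers nest their usage data under their own field names):

api_response = {
  "id"    => "gen-12345",          # request identifier
  "model" => "openai/gpt-4o",      # model name as reported by the provider
  "usage" => {
    "prompt_tokens"     => 100,
    "completion_tokens" => 50,
    "total_tokens"      => 150,
    "cost"              => 0.0015  # OpenRouter reports cost directly
  }
}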

Daily Usage Tracking and Rate-Limiting

For high-performance rate-limiting, the gem provides methods to query the daily summary table. This avoids slow SUM queries on the main log table.

The daily_usage_summary_for(day:, provider:, model:) method provides a near-instantaneous check of a user's usage for a specific model on a given day.

Example: Implementing a daily token limit for a specific model

Imagine you want to prevent users from using more than 100,000 tokens per day for a specific model.

# somewhere in a controller or before_action
def enforce_daily_limit
  # Be sure to handle timezones correctly for your users.
  today = Time.zone.today # =>  Wed, 30 Jul 2025
  
  # This check is extremely fast as it queries the small summary table.
  summary = current_user.daily_usage_summary_for(
    day: today,
    provider: "open_ai",
    model: "gpt-4o"
  )

  if summary && summary.total_tokens > 100_000
    render json: { error: "You have exceeded your daily token limit for this model." }, status: :too_many_requests
    return
  end
end
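
To apply the check automatically, you might register it as a before_action (the controller name here is illustrative):

# Rendering inside a before_action halts the request, so the action
# never runs once the limit has been exceeded.
class CompletionsController < ApplicationController
  before_action :enforce_daily_limit, only: :create
end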

Querying Costs

The Trackable concern also provides a powerful method to calculate total costs over a date range by querying the performant daily_summaries table.

The total_cost_in_range(range, provider:, model: nil) method allows you to easily calculate costs for a specific provider or a specific model over any period.

Example: Analyzing costs over a period

# In a reporting or analytics service
def generate_cost_report(user)
  # Get the date range for the last 30 days.
  last_30_days = (30.days.ago.to_date)..Date.current

  # Calculate the total cost for all OpenRouter models in the last 30 days.
  # This is meaningful because OpenRouter provides direct cost data.
  open_router_cost = user.total_cost_in_range(last_30_days, provider: "open_router")

  # For providers that don't return cost, this will correctly be 0.
  openai_cost = user.total_cost_in_range(last_30_days, provider: "open_ai")

  puts "Total OpenRouter cost in the last 30 days: $#{open_router_cost.round(2)}"
  puts "Total OpenAI cost in the last 30 days: $#{openai_cost.round(2)}" # Will be $0.00
end
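
Because the method also accepts the optional model: argument, the same report can be narrowed to a single model (the model name below is illustrative):

# Cost for one specific model over the same period
gpt4o_cost = user.total_cost_in_range(last_30_days, provider: "open_router", model: "openai/gpt-4o")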

Note that different LLM models have different costs, so it often makes sense to enforce limits on a per-provider and per-model basis. You can always run your own queries on the OpenRouterUsageTracker::DailySummary table for more complex logic.
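
For anything the built-in helpers do not cover, querying the summary table directly works well. A sketch (column names follow the schema in the ERD below; the where(user:) clause assumes the polymorphic user association set up by the engine):

# Tokens per model for one user over the last 7 days
OpenRouterUsageTracker::DailySummary
  .where(user: current_user, provider: "open_router")
  .where(day: 7.days.ago.to_date..Date.current)
  .group(:model)
  .sum(:total_tokens)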

Contributing

Open an issue first to discuss.

  1. Fork the repository.
  2. Create your feature branch (git checkout -b my-new-feature).
  3. Commit your changes (git commit -am 'Add some feature').
  4. Push to the branch (git push origin my-new-feature).
  5. Create a new Pull Request from your fork.

License

The gem is available as open source under the terms of the MIT License.

Architecture Diagrams

Gem Components

graph TD
    subgraph A[Rails Host App]
        U[User Model]
        C[Controller/Service]
    end

    subgraph B[OpenRouterUsageTracker Gem]
        T{Trackable Concern}
        L[Log Method]
        AD(Adapter)
        P[Parsers]
        UL[UsageLog Model]
        DS[DailySummary Model]
    end

    subgraph D[Database]
        T1(open_router_usage_logs Table)
        T2(open_router_daily_summaries Table)
    end

    C -- Calls --> L
    U -- Includes --> T
    T -- has_many --> UL
    T -- has_many --> DS

    L -- Uses --> AD
    AD -- Selects --> P
    P -- Creates --> UL
    AD -- Updates --> DS

    UL -- Persisted in --> T1
    DS -- Persisted in --> T2

    style B fill:#f9f,stroke:#333,stroke-width:2px
    style A fill:#ccf,stroke:#333,stroke-width:2px

Logging Sequence

sequenceDiagram
    participant App as Rails Host App
    participant Gem as OpenRouterUsageTracker
    participant DB as Database

    App->>+Gem: OpenRouterUsageTracker.log(response: ..., user: ..., provider: 'open_ai')
    Gem->>Gem: Select 'OpenAi' Parser
    Gem->>+DB: BEGIN TRANSACTION
    Gem->>DB: CREATE UsageLog record
    Gem->>DB: FIND OR INITIALIZE DailySummary for today
    Gem->>DB: INCREMENT and SAVE DailySummary record
    DB-->>-Gem: COMMIT TRANSACTION
    Gem-->>-App: return UsageLog object

Database Schema (ERD)

erDiagram
    USER ||--o{ USAGE_LOG : "has_many"
    USER ||--o{ DAILY_SUMMARY : "has_many"

    USER {
        string name
        string email
    }

    USAGE_LOG {
        string user_type
        bigint user_id
        string provider
        string model
        string request_id
        integer prompt_tokens
        integer completion_tokens
        integer total_tokens
        decimal cost
        json raw_usage_response
    }

    DAILY_SUMMARY {
        string user_type
        bigint user_id
        date day
        string provider
        string model
        integer total_tokens
        integer prompt_tokens
        integer completion_tokens
        decimal cost
    }
