PgReports


A comprehensive PostgreSQL monitoring and analysis library for Rails applications. Get insights into query performance, index usage, table statistics, connection health, and more. Includes a beautiful web dashboard and Telegram integration for notifications.

Dashboard Screenshot

Features

  • 📊 Query Analysis - Identify slow, heavy, and expensive queries using pg_stat_statements
  • 📇 Index Analysis - Find unused, duplicate, invalid, and missing indexes
  • 📋 Table Statistics - Monitor table sizes, bloat, vacuum needs, and cache hit ratios
  • 🔌 Connection Monitoring - Track active connections, locks, and blocking queries
  • 🖥️ System Overview - Database sizes, PostgreSQL settings, installed extensions
  • 🌐 Web Dashboard - Beautiful dark-themed UI with sortable tables and expandable rows
  • 📨 Telegram Integration - Send reports directly to Telegram
  • 📈 Grafana / Prometheus Exporter - Expose selected reports at /metrics with severity derived from configured thresholds
  • 📥 Export - Download reports in TXT, CSV, or JSON format
  • 🔗 IDE Integration - Open source locations in VS Code, Cursor, RubyMine, or IntelliJ (with WSL support)
  • 📌 Comparison Mode - Save records to compare before/after optimization
  • 📊 EXPLAIN ANALYZE - Advanced query plan analyzer with problem detection and recommendations
  • 🔍 SQL Query Monitoring - Real-time monitoring of all executed SQL queries with source location tracking
  • 🔌 Connection Pool Analytics - Monitor pool usage, wait times, saturation warnings, and connection churn
  • 🤖 AI Prompt Export - Copy a ready-to-paste prompt for Claude Code, Cursor, or Codex with problem context and report data
  • 🗑️ Migration Generator - Generate Rails migrations to drop unused indexes

Installation

# Gemfile
gem "pg_reports"
gem "telegram-bot-ruby"  # optional, for Telegram delivery

bundle install

Mount the dashboard:

# config/routes.rb
Rails.application.routes.draw do
  if Rails.env.development?
    mount PgReports::Engine, at: "/pg_reports"
  end

  # Or with authentication:
  # authenticate :user, ->(u) { u.admin? } do
  #   mount PgReports::Engine, at: "/pg_reports"
  # end
end

Visit http://localhost:3000/pg_reports.

For query analysis, also enable pg_stat_statements (see setup below).

Usage

# In console or code
PgReports.slow_queries.display
PgReports.unused_indexes.each { |row| puts row["index_name"] }

# Export
report = PgReports.expensive_queries
report.to_text
report.to_csv
report.to_a

# Telegram
PgReports.slow_queries.send_to_telegram

Full list of reports →

Configuration

# config/initializers/pg_reports.rb
PgReports.configure do |config|
  # Telegram (optional)
  config.telegram_bot_token = ENV["PG_REPORTS_TELEGRAM_TOKEN"]
  config.telegram_chat_id   = ENV["PG_REPORTS_TELEGRAM_CHAT_ID"]

  # Thresholds
  config.slow_query_threshold_ms      = 100
  config.heavy_query_threshold_calls  = 1000
  config.expensive_query_threshold_ms = 10_000
  config.unused_index_threshold_scans = 50
  config.bloat_threshold_percent      = 20
  config.dead_rows_threshold          = 10_000

  # Output
  config.max_query_length = 200

  # Auth (optional)
  config.dashboard_auth = -> {
    authenticate_or_request_with_http_basic do |user, pass|
      user == ENV["PG_REPORTS_USER"] && pass == ENV["PG_REPORTS_PASSWORD"]
    end
  }

  # Google Fonts (default: false - no external requests)
  config.load_external_fonts = false
end

Locale (EN / RU / UK)

PgReports follows your application's I18n.locale. Set it the same way you do for the rest of the app; there is no PgReports-specific setting. The dashboard supports en, ru, and uk out of the box.

Raw query execution (EXPLAIN ANALYZE / Execute Query)

โš ๏ธ Disabled by default. The dashboard's "Execute Query" and "EXPLAIN ANALYZE" buttons require this opt-in.

PgReports.configure do |config|
  config.allow_raw_query_execution = Rails.env.development? || Rails.env.staging?
end

Query source tracking (Rails query logs)

PgReports parses query annotations to show where queries originated. On Rails 7.0+ use the built-in ActiveRecord::QueryLogs (no extra gem needed). On older Rails, install Marginalia; PgReports auto-detects both formats.

Minimal setup (adds controller/action):

# config/application.rb
config.active_record.query_log_tags_enabled = true
config.active_record.query_log_tags = [:controller, :action]

To also surface file path and line number (so source links jump to the actual call site, not just the controller), add a custom source_location lambda that walks caller_locations and skips gem/framework frames:

# config/application.rb
config.active_record.query_log_tags_enabled = true
config.active_record.query_log_tags = [
  :controller,
  :action,
  :job,
  {
    source_location: -> {
      ignore = %r{/(gems|active_record|active_support|active_model|railties|
                    action_controller|action_view|action_pack|action_dispatch|
                    rack|core_ext|relation|associations|scoping|connection_adapters)/}x
      loc = caller_locations.find { |l| !l.path.match?(ignore) }
      "#{loc.path}:#{loc.lineno}" if loc
    }
  }
]

PgReports recognizes the source_location tag and splits it into file and line for the source column.
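
As an illustration, the two annotation formats and the source_location split can be sketched in a few lines of plain Ruby. This is not PgReports' actual parser; the tag names simply mirror the config above:

```ruby
# Sketch: parse a query-log annotation into tags (illustrative only,
# not PgReports' actual parser). Handles the Rails QueryLogs /
# sqlcommenter style (key='value') and the Marginalia style (key:value).
def parse_annotation(comment)
  if comment.include?("='")   # QueryLogs: controller='users',action='index'
    comment.scan(/(\w+)='([^']*)'/).to_h
  else                        # Marginalia: controller:users,action:index
    comment.split(",").map { |pair| pair.split(":", 2) }.to_h
  end
end

tags = parse_annotation("controller='users',action='index',source_location='app/models/user.rb:42'")
file, line = tags["source_location"].split(":", 2)
puts "#{tags['controller']}##{tags['action']} at #{file}:#{line}"
# prints "users#index at app/models/user.rb:42"
```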

pg_stat_statements setup

  1. Edit postgresql.conf:
    shared_preload_libraries = 'pg_stat_statements'
    pg_stat_statements.track = all
    
  2. Restart PostgreSQL: sudo systemctl restart postgresql
  3. Create the extension (via dashboard button or PgReports.enable_pg_stat_statements!).

PgReports does not require the pg_read_all_settings role; extension availability is detected directly. It works with CloudNativePG, managed databases, and other restricted environments.

Report object

Every method returns a PgReports::Report:

report = PgReports.slow_queries

report.title         # "Slow Queries (mean time >= 100ms)"
report.data          # Array of hashes
report.columns       # Column names
report.size          # Row count
report.empty?        # Boolean
report.generated_at  # Timestamp

# Output formats
report.to_text       # Plain text table
report.to_markdown   # Markdown table
report.to_html       # HTML table
report.to_csv        # CSV
report.to_a          # Raw data

# Actions
report.display                  # Print to STDOUT
report.send_to_telegram         # Send as message
report.send_to_telegram_as_file # Send as file attachment

# Enumerable
report.each { |row| puts row }
report.map { |row| row["query"] }
report.select { |row| row["calls"] > 100 }

Dashboard features

The dashboard provides one-click execution, sortable columns, expandable rows, filter parameters, multi-format export, Telegram delivery, and pg_stat_statements management.

EXPLAIN ANALYZE - query plan analyzer

Expand a row that contains a query and click 📊 EXPLAIN ANALYZE. It shows:

  • Status indicator (🟢🟡🔴) - overall query health
  • Key metrics - planning/execution time, cost, rows
  • Detected problems - sequential scans on large tables, high-cost ops, sorts spilling to disk, slow sorts (>1s), inaccurate row estimates (>10× off), slow execution
  • Recommendations for each issue
  • Color-coded plan - node types tinted by performance impact (green: efficient, blue: normal, yellow: potential issue)
  • Line annotations highlighting problems on specific plan lines

Queries from pg_stat_statements with parameter placeholders ($1, $2) prompt for parameter values before analysis.

Requires config.allow_raw_query_execution = true.
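
The parameter prompt can be pictured as a simple placeholder substitution before the EXPLAIN runs. A sketch with a hypothetical helper, not the gem's API; real code should quote values through the database adapter:

```ruby
# Sketch: fill pg_stat_statements-style placeholders ($1, $2, ...) with
# user-supplied values before running EXPLAIN ANALYZE. Hypothetical
# helper, not the gem's API - production code should quote via the
# database adapter, not string interpolation.
def fill_placeholders(sql, params)
  sql.gsub(/\$(\d+)/) do
    value = params[Regexp.last_match(1).to_i - 1]
    value.is_a?(Numeric) ? value.to_s : "'#{value.to_s.gsub("'", "''")}'"
  end
end

puts fill_placeholders("SELECT * FROM users WHERE id = $1 AND name = $2", [42, "O'Brien"])
# prints "SELECT * FROM users WHERE id = 42 AND name = 'O''Brien'"
```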

SQL Query Monitor - real-time query capture

Live capture of all SQL executed by your Rails app. Click ▶ Start Monitoring, run any operation, and watch the queries appear with:

  • SQL with syntax highlighting
  • Duration (color-coded: 🟢 <10ms, 🟡 <100ms, 🔴 ≥100ms)
  • Source location with click-to-IDE
  • Timestamp

Built on ActiveSupport::Notifications (sql.active_record). Filters internal queries (SCHEMA / CACHE / pg_reports' own). Logged to log/pg_reports.log (JSON Lines). Configurable buffer size and backtrace filter:

PgReports.configure do |config|
  config.query_monitor_log_file = Rails.root.join("log", "custom_monitor.log")
  config.query_monitor_max_queries = 200
  config.query_monitor_backtrace_filter = ->(loc) { !loc.path.match?(%r{/(gems|ruby|railties)/}) }
end

Use cases: debugging N+1, identifying slow queries during feature development, tracking down unexpected queries, teaching ActiveRecord behavior.
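
Since the monitor log is JSON Lines, it can be post-processed with nothing but the Ruby stdlib. A sketch; the field names ("sql", "duration_ms", "source") are assumptions for illustration, so check a real line of log/pg_reports.log for the actual keys:

```ruby
require "json"

# Sketch: pull slow statements out of a JSON Lines query log.
# Field names ("sql", "duration_ms", "source") are assumed for
# illustration - inspect a real log line for the actual keys.
log_lines = <<~'LOG'
  {"sql":"SELECT * FROM users","duration_ms":2.1,"source":"app/models/user.rb:10"}
  {"sql":"SELECT * FROM orders WHERE total > 100","duration_ms":140.7,"source":"app/models/order.rb:55"}
LOG

slow = log_lines.each_line
                .map { |line| JSON.parse(line) }
                .select { |q| q["duration_ms"] > 100 }

slow.each { |q| puts "#{q['duration_ms']}ms  #{q['source']}" }
# prints "140.7ms  app/models/order.rb:55"
```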

Connection pool analytics

Four specialized reports under the Connections category:

  • Pool Usage - total/active/idle per database, utilization %, idle-in-transaction count, available capacity
  • Wait Times - queries waiting on locks/IO/network with wait event types and severity
  • Pool Saturation - auto-classified (Normal / Elevated / Warning / Critical) with context-aware recommendations
  • Connection Churn - age distribution by application, short-lived (<10s) detection, churn-rate calculation, missing-pooling diagnosis

PgReports.pool_usage.display
PgReports.pool_saturation.display
PgReports.connection_churn.display
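
The short-lived detection and churn-rate calculation above can be sketched in plain Ruby (the 10s cutoff matches the text; the rate formula and labels are illustrative, not the gem's actual values):

```ruby
# Sketch: classify connection churn from backend ages (seconds since
# backend_start), as the Connection Churn report is described above.
# The 10s cutoff matches the text; the rate formula is illustrative.
SHORT_LIVED_SECONDS = 10

def churn_rate(connection_ages)
  short = connection_ages.count { |age| age < SHORT_LIVED_SECONDS }
  (100.0 * short / connection_ages.size).round(1)
end

ages = [2, 4, 6, 300, 1200, 3600]
puts "#{churn_rate(ages)}% short-lived"
# prints "50.0% short-lived"
```
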

IDE integration & migration generator

Click any source location (file:line) in a report to open it in your IDE. Supported: VS Code, VS Code (WSL), RubyMine, IntelliJ IDEA, Cursor, Cursor (WSL). Use the ⚙️ button to set your default and skip the menu.

For unused or invalid indexes, the dashboard generates a Rails migration: expand the row → 🗑️ Generate Migration → copy the code or create the file directly (opens in your default IDE).

Save records for comparison

When optimizing queries, click 📌 Save for Comparison on any expanded row. Saved records persist in browser localStorage per report type and appear above the results table for before/after comparison.

AI prompt export

The Export dropdown includes Copy Prompt (visible on actionable reports). It assembles a ready-to-paste prompt with problem description, fix instructions, and the actual report data, formatted for Claude Code, Cursor, Codex, or any code-aware AI assistant.

Grafana / Prometheus exporter

Expose selected reports at <mount_point>/metrics in Prometheus exposition format. The default mount is /pg_reports, so the endpoint is typically /pg_reports/metrics, but it follows whatever path you used in mount PgReports::Engine, at: "...". Severity (ok / warning / critical) is derived automatically from the thresholds defined in Dashboard::ReportsRegistry::REPORT_CONFIG.

PgReports.configure do |config|
  config.grafana_favorites = [
    :slow_queries,
    :unused_indexes,
    :bloated_tables,
    :missing_validations,
    :polymorphic_without_index
  ]
  config.grafana_metrics_token = ENV["PG_REPORTS_METRICS_TOKEN"]  # optional bearer token
  config.grafana_cache_ttl     = 60                                # seconds
end

Scrape with Prometheus:

scrape_configs:
  - job_name: pg_reports
    metrics_path: /pg_reports/metrics    # adjust to your Engine mount point
    scrape_interval: 60s
    authorization: { credentials: "${PG_REPORTS_METRICS_TOKEN}" }
    static_configs:
      - targets: ["app.internal:3000"]

[!WARNING] Reports are cached via Rails.cache for grafana_cache_ttl so frequent scrapes don't hammer the database. Without caching, Prometheus' default 15s scrape interval against heavy reports like missing_validations can overload your own database. Always set a TTL ≥ the scrape interval, and consider a longer per-report TTL for expensive reports.

The exporter also emits a pg_reports_row series per report row (each column becomes a Prometheus label), so the auto-generated dashboard can show a table panel with the actual rows that need fixing, not just an aggregate count.
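
As an illustration of that exposition format, one row might render like this. The metric name pg_reports_row comes from the description above; the label handling and escaping details here are an assumption:

```ruby
# Sketch: render one report row as a "one series per row" Prometheus
# sample. The metric name pg_reports_row comes from the text above;
# label handling and escaping here are illustrative.
def prometheus_row(report, row)
  labels = row.map do |key, value|
    escaped = value.to_s
                   .gsub("\\") { "\\\\" }  # backslash -> \\
                   .gsub("\"") { "\\\"" }  # quote     -> \"
                   .gsub("\n") { "\\n" }   # newline   -> \n
    %(#{key}="#{escaped}")
  end
  "pg_reports_row{report=\"#{report}\",#{labels.join(',')}} 1"
end

puts prometheus_row("unused_indexes", "table" => "users", "index_name" => "index_users_on_email")
# prints pg_reports_row{report="unused_indexes",table="users",index_name="index_users_on_email"} 1
```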

Generate a matching Grafana dashboard from the same favorites:

bundle exec rake pg_reports:grafana:dashboard
# writes pg_reports.json in pwd; then Dashboards → Import in Grafana

Full Grafana integration guide →  ·  Local Prometheus + Grafana without Docker →

Telegram delivery

Get a bot token from @BotFather and your chat ID from @userinfobot, then:

PgReports.configure do |config|
  config.telegram_bot_token = "123456:ABC-DEF..."
  config.telegram_chat_id   = "-1001234567890"
end

PgReports.slow_queries.send_to_telegram
PgReports.health_report.send_to_telegram_as_file

Reports under ~50 rows go as a message; larger ones are sent as a file attachment.

Development

git clone https://github.com/yourusername/pg_reports
cd pg_reports
bundle install
bundle exec rspec
bundle exec rubocop

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b feature/my-feature)
  3. Commit your changes
  4. Push to the branch
  5. Create a Pull Request

License

MIT. See LICENSE.txt.

Acknowledgments

Inspired by rails-pg-extras. UI built with Claude by Anthropic.